
RateLimitPolicy API v2

+ +

Summary

+

Proposal of new API for the Kuadrant's RateLimitPolicy (RLP) CRD, for improved UX.

+

Motivation

+

The RateLimitPolicy API (v1beta1), particularly its RateLimit type used in ratelimitpolicy.spec.rateLimits, was designed in part to fit the underlying implementation based on the Envoy Rate limit filter. It has proven to be complex, as well as somewhat limiting for extending the API to other platforms and for supporting use cases not contemplated in the original design.

+

Users of the RateLimitPolicy will immediately recognize elements of Envoy's Rate limit API in the definitions of the RateLimit type, with almost 1:1 correspondence between the Configuration type and its counterpart in the Envoy configuration. Although compatibility between the two continues to be desired, leaking such implementation details into the API can be avoided, providing a better abstraction for activators ("matchers") and payload ("descriptors") that users can state in a seamless way.

+

Furthermore, the Limit type – also used in the RLP's RateLimit type – presently implies a logical relationship between its inner concepts – i.e. conditions and variables on one side, and limits themselves on the other – that could be shaped differently, to give users a clearer understanding of the meaning of these concepts and to avoid repetition. I.e., one limit definition contains multiple rate limits, and not the other way around.
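For illustration, a minimal sketch of the proposed shape (anticipating the assets limit of Example 2 below), where one named limit definition groups multiple rates:

limits:
  assets: # one limit definition...
    rates: # ...containing multiple rate limits
    - limit: 5
      unit: minute
    - limit: 100
      duration: 12
      unit: hour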

+

Goals

+
    +
  1. Decouple the API from the underlying implementation - i.e. provide a more generic and more user-friendly abstraction
  2. Prepare the API for upcoming changes in the Gateway API Policy Attachment specification
  3. Improve consistency of the API with respect to Kuadrant's AuthPolicy CRD - i.e. same language, similar UX
+

Current WIP to consider

+
    +
  1. Policy attachment update (kubernetes-sigs/gateway-api#1565)
  2. No merging of policies (kuadrant/architecture#10)
  3. A single Policy scoped to HTTPRoutes and HTTPRouteRule (kuadrant/architecture#4) - future
  4. Implement skip_if_absent for the RequestHeaders action (kuadrant/wasm-shim#29)
+

Highlights

+
    +
  • spec.rateLimits[] replaced with spec.limits{<limit-name>: <limit-definition>}
  • spec.rateLimits.limits replaced with spec.limits.<limit-name>.rates
  • spec.rateLimits.limits.maxValue replaced with spec.limits.<limit-name>.rates.limit
  • spec.rateLimits.limits.seconds replaced with spec.limits.<limit-name>.rates.duration + spec.limits.<limit-name>.rates.unit
  • spec.rateLimits.limits.conditions replaced with spec.limits.<limit-name>.when, a structured field based on well-known selectors, mainly (though not exclusively) for expressing conditions not related to the HTTP route
  • spec.rateLimits.limits.variables replaced with spec.limits.<limit-name>.counters, based on well-known selectors
  • spec.rateLimits.rules replaced with spec.limits.<limit-name>.routeSelectors, for selecting (or "sub-targeting") the HTTPRouteRules that trigger the limit
  • new matcher spec.limits.<limit-name>.routeSelectors.hostnames[]
  • spec.rateLimits.configurations removed – the descriptor actions configuration (previously spec.rateLimits.configurations.actions) is generated from spec.limits.<limit-name>.when.selector, spec.limits.<limit-name>.counters and a unique identifier of the limit (associated with spec.limits.<limit-name>.routeSelectors)
  • Limitador conditions composed of "soft" spec.limits.<limit-name>.when conditions + a "hard" condition that binds the limit to its trigger HTTPRouteRules (see the schematic sketch after this list)
+
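Schematically, the field mapping above can be pictured as follows – a sketch only, with elided values and a hypothetical limit name my-limit:

# v1beta1 (current)
spec:
  rateLimits:
  - rules: []            # routing rules copied into the policy (elided)
    configurations: []   # user-defined Envoy descriptor actions (elided)
    limits:
    - conditions: []     # parsed Limitador conditions (elided)
      variables:
      - auth.identity.username
      maxValue: 5
      seconds: 60

# v2 (proposed)
spec:
  limits:
    my-limit:
      routeSelectors: [] # selects HTTPRouteRules of the targeted route (elided)
      when: []           # structured "soft" conditions (elided)
      counters:          # replaces variables
      - auth.identity.username
      rates:
      - limit: 5         # replaces maxValue
        duration: 1      # duration + unit replace seconds
        unit: minute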

For detailed differences between the current and new RLP API, see Comparison to current RateLimitPolicy.

+

Guide-level explanation

+

Examples of RLPs based on the new API

+

Given the following network resources:

+
apiVersion: gateway.networking.k8s.io/v1alpha2
+kind: Gateway
+metadata:
+  name: istio-ingressgateway
+  namespace: istio-system
+spec:
+  gatewayClassName: istio
+  listeners:
+  - hostname: "*.acme.com"
+---
+apiVersion: gateway.networking.k8s.io/v1alpha2
+kind: HTTPRoute
+metadata:
+  name: toystore
+  namespace: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - "*.toystore.acme.com"
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: GET
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: POST
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/assets/"
+    backendRefs:
+    - name: toystore
+      port: 80
+    filters:
+    - type: ResponseHeaderModifier
+      responseHeaderModifier:
+        set:
+        - name: Cache-Control
+          value: "max-age=31536000, immutable"
+
+

The following are examples of RLPs targeting the route and the gateway. Each example is independent of the others.

+

Example 1. Minimal example - network resource targeted entirely without filtering, unconditional and unqualified rate limiting

+

In this example, all traffic to *.toystore.acme.com will be limited to 5rps, regardless of any other attribute of the HTTP request (method, path, headers, etc.), without any extra "soft" conditions (conditions not related to the HTTP route), across all consumers of the API (unqualified rate limiting).

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-infra-rl
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    base: # user-defined name of the limit definition - future use for handling hierarchical policy attachment
+    - rates: # at least one rate limit required
+      - limit: 5
+        unit: second
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/assets/*"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-infra-rl/base"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-infra-rl/base == "1"
+  max_value: 5
+  seconds: 1
+  namespace: TBD
+
+
+ +

Example 2. Targeting specific route rules, with counter qualifiers, multiple rates per limit definition and "soft" conditions

+

In this example, a distinct limit will be associated ("bound") to each individual HTTPRouteRule of the targeted HTTPRoute, by using the routeSelectors field for selecting (or "sub-targeting") the HTTPRouteRule.

+

The following limit definitions will be bound to each HTTPRouteRule:
- /toys* → 50rpm, enforced per username (counter qualifier) and only in case the user is not an admin ("soft" condition).
- /assets/* → 5rpm, plus 100 requests per 12 hours

+

Each set of trigger matches in the RLP will be matched against all HTTPRouteRules whose sets of HTTPRouteMatches are supersets of the set of trigger matches in the RLP. Every HTTPRouteRule matched this way is bound to the limit definition that specifies the trigger. In case no HTTPRouteRule is found containing at least one HTTPRouteMatch identical to some set of matching rules of a particular limit definition, the limit definition is considered invalid and reported as such in the status of the RLP.
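Roughly, using the toystore HTTPRoute above: a trigger match { PathPrefix /toys } is a subset of both declared matches of the first HTTPRouteRule ({ GET + PathPrefix /toys } and { POST + PathPrefix /toys }), so it binds to that rule; a hypothetical trigger { GET + Exact /toys/special } is identical to none of the declared HTTPRouteMatches and would leave its limit definition unbound (see Example 3):

routeSelectors:
- matches:
  - path:
      type: PathPrefix
      value: "/toys"     # subset of {GET, /toys} and {POST, /toys} → binds to the 1st HTTPRouteRule
- matches:
  - path:
      type: Exact
      value: "/toys/special"
    method: GET          # no identical HTTPRouteMatch declared in the route → not bound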

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-per-endpoint
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    toys:
+      rates:
+      - limit: 50
+        duration: 1
+        unit: minute
+      counters:
+      - auth.identity.username
+      routeSelectors:
+      - matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
+        - path:
+            type: PathPrefix
+            value: "/toys"
+      when:
+      - selector: auth.identity.group
+        operator: neq
+        value: admin
+
+    assets:
+      rates:
+      - limit: 5
+        duration: 1
+        unit: minute
+      - limit: 100
+        duration: 12
+        unit: hour
+      routeSelectors:
+      - matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)
+        - path:
+            type: PathPrefix
+            value: "/assets/"
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-endpoint/toys"
+      descriptor_value: "1"
+  - metadata:
+      descriptor_key: "auth.identity.group"
+      metadata_key:
+        key: "envoy.filters.http.ext_authz"
+        path:
+        - segment:
+            key: "identity"
+        - segment:
+            key: "group"
+  - metadata:
+      descriptor_key: "auth.identity.username"
+      metadata_key:
+        key: "envoy.filters.http.ext_authz"
+        path:
+        - segment:
+            key: "identity"
+        - segment:
+            key: "username"
+- rules:
+  - paths: ["/assets/*"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-endpoint/assets"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-per-endpoint/toys == "1"
+  - auth.identity.group != "admin"
+  variables:
+  - auth.identity.username
+  max_value: 50
+  seconds: 60
+  namespace: kuadrant
+- conditions:
+  - toystore/toystore-per-endpoint/assets == "1"
+  max_value: 5
+  seconds: 60
+  namespace: kuadrant
+- conditions:
+  - toystore/toystore-per-endpoint/assets == "1"
+  max_value: 100
+  seconds: 43200 # 12 hours
+  namespace: kuadrant
+
+
+ +

Example 3. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch missing

+

Consider a 150rps rate limit set on requests to GET /toys/special. This specific application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, to avoid binding limits to HTTPRouteRules that are more permissive than the intended scope of the limit, the RateLimitPolicy controller requires trigger matches to find identical matching rules explicitly defined amongst the sets of HTTPRouteMatches of the potentially targeted HTTPRouteRules.

+

As a consequence, by simply defining a trigger match for GET /toys/special in the RLP, the GET|POST /toys* HTTPRouteRule will NOT be bound to the limit definition. To ensure the limit definition is properly bound to a routing rule that strictly covers the GET /toys/special application endpoint, the user first has to modify the spec of the HTTPRoute by adding an explicit HTTPRouteRule for this case:

+
apiVersion: gateway.networking.k8s.io/v1alpha2
+kind: HTTPRoute
+metadata:
+  name: toystore
+  namespace: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - "*.toystore.acme.com"
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: GET
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: POST
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/assets/"
+    backendRefs:
+    - name: toystore
+      port: 80
+    filters:
+    - type: ResponseHeaderModifier
+      responseHeaderModifier:
+        set:
+        - name: Cache-Control
+          value: "max-age=31536000, immutable"
+  - matches: # new (more specific) HTTPRouteRule added
+    - path:
+        type: Exact
+        value: "/toys/special"
+      method: GET
+    backendRefs:
+    - name: toystore
+      port: 80
+
+

After that, the RLP can target the new HTTPRouteRule strictly:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-special-toys
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    specialToys:
+      rates:
+      - limit: 150
+        unit: second
+      routeSelectors:
+      - matches: # matches the new HTTPRouteRule (i.e. GET /toys/special)
+        - path:
+            type: Exact
+            value: "/toys/special"
+          method: GET
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys/special"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-special-toys/specialToys"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-special-toys/specialToys == "1"
+  max_value: 150
+  seconds: 1
+  namespace: kuadrant
+
+
+ +

Example 4. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch found

+

This example is similar to Example 3. Consider the use case of setting a 150rps rate limit on requests to GET /toys*.

+

The targeted application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, unlike in the previous example – where, at first, no HTTPRouteRule included an explicit HTTPRouteMatch for GET /toys/special – in this example the HTTPRouteMatch for the targeted application endpoint GET /toys* does exist explicitly in one of the HTTPRouteRules, so the RateLimitPolicy controller would have no problem binding the limit definition to the HTTPRouteRule. That would nonetheless cause the unexpected behavior of the limit being triggered not strictly for GET /toys*, but also for POST /toys*.

+

To avoid extending the scope of the limit beyond what is desired, without resorting to extra "soft" conditions, the user must again modify the spec of the HTTPRoute, so that an exclusive HTTPRouteRule exists for the GET /toys* application endpoint:

+
apiVersion: gateway.networking.k8s.io/v1alpha2
+kind: HTTPRoute
+metadata:
+  name: toystore
+  namespace: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - "*.toystore.acme.com"
+  rules:
+  - matches: # first HTTPRouteRule split into two – one for GET /toys*, other for POST /toys*
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: GET
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: POST
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/assets/"
+    backendRefs:
+    - name: toystore
+      port: 80
+    filters:
+    - type: ResponseHeaderModifier
+      responseHeaderModifier:
+        set:
+        - name: Cache-Control
+          value: "max-age=31536000, immutable"
+
+

The RLP can then target the new HTTPRouteRule strictly:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toy-readers
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    toyReaders:
+      rates:
+      - limit: 150
+        unit: second
+      routeSelectors:
+      - matches: # matches the new more specific HTTPRouteRule (i.e. GET /toys*)
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: GET
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toy-readers/toyReaders"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - toystore/toy-readers/toyReaders == "1"
+  max_value: 150
+  seconds: 1
+  namespace: kuadrant
+
+
+ +

Example 5. One limit triggered by multiple HTTPRouteRules

+

In this example, both HTTPRouteRules, i.e. GET|POST /toys* and /assets/*, are targeted by the same limit of 50rpm per username.

+

Because the HTTPRoute has no other rule, this is technically equivalent to targeting the entire HTTPRoute and therefore similar to Example 1. However, if the HTTPRoute had other rules, or other rules were added later, this would ensure the limit applies only to the two original route rules.

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-per-user
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    toysOrAssetsPerUsername:
+      rates:
+      - limit: 50
+        duration: 1
+        unit: minute
+      counters:
+      - auth.identity.username
+      routeSelectors:
+      - matches:
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: GET
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: POST
+      - matches:
+        - path:
+            type: PathPrefix
+            value: "/assets/"
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/assets/*"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-user/toysOrAssetsPerUsername"
+      descriptor_value: "1"
+  - metadata:
+      descriptor_key: "auth.identity.username"
+      metadata_key:
+        key: "envoy.filters.http.ext_authz"
+        path:
+        - segment:
+            key: "identity"
+        - segment:
+            key: "username"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-per-user/toysOrAssetsPerUsername == "1"
+  variables:
+  - auth.identity.username
+  max_value: 50
+  seconds: 60
+  namespace: kuadrant
+
+
+ +

Example 6. Multiple limit definitions targeting the same HTTPRouteRule

+

In case multiple limit definitions target the same HTTPRouteRule, all of those limit definitions will be bound to the HTTPRouteRule. No limit "shadowing" will be enforced by the RLP controller. Nonetheless, due to how Limitador works today (i.e. the rule of the most restrictive limit wins), in some cases one of the multiple triggered limits may end up "shadowing" others, depending on further qualification of the counters and the actual rate-limit values.

+

E.g., the following RLP intends to set 50rps per username on GET /toys*, and 100rps on POST /toys* or /assets/*:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-per-endpoint
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    readToys:
+      rates:
+      - limit: 50
+        unit: second
+      counters:
+      - auth.identity.username
+      routeSelectors:
+      - matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: GET
+
+    postToysOrAssets:
+      rates:
+      - limit: 100
+        unit: second
+      routeSelectors:
+      - matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: POST
+      - matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)
+        - path:
+            type: PathPrefix
+            value: "/assets/"
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-endpoint/readToys"
+      descriptor_value: "1"
+  - metadata:
+      descriptor_key: "auth.identity.username"
+      metadata_key:
+        key: "envoy.filters.http.ext_authz"
+        path:
+        - segment:
+            key: "identity"
+        - segment:
+            key: "username"
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/assets/*"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-endpoint/readToys"
+      descriptor_value: "1"
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-endpoint/postToysOrAssets"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions: # actually applies to GET|POST /toys*
+  - toystore/toystore-per-endpoint/readToys == "1"
+  variables:
+  - auth.identity.username
+  max_value: 50
+  seconds: 1
+  namespace: kuadrant
+- conditions: # actually applies to GET|POST /toys* and /assets/*
+  - toystore/toystore-per-endpoint/postToysOrAssets == "1"
+  max_value: 100
+  seconds: 1
+  namespace: kuadrant
+
+
+ +

This example was only written this way to highlight that multiple limit definitions may select the same HTTPRouteRule. To avoid over-limiting between GET|POST /toys* – and thus ensure the originally intended limit definitions apply to each of these routes – the HTTPRouteRule should be split into two, as done in Example 4.

+

Example 7. Limits triggered for specific hostnames

+

In the previous examples, the limit definitions and therefore the counters were set indistinctly for all hostnames – i.e. no matter if the request is sent to games.toystore.acme.com or dolls.toystore.acme.com, the same counters are expected to be affected. In this example on the other hand, a 1000rpd rate limit is set for requests to /assets/* only when the hostname matches games.toystore.acme.com.

+

First, the user needs to edit the HTTPRoute to make the targeted hostname games.toystore.acme.com explicit:

+
apiVersion: gateway.networking.k8s.io/v1alpha2
+kind: HTTPRoute
+metadata:
+  name: toystore
+  namespace: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - "*.toystore.acme.com"
+  - games.toystore.acme.com # new (more specific) hostname added
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: GET
+    - path:
+        type: PathPrefix
+        value: "/toys"
+      method: POST
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches:
+    - path:
+        type: PathPrefix
+        value: "/assets/"
+    backendRefs:
+    - name: toystore
+      port: 80
+    filters:
+    - type: ResponseHeaderModifier
+      responseHeaderModifier:
+        set:
+        - name: Cache-Control
+          value: "max-age=31536000, immutable"
+
+

After that, the RLP can target specifically the newly added hostname:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-per-hostname
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    games:
+      rates:
+      - limit: 1000
+        unit: day
+      routeSelectors:
+      - matches:
+        - path:
+            type: PathPrefix
+            value: "/assets/"
+        hostnames:
+        - games.toystore.acme.com
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/assets/*"]
+    hosts: ["games.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-per-hostname/games"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-per-hostname/games == "1"
+  max_value: 1000
+  seconds: 86400 # 1 day
+  namespace: kuadrant
+
+
+ +

Example 8. Targeting the Gateway

+
+

Note: Additional meaning and context may be given to this use case in the future, when discussing defaults and overrides.

+
+

Targeting a Gateway is a shortcut to targeting all the individual HTTPRoutes referencing the gateway as parent. It nonetheless differs from Example 1 because, by targeting the gateway rather than an individual HTTPRoute, the RLP applies automatically to all HTTPRoutes pointing to the gateway, including routes created both before and after the RLP. Moreover, all those routes will share the same limit counters specified in the RLP.

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: gw-rl
+  namespace: istio-system
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: Gateway
+    name: istio-ingressgateway
+  limits:
+    base:
+    - rates:
+      - limit: 5
+        unit: second
+
+
+ How is this RLP implemented under the hood? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/assets/*"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "istio-system/gw-rl/base"
+      descriptor_value: "1"
+
+ +
limits:
+- conditions:
+  - istio-system/gw-rl/base == "1"
+  max_value: 5
+  seconds: 1
+  namespace: TBD
+
+
+ +

Comparison to current RateLimitPolicy

Current: 1:1 relation between Limit (the object) and the actual rate limit (the value) (spec.rateLimits.limits)
New: Rate limit becomes a detail of Limit, where each limit may define one or more rates (1:N) (spec.limits.<limit-name>.rates)
Reason:
  • Allows reusing when conditions and counters for groups of rate limits

Current: Parsed spec.rateLimits.limits.conditions field, directly exposing Limitador's API
New: Structured spec.limits.<limit-name>.when condition field composed of 3 well-defined properties: selector, operator and value

Current: spec.rateLimits.configurations as a list of "variables assignments" and direct exposure of Envoy's RL descriptor actions API
New: Descriptor actions composed from selectors used in the limit definitions (spec.limits.<limit-name>.when.selector and spec.limits.<limit-name>.counters) plus a fixed identifier of the route rules (spec.limits.<limit-name>.routeSelectors)
Reason:
  • Abstracts the Envoy-specific concepts of "actions" and "descriptors"
  • No risk of mismatching descriptor keys between "actions" and actual usage in the limits
  • No user-defined generic descriptors (e.g. "limited = 1")
  • Source value of the selectors defined from an implicit "context" data structure

Current: Key-value descriptors
New: Structured descriptors referring to a contextual well-known data structure

Current: Limitador conditions independent from the route rules
New: Artificial Limitador condition injected to bind routes and corresponding limits
Reason:
  • Ensures the limit is enforced only for the corresponding selected HTTPRouteRules

Current: translate(spec.rateLimits.rules) ⊂ httproute.spec.rules
New: spec.limits.<limit-name>.routeSelectors.matches ⊆ httproute.spec.rules.matches
Reason:
  • HTTPRouteRule selector (via HTTPRouteMatch subset)
  • Gateway API language
  • Preparation for inherited policies and defaults & overrides

Current: spec.rateLimits.limits.seconds
New: spec.limits.<limit-name>.rates.duration and spec.limits.<limit-name>.rates.unit
Reason:
  • Support for more units beyond seconds
  • duration: 1 by default (see the sketch after this table)

Current: spec.rateLimits.limits.variables
New: spec.limits.<limit-name>.counters
Reason:
  • Improved (more specific) naming

Current: spec.rateLimits.limits.maxValue
New: spec.limits.<limit-name>.rates.limit
Reason:
  • Improved (more generic) naming
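As an illustration of the seconds → duration/unit mapping (using the conversion implied by Example 2 above):

rates:
- limit: 100
  duration: 12  # defaults to 1 when omitted
  unit: hour
# equivalent Limitador rate: max_value: 100, seconds: 43200 (12 × 3600)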

Reference-level explanation

+

By completely dropping the configurations field from the RLP, composing the RL descriptor actions is now done based essentially on the selectors listed in the when conditions and in the counters, plus an artificial condition used to bind the HTTPRouteRules to the corresponding limits to trigger in Limitador.

+

The descriptor actions composed from the selectors in the "soft" when conditions and counter qualifiers originate from the direct references these selectors make to paths within a well-known data structure that stores information about the context (HTTP request and ext-authz filter). These selectors in "soft" when conditions and counter qualifiers are thereby called well-known selectors.

+

Other descriptor actions might be composed by the RLP controller to define additional RL conditions to bind HTTPRouteRules and corresponding limits.

+

Well-known selectors

+

Each selector used in a when condition or counter qualifier is a direct reference to a path within a well-known data structure that stores information about the context (L4 and L7 data of the original request handled by the proxy), as well as auth data (dynamic metadata occasionally exported by the external authorization filter and injected by the proxy into the rate-limit filter).

+

The well-known data structure for building RL descriptor actions resembles Authorino's "Authorization JSON", whose context component consists of Envoy's AttributeContext type of the external authorization API (marshalled as JSON). Compared to the more generic RateLimitRequest struct, the AttributeContext provides a more structured and arguably more intuitive relation between the data sources for the RL descriptor actions and the corresponding key names through which the values are referred within the RLP, in a context that predominantly serves HTTP applications.

+

To keep compatibility with the Envoy Rate Limit API, the well-known data structure can optionally be extended with the RateLimitRequest, thus resulting in the following final structure.

+
context: # Envoy's Ext-Authz `CheckRequest.AttributeContext` type
+  source:
+    address: 
+    service: 
+    
+  destination:
+    address: 
+    service: 
+    
+  request:
+    http:
+      host: 
+      path: 
+      method: 
+      headers: {}
+
+auth: # Dynamic metadata exported by the external authorization service
+
+ratelimit: # Envoy's Rate Limit `RateLimitRequest` type
+  domain:  # generated by the Kuadrant controller
+  descriptors: {} # descriptors configured by the user directly in the proxy (not generated by the Kuadrant controller, if allowed)
+  hitsAddend:  # only in case we want to allow users to refer to this value in a policy
+
+

Mechanics of generating RL descriptor actions

+

From the perspective of a user who writes an RLP, the selectors used in the when and counters fields are paths into the well-known data structure (see Well-known selectors). While designing a policy, the user intuitively pictures the well-known data structure and states each limit definition having in mind the possible values assumed by each of those paths in the data plane. For example,

+

The user story:

+
+

Each distinct user (auth.identity.username) can send no more than 1rps to the same HTTP path (context.request.http.path).

+
+

...materializes as the following RLP:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    dolls:
+      rates:
+      - limit: 1
+        unit: second
+      counters:
+      - auth.identity.username
+      - context.request.http.path
+
+

The following selectors are to be interpreted by the RLP controller:
- auth.identity.username
- context.request.http.path

+

The RLP controller uses a map to translate each selector into its corresponding descriptor action. (Roughly described:)

+
context.source.address    → source_cluster(...) # TBC
+context.source.service    → source_cluster(...) # TBC
+context.destination...    → destination_cluster(...)
+context.destination...    → destination_cluster(...)
+context.request.http.<X>  → request_headers(header_name: ":<X>")
+context.request...        → ...
+auth.<X>                  → metadata(key: "envoy.filters.http.ext_authz", path: <X>)
+ratelimit.domain          → <hostname>
+
+

...to yield effectively:

+
rate_limits:
+- actions:
+  - metadata:
+      descriptor_key: "auth.identity.username"
+      metadata_key:
+        key: "envoy.filters.http.ext_authz"
+        path:
+        - segment:
+            key: "identity"
+        - segment:
+            key: "username"
+  - request_headers:
+      descriptor_key: "context.request.http.path"
+      header_name: ":path"
+
+

Artificial Limitador condition for routeSelectors

+

For each limit definition that explicitly or implicitly defines a routeSelectors field, the RLP controller will generate an artificial Limitador condition that ensures the limit applies only when the filtered rules are honoured when serving the request. This can be implemented with a 2-step procedure:
1. generate a unique identifier for the limit - i.e. <policy-namespace>/<policy-name>/<limit-name>;
2. associate a generic_key type descriptor action with each HTTPRouteRule targeted by the limit – i.e. { descriptor_key: <unique identifier of the limit>, descriptor_value: "1" }.

+

For example, given the following RLP:

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-non-admin-users
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    toys:
+      routeSelectors:
+      - matches:
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: GET
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: POST
+      rates:
+      - limit: 50
+        duration: 1
+        unit: minute
+      when:
+      - selector: auth.identity.group
+        operator: neq
+        value: admin
+
+    assets:
+      routeSelectors:
+      - matches:
+        - path:
+            type: PathPrefix
+            value: "/assets/"
+      rates:
+      - limit: 5
+        duration: 1
+        unit: minute
+      when:
+      - selector: auth.identity.group
+        operator: neq
+        value: admin
+
+

Apart from the following descriptor action associated with both routes:

+
- metadata:
+    descriptor_key: "auth.identity.group"
+    metadata_key:
+      key: "envoy.filters.http.ext_authz"
+      path:
+      - segment:
+          key: "identity"
+      - segment:
+          key: "group"
+
+

...and its corresponding Limitador condition:

+
auth.identity.group != "admin"
+
+

The following additional artificial descriptor actions will be generated:

+
# associated with route rule GET|POST /toys*
+- generic_key:
+    descriptor_key: "toystore/toystore-non-admin-users/toys"
+    descriptor_value: "1"
+
+# associated with route rule /assets/*
+- generic_key:
+    descriptor_key: "toystore/toystore-non-admin-users/assets"
+    descriptor_value: "1"
+
+

...and their corresponding Limitador conditions.

+

In the end, the following Limitador configuration is yielded:

+
- conditions:
+  - toystore/toystore-non-admin-users/toys == "1"
+  - auth.identity.group != "admin"
+  max_value: 50
+  seconds: 60
+  namespace: kuadrant
+
+- conditions:
+  - toystore/toystore-non-admin-users/assets == "1"
+  - auth.identity.group != "admin"
+  max_value: 5
+  seconds: 60
+  namespace: kuadrant
+
+

Support in wasm shim and Envoy RL API

+

This proposal tries to keep compatibility with the Envoy Rate Limit API and does not introduce any new requirement that would otherwise require the wasm shim in order to be implemented.

+

In the case of implementation of this proposal in the wasm shim, all types of matchers supported by the HTTPRouteMatch type of Gateway API must also be supported in the rate_limit_policies.gateway_actions.rules field of the wasm plugin configuration. These include matchers based on path (prefix, exact), headers, query string parameters and method.

+

Drawbacks

+

HTTPRoute editing occasionally required
Rules that don't explicitly include a matcher wanted by the policy need to be duplicated, so that the matcher can be added as a special case for each of those rules.

+

Risk of over-targeting
Some HTTPRouteRules might need to be split into more specific ones so that a limit definition is not bound beyond its intended scope (e.g. targeting method: GET when the route matches method: POST|GET).

+

Prone to consistency issues
Typos and updates to the HTTPRoute can easily cause a mismatch and invalidate an RLP.

+

Two types of conditions – routeSelectors and when conditions
Although they have different meanings (evaluated in the gateway vs. evaluated in Limitador) and are meant for expressing different types of rules (HTTPRouteRule selectors vs. "soft" conditions based on attributes not related to the HTTP request), users might still perceive these as two ways of expressing conditions and find it difficult to understand at first that "soft" conditions do not accept expressions related to attributes of the HTTP request.

+

Rationale and alternatives

+

Targeting full HTTPRouteRules

+

Requiring users to specify full HTTPRouteRule matches in the RLP (as opposed to any subset of the HTTPRouteMatches of the targeted HTTPRouteRules – the current proposal) shares some of the drawbacks of this proposal, such as occasionally requiring HTTPRoute editing and proneness to consistency issues. If, on one hand, it eliminates the risk of over-targeting, on the other hand it does so at the cost of requiring excessively verbose policies, to the point of sometimes expecting the user to specify trigger matching rules that are significantly more than what is originally and strictly intended.

+

E.g.:

+

On an HTTPRoute that contains the following HTTPRouteRules (simplified representation):

+
{ header: x-canary=true } → backend-canary
+{ * } → backend-rest
+
+

Suppose the user wants to define an RLP that targets { method: POST }. First, the user needs to edit the HTTPRoute and duplicate the HTTPRouteRules:

+
{ header: x-canary=true, method: POST } → backend-canary
+{ header: x-canary=true } → backend-canary
+{ method: POST } → backend-rest
+{ * } → backend-rest
+
+

Then, the user needs to include the following trigger in the RLP so that only full HTTPRouteRules are specified:

+
{ header: x-canary=true, method: POST }
+{ method: POST }
+
+

The first matching rule of the trigger (i.e. { header: x-canary=true, method: POST }) is beyond the original user intent of targeting simply { method: POST }.

+

This issue can be even more concerning when targeting gateways with multiple child HTTPRoutes: all the HTTPRoutes would have to be fixed, and HTTPRouteRules covering all the cases in all HTTPRoutes would have to be listed in the policy targeting the gateway.

+

All limit definitions apply vs. Limit "shadowing"

+

The proposed binding between limit definitions and the HTTPRouteRules that trigger the limits was designed so that multiple limit definitions can be bound to the same HTTPRouteRule. That means that no limit definition will "shadow" another at the level of the RLP controller, i.e. the RLP controller will honour the intended binding according to the selectors specified in the policy.

+

Nonetheless, due to how Limitador works today – i.e. the rule of the most restrictive limit wins – and because all limit definitions are triggered by a given shared HTTPRouteRule, it might be the case that, across multiple triggered limits, one limit ends up "shadowing" others. However, that is by implementation of Limitador and therefore beyond the scope of the API.

+

An alternative to allowing all limit definitions to be bound to the same selected HTTPRouteRule would be enforcing that, amongst multiple limit definitions targeting the same HTTPRouteRule, only the first of those limit definitions is bound to it. This alternative approach would effectively cause the first limit to "shadow" any other on that particular HTTPRouteRule, by implementation of the RLP controller (i.e. at the API level).

+

While the first approach causes an artificial Limitador condition of the form <policy-ns>/<policy-name>/<limit-name> == "1", the alternative approach ("limit shadowing") could be implemented by generating a descriptor of the following form instead: ratelimit.binding == "<policy-ns>/<policy-name>/<limit-name>".

+

The downside of allowing multiple bindings to the same HTTPRouteRule is that all limits apply in Limitador, often making status reporting harder. The most restrictive rate-limit strategy implemented by Limitador might not be obvious to users who set multiple limit definitions, and it will require additional information to be reported back to the user about the actual status of the limit definitions stated in an RLP. On the other hand, it enables use cases where limit definitions that differ in counter qualifiers, additional "soft" conditions, or actual rate-limit values are triggered by the same HTTPRouteRule.

+

Writing "soft" when conditions based on attributes of the HTTP request

+

As a first step, users will not be able to write "soft" when conditions to selectively apply rate-limit definitions based on attributes of the HTTP request that could otherwise be specified using the routeSelectors field of the RLP.

+

On one hand, using when conditions for route filtering would make it easy to define limits when the HTTPRoute cannot be modified to include the special rule. On the other hand, users would miss information in the status. An HTTPRouteRule for GET|POST /toys*, for example, targeted with an additional "soft" when condition specifying that the method must equal GET and the path exactly equal /toys/special (see Example 3), would be reported as rate-limited, lacking the detail that this in fact applies only to GET /toys/special. For small deployments this might be acceptable; however, it would easily explode into an unmanageable number of cases for deployments with more than a few limit definitions and HTTPRouteRules.

+

Moreover, by not specifying a stricter HTTPRouteRule for GET /toys/special, the RLP controller would bind the limit definition to other rules that would cause the rate-limit filter to invoke the rate-limit service (Limitador) for cases other than strictly GET /toys/special. Even though the rate limits would still be ensured to apply in Limitador only for GET /toys/special (due to the presence of a hypothetical "soft" when condition), an extra no-op hop to the rate-limit service would occur. This is avoided with the currently imposed limitation.

+

Example of "soft" when conditions for rate limit based on attributes of the HTTP request (NOT SUPPORTED):

+
apiVersion: kuadrant.io/v2beta1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-special-toys
+  namespace: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    specialToys:
+      rates:
+      - limit: 150
+        unit: second
+      routeSelectors:
+      - matches: # matches the original HTTPRouteRule GET|POST /toys*
+        - path:
+            type: PathPrefix
+            value: "/toys"
+          method: GET
+      when:
+      - selector: context.request.http.method # cannot omit this selector or POST /toys/special would also be rate limited
+        operator: eq
+        value: GET
+      - selector: context.request.http.path
+        operator: eq
+        value: /toys/special
+
+
+ How would this RLP be implemented under the hood if it were supported? + +
gateway_actions:
+- rules:
+  - paths: ["/toys*"]
+    methods: ["GET"]
+    hosts: ["*.toystore.acme.com"]
+  - paths: ["/toys*"]
+    methods: ["POST"]
+    hosts: ["*.toystore.acme.com"]
+  configurations:
+  - generic_key:
+      descriptor_key: "toystore/toystore-special-toys/specialToys"
+      descriptor_value: "1"
+  - request_headers:
+      descriptor_key: "context.request.http.method"
+      header_name: ":method"
+  - request_headers:
+      descriptor_key: "context.request.http.path"
+      header_name: ":path"
+
+ +
limits:
+- conditions:
+  - toystore/toystore-special-toys/specialToys == "1"
+  - context.request.http.method == "GET"
+  - context.request.http.path == "/toys/special"
+  max_value: 150
+  seconds: 1
+  namespace: kuadrant
+
+
+ +

Possible variations for the selectors (conditions and counter qualifiers)

+

The main drivers behind the proposed design for the selectors (conditions and counter qualifiers) – based on (i) structured condition expressions composed of the fields selector, operator and value, and (ii) when conditions and counters separated into two distinct fields (variation "C" below) – are:
1. consistency with the Authorino AuthConfig API, which also specifies when conditions expressed in selector, operator and value fields;
2. explicit user intent, without subtle distinctions of meaning based on the presence of optional fields.

+

Nonetheless, here are a few alternative variations to consider:

Variation A – structured condition expressions, single field:

selectors:
- selector: context.request.http.method
  operator: eq
  value: GET
- selector: auth.identity.username

Variation B – parsed condition expressions, single field:

selectors:
- context.request.http.method == "GET"
- auth.identity.username

Variation C (⭐️) – structured condition expressions, distinct fields:

when:
- selector: context.request.http.method
  operator: eq
  value: GET
counters:
- auth.identity.username

Variation D – parsed condition expressions, distinct fields:

when:
- context.request.http.method == "GET"
counters:
- auth.identity.username

⭐️ Variation adopted for the examples and (so far) final design proposal.

+

Prior art

+

Most implementations currently orbiting around Gateway API (e.g. Istio, Envoy Gateway, etc.) for added RL functionality seem to have been leaning more towards the direct route extension pattern than towards Policy Attachment. That might be an option particularly suitable for gateway implementations (gateway providers) and for those aiming to avoid dealing with defaults and overrides.

+

Unresolved questions

+
    +
  1. In case a limit definition lists route selectors such that some can be bound to HTTPRouteRules and some cannot (see Example 6), do we bind the valid route selectors and ignore the invalid ones, or is the limit definition invalid altogether and bound to no HTTPRouteRule at all?
     A: By allowing multiple limit definitions to target the same HTTPRouteRule, the issue stated here will occur less often. For the other cases, where a limit definition still fails to select an HTTPRouteRule (e.g. due to mismatching trigger matches), the limit definition is not considered invalid. Possibly the limit definition is instead considered "stale" (or "orphan"), i.e. not bound to any HTTPRouteRule.
  2. What should we fill domain/namespace with, if no longer the hostname? This can be useful for multi-tenancy.
     A: For now, the domain/namespace field of the RL configuration (Envoy and Limitador ends) will be filled with a fixed (configurable) string (e.g. "kuadrant"). This can change in the future to better support multi-tenancy and/or other use cases where a total sharding of the limit definitions within the same instance of Kuadrant is desired.
  3. How do we support lists of hostnames in Limitador conditions (single counter)? Should we open an issue for a new in operator?
     A: Not needed. The hostnames must exist in the targeted object explicitly, just like any other routing rules intended to be targeted by a limit definition. By setting the explicit hostname in the targeted network object (Gateway or HTTPRoute), the hostname also becomes a routing rule available for "hard" trigger configuration.
  4. What "soft" condition operators do we need to support (e.g. eq, neq, exists, nexists, matches)?
  5. Do we need a special field to define shared counters across clusters/Limitador instances, or is that to be solved at another layer (Limitador, Kuadrant CRDs, MCTC)?
+

Future possibilities

+
    +
  • Port routeSelectors and the semantics around it to the AuthPolicy API (aka "KAP v2").
  • Defaults and overrides, either along the lines of architecture#4 or architecture#10.

Well-known Attributes

+ +

Summary

+

Define a well-known structure for users to declare request data selectors in their RateLimitPolicies and AuthPolicies. This structure is referred to as the Kuadrant Well-known Attributes.

+

Motivation

+

The well-known attributes let users write policy rules – conditions and, in general, dynamic values that refer to attributes in the data plane – in a concise and seamless way.

+

Decoupled from the policy CRDs, the well-known attributes:
1. define a common language for referring to values of the data plane in the Kuadrant policies;
2. allow dynamically evolving the policy APIs regarding how they admit references to data plane attributes;
3. encompass all common and component-specific selectors for data plane attributes;
4. have a single and unified specification, although this specification may occasionally link to additional, component-specific, external docs.

+

Guide-level explanation

+

One who writes a Kuadrant policy and wants to build policy constructs such as conditions, qualifiers, variables, etc., based on dynamic values of the data plane, must refer to the attributes that carry those values using the declarative language of Kuadrant's Well-known Attributes.

+

A dynamic data plane value is typically a value of an attribute of the request or an Envoy Dynamic Metadata entry. It can be a value of the outer request being handled by the API gateway or proxy that is managed by Kuadrant ("context request"), or an attribute of the direct request to the Kuadrant component that delivers the functionality in the data plane (rate-limiting or external auth).

+

A Well-known Selector is a construct of a policy API whose value contains a direct reference to a well-known attribute. The language of the well-known attributes and therefore what one would declare within a well-known selector resembles a JSON path for navigating a possibly complex JSON object.
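For instance, given the hypothetical data below, a selector resolves to the value found at the end of the corresponding path:

# Data (sketch):   {"auth": {"identity": {"group": "admin"}}}
# Selector:        auth.identity.group
# Resolved value:  "admin"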

+

Example 1. Well-known selector used in a condition

+
apiGroup: examples.kuadrant.io
+kind: PaintPolicy
+spec:
+  rules:
+  - when:
+    - selector: auth.identity.group
+      operator: eq
+      value: admin
+    color: red
+
+

In the example, auth.identity.group is a well-known selector of an attribute group, known to be injected by the external authorization service (auth) to describe the group the user (identity) belongs to. In the data plane, whenever this value is equal to admin, the abstract PaintPolicy policy states that the traffic must be painted red.

+

Example 2. Well-known selector used in a variable

+
apiGroup: examples.kuadrant.io
+kind: PaintPolicy
+spec:
+  rules:
+  - color: red
+    alpha:
+      dynamic: request.headers.x-color-alpha
+
+

In the example, request.headers.x-color-alpha is a selector of a well-known attribute request.headers that gives access to the headers of the context HTTP request. The selector retrieves the value of the x-color-alpha request header to dynamically fill the alpha property of the abstract PaintPolicy policy at each request.

+

Reference-level explanation

+

The Well-known Attributes are a compilation inspired by some of the Envoy attributes and Authorino's Authorization JSON and its related JSON paths.

+

From the Envoy attributes, only attributes that are available before establishing a connection with the upstream server qualify as Kuadrant well-known attributes. This excludes attributes such as the response attributes and the upstream attributes.

+

As for the attributes inherited from Authorino, these are either based on Envoy's AttributeContext type of the external auth request API or on internal types defined by Authorino to fulfill the Auth Pipeline.

+

These two subsets of attributes are unified into a single set of well-known attributes. For each attribute that exists in both subsets, the name specified in the Envoy attributes subset prevails. An example is request.id (referring to the ID of the request) superseding context.request.http.id (the name through which the same attribute is referred in an Authorino AuthConfig).
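For instance, a limit definition that formerly qualified counters by context.request.http.id would, under the unified set, use the Envoy-style name (hypothetical snippet):

counters:
- request.id # unified name; supersedes context.request.http.id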

+


+

The next sections specify the well-known attributes organized in the following groups:
- Request attributes
- Connection attributes
- Metadata and filter state attributes
- Auth attributes
- Rate-limit attributes

+

Request attributes

+

The following attributes are related to the context HTTP request that is handled by the API gateway or proxy managed by Kuadrant.

| Attribute | Type | Description | Auth | RL |
|---|---|---|---|---|
| request.id | String | Request ID corresponding to the x-request-id header value | | |
| request.time | Timestamp | Time of the first byte received | | |
| request.protocol | String | Request protocol (“HTTP/1.0”, “HTTP/1.1”, “HTTP/2”, or “HTTP/3”) | | |
| request.scheme | String | The scheme portion of the URL, e.g. “http” | | |
| request.host | String | The host portion of the URL | | |
| request.method | String | Request method, e.g. “GET” | | |
| request.path | String | The path portion of the URL | | |
| request.url_path | String | The path portion of the URL without the query string | | |
| request.query | String | The query portion of the URL in the format of “name1=value1&name2=value2” | | |
| request.headers | Map<String, String> | All request headers indexed by the lower-cased header name | | |
| request.referer | String | Referer request header | | |
| request.useragent | String | User agent request header | | |
| request.size | Number | The HTTP request size in bytes. If unknown, it must be -1 | | |
| request.body | String | The HTTP request body. (Disabled by default. Requires additional proxy configuration to enable it.) | | |
| request.raw_body | Array<Number> | The HTTP request body in bytes. This is sometimes used instead of body, depending on the proxy configuration. | | |
| request.context_extensions | Map<String, String> | Analogous to request.headers, however these contents are not sent to the upstream server. It provides an extension mechanism for sending additional information to the auth service without modifying the proto definition. It maps to the internal opaque context in the proxy filter chain. (Requires additional configuration in the proxy.) | | |
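
In a RateLimitPolicy, for example, these attributes could back the conditions and counters of a limit, along the lines of the spec.limits shape proposed in this RFC (a sketch; field names may differ in the final API, and the x-user-id header is hypothetical):

```yaml
limits:
  get-requests-per-user:
    rates:
      - limit: 50
        duration: 1
        unit: minute
    when:
      - selector: request.method   # well-known request attribute
        operator: eq
        value: GET
    counters:
      - request.headers.x-user-id  # hypothetical header carrying a user ID
```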

Connection attributes

The following attributes are available once the downstream connection with the API gateway or proxy managed by Kuadrant is established. They apply to HTTP (L7) requests as well as to proxied connections limited to L3/L4.

| Attribute | Type | Description | Auth | RL |
|---|---|---|---|---|
| source.address | String | Downstream connection remote address | | |
| source.port | Number | Downstream connection remote port | | |
| source.service | String | The canonical service name of the peer | | |
| source.labels | Map<String, String> | The labels associated with the peer. These could be pod labels for Kubernetes or tags for VMs. The source of the labels could be an X.509 certificate or other configuration. | | |
| source.principal | String | The authenticated identity of this peer. If an X.509 certificate is used to assert the identity in the proxy, this field is sourced from “URI Subject Alternative Names“, “DNS Subject Alternate Names“ or “Subject“, in that order. The format is issuer specific – e.g. SPIFFE format is spiffe://trust-domain/path, Google account format is https://accounts.google.com/{userid}. | | |
| source.certificate | String | The X.509 certificate used to authenticate the identity of this peer. When present, the certificate contents are encoded in URL-encoded PEM format. | | |
| destination.address | String | Downstream connection local address | | |
| destination.port | Number | Downstream connection local port | | |
| destination.service | String | The canonical service name of the peer | | |
| destination.labels | Map<String, String> | The labels associated with the peer. These could be pod labels for Kubernetes or tags for VMs. The source of the labels could be an X.509 certificate or other configuration. | | |
| destination.principal | String | The authenticated identity of this peer. If an X.509 certificate is used to assert the identity in the proxy, this field is sourced from “URI Subject Alternative Names“, “DNS Subject Alternate Names“ or “Subject“, in that order. The format is issuer specific – e.g. SPIFFE format is spiffe://trust-domain/path, Google account format is https://accounts.google.com/{userid}. | | |
| destination.certificate | String | The X.509 certificate used to authenticate the identity of this peer. When present, the certificate contents are encoded in URL-encoded PEM format. | | |
| connection.id | Number | Downstream connection ID | | |
| connection.mtls | Boolean | Indicates whether TLS is applied to the downstream connection and the peer certificate is presented | | |
| connection.requested_server_name | String | Requested server name in the downstream TLS connection | | |
| connection.tls_session.sni | String | SNI used for the TLS session | | |
| connection.tls_version | String | TLS version of the downstream TLS connection | | |
| connection.subject_local_certificate | String | The subject field of the local certificate in the downstream TLS connection | | |
| connection.subject_peer_certificate | String | The subject field of the peer certificate in the downstream TLS connection | | |
| connection.dns_san_local_certificate | String | The first DNS entry in the SAN field of the local certificate in the downstream TLS connection | | |
| connection.dns_san_peer_certificate | String | The first DNS entry in the SAN field of the peer certificate in the downstream TLS connection | | |
| connection.uri_san_local_certificate | String | The first URI entry in the SAN field of the local certificate in the downstream TLS connection | | |
| connection.uri_san_peer_certificate | String | The first URI entry in the SAN field of the peer certificate in the downstream TLS connection | | |
| connection.sha256_peer_certificate_digest | String | SHA256 digest of the peer certificate in the downstream TLS connection, if present | | |
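
A limit keyed on the client address, for instance, could use source.address as a counter (same hedged shape as in the earlier sketch):

```yaml
limits:
  per-client-address:
    rates:
      - limit: 100
        duration: 1
        unit: minute
    counters:
      - source.address  # one counter per downstream remote address
```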

Metadata and filter state attributes


The following attributes are related to the Envoy proxy filter chain. They include metadata exported by the proxy throughout the filters and information about the states of the filters themselves.

| Attribute | Type | Description | Auth | RL |
|---|---|---|---|---|
| metadata | Metadata | Dynamic request metadata | | |
| filter_state | Map<String, String> | Mapping from a filter state name to its serialized string value | | |

Auth attributes

The following attributes are exclusive to the external auth service (Authorino).

| Attribute | Type | Description | Auth | RL |
|---|---|---|---|---|
| auth.identity | Any | Single resolved identity object, post-identity verification | | |
| auth.metadata | Map<String, Any> | External metadata fetched | | |
| auth.authorization | Map<String, Any> | Authorization results resolved by each authorization rule, access granted only | | |
| auth.response | Map<String, Any> | Response objects exported by the auth service post-access granted | | |
| auth.callbacks | Map<String, Any> | Response objects returned by the callback requests issued by the auth service | | |
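
For instance, under the spec.limits shape proposed in this RFC, a rate limit counter could be qualified by the resolved identity (a sketch; field names may differ in the final API):

```yaml
limits:
  per-user:
    rates:
      - limit: 100
        duration: 1
        unit: minute
    counters:
      - auth.identity.username  # one counter per authenticated user
```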

The auth service also supports modifying selected values by chaining modifiers in the path.
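
For example, a selector along the lines of Authorino's documented string modifiers (a sketch; see Authorino's docs for the exact modifier syntax) could normalize a username taken from the resolved identity object:

```yaml
# Sketch of Authorino-style modifier chaining: lower-case the resolved username
selector: auth.identity.preferred_username.@case:lower
```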


Rate-limit attributes

The following attributes are exclusive to the rate-limiting service (Limitador).

| Attribute | Type | Description | Auth | RL |
|---|---|---|---|---|
| ratelimit.domain | String | The rate limit domain. This enables the configuration to be namespaced per application (multi-tenancy). | | |
| ratelimit.hits_addend | Number | Specifies the number of hits a request adds to the matched limit. Fixed value: `1`. Reserved for future usage. | | |

Drawbacks

Decoupling the well-known attributes and the selector language from the individual policy CRDs is what makes this approach flexible and common across the components (rate-limiting and auth). However, it is less structured, and it introduces another syntax for users to become familiar with.


This additional language competes with the language of the route selectors (RFC 0001), based on Gateway API's HTTPRouteMatch type.


Being "soft-coded" in the policy specs (as opposed to a hard-coded sub-structure inside each policy type) does not mean the language is completely decoupled from the implementation in the control plane and/or intermediary data plane components. Although many attributes can be supported almost as a pass-through (from being used in a selector in a policy to a corresponding value requested by the wasm-shim to its host), that is not always the case. Some translation may be required for components not integrated via wasm-shim (e.g. Authorino), as well as for components integrated via wasm-shim (e.g. Limitador) in special cases of composite or abstract well-known attributes (i.e. attributes not available as-is via ABI, e.g. auth.identity in a RLP). Either way, some validation of the values introduced by users in the selectors may be needed at some point in the control plane, thus arguably requiring a level of awareness of, and coupling with, the well-known selectors specification in the control plane (policy controllers) or intermediary data plane (wasm-shim) components.


Rationale and alternatives


As an alternative to JSON path-like selectors based on a well-known structure that induces the proposed language of well-known attributes, these same attributes could be defined as sub-types of each policy CRD. The Golang packages defining the common attributes across CRDs could be shared by the policy type definitions to reduce repetition. However, that approach would possibly involve a staggering number of new type definitions to cover all the cases for all the groups of attributes to be supported. These are constructs that not only need to be understood by the policy controllers, but also known by the user who writes a policy.


Additionally, all attributes, including new attributes occasionally introduced by Envoy and made available to the wasm-shim via ABI, would always require translation from the user-level abstraction in which they are represented in a policy to the actual form in which they are used in the wasm-shim configuration and in Authorino AuthConfigs.


Not implementing this proposal and keeping the current state of things means little consistency between these common constructs for rules and conditions with respect to how they are represented in each type of policy. This lack of consistency directly increases the overhead users face to learn how to interact with Kuadrant and write different kinds of policies, as well as the maintainers' work of coding for policy validation and reconciliation of data plane configurations.


Prior art


Authorino's dynamic JSON paths, related to Authorino's Authorization JSON and used in when conditions and inside multiple other constructs of the AuthConfig, are an example of a feature with a very similar approach to the one proposed here.


Arguably, Authorino's perceived flexibility would not have been possible without the Authorization JSON selectors. Users can write quite sophisticated policy rules (conditions, variable references, etc.) by leveraging those dynamic selectors. Because they are backed by JSON-based machinery in the code, Authorino's selectors show very little variation, and in some cases none at all, compared to Open Policy Agent's Rego policy language, which is often used side by side in the same AuthConfigs.


Authorino's Authorization JSON selectors are, on the one hand, more restricted to the structure of the CheckRequest payload (context.* attributes). At the same time, they are very open in the part associated with the internal attributes built along the Auth Pipeline (i.e. auth.* attributes). That makes Authorino's Authorization JSON selectors more limited compared to the Envoy attributes made available to the wasm-shim via ABI, but also harder to validate. In some cases, such as deep references into objects fetched from external sources of metadata, resolved OPA objects, JWT claims, etc., it is impossible to validate for correct references.


Another lesson learned from Authorino's Authorization JSON selectors is that they depend substantially on the so-called "modifiers". Many use cases involving parsing and breaking down attributes that are originally available in a more complex form would not be possible without the modifiers. Examples of such cases are: extracting portions of the path and/or query string parameters (e.g. collection and resource identifiers), applying translations of HTTP verbs into corresponding operations, base64-decoding values from the context HTTP request, amongst several others.
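
For instance, extracting a resource identifier from the request path can lean on Authorino's @extract modifier (a sketch; splitting a path like /widgets/123 on "/" and taking position 2 yields 123):

```yaml
# Sketch of an Authorino-style @extract modifier on a path attribute
selector: 'context.request.http.path.@extract:{"sep":"/","pos":2}'
```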


Unresolved questions

1. How to deal with the differences regarding the availability and data types of the attributes across clients/hosts?

2. Can we make more attributes that are currently available to only one of the components common to both?

3. Will we need some kind of global support for modifiers (functions) in the well-known selectors, or can those continue to be an Authorino-only feature?

4. Does Authorino, which is more strict regarding the data structure that induces the selectors, need to implement this specification, or could/should it keep its current selectors, with a translation performed by the AuthPolicy controller?

Future possibilities

1. Extend with more well-known attributes that abstract common patterns and/or serve rather opinionated use cases. Examples:

    - auth.* attributes supported in the rate limit service
    - request.authenticated
    - request.operation.(read|write)
    - request.param.my-param
    - connection.secure
    - Other Envoy attributes:

    Wasm attributes

    | Attribute | Type | Description | Auth | RL |
    |---|---|---|---|---|
    | wasm.plugin_name | String | Plugin name | | |
    | wasm.plugin_root_id | String | Plugin root ID | | |
    | wasm.plugin_vm_id | String | Plugin VM ID | | |
    | wasm.node | Node | Local node description | | |
    | wasm.cluster_name | String | Upstream cluster name | | |
    | wasm.cluster_metadata | Metadata | Upstream cluster metadata | | |
    | wasm.listener_direction | Number | Enumeration value of the listener traffic direction | | |
    | wasm.listener_metadata | Metadata | Listener metadata | | |
    | wasm.route_name | String | Route name | | |
    | wasm.route_metadata | Metadata | Route metadata | | |
    | wasm.upstream_host_metadata | Metadata | Upstream host metadata | | |

    Proxy configuration attributes

    | Attribute | Type | Description | Auth | RL |
    |---|---|---|---|---|
    | xds.cluster_name | String | Upstream cluster name | | |
    | xds.cluster_metadata | Metadata | Upstream cluster metadata | | |
    | xds.route_name | String | Route name | | |
    | xds.route_metadata | Metadata | Route metadata | | |
    | xds.upstream_host_metadata | Metadata | Upstream host metadata | | |
    | xds.filter_chain_name | String | Listener filter chain name | | |

2. Add some support for value modifiers (functions), along the lines of Authorino's JSON path modifiers and/or Envoy attributes' path expressions.

Authorino Operator


A Kubernetes Operator to manage Authorino instances.


Installation

The Operator can be installed by applying the manifests to the Kubernetes cluster or by using the Operator Lifecycle Manager (OLM).


Applying the manifests to the cluster

1. Create the namespace for the Operator

```sh
kubectl create namespace authorino-operator
```

2. Install the Operator manifests

```sh
make install
```

3. Deploy the Operator

```sh
make deploy
```

Tip: Deploy a custom image of the Operator

To deploy an image of the Operator other than the default quay.io/kuadrant/authorino-operator:latest, specify it by setting the OPERATOR_IMAGE parameter. E.g.:

```sh
make deploy OPERATOR_IMAGE=authorino-operator:local
```

Installing via OLM

To install the Operator using the Operator Lifecycle Manager, you need to make the Operator CSVs available in the cluster by creating a CatalogSource resource.


The bundle and catalog images of the Operator are available in Quay.io:

- Bundle: quay.io/kuadrant/authorino-operator-bundle
- Catalog: quay.io/kuadrant/authorino-operator-catalog
1. Create the namespace for the Operator

```sh
kubectl create namespace authorino-operator
```

2. Create the CatalogSource resource pointing to one of the images from the Operator's catalog repo:

```sh
kubectl -n authorino-operator apply -f -<<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: authorino-operator
spec:
  sourceType: grpc
  image: quay.io/kuadrant/authorino-operator-catalog:latest
  displayName: Authorino Operator
EOF
```

Requesting an Authorino instance


Once the Operator is up and running, you can request instances of Authorino by creating Authorino CRs. E.g.:

```sh
kubectl -n default apply -f -<<EOF
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  listener:
    tls:
      enabled: false
  oidcServer:
    tls:
      enabled: false
EOF
```

The Authorino Custom Resource Definition (CRD)

API to install, manage and configure Authorino authorization services.

Each Authorino Custom Resource (CR) represents an instance of Authorino deployed to the cluster. The Authorino Operator reconciles the state of the Kubernetes Deployment and associated resources, based on the state of the CR.


API Specification

| Field | Type | Description | Required/Default |
|---|---|---|---|
| spec | AuthorinoSpec | Specification of the Authorino deployment. | Required |

AuthorinoSpec

| Field | Type | Description | Required/Default |
|---|---|---|---|
| clusterWide | Boolean | Sets the Authorino instance's watching scope – cluster-wide or namespaced. | Default: true (cluster-wide) |
| authConfigLabelSelectors | String | Label selectors used by the Authorino instance to filter AuthConfig-related reconciliation events. | Default: empty (all AuthConfigs are watched) |
| secretLabelSelectors | String | Label selectors used by the Authorino instance to filter Secret-related reconciliation events (API key and mTLS authentication methods). | Default: authorino.kuadrant.io/managed-by=authorino |
| replicas | Integer | Number of replicas desired for the Authorino instance. Values greater than 1 enable leader election in the Authorino service, where the leader updates the statuses of the AuthConfig CRs. | Default: 1 |
| evaluatorCacheSize | Integer | Cache size (in megabytes) of each Authorino evaluator (when enabled in an AuthConfig). | Default: 1 |
| image | String | Authorino image to be deployed (for dev/testing purposes only). | Default: quay.io/kuadrant/authorino:latest |
| imagePullPolicy | String | Sets the imagePullPolicy of the Authorino Deployment (for dev/testing purposes only). | Default: k8s default |
| logLevel | String | Defines the level of log you want to enable in Authorino (debug, info and error). | Default: info |
| logMode | String | Defines the log mode in Authorino (development or production). | Default: production |
| listener | Listener | Specification of the authorization service (gRPC interface). | Required |
| oidcServer | OIDCServer | Specification of the OIDC service. | Required |
| tracing | Tracing | Configuration of the OpenTelemetry tracing exporter. | Optional |
| metrics | Metrics | Configuration of the metrics server (port, level). | Optional |
| healthz | Healthz | Configuration of the health/readiness probe (port). | Optional |
| volumes | VolumesSpec | Additional volumes to be mounted in the Authorino pods. | Optional |

Listener

Configuration of the authorization server – gRPC and raw HTTP interfaces.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| port | Integer | Port number of the authorization server (gRPC interface). | DEPRECATED: use ports instead |
| ports | Ports | Port numbers of the authorization server (gRPC and raw HTTP interfaces). | Optional |
| tls | TLS | TLS configuration of the authorization server (gRPC and HTTP interfaces). | Required |
| timeout | Integer | Timeout of the external authorization request (in milliseconds), controlled internally by the authorization server. | Default: 0 (disabled) |

OIDCServer

Configuration of the OIDC Discovery server for Festival Wristband tokens.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| port | Integer | Port number of the OIDC Discovery server for Festival Wristband tokens. | Default: 8083 |
| tls | TLS | TLS configuration of the OIDC Discovery server for Festival Wristband tokens. | Required |

TLS

TLS configuration of the server. Appears in listener and oidcServer.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| enabled | Boolean | Whether TLS is enabled or disabled for the server. | Default: true |
| certSecretRef | LocalObjectReference | The reference to the secret that contains the TLS certificates tls.crt and tls.key. | Required when enabled: true |
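
A secret referenced by certSecretRef is a standard Kubernetes TLS secret, e.g. (the name and data below are placeholders):

```yaml
# Hypothetical example of a TLS secret referenced by certSecretRef
apiVersion: v1
kind: Secret
metadata:
  name: authorino-server-cert
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
```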

Ports


Port numbers of the authorization server.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| grpc | Integer | Port number of the gRPC interface of the authorization server. Set to 0 to disable this interface. | Default: 50001 |
| http | Integer | Port number of the raw HTTP interface of the authorization server. Set to 0 to disable this interface. | Default: 5001 |

Tracing


Configuration of the OpenTelemetry tracing exporter.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| endpoint | String | Full endpoint of the OpenTelemetry tracing collector service (e.g. http://jaeger:14268/api/traces). | Required |
| tags | Map | Key-value map of fixed tags to add to all OpenTelemetry traces emitted by Authorino. | Optional |
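
In the Authorino CR, these fields would be set roughly as follows (a sketch; the endpoint is the example from the table above and the tag is hypothetical):

```yaml
tracing:
  endpoint: http://jaeger:14268/api/traces
  tags:
    environment: production  # hypothetical fixed tag added to all traces
```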

Metrics


Configuration of the metrics server.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| port | Integer | Port number of the metrics server. | Default: 8080 |
| deep | Boolean | Enable/disable metrics at the level of each evaluator config (if requested in the AuthConfig), exported by the metrics server. | Default: false |

Healthz


Configuration of the health/readiness probe (port).

| Field | Type | Description | Required/Default |
|---|---|---|---|
| port | Integer | Port number of the health/readiness probe. | Default: 8081 |

VolumesSpec

Additional volumes to project in the Authorino pods. Useful for validating TLS self-signed certificates of external services that Authorino must contact at runtime.

| Field | Type | Description | Required/Default |
|---|---|---|---|
| items | []VolumeSpec | List of additional volume items to project. | Optional |
| defaultMode | Integer | Mode bits used to set permissions on the files. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. | Optional |

VolumeSpec

| Field | Type | Description | Required/Default |
|---|---|---|---|
| name | String | Name of the volume and volume mount within the Deployment. It must be unique in the CR. | Optional |
| mountPath | String | Absolute path where to mount all the items. | Required |
| configMaps | []String | List of Kubernetes ConfigMap names to mount. | Required exactly one of: configMaps, secrets |
| secrets | []String | List of Kubernetes Secret names to mount. | Required exactly one of: configMaps, secrets |
| items | []KeyToPath | Mount details for selecting specific ConfigMap or Secret entries. | Optional |

Full example

```yaml
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  clusterWide: true
  authConfigLabelSelectors: environment=production
  secretLabelSelectors: authorino.kuadrant.io/component=authorino,environment=production

  replicas: 2

  evaluatorCacheSize: 2 # mb

  image: quay.io/kuadrant/authorino:latest
  imagePullPolicy: Always

  logLevel: debug
  logMode: production

  listener:
    ports:
      grpc: 50001
      http: 5001
    tls:
      enabled: true
      certSecretRef:
        name: authorino-server-cert # secret must contain `tls.crt` and `tls.key` entries

  oidcServer:
    port: 8083
    tls:
      enabled: true
      certSecretRef:
        name: authorino-oidc-server-cert # secret must contain `tls.crt` and `tls.key` entries

  metrics:
    port: 8080
    deep: true

  volumes:
    items:
      - name: keycloak-tls-cert
        mountPath: /etc/ssl/certs
        configMaps:
          - keycloak-tls-cert
        items: # details to mount the k8s configmap in the authorino pods
          - key: keycloak.crt
            path: keycloak.crt
    defaultMode: 420
```
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/architecture.gif b/authorino/docs/architecture.gif new file mode 100644 index 00000000..ee5f30fe Binary files /dev/null and b/authorino/docs/architecture.gif differ diff --git a/authorino/docs/architecture/index.html b/authorino/docs/architecture/index.html new file mode 100644 index 00000000..8b63265e --- /dev/null +++ b/authorino/docs/architecture/index.html @@ -0,0 +1,2695 @@ + + + + + + + + + + + + + + + + + + + + + + + + Architecture - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Architecture

+ +

Overview

+

Architecture

+

A few concepts are key to understanding Authorino's architecture. The main components are: Authorino, Envoy and the upstream service to be protected. Envoy proxies requests to the configured virtual host upstream service, first contacting Authorino to decide on authN/authZ.

+

The topology can vary from centralized proxy and centralized authorization service, to dedicated sidecars, with the nuances in between. Read more about the topologies in the Topologies section below.

+

Authorino is deployed using the Authorino Operator, from an Authorino Kubernetes custom resource. Then, from another kind of custom resource, the AuthConfig CRs, each Authorino instance reads and adds to the index the exact rules of authN/authZ to enforce for each protected host ("index reconciliation").

+

Everything that the AuthConfig reconciler can fetch at reconciliation-time is stored in the index. This is the case for static parameters such as signing keys, authentication secrets and authorization policies from external policy registries.

+

AuthConfigs can refer to identity providers (IdP) and trusted auth servers whose access tokens will be accepted to authenticate to the protected host. Consumers obtain an authentication token (short-lived access token or long-lived API key) and send those in the requests to the protected service.

+

When Authorino is triggered by Envoy via the gRPC interface, it starts evaluating the Auth Pipeline, i.e. it applies to the request the parameters to verify the identity and to enforce authorization, as found in the index for the requested host (See host lookup for details).

+

Apart from static rules, these parameters can include instructions to contact, in request-time, external identity verifiers, external sources of metadata, and policy decision points (PDPs).

+

On every request, Authorino's "working memory" is called Authorization JSON, a data structure that holds information about the context (the HTTP request) and objects from each phase of the auth pipeline: i.e., identity verification (phase i), ad-hoc metadata fetching (phase ii), authorization policy enforcement (phase iii), dynamic response (phase iv), and callbacks (phase v). The evaluators in each of these phases can both read and write from the Authorization JSON for dynamic steps and decisions of authN/authZ.

+

Topologies

+

Typically, upstream APIs are deployed to the same Kubernetes cluster and namespace where the Envoy proxy and Authorino are running (although not necessarily). Whatever the case, Envoy must be proxying to the upstream API (see Envoy's HTTP route components and virtual hosts) and pointing to Authorino in the external authorization filter.

+

This can be achieved with different topologies: +- Envoy can be a centralized gateway with one dedicated instance of Authorino, proxying to one or more upstream services +- Envoy can be deployed as a sidecar of each protected service, while still contacting a centralized Authorino authorization service +- Both Envoy and Authorino deployed as sidecars of the protected service, restricting all communication between them to localhost

+

Each topology above calls for different security measures.

+

Centralized gateway

+

Centralized gateway topology

+

It is recommended that the protected services validate the origin of the traffic, i.e. that it was proxied by Envoy. See Authorino JSON injection for an extra validation option using a shared secret passed in an HTTP header.

+

Centralized authorization service

+

Centralized Authorino topology

+

The protected service should only listen on localhost, so all traffic can be considered safe.

+

Sidecars

+

Sidecars topology

+

Namespaced instances of Authorino, with fine-grained label selectors, are recommended to avoid unnecessary caching of AuthConfigs.

+

Apart from that, the protected service should only listen on localhost, so all traffic can be considered safe.

+

Cluster-wide vs. Namespaced instances

+

Authorino instances can run in either cluster-wide or namespaced mode.

+

Namespace-scoped instances only watch resources (AuthConfigs and Secrets) created in a given namespace. This deployment mode does not require admin privileges over the Kubernetes cluster to deploy the instance of the service (given Authorino's CRDs have been installed beforehand, such as when Authorino is installed using the Authorino Operator).

+

Cluster-wide deployment mode, in contrast, deploys instances of Authorino that watch resources across the entire cluster, consolidating all resources into a multi-namespace index of auth configs. Admin privileges over the Kubernetes cluster are required to deploy Authorino in cluster-wide mode.

+
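As a minimal sketch (resource name and namespace below are illustrative), a namespaced instance is requested by setting clusterWide: false in the Authorino custom resource:

apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
  namespace: my-namespace # hypothetical namespace hosting the instance
spec:
  clusterWide: false # watch AuthConfigs and Secrets created in "my-namespace" only
  listener:
    tls:
      enabled: false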

Be careful to avoid superposition when combining multiple Authorino instances and instance modes in the same Kubernetes cluster. Apart from caching unnecessary auth config data in the instances, depending on your routing settings, the leaders of each instance (set of replicas) may compete to update the status of the custom resources that are reconciled. See Resource reconciliation and status update for more information.

+

If necessary, use label selectors to narrow down the space of resources watched and reconciled by each Authorino instance. Check out the Sharding section below for details.

+

The Authorino AuthConfig Custom Resource Definition (CRD)

+

The desired protection for a service is declaratively stated by applying an AuthConfig Custom Resource to the Kubernetes cluster running Authorino.

+

An AuthConfig resource typically looks like the following:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-api-protection
+spec:
+  # List of one or more hostname[:port] entries, lookup keys to find this config in request-time
+  # Authorino will try to prevent hostname collision by rejecting a hostname already taken.
+  hosts:
+    - my-api.io # north-south traffic
+    - my-api.ns.svc.cluster.local # east-west traffic
+
+  # List of one or more trusted sources of identity:
+  # - Endpoints of issuers of OpenId Connect ID tokens (JWTs)
+  # - Endpoints for OAuth 2.0 token introspection
+  # - Attributes for the Kubernetes `TokenReview` API
+  # - Label selectors for API keys (stored in Kubernetes `Secret`s)
+  # - mTLS trusted certificate issuers
+  # - HMAC secrets
+  identity: []
+
+  # List of sources of external metadata for the authorization (optional):
+  # - Endpoints for HTTP GET or GET-by-POST requests
+  # - OIDC UserInfo endpoints (associated with an OIDC token issuer)
+  # - User-Managed Access (UMA) resource registries
+  metadata: []
+
+  # List of authorization policies to be enforced (optional):
+  # - JSON pattern-matching rules (e.g. `context.request.http.path eq '/pets'`)
+  # - Open Policy Agent (OPA) inline or external Rego policies
+  # - Attributes for the Kubernetes `SubjectAccessReview` API
+  authorization: []
+
+  # List of dynamic response elements, to inject post-external authorization data into the request (optional):
+  # - JSON objects
+  # - Festival Wristbands (signed JWTs issued by Authorino)
+  # - Envoy Dynamic Metadata
+  response: []
+
+  # List of callback targets:
+  # - Endpoints for HTTP requests
+  callbacks: []
+
+# Custom HTTP status code, message and headers to replace the default `401 Unauthorized` and `403 Forbidden` (optional)
+  denyWith:
+    unauthenticated:
+      code: 302
+      message: Redirecting to login
+      headers:
+        - name: Location
+          value: https://my-app.io/login
+    unauthorized: {}
+
+

Check out the OAS of the AuthConfig CRD for a formal specification of the options for identity verification, external metadata fetching, authorization policies, and dynamic response, as well as any other host protection capability implemented by Authorino.

+

You can also read the specification from the CLI using the kubectl explain command. The Authorino CRD must have been installed in the Kubernetes cluster. E.g. kubectl explain authconfigs.spec.identity.extendedProperties.

+

A complete description of supported features and corresponding configuration options within an AuthConfig CR can be found in the Features page.

+

More concrete examples of AuthConfigs for specific use-cases can be found in the User guides.

+

Resource reconciliation and status update

+

The instances of the Authorino authorization service workload, following the Operator pattern, watch events related to the AuthConfig custom resources, to build and reconcile an in-memory index of configs. Whenever a replica receives traffic for an authorization request, it looks up in the index of AuthConfigs and then triggers the "Auth Pipeline", i.e. enforces the associated auth spec onto the request.

+

An instance can be a single authorization service workload or a set of replicas. All replicas watch and reconcile the same set of resources that match the --auth-config-label-selector and --secret-label-selector configuration options. (See both Cluster-wide vs. Namespaced instances and Sharding, for details about defining the reconciliation space of Authorino instances.)

+

The above means that all replicas of an Authorino instance should be able to receive traffic for authorization requests.

+

Among the multiple replicas of an instance, Authorino elects one replica to be the leader. The leader is responsible for updating the status of reconciled AuthConfigs. If the leader eventually becomes unavailable, the instance will automatically elect another replica to take its place as the new leader.

+

The status of an AuthConfig tells whether the resource is "ready" (i.e. indexed). It also includes summary information regarding the numbers of identity configs, metadata configs, authorization configs and response configs within the spec, as well as whether Festival Wristband tokens are being issued by the Authorino instance as by spec.

+

Apart from watching events related to AuthConfig custom resources, Authorino also watches events related to Kubernetes Secrets, as part of Authorino's API key authentication feature. Secret resources that store API keys are linked to their corresponding AuthConfigs in the index. Whenever the Authorino instance detects a change in the set of API key Secrets linked to an AuthConfig, the instance reconciles the index.

+

Authorino only watches events related to Secrets whose metadata.labels match the label selector --secret-label-selector of the Authorino instance. The default value of the label selector for Kubernetes Secrets representing Authorino API keys is authorino.kuadrant.io/managed-by=authorino.

+

The "Auth Pipeline" (aka: enforcing protection in request-time)

+

Authorino Auth Pipeline

+

In each request to the protected API, Authorino triggers the so-called "Auth Pipeline", a set of configured evaluators that are organized in a 5-phase pipeline:

+
    +
  • (i) Identity phase: at least one source of identity (i.e., one identity evaluator) must resolve the supplied credential in the request into a valid identity or Authorino will otherwise reject the request as unauthenticated (401 HTTP response status).
  • +
  • (ii) Metadata phase: optional fetching of additional data from external sources, to add up to context and identity information, and used in authorization policies, dynamic responses and callback requests (phases iii to v).
  • +
  • (iii) Authorization phase: all unskipped policies must evaluate to a positive result ("authorized"), or Authorino will otherwise reject the request as unauthorized (403 HTTP response code).
  • +
  • (iv) Response phase: Authorino builds all user-defined response items (dynamic JSON objects and/or Festival Wristband OIDC tokens), which are supplied back to the external authorization client within added HTTP headers or as Envoy Dynamic Metadata.
  • +
  • (v) Callbacks phase: Authorino sends callbacks to specified HTTP endpoints.
  • +
+

The phases are executed sequentially, from (i) to (v), while the evaluators within each phase are triggered concurrently or as prioritized. The Identity phase (i) is the only one required to list at least one evaluator (i.e. one identity source or more); the Metadata, Authorization and Response phases can have any number of evaluators (including zero, in which case the phase is omitted).

+

Host lookup

+

Authorino reads the request host from Attributes.Http.Host of Envoy's CheckRequest type, and uses it as key to lookup in the index of AuthConfigs, matched against spec.hosts.

+

Alternatively to Attributes.Http.Host, a host entry can be supplied in the Attributes.ContextExtensions map of the external authorization request. This takes precedence over the host attribute of the HTTP request.

+

The host context extension is useful to support use cases such as path prefix-based lookup and wildcard subdomain lookup, where the lookup is strongly dictated by the external authorization client (e.g. Envoy), which often knows about routing and the expected AuthConfig to enforce beyond what Authorino can infer strictly from the host name.

+

Wildcards can also be used in the host names specified in the AuthConfig, resolved by Authorino. E.g. if *.pets.com is in spec.hosts, Authorino will match the concrete host names dogs.pets.com, cats.pets.com, etc. In case of multiple possible matches, Authorino will try the longest match first (in terms of host name labels) and fall back to the closest wildcard upwards in the domain tree (if any).

+

When more than one host name is specified in the AuthConfig, all of them can be used as key, i.e. all of them can be requested in the authorization request and will be mapped to the same config.

+

Example. Host lookup with wildcards.

+

Domain tree

+

The domain tree above induces the following relation: +- foo.nip.ioauthconfig-1 (matches *.io) +- talker-api.nip.ioauthconfig-2 (matches talker-api.nip.io) +- dogs.pets.comauthconfig-2 (matches *.pets.com) +- api.acme.comauthconfig-3 (matches api.acme.com) +- www.acme.comauthconfig-4 (matches *.acme.com) +- foo.org404 Not found

+


+

The host can include the port number (i.e. hostname:port) or it can be just the host name. Authorino will first try finding in the index a config associated to hostname:port, as supplied in the authorization request; if the index misses an entry for hostname:port, Authorino will then remove the :port suffix and repeat the lookup using just hostname as key. This provides implicit support for multiple port numbers for a same host without having to list all combinations in the AuthConfig.

+
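For illustration, a minimal sketch of the fallback behavior, assuming a hypothetical AuthConfig that lists only the plain host name:

apiVersion: authorino.kuadrant.io/v1beta1
kind: AuthConfig
metadata:
  name: port-fallback-example # hypothetical
spec:
  hosts:
    - my-api.io # a request supplying "my-api.io:8080" first misses "my-api.io:8080" in the index, then matches "my-api.io"
  identity:
    - name: anonymous-access
      anonymous: {}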

Avoiding host name collision

+

Authorino tries to prevent host name collision between AuthConfigs by refusing to link in the index any AuthConfig to a host name that is already linked to a different AuthConfig. This was intentionally designed to prevent users from superseding each other's AuthConfigs, partially or fully, by picking the same or overlapping host names as others.

+

When wildcards are involved, a host name that matches a host wildcard already linked in the index to another AuthConfig will be considered taken, and therefore the newer AuthConfig will not be linked to that host.

+

The Authorization JSON

+

On every Auth Pipeline, Authorino builds the Authorization JSON, a "working-memory" data structure composed of context (information about the request, as supplied by the Envoy proxy to Authorino) and auth (objects resolved in phases (i) to (v) of the pipeline). The evaluators of each phase can read from the Authorization JSON and implement dynamic properties and decisions based on its values.

+

At phase (iii), the authorization evaluators count on an Authorization JSON payload that looks like the following:

+
// The authorization JSON combined along Authorino's auth pipeline for each request
+{
+  "context": { // the input from the proxy
+    "origin": {…},
+    "request": {
+      "http": {
+        "method": "…",
+        "headers": {…},
+        "path": "/…",
+        "host": "…",
+        …
+      }
+    }
+  },
+  "auth": {
+    "identity": {
+      // the identity resolved, from the supplied credentials, by one of the evaluators of phase (i)
+    },
+    "metadata": {
+      // each metadata object/collection resolved by the evaluators of phase (ii), by name of the evaluator
+    }
+  }
+}
+
+

The policies evaluated can use any data from the authorization JSON to define authorization rules.

+

After phase (iii), Authorino appends to the authorization JSON the results of this phase as well, and the payload available for phase (iv) becomes:

+
// The authorization JSON combined along Authorino's auth pipeline for each request
+{
+  "context": { // the input from the proxy
+    "origin": {…},
+    "request": {
+      "http": {
+        "method": "…",
+        "headers": {…},
+        "path": "/…",
+        "host": "…",
+        …
+      }
+    }
+  },
+  "auth": {
+    "identity": {
+      // the identity resolved, from the supplied credentials, by one of the evaluators of phase (i)
+    },
+    "metadata": {
+      // each metadata object/collection resolved by the evaluators of phase (ii), by name of the evaluator
+    },
+    "authorization": {
+      // each authorization policy result resolved by the evaluators of phase (iii), by name of the evaluator
+    }
+  }
+}
+
+

Festival Wristbands and Dynamic JSON responses can include dynamic values (custom claims/properties) fetched from the authorization JSON. These can be returned to the external authorization client in added HTTP headers or as Envoy Well Known Dynamic Metadata. Check out Dynamic response features for details.

+

For information about reading and fetching data from the Authorization JSON (syntax, functions, etc), check out JSON paths.

+

Raw HTTP Authorization interface

+

Besides providing the gRPC authorization interface – that implements the Envoy gRPC authorization server –, Authorino also provides another interface for raw HTTP authorization. This second interface responds to GET and POST HTTP requests sent to :5001/check, and is suitable for other forms of integration, such as: +- using Authorino as Kubernetes ValidatingWebhook service (example); +- other HTTP proxies and API gateways; +- old versions of Envoy incompatible with the latest version of the gRPC external authorization protocol (Authorino is based on v3.19.1 of the Envoy external authorization API)

+

In the raw HTTP interface, the host used to lookup for an AuthConfig must be supplied in the Host HTTP header of the request. Other attributes of the HTTP request are also passed in the context to evaluate the AuthConfig, including the body of the request.

+

Caching

+

OpenID Connect and User-Managed Access configs

+

OpenID Connect and User-Managed Access configurations are usually discovered at reconciliation-time, from the well-known discovery endpoints.

+

Cached individual OpenID Connect configurations discovered by Authorino can be configured to be auto-refreshed, by setting the corresponding spec.identity.oidc.ttl field in the AuthConfig (given in seconds, default: 0 – i.e. no cache update).

+

JSON Web Keys (JWKs) and JSON Web Key Sets (JWKS)

+

JSON signature verification certificates linked from discovered OpenID Connect configurations, usually fetched at reconciliation-time.

+

Revoked access tokens

+ + + + +
Not implemented - In analysis (#19)
+ +

Caching of access tokens identified and/or notified as revoked prior to expiration.

+

External metadata

+ + + + +
Not implemented - Planned (#21)
+ +

Caching of resource data obtained in previous requests.

+

Compiled Rego policies

+

Performed automatically by Authorino at reconciliation-time for the authorization policies based on the built-in OPA module.

+

Precompiled and cached individual Rego policies originally pulled by Authorino from external registries can be configured to be auto-refreshed, by setting the corresponding spec.authorization.opa.externalRegistry.ttl field in the AuthConfig (given in seconds, default: 0 – i.e. no cache update).

+

Repeated requests

+ + + + +
Not implemented - In analysis (#20)
+ +

Caching of results for consecutive requests performed, within a given period of time, by the same user for the same resource, such that the outcome of the auth pipeline can be proven not to change.

+

Sharding

+

By default, Authorino instances will watch AuthConfig CRs in the entire space (namespace or entire cluster; see Cluster-wide vs. Namespaced instances for details). To support combining multiple Authorino instances and instance modes in the same Kubernetes cluster, and yet avoiding superposition between the instances (i.e. multiple instances reconciling the same AuthConfigs), Authorino offers support for data sharding, i.e. to horizontally narrow down the space of reconciliation of an Authorino instance to a subset of that space.

+

The benefits of limiting the space of reconciliation of an Authorino instance include avoiding unnecessary caching and workload in instances that do not receive corresponding traffic (according to your routing settings) and preventing leaders of multiple instances (sets of replicas) from competing over resource status updates (see Resource reconciliation and status update for details).

+

Use-cases for sharding of AuthConfigs: +- Horizontal load balancing of traffic of authorization requests +- Support for managed centralized instances of Authorino for API owners who create and maintain their own AuthConfigs within their own user namespaces.

+

Authorino's custom controllers filter the AuthConfig-related events to be reconciled using Kubernetes label selectors, defined for the Authorino instance via the --auth-config-label-selector command-line flag. By default, --auth-config-label-selector is empty, meaning all AuthConfigs in the space are watched; the flag can be set to any value parseable as a valid label selector, causing Authorino to then watch only events of AuthConfigs whose metadata.labels match the selector.

+

The following are all valid examples of AuthConfig label selector filters:

+
--auth-config-label-selector="authorino.kuadrant.io/managed-by=authorino"
+--auth-config-label-selector="authorino.kuadrant.io/managed-by=authorino,other-label=other-value"
+--auth-config-label-selector="authorino.kuadrant.io/managed-by in (authorino,kuadrant)"
+--auth-config-label-selector="authorino.kuadrant.io/managed-by!=authorino-v0.4"
+--auth-config-label-selector="!disabled"
+
+

RBAC

+

The table below describes the roles and role bindings defined by the Authorino service:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RoleKindScope(*)DescriptionPermissions
authorino-manager-roleClusterRoleC/NRole of the Authorino manager serviceWatch and reconcile AuthConfigs and Secrets
authorino-manager-k8s-auth-roleClusterRoleC/NRole for the Kubernetes auth featuresCreate TokenReviews and SubjectAccessReviews (Kubernetes auth)
authorino-leader-election-roleRoleNLeader election roleCreate/update the ConfigMap used to coordinate which replica of Authorino is the leader
authorino-authconfig-editor-roleClusterRole-AuthConfig editorR/W AuthConfigs; Read AuthConfig/status
authorino-authconfig-viewer-roleClusterRole-AuthConfig viewerRead AuthConfigs and AuthConfig/status
authorino-proxy-roleClusterRoleC/NRole of the kube-rbac-proxy sidecarCreate TokenReviews and SubjectAccessReviews to check permissions to the /metrics endpoint
authorino-metrics-readerClusterRole-Metrics readerGET /metrics
+

(*) C - Cluster-wide | N - Authorino namespace | C/N - Cluster-wide or Authorino namespace (depending on the deployment mode).

+

Observability

+

Please refer to the Observability user guide for info on Prometheus metrics exported by Authorino, readiness probe, logging, tracing, etc.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/auth-pipeline.gif b/authorino/docs/auth-pipeline.gif new file mode 100644 index 00000000..98ef6b75 Binary files /dev/null and b/authorino/docs/auth-pipeline.gif differ diff --git a/authorino/docs/code_of_conduct/index.html b/authorino/docs/code_of_conduct/index.html new file mode 100644 index 00000000..df43687e --- /dev/null +++ b/authorino/docs/code_of_conduct/index.html @@ -0,0 +1,1978 @@ + + + + + + + + + + + + + + + + + + + + Code of conduct - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Code of conduct

+ +

Authorino Code of Conduct v1.0

+

This document provides community guidelines for a safe, respectful, productive, and collaborative place for any person who is willing to contribute to Authorino.

+
    +
  • Participants will be tolerant of opposing views.
  • +
  • Participants must ensure that their language and actions are free of personal attacks and disparaging personal remarks.
  • +
  • When interpreting the words and actions of others, participants should always assume good intentions.
  • +
  • Behaviour which can be reasonably considered harassment will not be tolerated.
  • +
+

This Code of Conduct is adapted from The Ruby Community Conduct Guideline

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/contributing/index.html b/authorino/docs/contributing/index.html new file mode 100644 index 00000000..8d5ee438 --- /dev/null +++ b/authorino/docs/contributing/index.html @@ -0,0 +1,2450 @@ + + + + + + + + + + + + + + + + + + + + + + + + Developer's Guide - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Developer's Guide

+ +

Technology stack for developers

+

Minimum requirements to contribute to Authorino are: +- Golang v1.19+ +- Docker

+

Authorino's code was originally bundled using the Operator SDK (v1.9.0).

+

The following tools can be installed as part of the development workflow:

+
    +
  • Installed with go install to the $PROJECT_DIR/bin directory:
  • +
  • controller-gen: for building custom types and manifests
  • +
  • Kustomize: for assembling flavoured manifests and installing/deploying
  • +
  • setup-envtest: for running the tests – extra tools installed to ./testbin
  • +
  • benchstat: for human-friendly test benchmark reports
  • +
  • mockgen: to generate mocks for tests – e.g. ./bin/mockgen -source=pkg/auth/auth.go -destination=pkg/auth/mocks/mock_auth.go
  • +
  • +

    Kind: for deploying a containerized Kubernetes cluster for integration testing purposes

    +
  • +
  • +

    Other recommended tools to have installed:

    +
  • +
  • jq
  • +
  • yq
  • +
  • gnu-sed
  • +
+

Workflow

+

Development workflow

+

Check the issues

+

Start by checking the list of issues in GitHub.

+

In case you want to contribute with an idea for enhancement, a bug fix, or question, please make sure to describe the issue so we can start a conversation together and help you find the best way to get your contribution merged.

+

Clone the repo and setup the local environment

+

Fork/clone the repo:

+
git clone git@github.com:kuadrant/authorino.git && cd authorino
+
+

Download the Golang dependencies: +

make vendor
+

+

For additional automation provided, check:

+
make help
+
+

Make your changes

+

Good changes... +- follow the Golang conventions +- have proper test coverage +- address corresponding updates to the docs +- help us fix wherever we failed to do the above 😜

+

Run the tests

+

To run the tests:

+
make test
+
+

Try locally

+

Build, deploy and try Authorino in a local cluster

+

The following command will: +- Start a local Kubernetes cluster (using Kind) +- Install the Authorino Operator and Authorino CRDs +- Build an image of Authorino based on the current branch +- Push the freshly built image to the cluster's registry +- Install cert-manager in the cluster +- Generate TLS certificates for the Authorino service +- Deploy an instance of Authorino +- Deploy the example application Talker API, a simple HTTP API that echoes back whatever it gets in the request +- Setup Envoy for proxying to the Talker API and using Authorino for external authorization

+
make local-setup
+
+

You will be prompted to edit the Authorino custom resource.

+

The main workload, composed of the Authorino instance and user apps (Envoy, Talker API), will be deployed to the default Kubernetes namespace.

+

Once the deployment is ready, you can forward requests on port 8000 to the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+
+ Pro tips + + 1. Change the default workload namespace by supplying the `NAMESPACE` argument to your `make local-setup` and other deployment, apps and local cluster related targets. If the namespace does not exist, it will be created. + 2. Switch to TLS disabled by default when deploying locally by supplying `TLS_ENABLED=0` to your `make local-setup` and `make deploy` commands. E.g. `make local-setup TLS_ENABLED=0`. + 3. Skip being prompted to edit the `Authorino` CR and default to an Authorino deployment with TLS enabled, debug/development log level/mode, and standard name 'authorino', by supplying `FF=1` to your `make local-setup` and `make deploy` commands. E.g. `make local-setup FF=1` + 4. Supply `DEPLOY_IDPS=1` to `make local-setup` and `make user-apps` to deploy Keycloak and Dex to the cluster. `DEPLOY_KEYCLOAK` and `DEPLOY_DEX` are also available. Read more about additional tools for specific use cases in the section below. + 5. Saving the ID of the process (PID) of the port-forward command spawned in the background can be useful to later kill and restart the process. E.g. `kubectl port-forward deployment/envoy 8000:8000 &;PID=$!`; then `kill $PID`. +
+ +

Additional tools (for specific use-cases)

+
+ Limitador + + To deploy [Limitador](https://github.com/kuadrant/limitador) – pre-configured in Envoy for rate-limiting the Talker API to 5 hits per minute per `user_id` when available in the cluster workload –, run: + +
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml
+
+
+ +
+ Keycloak + + Authorino examples include a bundle of [Keycloak](https://www.keycloak.org) preloaded with the following realm setup:
+ - Admin console: http://localhost:8080/auth/admin (admin/p) + - Preloaded realm: **kuadrant** + - Preloaded clients: + - **demo**: to which API consumers delegate access and therefore the one which access tokens are issued to + - **authorino**: used by Authorino to fetch additional user info with `client_credentials` grant type + - **talker-api**: used by Authorino to fetch UMA-protected resource data associated with the Talker API + - Preloaded resources: + - `/hello` + - `/greetings/1` (owned by user john) + - `/greetings/2` (owned by user jane) + - `/goodbye` + - Realm roles: + - member (default to all users) + - admin + - Preloaded users: + - john/p (member) + - jane/p (admin) + - peter/p (member, email not verified) + + To deploy, run: + +
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+ + Forward local requests to the instance of Keycloak running in the cluster: + +
kubectl port-forward deployment/keycloak 8080:8080 &
+
+
+ +
+ Dex + + Authorino examples include a bundle of [Dex](https://dexidp.io) preloaded with the following setup:
+ - Preloaded clients:
+ - **demo**: to which API consumers delegate access and therefore the one which access tokens are issued to (Client secret: aaf88e0e-d41d-4325-a068-57c4b0d61d8e) + - Preloaded users:
+ - marta@localhost/password + + To deploy, run: + +
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/dex/dex-deploy.yaml
+
+ + Forward local requests to the instance of Dex running in the cluster: + +
kubectl port-forward deployment/dex 5556:5556 &
+
+
+ +
+ a12n-server + + Authorino examples include a bundle of [**a12n-server**](https://github.com/curveball/a12n-server) and corresponding MySQL database, preloaded with the following setup:
+ - Admin console: http://a12n-server:8531 (admin/123456) + - Preloaded clients:
+ - **service-account-1**: to obtain access tokens via `client_credentials` OAuth2 grant type, to consume the Talker API (Client secret: DbgXROi3uhWYCxNUq_U1ZXjGfLHOIM8X3C2bJLpeEdE); includes metadata privilege: `{ "talker-api": ["read"] }` that can be used to write authorization policies + - **talker-api**: to authenticate to the token introspect endpoint (Client secret: V6g-2Eq2ALB1_WHAswzoeZofJ_e86RI4tdjClDDDb4g) + + To deploy, run: + +
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/a12n-server/a12n-server-deploy.yaml
+
+ + Forward local requests to the instance of a12n-server running in the cluster: + +
kubectl port-forward deployment/a12n-server 8531:8531 &
+
+
+ +

Re-build and rollout latest

+

Re-build and rollout latest Authorino image:

+
make local-rollout
+
+

If you made changes to the CRD between iterations, re-install by running:

+
make install
+
+

Clean-up

+

The following command deletes the entire Kubernetes cluster started with Kind:

+
make local-cleanup
+
+

Sign your commits

+

All commits to be accepted to Authorino's code are required to be signed. Refer to this page about signing your commits.

+

Logging policy

+

A few guidelines for adding logging messages in your code: +1. Make sure you understand Authorino's Logging architecture and policy regarding log levels, log modes, tracing IDs, etc. +2. Respect controller-runtime's Logging Guidelines. +3. Do not add sensitive data to your info log messages; instead, redact all sensitive data in your log messages or use debug log level by mutating the logger with V(1) before outputting the message.

+

Additional resources

+

Here in the repo:

+ +

Other repos:

+ +

Reach out

+

kuadrant.slack.com

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/features/index.html b/authorino/docs/features/index.html new file mode 100644 index 00000000..426e9313 --- /dev/null +++ b/authorino/docs/features/index.html @@ -0,0 +1,3613 @@ + + + + + + + + + + + + + + + + + + + + + + + + Reference - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Features

+ +

Overview

+

We call features of Authorino the different things one can do to enforce identity verification & authentication and authorization on requests against protected services. These can be a specific identity verification method based on a supported authentication protocol, or a method to fetch additional auth metadata in request-time, etc.

+

Most features of Authorino relate to the different phases of the Auth Pipeline and therefore are configured in the Authorino AuthConfig. An identity verification feature usually refers to a functionality of Authorino such as the API key-based authentication implemented by Authorino, the validation of JWTs/OIDC ID tokens, and authentication based on Kubernetes TokenReviews. Analogously, OPA, JSON pattern-matching and Kubernetes SubjectAccessReview are examples of authorization features of Authorino.

+

At a deeper level, a feature can also be an additional functionality within a bigger feature, usually applicable to the whole class the bigger feature belongs to. For instance, the configuration of the location and key selector of auth credentials, available for all identity verification-related features. Other examples would be Identity extension and Response wrappers.

+

A full specification of all features of Authorino that can be configured in an AuthConfig can be found in the official spec of the custom resource definition.

+

You can also learn about Authorino features by using the kubectl explain command in a Kubernetes cluster where the Authorino CRD has been installed. E.g. kubectl explain authconfigs.spec.identity.extendedProperties.

+

Common feature: JSON paths (valueFrom.authJSON)

+

The first feature of Authorino to learn about is a common functionality, used in the specification of many other features. JSON paths have to do with reading data from the Authorization JSON, to refer to them in configuration of dynamic steps of API protection enforcing.

+

Usage examples of JSON paths are: dynamic URL and request parameters when fetching metadata from external sources, dynamic authorization policy rules, and dynamic authorization responses (injected JSON and Festival Wristband token claims).

+

Syntax

+

The syntax to fetch data from the Authorization JSON with JSON paths is based on GJSON. Refer to GJSON Path Syntax page for more information.

+

String modifiers

+

On top of GJSON, Authorino defines a few string modifiers.

+

Examples below provided for the following Authorization JSON:

+
{
+  "context": {
+    "request": {
+      "http": {
+        "path": "/pets/123",
+        "headers": {
+          "authorization": "Basic amFuZTpzZWNyZXQK" // jane:secret
+          "baggage": "eyJrZXkxIjoidmFsdWUxIn0=" // {"key1":"value1"}
+        }
+      }
+    }
+  },
+  "auth": {
+    "identity": {
+      "username": "jane",
+      "fullname": "Jane Smith",
+      "email": "\u0006jane\u0012@petcorp.com\n"
+    },
+  },
+}
+
+

@strip
+Strips out any non-printable characters such as carriage return. E.g. auth.identity.email.@strip"jane@petcorp.com".

+

@case:upper|lower
+Changes the case of a string. E.g. auth.identity.username.@case:upper"JANE".

+

@replace:{"old":string,"new":string}
+Replaces a substring within a string. E.g. auth.identity.username.@replace:{"old":"Smith","new":"Doe"}"Jane Doe".

+

@extract:{"sep":string,"pos":int}
+Splits a string at occurrences of a separator (default: " ") and selects the substring at the pos-th position (default: 0). E.g. context.request.path.@extract:{"sep":"/","pos":2}123.

+

@base64:encode|decode
+base64-encodes or decodes a string value. E.g. auth.identity.username.decoded.@base64:encode"amFuZQo=".

+

In combination with @extract, @base64 can be used to extract the username in an HTTP Basic Authentication request. E.g. context.request.headers.authorization.@extract:{"pos":1}|@base64:decode|@extract:{"sep":":","pos":1}"jane".

+

Interpolation

+

JSON paths can be interpolated into strings to build template-like dynamic values. E.g. "Hello, {auth.identity.name}!".

+
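As a sketch of interpolation in practice, dynamic fields of the AuthConfig can embed JSON paths within strings; the example below assumes a hypothetical external metadata service (the metadata.http feature is described further down this page):

spec:
  metadata:
    - name: user-profile
      http:
        endpoint: http://metadata-service.svc.cluster.local/users/{auth.identity.username} # "{...}" resolved in request-time from the Authorization JSON
        method: GET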

Identity verification & authentication features (identity)

+

API key (identity.apiKey)

+

Authorino relies on Kubernetes Secret resources to represent API keys.

+

To define an API key, create a Secret in the cluster containing an api_key entry that holds the value of the API key.

+

API key secrets must be created in the same namespace of the AuthConfig (default) or spec.identity.apiKey.allNamespaces must be set to true (only works with cluster-wide Authorino instances).

+

API key secrets must be labeled with the labels that match the selectors specified in spec.identity.apiKey.selector in the AuthConfig.

+

Whenever an AuthConfig is indexed, Authorino will also index all matching API key secrets. In order for Authorino to also watch events related to API key secrets individually (e.g. new Secret created, updates, deletion/revocation), Secrets must also include a label that matches Authorino's bootstrap configuration --secret-label-selector (default: authorino.kuadrant.io/managed-by=authorino). This label may or may not be present in spec.identity.apiKey.selector in the AuthConfig, without implications for the caching of the API keys when triggered by the reconciliation of the AuthConfig; however, if not present, individual changes related to the API key secret (i.e. without touching the AuthConfig) will be ignored by the reconciler.

+

Example. For the following AuthConfig:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-api-protection
+  namespace: authorino-system
+spec:
+  hosts:
+    - my-api.io
+  identity:
+    - name: api-key-users
+      apiKey:
+        selector:
+          matchLabels: # the key-value set used to select the matching `Secret`s; resources including these labels will be accepted as valid API keys to authenticate to this service
+            group: friends # some custom label
+        allNamespaces: true # only works with cluster-wide Authorino instances; otherwise, create the API key secrets in the same namespace of the AuthConfig
+
+

The following Kubernetes Secret represents a valid API key:

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: user-1-api-key-1
+  namespace: default
+  labels:
+    authorino.kuadrant.io/managed-by: authorino # so the Authorino controller reconciles events related to this secret
+    group: friends
+stringData:
+  api_key: <some-randomly-generated-api-key-value>
+type: Opaque
+
+

The resolved identity object, added to the authorization JSON following an API key identity source evaluation, is the Kubernetes Secret resource (as JSON).

+

Kubernetes TokenReview (identity.kubernetes)

+

Authorino can verify Kubernetes-valid access tokens (using Kubernetes TokenReview API).

+

These tokens can be either ServiceAccount tokens such as the ones issued by kubelet as part of Kubernetes Service Account Token Volume Projection, or any valid user access tokens issued to users of the Kubernetes server API.

+

The list of audiences of the token must include the requested host and port of the protected API (default), or all audiences specified in the Authorino AuthConfig custom resource. For example:

+

For the following AuthConfig CR, the Kubernetes token must include the audience my-api.io:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-api-protection
+spec:
+  hosts:
+    - my-api.io
+  identity:
+    - name: cluster-users
+      kubernetes: {}
+
+

Whereas for the following AuthConfig CR, the Kubernetes token audiences must include foo and bar:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-api-protection
+spec:
+  hosts:
+    - my-api.io
+  identity:
+    - name: cluster-users
+      kubernetes:
+        audiences:
+          - foo
+          - bar
+
+

The resolved identity object added to the authorization JSON following a successful Kubernetes authentication identity evaluation is the status field of TokenReview response (see TokenReviewStatus for reference).

+

OpenID Connect (OIDC) JWT/JOSE verification and validation (identity.oidc)

+

At reconciliation-time, using the OpenID Connect Discovery well-known endpoint, Authorino automatically discovers and caches OpenID Connect configurations and associated JSON Web Key Sets (JWKS) for all OpenID Connect issuers declared in an AuthConfig. Then, in request-time, Authorino verifies the JSON Web Signature (JWS) and checks the time validity of signed JSON Web Tokens (JWT) supplied on each request.

+

Important! Authorino does not implement OAuth2 grants nor OIDC authentication flows. As a common recommendation of good practice, obtaining and refreshing access tokens is for clients to negotiate directly with the auth servers and token issuers. Authorino will only validate those tokens using the parameters provided by the trusted issuer authorities.

+

OIDC

+

The kid claim stated in the JWT header must match one of the keys cached by Authorino during OpenID Connect Discovery, therefore supporting JWK rotation.

+

The decoded payload of the validated JWT is appended to the authorization JSON as the resolved identity.

+

OpenID Connect configurations and linked JSON Web Key Sets can be configured to be automatically refreshed (pull again from the OpenID Connect Discovery well-known endpoints), by setting the identity.oidc.ttl field (given in seconds, default: 0 – i.e. auto-refresh disabled).

+
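A minimal sketch of an OIDC identity config with auto-refresh enabled (the issuer endpoint below is hypothetical):

spec:
  identity:
    - name: keycloak-users
      oidc:
        endpoint: https://keycloak.example.com/realms/kuadrant # must serve the OpenID Connect Discovery well-known endpoint
        ttl: 3600 # re-discover the OpenID Connect configuration and JWKS every hour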

For an excellent summary of the underlying concepts and standards that relate OpenID Connect and JSON Object Signing and Encryption (JOSE), see this article by Jan Rusnacko. For official specification and RFCs, see OpenID Connect Core, OpenID Connect Discovery, JSON Web Token (JWT) (RFC7519), and JSON Object Signing and Encryption (JOSE).

+

OAuth 2.0 introspection (identity.oauth2)

+

For bare OAuth 2.0 implementations, Authorino can perform token introspection on the access tokens supplied in the requests to protected APIs.

+

Authorino does not implement any of OAuth 2.0 grants for the applications to obtain the token. However, it can verify supplied tokens with the OAuth server, including opaque tokens, as long as the server exposes the token_introspect endpoint (RFC 7662).

+

Developers must set the token introspection endpoint in the AuthConfig, as well as a reference to the Kubernetes secret storing the credentials of the OAuth client to be used by Authorino when requesting the introspect.

+

OAuth 2.0 Token Introspect

+

The response returned by the OAuth2 server to the token introspection request is the resolved identity appended to the authorization JSON.

+
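A minimal sketch of an OAuth 2.0 token introspection identity config, assuming a hypothetical introspection endpoint and a Kubernetes Secret storing the OAuth client credentials used by Authorino:

spec:
  identity:
    - name: opaque-token-users
      oauth2:
        tokenIntrospectionUrl: https://auth-server.example.com/oauth/introspect # hypothetical RFC 7662 endpoint
        credentialsRef:
          name: oauth2-introspection-credentials # Secret storing the client credentials presented to the introspection endpoint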

OpenShift OAuth (user-echo endpoint) (identity.openshift)

+ + + + +
Not implemented - In analysis
+ +

Online token introspection of OpenShift-valid access tokens based on OpenShift's user-echo endpoint.

+

Mutual Transport Layer Security (mTLS) authentication (identity.mtls)

+

Authorino can verify x509 certificates presented by clients for authentication on the request to the protected APIs, at application level.

+

Trusted root Certificate Authorities (CA) are stored in Kubernetes Secrets labeled according to selectors specified in the AuthConfig, watched and indexed by Authorino. Make sure to create proper kubernetes.io/tls-typed Kubernetes Secrets, containing the public certificates of the CA stored in either a tls.crt or ca.crt entry inside the secret.

+

Trusted root CA secrets must be created in the same namespace of the AuthConfig (default) or spec.identity.mtls.allNamespaces must be set to true (only works with cluster-wide Authorino instances).

+
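A minimal sketch of an mTLS identity config (label and name below are hypothetical):

spec:
  identity:
    - name: mtls-clients
      mtls:
        selector:
          matchLabels:
            app: my-api # kubernetes.io/tls Secrets carrying this label store the trusted root CA certificates
        allNamespaces: false # default; only relevant for cluster-wide Authorino instances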

The identity object resolved out of a client x509 certificate is equal to the subject field of the certificate, and it serializes as JSON within the Authorization JSON usually as follows:

+
{
+    "auth": {
+        "identity": {
+            "CommonName": "aisha",
+            "Country": ["PK"],
+            "ExtraNames": null,
+            "Locality": ["Islamabad"],
+            "Names": [
+                { "Type": [2, 5, 4, 3], "Value": "aisha" },
+                { "Type": [2, 5, 4, 6], "Value": "PK" },
+                { "Type": [2, 5, 4, 7], "Value": "Islamabad" },
+                { "Type": [2, 5, 4,10], "Value": "ACME Inc." },
+                { "Type": [2, 5, 4,11], "Value": "Engineering" }
+            ],
+            "Organization": ["ACME Inc."],
+            "OrganizationalUnit": ["Engineering"],
+            "PostalCode": null,
+            "Province": null,
+            "SerialNumber": "",
+            "StreetAddress": null
+        }
+  }
+}
+
+

Hash Message Authentication Code (HMAC) authentication (identity.hmac)

+ + + + +
Not implemented - Planned (#9)
+ +

Authentication based on the validation of a hash code generated from the contextual information of the request to the protected API, concatenated with a secret known by the API consumer.

+

Plain (identity.plain)

+

Authorino can read plain identity objects, based on authentication tokens provided and verified beforehand using other means (e.g. Envoy JWT Authentication filter, Kubernetes API server authentication), and injected into the payload to the external authorization service.

+

The plain identity object is retrieved from the Authorization JSON based on a JSON path specified in the AuthConfig.

+

This feature is particularly useful in cases where authentication/identity verification is handled before invoking the authorization service and its resolved value injected in the payload can be trusted. Examples of applications for this feature include: +- Authentication handled in Envoy leveraging the Envoy JWT Authentication filter (decoded JWT injected as 'metadata_context') +- Use of Authorino as Kubernetes ValidatingWebhook service (Kubernetes 'userInfo' injected in the body of the AdmissionReview request)

+

Example of AuthConfig to retrieve plain identity object from the Authorization JSON.

+
spec:
+  identity:
+  - name: plain
+    plain:
+      authJSON: context.metadata_context.filter_metadata.envoy\.filters\.http\.jwt_authn|verified_jwt
+
+

If the specified JSON path does not exist in the Authorization JSON or the value is null, the identity verification will fail and, unless other identity config succeeds, Authorino will halt the Auth Pipeline with the usual 401 Unauthorized.

+

Anonymous access (identity.anonymous)

+

Literally a no-op evaluator for the identity verification phase that returns a static identity object {"anonymous":true}.

+

It allows implementing AuthConfigs that bypass the identity verification phase of Authorino, e.g. to: +- enable anonymous access to protected services (always or combined with Priorities) +- postpone authentication in the Auth Pipeline, to be resolved as part of an OPA policy

+

Example of AuthConfig spec that falls back to anonymous access when OIDC authentication fails, enforcing read-only access to the protected service in such cases:

+
spec:
+  identity:
+  - name: jwt
+    oidc: { endpoint: ... }
+  - name: anonymous
+    priority: 1 # expired oidc token, missing creds, etc. default to anonymous access
+    anonymous: {}
+  authorization:
+  - name: read-only-access-if-authn-fails
+    when:
+    - selector: auth.identity.anonymous
+      operator: eq
+      value: "true"
+    json:
+      rules:
+      - selector: context.request.http.method
+        operator: eq
+        value: GET
+
+

Festival Wristband authentication

+

Authorino-issued Festival Wristband tokens can be validated as any other signed JWT using Authorino's OpenID Connect (OIDC) JWT/JOSE verification and validation.

+

The value of the issuer must be the same issuer specified in the custom resource for the protected API originally issuing wristband. Eventually, this can be the same custom resource where the wristband is configured as a valid source of identity, but not necessarily.

+
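For illustration, a sketch of an identity config that validates Authorino-issued wristbands as regular OIDC/JWT tokens; the issuer URL below is hypothetical and must equal the issuer configured in the wristband response:

spec:
  identity:
    - name: wristband-users
      oidc:
        endpoint: https://authorino-oidc.authorino.svc.cluster.local:8083/authorino/my-api-protection/wristband # hypothetical wristband issuer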

Extra: Auth credentials (credentials)

+

All the identity verification methods supported by Authorino can be configured regarding the location where access tokens and credentials (i.e. authentication secrets) travel within the request.

+

By default, authentication secrets are expected to be supplied in the Authorization HTTP header, with the Bearer prefix and plain authentication secret, separated by space. The full list of supported options for the location of authentication secrets and selector is specified in the table below:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Location (credentials.in)DescriptionSelector (credentials.keySelector)
authorization_headerAuthorization HTTP headerPrefix (default: Bearer)
custom_headerCustom HTTP headerName of the header. Value should have no prefix.
queryQuery string parameterName of the parameter
cookieCookie headerID of the cookie entry
+
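For example, a sketch of an identity config that expects the authentication secret in a custom HTTP header (the header name is hypothetical):

spec:
  identity:
    - name: api-key-users
      apiKey:
        selector:
          matchLabels:
            group: friends
      credentials:
        in: custom_header
        keySelector: X-API-Key # clients send the API key in this header, with no prefix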

Extra: Identity extension (extendedProperties)

+

Resolved identity objects can be extended with user-defined JSON properties. Values can be static or fetched from the Authorization JSON.

+

A typical use-case for this feature is token normalization. Say you have more than one identity source listed in your AuthConfig but each source issues an access token with a different JSON structure – e.g. two OIDC issuers that use different names for custom JWT claims of similar meaning; or two different identity verification/authentication methods combined, such as API keys (whose identity objects are the corresponding Kubernetes Secrets) and Kubernetes tokens (whose identity objects are Kubernetes UserInfo data).

+

In such cases, identity extension can be used to normalize the token to always include the same set of JSON properties of interest, regardless of the source of identity that issued the original token verified by Authorino. This simplifies the writing of authorization policies and configuration of dynamic responses.

+

In case of extending an existing property of the identity object (replacing), the API allows controlling whether or not to overwrite the value. This is particularly useful for normalizing tokens of a same identity source that nonetheless may occasionally differ in structure, such as in the case of JWT claims that sometimes may not be present but can be safely replaced with another (e.g. username or sub).

+
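A minimal sketch of identity extension for token normalization (property names and JSON paths below are illustrative):

spec:
  identity:
    - name: api-key-users
      apiKey:
        selector:
          matchLabels:
            group: friends
      extendedProperties:
        - name: username
          valueFrom:
            authJSON: auth.identity.metadata.annotations.username # hypothetical annotation set on the API key Secret
        - name: source
          value: api-key # a static property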

External auth metadata features (metadata)

+

HTTP GET/GET-by-POST (metadata.http)

+

Generic HTTP adapter that sends a request to an external service. It can be used to fetch external metadata for the authorization policies (phase ii of the Authorino Auth Pipeline), or as a web hook.

+

The adapter allows issuing requests either by GET or POST methods; in both cases with URL and parameters defined by the user in the spec. Dynamic values fetched from the Authorization JSON can be used.

+

POST request parameters as well as the encoding of the content can be controlled using the bodyParameters and contentType fields of the config, respectively. The Content-Type of POST requests can be either application/x-www-form-urlencoded (default) or application/json.

+

Authentication of Authorino with the external metadata server can be set either via long-lived shared secret stored in a Kubernetes Secret or via OAuth2 client credentials grant. For long-lived shared secret, set the sharedSecretRef field. For OAuth2 client credentials grant, use the oauth2 option.

+

In both cases, the location where the secret (long-lived or OAuth2 access token) travels in the request performed to the external HTTP service can be specified in the credentials field. By default, the authentication secret is supplied in the Authorization header with the Bearer prefix.

+

Custom headers can be set with the headers field. Nevertheless, headers such as Content-Type and Authorization (or an eventual custom header used for carrying the authentication secret, set via the credentials option instead) will be superseded by the respective values defined for the fields contentType and sharedSecretRef.

+
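A sketch of a GET-by-POST metadata config combining the options above (endpoint, secret and parameter names are hypothetical):

spec:
  metadata:
    - name: resource-info
      http:
        endpoint: http://metadata-service.svc.cluster.local/query
        method: POST
        contentType: application/json
        bodyParameters:
          - name: user
            valueFrom:
              authJSON: auth.identity.sub # dynamic value fetched from the Authorization JSON
        sharedSecretRef:
          name: metadata-service-shared-secret # Kubernetes Secret storing the long-lived shared secret
          key: shared-secret
        credentials:
          in: authorization_header
          keySelector: Bearer # default location and prefix of the authentication secret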

OIDC UserInfo (metadata.userInfo)

+

Online fetching of OpenID Connect (OIDC) UserInfo data (phase ii of the Authorino Auth Pipeline), associated with an OIDC identity source configured and resolved in phase (i).

+

Apart from possibly complementing information of the JWT, fetching OpenID Connect UserInfo at request-time can be particularly useful for remotely checking the state of the session, as opposed to only verifying the JWT/JWS offline.

+

Implementation requires an OpenID Connect issuer (spec.identity.oidc) configured in the same AuthConfig.

+

The response returned by the OIDC server to the UserInfo request is appended (as JSON) to auth.metadata in the authorization JSON.

+
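
A minimal sketch, assuming an OIDC identity config named keycloak declared in the same AuthConfig; the identitySource field associating the two is assumed from the v1beta1 API:

+
identity:
+  - name: keycloak
+    oidc:
+      endpoint: https://my-idp.com/auth/realm
+metadata:
+  - name: userinfo
+    userInfo:
+      identitySource: keycloak # OIDC identity config resolved in phase (i)
+
+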

User-Managed Access (UMA) resource registry (metadata.uma)

+

User-Managed Access (UMA) is an OAuth-based protocol for resource owners to allow other users to access their resources. Since the UMA-compliant server is expected to know about the resources, Authorino includes a client that fetches resource data from the server and adds that as metadata of the authorization payload.

+

This enables the implementation of resource-level Attribute-Based Access Control (ABAC) policies. Attributes of the resource fetched in a UMA flow can be, e.g., the owner of the resource, or any business-level attributes stored in the UMA-compliant server.

+

A UMA-compliant server is an external authorization server (e.g., Keycloak) where the protected resources are registered. It can be as well the upstream API itself, as long as it implements the UMA protocol, with initial authentication by client_credentials grant to exchange for a Protected API Token (PAT).

+

UMA

+

It's important to notice that Authorino does NOT manage resources in the UMA-compliant server. As shown in the flow above, Authorino's UMA client only fetches data about the requested resources. Authorino exchanges client credentials for a Protected API Token (PAT), then queries for resources whose URIs match the path of the HTTP request (as passed to Authorino by the Envoy proxy) and fetches data of each matching resource.

+

The resource data is added as metadata of the authorization payload and passed as input for the configured authorization policies. All resources returned by the UMA-compliant server in the query by URI are passed along. They are available in the PDPs (authorization payload) as input.auth.metadata.custom-name => Array. (See The "Auth Pipeline" for details.)

+
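
A hedged sketch of a UMA metadata config, assuming a Keycloak realm as the UMA-compliant server and a Kubernetes Secret holding the client credentials exchanged for the PAT (all names are hypothetical):

+
metadata:
+  - name: resource-data
+    uma:
+      endpoint: https://my-keycloak.example.com/auth/realms/my-realm # issuer implementing the UMA protocol
+      credentialsRef:
+        name: uma-client-credentials # Secret expected to hold the client ID and client secret
+
+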

Authorization features (authorization)

+

JSON pattern-matching authorization rules (authorization.json)

+

Grant/deny access based on simple pattern-matching expressions ("rules") compared against values selected from the Authorization JSON.

+

Each expression is a tuple composed of: +- a selector, to fetch from the Authorization JSON – see Common feature: JSON paths for details about syntax; +- an operatoreq (equals), neq (not equal); incl (includes) and excl (excludes), for arrays; and matches, for regular expressions; +- a fixed comparable value

+

Rules can mix and combine literal expressions and references to expression sets ("named patterns") defined at the upper level of the AuthConfig spec. (See Common feature: Conditions)

+
spec:
+  authorization:
+    - name: my-simple-json-pattern-matching-policy
+      json:
+        rules: # All rules must match for access to be granted
+          - selector: auth.identity.email_verified
+            operator: eq
+            value: "true"
+          - patternRef: admin
+
+  patterns:
+    admin: # a named pattern that can be reused in other sets of rules or conditions
+      - selector: auth.identity.roles
+        operator: incl
+        value: admin
+
+

Open Policy Agent (OPA) Rego policies (authorization.opa)

+

You can model authorization policies in Rego language and add them as part of the protection of your APIs.

+

Policies can either be declared in-line in Rego language (inlineRego) or referenced as an HTTP endpoint from where Authorino will fetch the source code of the policy at reconciliation time (externalRegistry).

+

Policies pulled from external registries can be configured to be automatically refreshed (pulled again from the external registry), by setting the authorization.opa.externalRegistry.ttl field (given in seconds, default: 0 – i.e. auto-refresh disabled).

+

Authorino's built-in OPA module precompiles the policies during reconciliation of the AuthConfig and caches the precompiled policies for fast evaluation at runtime, where they receive the Authorization JSON as input.

+

OPA

+

An optional field allValues: boolean makes the values of all rules declared in the Rego document be returned in the OPA output after policy evaluation. When disabled (default), only the boolean value allow is returned. Values of internal rules of the Rego document can then be referenced in subsequent policies/phases of the Auth Pipeline.

+
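
A sketch combining the options described above (the registry endpoint is hypothetical):

+
authorization:
+  - name: my-opa-policy
+    opa:
+      inlineRego: | # policy declared in-line in Rego
+        allow {
+          input.context.request.http.method == "GET"
+        }
+      allValues: true # return the values of all Rego rules in the OPA output (default: false)
+  - name: my-external-policy
+    opa:
+      externalRegistry:
+        endpoint: http://my-policy-registry # source code of the policy fetched at reconciliation time
+        ttl: 300 # auto-refresh every 5 minutes (default: 0 – auto-refresh disabled)
+
+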

Kubernetes SubjectAccessReview (authorization.kubernetes)

+

Access control enforcement based on rules defined in the Kubernetes authorization system, i.e. Role, ClusterRole, RoleBinding and ClusterRoleBinding resources of Kubernetes RBAC.

+

Authorino issues a SubjectAccessReview (SAR) inquiry that checks with the underlying Kubernetes server whether the user can access a particular resource, resource kind or generic URL.

+

It supports resource attributes authorization check (parameters defined in the AuthConfig) and non-resource attributes authorization check (HTTP endpoint inferred from the original request). +- Resource attributes: adequate for permissions set at namespace level, defined in terms of common attributes of operations on Kubernetes resources (namespace, API group, kind, name, subresource, verb) +- Non-resource attributes: adequate for permissions set at cluster scope, defined for protected endpoints of a generic HTTP API (URL path + verb)

+

Example of Kubernetes role for resource attributes authorization:

+
apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: pet-reader
+rules:
+- apiGroups: ["pets.io"]
+  resources: ["pets"]
+  verbs: ["get"]
+
+

Example of Kubernetes cluster role for non-resource attributes authorization:

+
apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: pet-editor
+rules:
+- nonResourceURLs: ["/pets/*"]
+  verbs: ["put", "delete"]
+
+

Kubernetes' authorization policy configs look like the following in an Authorino AuthConfig:

+
authorization:
+  - name: kubernetes-rbac
+    kubernetes:
+      user:
+        valueFrom: # values of the parameter can be fixed (`value`) or fetched from the Authorization JSON (`valueFrom.authJSON`)
+          authJSON: auth.identity.metadata.annotations.userid
+
+      groups: [] # user groups to test for.
+
+      # for resource attributes permission checks; omit it to perform a non-resource attributes SubjectAccessReview with path and method/verb assumed from the original request
+      # if included, use the resource attributes, where the values for each parameter can be fixed (`value`) or fetched from the Authorization JSON (`valueFrom.authJSON`)
+      resourceAttributes:
+        namespace:
+          value: default
+        group:
+          value: pets.io # the api group of the protected resource to be checked for permissions for the user
+        resource:
+          value: pets # the resource kind
+        name:
+          valueFrom: { authJSON: context.request.http.path.@extract:{"sep":"/","pos":2} } # resource name – e.g., the {id} in `/pets/{id}`
+        verb:
+          valueFrom: { authJSON: context.request.http.method.@case:lower } # api operation – e.g., copying from the context to use the same http method of the request
+
+

user and the properties of resourceAttributes can be defined from fixed values or from values fetched from the Authorization JSON.

+

An optional array of groups can be set as well. When defined, it will be used in the SubjectAccessReview request.

+

Authzed/SpiceDB (authorization.authzed)

+

Check permission requests sent to a Google Zanzibar-based Authzed/SpiceDB instance, via gRPC.

+

Subject, resource and permission parameters can be set to static values or read from the Authorization JSON.

+
spec:
+  authorization:
+  - name: authzed
+    authzed:
+      endpoint: spicedb:50051
+      insecure: true # disables TLS
+      sharedSecretRef:
+        name: spicedb
+        key: token
+      subject:
+        kind:
+          value: blog/user
+        name:
+          valueFrom:
+            authJSON: auth.identity.sub
+      resource:
+        kind:
+          value: blog/post
+        name:
+          valueFrom:
+            authJSON: context.request.http.path.@extract:{"sep":"/","pos":2} # /posts/{id}
+      permission:
+        valueFrom:
+          authJSON: context.request.http.method
+
+

Keycloak Authorization Services (UMA-compliant Authorization API)

+ + + + +
Not implemented - In analysis
+ +

Online delegation of authorization to a Keycloak server.

+

Dynamic response features (response)

+

JSON injection (response.json)

+

User-defined dynamic JSON objects generated by Authorino in the response phase, from static or dynamic data of the auth pipeline, and passed back to the external authorization client within added HTTP headers or as Envoy Well Known Dynamic Metadata.

+

The following Authorino AuthConfig custom resource is an example that defines 3 dynamic JSON response items, where two items are returned to the client, stringified, in added HTTP headers, and the third is wrapped as Envoy Dynamic Metadata ("emitted", in Envoy terminology). The Envoy proxy can be configured to "pipe" dynamic metadata emitted by one filter into another filter – for example, from external authorization to rate limiting.

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  namespace: my-namespace
+  name: my-api-protection
+spec:
+  hosts:
+    - my-api.io
+  identity:
+    - name: edge
+      apiKey:
+        selector:
+          matchLabels:
+            authorino.kuadrant.io/managed-by: authorino
+      credentials:
+        in: authorization_header
+        keySelector: APIKEY
+  response:
+    - name: a-json-returned-in-a-header
+      wrapper: httpHeader # can be omitted
+      wrapperKey: x-my-custom-header # if omitted, name of the header defaults to the name of the config ("a-json-returned-in-a-header")
+      json:
+        properties:
+          - name: prop1
+            value: value1
+          - name: prop2
+            valueFrom:
+              authJSON: some.path.within.auth.json
+
+    - name: another-json-returned-in-a-header
+      wrapperKey: x-ext-auth-other-json
+      json:
+        properties:
+          - name: propX
+            value: valueX
+
+    - name: a-json-returned-as-envoy-metadata
+      wrapper: envoyDynamicMetadata
+      wrapperKey: auth-data
+      json:
+        properties:
+          - name: api-key-ns
+            valueFrom:
+              authJSON: auth.identity.metadata.namespace
+          - name: api-key-name
+            valueFrom:
+              authJSON: auth.identity.metadata.name
+
+

Plain (response.plain)

+

A simpler, yet more generalized form for extending the authorization response, for header mutation and Envoy Dynamic Metadata, based on plain text values.

+

The value can be static:

+
response:
+- name: x-auth-service
+  plain:
+    value: Authorino
+
+

or fetched dynamically from the Authorization JSON (which includes support for interpolation):

+
- name: x-username
+  plain:
+    valueFrom:
+      authJSON: auth.identity.username
+
+

Festival Wristband tokens (response.wristband)

+

Festival Wristbands are signed OpenID Connect JSON Web Tokens (JWTs) issued by Authorino at the end of the auth pipeline and passed back to the client, typically in an added HTTP response header. It is an opt-in feature that can be used to implement Edge Authentication Architecture (EAA) and enable token normalization. Authorino wristbands include minimal standard JWT claims such as iss, iat, and exp, and optional user-defined custom claims, whose values can be static or dynamically fetched from the authorization JSON.

+

The Authorino AuthConfig custom resource below sets an API protection that issues a wristband after a successful authentication via API key. Apart from standard JWT claims, the wristband contains 2 custom claims: a static value aud=internal and a dynamic value born that fetches from the authorization JSON the date/time of creation of the secret that represents the API key used to authenticate.

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  namespace: my-namespace
+  name: my-api-protection
+spec:
+  hosts:
+    - my-api.io
+  identity:
+    - name: edge
+      apiKey:
+        selector:
+          matchLabels:
+            authorino.kuadrant.io/managed-by: authorino
+      credentials:
+        in: authorization_header
+        keySelector: APIKEY
+  response:
+    - name: my-wristband
+      wristband:
+        issuer: https://authorino-oidc.default.svc:8083/my-namespace/my-api-protection/my-wristband
+        customClaims:
+          - name: aud
+            value: internal
+          - name: born
+            valueFrom:
+              authJSON: auth.identity.metadata.creationTimestamp
+        tokenDuration: 300
+        signingKeyRefs:
+          - name: my-signing-key
+            algorithm: ES256
+          - name: my-old-signing-key
+            algorithm: RS256
+      wrapper: httpHeader # can be omitted
+      wrapperKey: x-ext-auth-wristband # any HTTP header name desired – defaults to the name of the response config ("my-wristband")
+
+

The signing key names listed in signingKeyRefs must match the names of Kubernetes Secret resources created in the same namespace, where each secret contains a key.pem entry that holds the PEM-formatted private key to be used to sign the wristbands issued. The first key in this list will be used to sign the wristbands, while the others are kept to support key rotation.

+
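
A hedged sketch of one such Kubernetes Secret (the PEM content is elided):

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: my-signing-key # must match a name listed in signingKeyRefs
+  namespace: my-namespace # same namespace as the AuthConfig
+stringData:
+  key.pem: | # PEM-formatted private key used to sign the wristbands
+    -----BEGIN EC PRIVATE KEY-----
+    ...
+    -----END EC PRIVATE KEY-----
+type: Opaque
+
+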

For each protected API configured for the Festival Wristband issuing, Authorino exposes the following OpenID Connect Discovery well-known endpoints (available for requests within the cluster): +- OpenID Connect configuration:
+ https://authorino-oidc.default.svc:8083/{namespace}/{api-protection-name}/{response-config-name}/.well-known/openid-configuration +- JSON Web Key Set (JWKS) well-known endpoint:
+ https://authorino-oidc.default.svc:8083/{namespace}/{api-protection-name}/{response-config-name}/.well-known/openid-connect/certs

+

Extra: Response wrappers (wrapper and wrapperKey)

+

Added HTTP headers

+

By default, Authorino dynamic responses (injected JSON and Festival Wristband tokens) are passed back to Envoy, stringified, as injected HTTP headers. This can be made explicit by setting the wrapper property of the response config to httpHeader.

+

The property wrapperKey controls the name of the HTTP header, defaulting to the name of the dynamic response config when omitted.

+
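
For instance, a minimal sketch (the header name is hypothetical):

+
response:
+  - name: x-auth-data
+    wrapper: httpHeader # default – can be omitted
+    wrapperKey: x-auth-data # name of the injected HTTP header (defaults to the config name)
+    json:
+      properties:
+        - name: username
+          valueFrom:
+            authJSON: auth.identity.username
+
+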

Envoy Dynamic Metadata

+

Authorino dynamic responses (injected JSON and Festival Wristband tokens) can be passed back to Envoy in the form of Envoy Dynamic Metadata. To do so, set the wrapper property of the response config to envoyDynamicMetadata.

+

A response config with wrapper=envoyDynamicMetadata and wrapperKey=auth-data in the AuthConfig can be configured in the Envoy route or virtual host setting to be passed to the rate limiting filter as below. The metadata content is expected to be a dynamic JSON injected by Authorino containing { "auth-data": { "api-key-ns": string, "api-key-name": string } }. (See the response config a-json-returned-as-envoy-metadata in the example for the JSON injection feature above.)

+
# Envoy config snippet to inject `user_namespace` and `username` rate limit descriptors from metadata returned by Authorino
+rate_limits:
+- actions:
+    - metadata:
+        metadata_key:
+          key: "envoy.filters.http.ext_authz"
+          path:
+          - key: auth-data
+          - key: api-key-ns
+        descriptor_key: user_namespace
+    - metadata:
+        metadata_key:
+          key: "envoy.filters.http.ext_authz"
+          path:
+          - key: auth-data
+          - key: api-key-name
+        descriptor_key: username
+
+

Extra: Custom denial status (denyWith)

+

By default, Authorino will inform Envoy to respond with 401 Unauthorized or 403 Forbidden, respectively, when identity verification (phase i of the Auth Pipeline) or authorization (phase ii) fails. These can be customized by specifying spec.denyWith in the AuthConfig.

+
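
A hedged sketch of custom denial statuses, assuming the fields code, message and headers of the v1beta1 API (values are illustrative):

+
spec:
+  denyWith:
+    unauthenticated: # overrides the default 401 response
+      code: 302
+      headers:
+        - name: Location
+          value: https://my-app.io/login
+    unauthorized: # overrides the default 403 response
+      code: 404
+      message:
+        value: Resource not found
+
+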

Callbacks (callbacks)

+

HTTP endpoints (callbacks.http)

+

Sends requests to specified HTTP endpoints at the end of the auth pipeline.

+

The scheme of the http field is the same as of metadata.http.

+

Example:

+
spec:
+  identity: []
+  authorization: []
+
+  callbacks:
+    - name: log
+      http:
+        endpoint: http://logsys
+        method: POST
+        body:
+          valueFrom:
+            authJSON: |
+              \{"requestId":context.request.http.id,"username":"{auth.identity.username}","authorizationResult":{auth.authorization}\}
+    - name: important-forbidden
+      when:
+        - selector: auth.authorization.important-policy
+          operator: eq
+          value: "false"
+      http:
+        endpoint: "http://monitoring/important?forbidden-user={auth.identity.username}"
+
+

Common feature: Priorities

+

Priorities allow setting the sequence of execution for blocks of concurrent evaluators within phases of the Auth Pipeline.

+

Evaluators of the same priority execute concurrently to each other "in a block". After syncing that block (i.e. after all evaluators of the block have returned), the next block of evaluator configs, of consecutive priority, is triggered.

+

Use cases for priorities are: +1. Saving expensive tasks to be triggered only when there's a high chance of the pipeline returning immediately after a less expensive one finishes executing – e.g. + - an identity config that calls an external IdP to verify a rarely used kind of token, compared to verifying the JWTs preferred by most users of the service; + - an authorization policy that performs some quick checks first, such as verifying allowed paths, and, only if those pass, moves on to the evaluation of a more expensive policy. +2. Establishing dependencies between evaluators – e.g. + - an external metadata request that needs to wait until a previous metadata source responds first (in order to use data from that response)

+

Priorities can be set using the priority property available in all evaluator configs of all phases of the Auth Pipeline (identity, metadata, authorization and response). The lower the number, the higher the priority. By default, all evaluators have priority 0 (i.e. highest priority).

+

Consider the following example to understand how priorities work:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+    - talker-api
+  identity:
+    - name: tier-1
+      priority: 0
+      apiKey:
+        selector:
+          matchLabels:
+            tier: "1"
+    - name: tier-2
+      priority: 1
+      apiKey:
+        selector:
+          matchLabels:
+            tier: "2"
+    - name: tier-3
+      priority: 1
+      apiKey:
+        selector:
+          matchLabels:
+            tier: "3"
+  metadata:
+    - name: first
+      http:
+        endpoint: http://talker-api:3000
+        method: GET
+    - name: second
+      priority: 1
+      http:
+        endpoint: http://talker-api:3000/first_uuid={auth.metadata.first.uuid}
+        method: GET
+  authorization:
+    - name: allowed-endpoints
+      when:
+        - selector: context.request.http.path
+          operator: neq
+          value: /hi
+        - selector: context.request.http.path
+          operator: neq
+          value: /hello
+        - selector: context.request.http.path
+          operator: neq
+          value: /aloha
+        - selector: context.request.http.path
+          operator: neq
+          value: /ciao
+      json:
+        rules:
+          - selector: deny
+            operator: eq
+            value: "true"
+    - name: more-expensive-policy # no point in evaluating this one if it's not an allowed endpoint
+      priority: 1
+      opa:
+        inlineRego: |
+          allow { true }
+  response:
+    - name: x-auth-data
+      json:
+        properties:
+          - name: tier
+            valueFrom:
+              authJSON: auth.identity.metadata.labels.tier
+          - name: first-uuid
+            valueFrom:
+              authJSON: auth.metadata.first.uuid
+          - name: second-uuid
+            valueFrom:
+              authJSON: auth.metadata.second.uuid
+          - name: second-path
+            valueFrom:
+              authJSON: auth.metadata.second.path
+
+

For the AuthConfig above,

+
    +
  • +

    Identity configs tier-2 and tier-3 (priority 1) will only trigger (concurrently) in case tier-1 (priority 0) fails to validate the authentication token first. (This behavior happens without prejudice to context canceling between concurrent evaluators – i.e. evaluators that are triggered concurrently with one another, such as tier-2 and tier-3, still cancel each other's context if any of them succeeds in validating the token first.)

    +
  • +
  • +

    Metadata source second (priority 1) uses the response of the request issued by metadata source first (priority 0), so it will wait for first to finish by triggering only in the second block.

    +
  • +
  • +

    Authorization policy allowed-endpoints (priority 0) is considered to be a lot less expensive than more-expensive-policy (priority 1) and has a high chance of denying access to the protected service (if the path is not one of the allowed endpoints). By setting different priorities for these policies we ensure the more expensive policy is triggered in sequence after the less expensive one, instead of concurrently.

    +
  • +
+

Common feature: Conditions (when)

+

Conditions, named when in the AuthConfig API, are sets of expressions (JSON patterns) that, whenever included, must evaluate to true against the Authorization JSON for the scope where the expressions are defined to be enforced. If any of the expressions in the set of conditions for a given scope does not match, Authorino will skip that scope in the Auth Pipeline.

+

The scope for a set of when conditions can be the entire AuthConfig ("top-level conditions") or a particular evaluator of any phase of the auth pipeline.

+

Each expression is a tuple composed of: +- a selector, to fetch from the Authorization JSON – see Common feature: JSON paths for details about syntax; +- an operatoreq (equals), neq (not equal); incl (includes) and excl (excludes), for arrays; and matches, for regular expressions; +- a fixed comparable value

+

Literal expressions and references to expression sets (patterns, defined at the upper level of the AuthConfig spec) can be listed, mixed and combined in when condition sets.

+

Conditions can be used, for example:

+

i) to skip an entire AuthConfig based on the context:

+
spec:
+  when: # no authn/authz required on requests to /status
+  - selector: context.request.http.path
+    operator: neq
+    value: /status
+
+

ii) to skip parts of an AuthConfig (i.e. a specific evaluator):

+
spec:
+  metadata:
+  - name: metadata-source
+    http:
+      endpoint: https://my-metadata-source.io
+    when: # only fetch the external metadata if the context is HTTP method other than OPTIONS
+    - selector: context.request.http.method
+      operator: neq
+      value: OPTIONS
+
+

iii) to enforce a particular evaluator only in certain contexts (really the same as above, though for a different use case):

+
spec:
+  identity:
+  - name: authn-meth-1
+    apiKey: {...} # this authn method only valid for POST requests to /foo[/*]
+    when:
+    - selector: context.request.http.path
+      operator: matches
+      value: ^/foo(/.*)?$
+    - selector: context.request.http.method
+      operator: eq
+      value: POST
+
+  - name: authn-meth-2
+    oidc: {...}
+
+

iv) to avoid repetition while defining patterns for conditions:

+
spec:
+  patterns:
+    a-pet: # a named pattern that can be reused in sets of conditions
+    - selector: context.request.http.path
+      operator: matches
+      value: ^/pets/\d+(/.*)$
+
+  metadata:
+  - name: pets-info
+    when:
+    - patternRef: a-pet
+    http:
+      endpoint: https://pets-info.io?petId={context.request.http.path.@extract:{"sep":"/","pos":2}}
+
+  authorization:
+  - name: pets-owners-only
+    when:
+    - patternRef: a-pet
+    opa:
+      inlineRego: |
+        allow { input.metadata["pets-info"].ownerid == input.auth.identity.userid }
+
+

v) mixing and combining literal expressions and refs:

+
spec:
+  patterns:
+    foo:
+    - selector: context.request.http.path
+      operator: eq
+      value: /foo
+
+  when: # unauthenticated access to /foo always granted
+  - patternRef: foo
+  - selector: context.request.http.headers.authorization
+    operator: eq
+    value: ""
+
+  authorization:
+  - name: my-policy-1
+    when: # authenticated access to /foo controlled by policy
+    - patternRef: foo
+    json: {...}
+
+

vi) to avoid evaluating unnecessary identity checks when the user can indicate the preferred authentication method (again the pattern of skipping based upon the context):

+
spec:
+  identity:
+  - name: jwt
+    when:
+    - selector: context.request.http.headers.authorization
+      operator: matches
+      value: JWT .+
+    oidc: {...}
+
+  - name: api-key
+    when:
+    - selector: context.request.http.headers.authorization
+      operator: matches
+      value: APIKEY .+
+    apiKey: {...}
+
+

Common feature: Caching (cache)

+

Objects resolved at runtime in an Auth Pipeline can be cached "in-memory", avoiding re-evaluation in subsequent requests until the cache entry expires. A lookup cache key and a TTL can be set individually for any evaluator config in an AuthConfig.

+

Each cache config induces a completely independent cache table (or "cache namespace"). Consequently, different evaluator configs can use the same cache key and there will be no collision between entries from different evaluators.

+

E.g.:

+
spec:
+  hosts:
+  - my-api.io
+
+  identity: [...]
+
+  metadata:
+  - name: external-metadata
+    http:
+      endpoint: http://my-external-source?search={context.request.http.path}
+    cache:
+      key:
+        valueFrom: { authJSON: context.request.http.path }
+      ttl: 300
+
+  authorization:
+  - name: complex-policy
+    opa:
+      externalRegistry:
+        endpoint: http://my-policy-registry
+    cache:
+      key:
+        valueFrom:
+          authJSON: "{auth.identity.group}-{context.request.http.method}-{context.request.http.path}"
+      ttl: 60
+
+

The example above sets caching for the 'external-metadata' metadata config and for the 'complex-policy' authorization policy. In the case of 'external-metadata', the cache key is the path of the original HTTP request being authorized by Authorino (fetched dynamically from the Authorization JSON); i.e., after obtaining a metadata object from the external source for a given contextual HTTP path for the first time, whenever that same HTTP path repeats in a subsequent request, Authorino will use the cached object instead of sending a new request to the external source of metadata. After 5 minutes (300 seconds), the cache entry will expire and Authorino will fetch again from the source if requested.

+

As for the 'complex-policy' authorization policy, the cache key is a string composed of the 'group' the identity belongs to, the method of the HTTP request and the path of the HTTP request. Whenever these repeat, Authorino will use the result of the policy that was previously evaluated and cached. Cache entries in this namespace expire after 60 seconds.

+

Notes on evaluator caching

+

Capacity - By default, each cache namespace is limited to 1 MB. Entries will be evicted following a First-In-First-Out (FIFO) policy to release space. The individual capacity of cache namespaces is set at the level of the Authorino instance (via the --evaluator-cache-size command-line flag or the spec.evaluatorCacheSize field of the Authorino CR).

+
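
For example, a sketch of raising the per-namespace cache capacity in the Authorino CR (assuming the field takes the size in MB, consistently with the 1 MB default):

+
apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  evaluatorCacheSize: 2 # capacity of each evaluator cache namespace, in MB (assumption about the unit)
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+
+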

Usage - Avoid caching objects whose evaluation is considered to be relatively cheap. Examples of operations associated with Authorino auth features that are usually NOT worth caching: validation of JSON Web Tokens (JWT), Kubernetes TokenReviews and SubjectAccessReviews, API key validation, simple JSON pattern-matching authorization rules, simple OPA policies. Examples of operations where caching may be desired: OAuth2 token introspection, fetching of metadata from external sources (via HTTP request), complex OPA policies.

+

Common feature: Metrics (metrics)

+

By default, Authorino will only export metrics down to the level of the AuthConfig. Deeper metrics, at the level of each evaluator within an AuthConfig, can be activated by setting the common field metrics: true in the evaluator config.

+

E.g.:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-authconfig
+  namespace: my-ns
+spec:
+  metadata:
+  - name: my-external-metadata
+    http:
+      endpoint: http://my-external-source?search={context.request.http.path}
+    metrics: true
+
+

The above will enable the metrics auth_server_evaluator_duration_seconds (histogram) and auth_server_evaluator_total (counter) with labels namespace="my-ns", authconfig="my-authconfig", evaluator_type="METADATA_GENERIC_HTTP" and evaluator_name="my-external-metadata".

+

The same pattern works for other types of evaluators. Find below the list of all types and corresponding label constant used in the metric:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Evaluator typeMetric's evaluator_type label
identity.apiKeyIDENTITY_APIKEY
identity.kubernetesIDENTITY_KUBERNETES
identity.oidcIDENTITY_OIDC
identity.oauth2IDENTITY_OAUTH2
identity.mtlsIDENTITY_MTLS
identity.hmacIDENTITY_HMAC
identity.plainIDENTITY_PLAIN
identity.anonymousIDENTITY_NOOP
metadata.httpMETADATA_GENERIC_HTTP
metadata.userInfoMETADATA_USERINFO
metadata.umaMETADATA_UMA
authorization.jsonAUTHORIZATION_JSON
authorization.opaAUTHORIZATION_OPA
authorization.kubernetesAUTHORIZATION_KUBERNETES
response.jsonRESPONSE_JSON
response.wristbandRESPONSE_WRISTBAND
+

Metrics at the level of the evaluators can also be enforced for an entire Authorino instance, by setting the --deep-metrics-enabled command-line flag. In this case, regardless of the value of the field spec.(identity|metadata|authorization|response).metrics in the AuthConfigs, individual metrics for all evaluators of all AuthConfigs will be exported.

+

For more information about metrics exported by Authorino, see Observability.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/getting-started/index.html b/authorino/docs/getting-started/index.html new file mode 100644 index 00000000..c0d7c243 --- /dev/null +++ b/authorino/docs/getting-started/index.html @@ -0,0 +1,2589 @@ + + + + + + + + + + + + + + + + + + + + + + + + Getting Started - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Getting started

+

This page covers requirements and instructions to deploy Authorino on a Kubernetes cluster, the steps to declare, apply and try out a protection layer of authentication and authorization over your service, and how to clean up and completely uninstall.

+

If you prefer learning with an example, check out our Hello World.

+ +

Requirements

+

Platform requirements

+

These are the platform requirements to use Authorino:

+
    +
  • Kubernetes server (recommended v1.20 or later), with permission to create Kubernetes Custom Resource Definitions (CRDs) (for bootstrapping Authorino and Authorino Operator)
  • +
+
+ Alternative: K8s distros and platforms + + As an alternative to upstream Kubernetes, you should be able to use any other Kubernetes distribution or Kubernetes Management Platform (KMP) with support for Kubernetes Custom Resource Definitions (CRDs) and custom controllers, such as Red Hat OpenShift, IBM Cloud Kubernetes Service (IKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). +
+ +
    +
  • Envoy proxy (recommended v1.19 or later), to wire up Upstream services (i.e. the services to be protected with Authorino) and external authorization filter (Authorino) for integrations based on the reverse-proxy architecture - example
  • +
+
+ Alternative: Non-reverse-proxy integration + + Technically, any client that implements Envoy's external authorization gRPC protocol should be compatible with Authorino. Nevertheless, for integrations based on the reverse-proxy architecture, we strongly recommend that you leverage Envoy alongside Authorino. +
+ +

Feature-specific requirements

+

A few examples are:

+
    +
  • +

    For OpenID Connect, make sure you have access to an identity provider (IdP) and an authority that can issue ID tokens (JWTs). Check out Keycloak which can solve both and connect to external identity sources and user federation like LDAP.

    +
  • +
  • +

    For Kubernetes authentication tokens, platform support for the TokenReview and SubjectAccessReview APIs of Kubernetes is required. In case you want to be able to request access tokens for clients running outside the cluster, you may also want to check out the requisites for using the Kubernetes TokenRequest API (GA in v1.20).

    +
  • +
  • +

    For User-Managed Access (UMA) resource data, you will need a UMA-compliant server running as well. This can be an implementation of the UMA protocol by each upstream API itself or (more typically) an external server that knows about the resources. Again, Keycloak can be a good fit here as well. Just keep in mind that, whatever resource server you choose, state-changing actions performed in the upstream APIs or by other parties will have to be reflected in the resource server. Authorino will not do that for you.

    +
  • +
+

Check out the Feature specification page for more feature-specific requirements.

+

Installation

+

Step: Install the Authorino Operator

+

The simplest way to install the Authorino Operator is by applying the manifest bundle:

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

The above will install the latest build of the Authorino Operator and the latest version of the manifests (CRDs and RBAC), which by default also points to the latest build of Authorino, all based on the main branches of each component. To install a stable released version of the Operator, which in turn defaults to its latest compatible stable release of Authorino, replace main with the tag of a proper release of the Operator, e.g. 'v0.2.0'.

+

Alternatively, you can deploy the Authorino Operator using the Operator Lifecycle Manager bundles. For instructions, check out Installing via OLM.

+

Step: Request an Authorino instance

+

Choose either cluster-wide or namespaced deployment mode and whether you want TLS termination enabled for the Authorino endpoints (gRPC authorization, raw HTTP authorization, and OIDC Festival Wristband Discovery listeners), and follow the corresponding instructions below.

+

The instructions here are for the centralized gateway or centralized authorization service architectures. Check out the Topologies section of the docs for alternatively running Authorino in a sidecar container.

+
+ Cluster-wide (with TLS) + + Create the namespace: +
kubectl create namespace authorino
+
+ + Deploy [cert-manager](https://github.com/jetstack/cert-manager) (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace or cert-manager is installed and running in the cluster): +
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml
+
+ + Create the TLS certificates (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace): +
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed "s/\$(AUTHORINO_INSTANCE)/authorino/g;s/\$(NAMESPACE)/authorino/g" | kubectl -n authorino apply -f -
+
+ + Deploy Authorino: +
kubectl -n authorino apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  replicas: 1
+  clusterWide: true
+  listener:
+    tls:
+      enabled: true
+      certSecretRef:
+        name: authorino-server-cert
+  oidcServer:
+    tls:
+      enabled: true
+      certSecretRef:
+        name: authorino-oidc-server-cert
+EOF
+
+
+ +
+ Cluster-wide (without TLS) + +
kubectl create namespace authorino
+kubectl -n authorino apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  image: quay.io/kuadrant/authorino:latest
+  replicas: 1
+  clusterWide: true
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+
+ +
+ Namespaced (with TLS) + + Create the namespace: +
kubectl create namespace myapp
+
+ + Deploy [cert-manager](https://github.com/jetstack/cert-manager) (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace or cert-manager is installed and running in the cluster): +
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml
+
+ + Create the TLS certificates (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace): +
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed "s/\$(AUTHORINO_INSTANCE)/authorino/g;s/\$(NAMESPACE)/myapp/g" | kubectl -n myapp apply -f -
+
+ + Deploy Authorino: +
kubectl -n myapp apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  image: quay.io/kuadrant/authorino:latest
+  replicas: 1
+  clusterWide: false
+  listener:
+    tls:
+      enabled: true
+      certSecretRef:
+        name: authorino-server-cert
+  oidcServer:
+    tls:
+      enabled: true
+      certSecretRef:
+        name: authorino-oidc-server-cert
+EOF
+
+
+ +
+ Namespaced (without TLS) + +
kubectl create namespace myapp
+kubectl -n myapp apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  image: quay.io/kuadrant/authorino:latest
+  replicas: 1
+  clusterWide: false
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+
+ +

Protect a service

+

The most typical integration to protect services with Authorino is by putting the service (upstream) behind a reverse-proxy or API gateway, enabled with an authorization filter that ensures all requests to the service are first checked with the authorization server (Authorino).

+

To do that, make sure you have your upstream service deployed and running, usually in the same Kubernetes server where you installed Authorino. Then, setup an Envoy proxy and create an Authorino AuthConfig for your service.

+

Authorino exposes 2 interfaces to serve the authorization requests: +- a gRPC interface that implements Envoy's External Authorization protocol; +- a raw HTTP authorization interface, suitable for using Authorino with Kubernetes ValidatingWebhook, for Envoy external authorization via HTTP, and other integrations (e.g. other proxies).

+

To use Authorino as a simple satellite (sidecar) Policy Decision Point (PDP), applications can integrate directly via any of these interfaces. By integrating via a proxy or API gateway instead, the combination makes Authorino perform as an external Policy Enforcement Point (PEP), completely decoupled from the application.

+

Life cycle

+

API protection life cycle

+

Step: Setup Envoy

+

To configure Envoy for proxying requests targeting the upstream service and authorizing with Authorino, setup an Envoy configuration that enables Envoy's external authorization HTTP filter. Store the configuration in a ConfigMap.

+

These are the important bits in the Envoy configuration to activate Authorino:

+
static_resources:
+  listeners:
+  - address: {} # TCP socket address and port of the proxy
+    filter_chains:
+    - filters:
+      - name: envoy.http_connection_manager
+        typed_config:
+          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
+          route_config: {} # routing configs - virtual host domain and endpoint matching patterns and corresponding upstream services to redirect the traffic
+          http_filters:
+          - name: envoy.filters.http.ext_authz # the external authorization filter
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
+              transport_api_version: V3
+              failure_mode_allow: false # ensures only authenticated and authorized traffic goes through
+              grpc_service:
+                envoy_grpc:
+                  cluster_name: authorino
+                timeout: 1s
+  clusters:
+  - name: authorino
+    connect_timeout: 0.25s
+    type: strict_dns
+    lb_policy: round_robin
+    http2_protocol_options: {}
+    load_assignment:
+      cluster_name: authorino
+      endpoints:
+      - lb_endpoints:
+        - endpoint:
+            address:
+              socket_address:
+                address: authorino-authorino-authorization # name of the Authorino service deployed – it can be the fully qualified name with `.<namespace>.svc.cluster.local` suffix (e.g. `authorino-authorino-authorization.myapp.svc.cluster.local`)
+                port_value: 50051
+    transport_socket: # in case TLS termination is enabled in Authorino; omit it otherwise
+      name: envoy.transport_sockets.tls
+      typed_config:
+        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
+        common_tls_context:
+          validation_context:
+            trusted_ca:
+              filename: /etc/ssl/certs/authorino-ca-cert.crt
+
+

For a complete Envoy ConfigMap containing an upstream API protected with Authorino, with TLS enabled and the option for rate limiting with Limitador, plus a webapp served under the same domain of the protected API, check out this example.

+

After creating the ConfigMap with the Envoy configuration, create an Envoy Deployment and Service. E.g.:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: envoy
+  labels:
+    app: envoy
+spec:
+  selector:
+    matchLabels:
+      app: envoy
+  template:
+    metadata:
+      labels:
+        app: envoy
+    spec:
+      containers:
+        - name: envoy
+          image: envoyproxy/envoy:v1.19-latest
+          command: ["/usr/local/bin/envoy"]
+          args:
+            - --config-path /usr/local/etc/envoy/envoy.yaml
+            - --service-cluster front-proxy
+            - --log-level info
+            - --component-log-level filter:trace,http:debug,router:debug
+          ports:
+            - name: web
+              containerPort: 8000 # matches the address of the listener in the envoy config
+          volumeMounts:
+            - name: config
+              mountPath: /usr/local/etc/envoy
+              readOnly: true
+            - name: authorino-ca-cert # in case TLS termination is enabled in Authorino; omit it otherwise
+              subPath: ca.crt
+              mountPath: /etc/ssl/certs/authorino-ca-cert.crt
+              readOnly: true
+      volumes:
+        - name: config
+          configMap:
+            name: envoy
+            items:
+              - key: envoy.yaml
+                path: envoy.yaml
+        - name: authorino-ca-cert # in case TLS termination is enabled in Authorino; omit it otherwise
+          secret:
+            defaultMode: 420
+            secretName: authorino-ca-cert
+  replicas: 1
+EOF
+
+
kubectl -n myapp apply -f -<<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: envoy
+spec:
+  selector:
+    app: envoy
+  ports:
+    - name: web
+      port: 8000
+      protocol: TCP
+EOF
+
+

Step: Apply an AuthConfig

+

Check out the docs for a full description of Authorino's AuthConfig Custom Resource Definition (CRD) and its features.

+

For examples based on specific use-cases, check out the User guides.

+

For authentication based on OpenID Connect (OIDC) JSON Web Tokens (JWT), plus one simple JWT claim authorization check, a typical AuthConfig custom resource looks like the following:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: my-api-protection
+spec:
+  hosts: # any hosts that resolve to the envoy service and envoy routing config where the external authorization filter is enabled
+    - my-api.io # north-south traffic through a Kubernetes `Ingress` or OpenShift `Route`
+    - my-api.myapp.svc.cluster.local # east-west traffic (between applications within the cluster)
+  identity:
+    - name: idp-users
+      oidc:
+        endpoint: https://my-idp.com/auth/realm
+  authorization:
+    - name: check-claim
+      json:
+        rules:
+          - selector: auth.identity.group
+            operator: eq
+            value: allowed-users
+EOF
+
+

After applying the AuthConfig, consumers of the protected service should be able to start sending requests.

+

Clean-up

+

Remove protection

+

Delete the AuthConfig:

+
kubectl -n myapp delete authconfig/my-api-protection
+
+

Decommission the Authorino instance:

+
kubectl -n myapp delete authorino/authorino
+
+

Uninstall

+

To completely remove Authorino CRDs, run from the Authorino Operator directory:

+
make uninstall
+
+

Next steps

+
    +
  1. Read the docs. The Architecture page and the Features page are good starting points to learn more about how Authorino works and its functionalities.
  2. +
  3. Check out the User guides for several examples of AuthConfigs based on specific use-cases
  4. +
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/index.html b/authorino/docs/index.html new file mode 100644 index 00000000..df0ae49f --- /dev/null +++ b/authorino/docs/index.html @@ -0,0 +1,2011 @@ + + + + + + + + + + + + + + + + + + + + Documentation - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/terminology/index.html b/authorino/docs/terminology/index.html new file mode 100644 index 00000000..3da23118 --- /dev/null +++ b/authorino/docs/terminology/index.html @@ -0,0 +1,2073 @@ + + + + + + + + + + + + + + + + + + + + Terminology - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Terminology

+

Here we define some terms that are used in the project, with the goal of avoiding confusion and facilitating more +accurate conversations related to Authorino.

+

If you see terms used that are not here (or are used in place of terms here) please consider contributing a definition +to this doc with a PR, or modifying the use elsewhere to align with these terms.

+

Terms

+

Access token
+Type of temporary password (security token), tied to an authenticated identity, issued by an auth server upon request from either the identity subject itself or a registered auth client known by the auth server, and that delegates to a party powers to operate on behalf of that identity before a resource server; it can be formatted as an opaque data string or as an encoded JSON Web Token (JWT).

+

Application Programming Interface (API)
+Interface that defines interactions between multiple software applications; (in HTTP communication) set of endpoints and specification to expose resources hosted by a resource server, to be consumed by client applications; the access facade of a resource server.

+

Attribute-based Access Control (ABAC)
+Authorization model that grants/denies access to resources based on evaluation of authorization policies which combine attributes together (from claims, from the request, from the resource, etc).

+

Auth
+Usually employed as shorthand for authentication and authorization together (AuthN/AuthZ).

+

Auth client
+Application client (software) that uses an auth server, either in the process of authenticating and/or authorizing identity subjects (including self) who want to consume resources from a resources server or auth server.

+

Auth server
+Server where auth clients, users, roles, scopes, resources, policies and permissions can be stored and managed.

+

Authentication (AuthN)
+Process of verifying that a given credential belongs to a claimed-to-be identity; usually resulting in the issuing of an access token.

+

Authorization (AuthZ)
+Process of granting (or denying) access over a resource to a party based on the set of authorization rules, policies and/or permissions enforced.

+

Authorization header
+HTTP request header frequently used to carry credentials to authenticate a user in an HTTP communication, like in requests sent to an API; alternatives usually include credentials carried in another (custom) HTTP header, query string parameter or HTTP cookie.

+

Capability
+Usually employed to refer to a management feature of a Kubernetes-native system, based on the definition and use of Kubernetes Custom Resources (CRDs and CRs), that enables that system to reach one of the following “capability levels”: Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, Auto Pilot.

+

Claim
+Attribute packed in a security token which represents a claim that one who bears the token is making about an entity, usually an identity subject.

+

Client ID
+Unique identifier of an auth client within an auth server domain (or auth server realm).

+

Client secret
+Password presented by auth clients together with their Client IDs while authenticating with an auth server, either when requesting access tokens to be issued or when consuming services from the auth servers in general.

+

Delegation
+Process of granting a party (usually an auth client) powers to act, often with limited scope, on behalf of an identity, to access resources from a resource server. See also OAuth2.

+

Hash-based Message Authentication Code (HMAC)
+Specific type of message authentication code (MAC) that involves a cryptographic hash function and a shared secret cryptographic key; it can be used to verify the authenticity of a message and therefore as an authentication method.

+

Identity
+Set of properties that qualifies a subject as a strong identifiable entity (usually a user), who can be authenticated by an auth server. See also Claims.

+

Identity and Access Management (IAM) system
+Auth system that implements and/or connects with sources of identity (IdP) and offers interfaces for managing access (authorization policies and permissions). See also Auth server.

+

Identity Provider (IdP)
+Source of identity; it can be a feature of an auth server or external source connected to an auth server.

+

ID token
+Special type of access token; an encoded JSON Web Token (JWT) that packs claims about an identity.

+

JSON Web Token (JWT)
+JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.

+

JSON Web Signature (JWS)
+Standard for signing arbitrary data, especially JSON Web Tokens (JWT).

+

JSON Web Key Set (JWKS)
+Set of keys containing the public keys used to verify any JSON Web Token (JWT).

+

Keycloak
+Open source auth server to allow single sign-on with identity and access management.

+

Lightweight Directory Access Protocol (LDAP)
+Open standard for distributed directory information services for sharing of information about users, systems, networks, services and applications.

+

Mutual Transport Layer Security (mTLS)
+Protocol for the mutual authentication of client-server communication, i.e., the client authenticates the server and the server authenticates the client, based on the acceptance of the X.509 certificates of each party.

+

OAuth 2.0 (OAuth2)
+Industry-standard protocol for delegation.

+

OpenID Connect (OIDC)
+Simple identity verification (authentication) layer built on top of the OAuth2 protocol.

+

Open Policy Agent (OPA)
+Authorization policy agent that enables the usage of declarative authorization policies written in Rego language.

+

Opaque token
+Security token devoid of explicit meaning (e.g. a random string); it requires the usage of a lookup mechanism to be translated into a meaningful set of claims representing an identity.

+

Permission
+Association between a protected resource and the authorization policies that must be evaluated to determine whether access should be granted; e.g. <user|group|role> CAN DO <action> ON RESOURCE <X>.

+

Policy
+Rule or condition (authorization policy) that must be satisfied to grant access to a resource; strongly related to the different access control mechanisms (ACMs) and strategies one can use to protect resources, e.g. attribute-based access control (ABAC), role-based access control (RBAC), context-based access control, user-based access control (UBAC).

+

Policy Administration Point (PAP)
+Set of UIs and APIs to manage resources servers, resources, scopes, policies and permissions; it is where the auth system is configured.

+

Policy Decision Point (PDP)
+Where the authorization requests are sent, with permissions being requested, and authorization policies are evaluated accordingly.

+

Policy Enforcement Point (PEP)
+Where the authorization is effectively enforced, usually at the resource server or at a proxy, based on a response provided by the Policy Decision Point (PDP).

+

Policy storage
+Where policies are stored and from where they can be fetched, perhaps to be cached.

+

Red Hat SSO
+Auth server; downstream product created from the Keycloak Open Source project.

+

Refresh token
+Special type of security token, often provided together with an access token in an OAuth2 flow, used to renew the duration of an access token before it expires; it requires client authentication.

+

Request Party Token (RPT)
+JSON Web Token (JWT) digitally signed using JSON Web Signature (JWS), issued by the Keycloak auth server.

+

Resource
+One or more endpoints of a system, API or server, that can be protected.

+

Resource-level Access Control (RLAC)
+Authorization model that takes into consideration attributes of each specific request resource to grant/deny access to those resources (e.g. the resource's owner).

+

Resource server
+Server that hosts protected resources.

+

Role
+Aspect of a user’s identity assigned to the user to indicate the level of access they should have to the system; essentially, roles represent collections of permissions.

+

Role-based Access Control (RBAC)
+Authorization model that grants/denies access to resources based on the roles of authenticated users (rather than on complex attributes/policy rules).

+

Scope
+Mechanism that defines the specific operations that applications can be allowed to do or information that they can request on an identity’s behalf; often presented as a parameter when access is requested as a way to communicate what access is needed, and used by auth server to respond what actual access is granted.

+

Single Page Application (SPA)
+Web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server.

+

Single Sign-on (SSO)
+Authentication scheme that allows a user to log in with a single ID and password to any of several related, yet independent, software systems.

+

Upstream
+(In the context of authentication/authorization) API whose endpoints must be protected by the auth system; the unprotected service in front of which a protection layer is added (by connecting with a Policy Decision Point).

+

User-based Access Control (UBAC)
+Authorization model that grants/denies access to resources based on claims of the identity (attributes of the user).

+

User-Managed Access (UMA)
+OAuth2-based access management protocol, used for users of an auth server to control the authorization process, i.e. directly granting/denying access to user-owned resources to other requesting parties.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/anonymous-access/index.html b/authorino/docs/user-guides/anonymous-access/index.html new file mode 100644 index 00000000..752bfa4e --- /dev/null +++ b/authorino/docs/user-guides/anonymous-access/index.html @@ -0,0 +1,2202 @@ + + + + + + + + + + + + + + + + + + + + + + + + Anonymous access - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + +

User guide: Anonymous access

+

Bypass identity verification or fall back to anonymous access when credentials fail to validate

+
+ + Authorino features in this guide: + + + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: public
+    anonymous: {}
+EOF
+
+

The example above enables anonymous access (i.e. removes authentication), without adding any extra layer of protection to the API. This is virtually equivalent to setting a top-level condition on the AuthConfig that always skips the configuration, or to switching authentication/authorization off completely in the route to the API.

+

For more sophisticated use cases of anonymous access with Authorino, consider combining this feature with other identity sources in the AuthConfig while playing with the priorities of each source, combining it with when conditions, and/or adding authorization policies that either cover authentication or address anonymous access with proper rules (e.g. enforcing read-only access).

+

Check out the docs for the Anonymous access feature for an example of an AuthConfig that falls back to anonymous access when a higher-priority OIDC/JWT-based authentication fails, and that enforces a read-only policy in such cases.

+
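
For reference, below is a minimal sketch of that pattern, combining an OIDC identity source at a higher priority with an anonymous fallback and a read-only policy, along the lines of the example in the feature docs. The OIDC endpoint is an assumption; replace it with a real issuer:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: jwt-users
+    priority: 0  # tried first
+    oidc:
+      endpoint: https://my-oidc-server.example.com/realm  # assumption: replace with your issuer
+  - name: anonymous
+    priority: 1  # only evaluated if all priority-0 identity sources fail
+    anonymous: {}
+  authorization:
+  - name: read-only-if-anonymous
+    when:
+    - selector: auth.identity.anonymous
+      operator: eq
+      value: "true"
+    json:
+      rules:
+      - selector: context.request.http.method
+        operator: eq
+        value: GET
+EOF
+
+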

6. Consume the API

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/api-key-authentication/index.html b/authorino/docs/user-guides/api-key-authentication/index.html new file mode 100644 index 00000000..9f24fe56 --- /dev/null +++ b/authorino/docs/user-guides/api-key-authentication/index.html @@ -0,0 +1,2261 @@ + + + + + + + + + + + + + + + + + + + + + + + + Authentication with API keys - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Authentication with API keys

+

Issue API keys stored in Kubernetes Secrets for clients to authenticate with your protected hosts.

+
+ + Authorino features in this guide: +
    +
  • Identity verification & authentication → API key
  • +
+
+ + In Authorino, API keys are stored as Kubernetes `Secret`s. Each resource must contain an `api_key` entry with the value of the API key, and labeled to match the selectors specified in `spec.identity.apiKey.selector` of the `AuthConfig`. + + API key `Secret`s must also include labels that match the `secretLabelSelector` field of the Authorino instance. See [Resource reconciliation and status update](../architecture.md#resource-reconciliation-and-status-update) for details. + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+EOF
+
+

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
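
Alternatively, here is a sketch of minting a hypothetical second key (api-key-2) imperatively, generating a random value with openssl; the labels must still match the selector in the AuthConfig, and remember to delete this extra Secret during cleanup:

+
API_KEY=$(openssl rand -hex 16)
+kubectl create secret generic api-key-2 --from-literal=api_key=$API_KEY
+kubectl label secret api-key-2 authorino.kuadrant.io/managed-by=authorino group=friends
+echo $API_KEY  # the key to send in the Authorization header
+
+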
+

7. Consume the API

+

With a valid API key:

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

With missing or invalid API key:

+
curl -H 'Authorization: APIKEY invalid' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: APIKEY realm="friends"
+# x-ext-auth-reason: the API Key provided is invalid
+
+

8. Delete an API key (revoke access to the API)

+
kubectl delete secret/api-key-1
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/index.html b/authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/index.html new file mode 100644 index 00000000..f38fe440 --- /dev/null +++ b/authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/index.html @@ -0,0 +1,2297 @@ + + + + + + + + + + + + + + + + + + + + + + + + Authenticated rate limiting (with Envoy Dynamic Metadata) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Authenticated rate limiting (with Envoy Dynamic Metadata)

+

Provide Envoy with dynamic metadata about the external authorization process to be injected into the rate limiting filter.

+
+ + Authorino features in this guide: + + + + Dynamic JSON objects built out of static values and values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json) can be wrapped to be returned to the reverse-proxy as Envoy Well Known Dynamic Metadata content. Envoy can use those to inject data returned by the external authorization service into the other filters, such as the rate limiting filter. + + Check out as well the user guides about [Injecting data in the request](./injecting-data.md) and [Authentication with API keys](./api-key-authentication.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Deploy Limitador

+

Limitador is a lightweight rate limiting service that can be used with Envoy.

+

In this bundle, we will deploy Limitador pre-configured to limit requests to the talker-api domain up to 5 requests per interval of 60 seconds, per user_id. Envoy will be configured to recognize the presence of Limitador and activate it on requests to the Talker API.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml
+
+
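
For reference, the limit pre-configured in that bundle corresponds to a Limitador limits entry roughly like the sketch below; check the manifest itself for the exact definition:

+
- namespace: talker-api
+  max_value: 5
+  seconds: 60
+  conditions: []
+  variables:
+  - user_id
+
+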

5. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

6. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+  response:
+  - name: rate-limit
+    wrapper: envoyDynamicMetadata
+    wrapperKey: ext_auth_data # how this bit of dynamic metadata from the ext authz service is named in the Envoy config
+    json:
+      properties:
+      - name: username
+        valueFrom:
+          authJSON: auth.identity.metadata.annotations.auth-data\/username
+EOF
+
+

An annotation auth-data/username will be read from the Kubernetes Secrets storing valid API keys and passed as dynamic metadata { "ext_auth_data": { "username": «annotations.auth-data/username» } }.

+
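
On the Envoy side, the rate limit filter turns that dynamic metadata into a rate limit descriptor. Below is a sketch of the relevant route-level rate limit action, assuming Envoy's v3 API; the example bundle applied earlier already ships an equivalent configuration:

+
rate_limits:
+- actions:
+  - metadata:
+      descriptor_key: user_id
+      metadata_key:
+        key: envoy.filters.http.ext_authz  # namespace under which ext_authz emits dynamic metadata
+        path:
+        - key: ext_auth_data  # the wrapperKey set in the AuthConfig
+        - key: username
+
+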

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.

+

7. Create a couple of API keys

+

For user John:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+  annotations:
+    auth-data/username: john
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

For user Jane:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-2
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+  annotations:
+    auth-data/username: jane
+stringData:
+  api_key: 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY
+type: Opaque
+EOF
+
+

8. Consume the API

+

As John:

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Repeat the request a few more times within the 60-second time window, until the response status is 429 Too Many Requests.

+
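
A convenience loop to watch the limit kick in, printing one HTTP status code per second (stop with Ctrl+C):

+
while :; do curl -s -o /dev/null -w '%{http_code}\n' -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello; sleep 1; done
+# 200
+# […]
+# 429
+
+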

While the API is still limited to John, send requests as Jane:

+
curl -H 'Authorization: APIKEY 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete secret/api-key-2
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/authzed/index.html b/authorino/docs/user-guides/authzed/index.html new file mode 100644 index 00000000..03f27ff5 --- /dev/null +++ b/authorino/docs/user-guides/authzed/index.html @@ -0,0 +1,2433 @@ + + + + + + + + + + + + + + + + + + + + + + + + Integration with Authzed/SpiceDB - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Integration with Authzed/SpiceDB

+

Permission requests sent to a Google Zanzibar-based Authzed/SpiceDB instance, via gRPC.

+
+ + Authorino features in this guide: + + +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the permission database

+

Create the namespace:

+
kubectl create namespace spicedb
+
+

Create the SpiceDB instance:

+
kubectl -n spicedb apply -f -<<EOF
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: spicedb
+  labels:
+    app: spicedb
+spec:
+  selector:
+    matchLabels:
+      app: spicedb
+  template:
+    metadata:
+      labels:
+        app: spicedb
+    spec:
+      containers:
+      - name: spicedb
+        image: authzed/spicedb
+        args:
+        - serve
+        - "--grpc-preshared-key"
+        - secret
+        - "--http-enabled"
+        ports:
+        - containerPort: 50051
+        - containerPort: 8443
+  replicas: 1
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: spicedb
+spec:
+  selector:
+    app: spicedb
+  ports:
+    - name: grpc
+      port: 50051
+      protocol: TCP
+    - name: http
+      port: 8443
+      protocol: TCP
+EOF
+
+

Forward local requests to the SpiceDB service:

+
kubectl -n spicedb port-forward service/spicedb 8443:8443 2>&1 >/dev/null &
+
+

Create the permission schema:

+
curl -X POST http://localhost:8443/v1/schema/write \
+  -H 'Authorization: Bearer secret' \
+  -H 'Content-Type: application/json' \
+  -d @- << EOF
+{
+  "schema": "definition blog/user {}\ndefinition blog/post {\n\trelation reader: blog/user\n\trelation writer: blog/user\n\n\tpermission read = reader + writer\n\tpermission write = writer\n}"
+}
+EOF
+
+
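
Unescaped, the schema string above corresponds to the following readable SpiceDB schema:

+
definition blog/user {}
+
+definition blog/post {
+    relation reader: blog/user
+    relation writer: blog/user
+
+    permission read = reader + writer
+    permission write = writer
+}
+
+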

Create the relationships:

+
    +
  • blog/user:emiliawriter of blog/post:1
  • +
  • blog/user:beatricereader of blog/post:1
  • +
+
curl -X POST http://localhost:8443/v1/relationships/write \
+  -H 'Authorization: Bearer secret' \
+  -H 'Content-Type: application/json' \
+  -d @- << EOF
+{
+  "updates": [
+    {
+      "operation": "OPERATION_CREATE",
+      "relationship": {
+        "resource": {
+          "objectType": "blog/post",
+          "objectId": "1"
+        },
+        "relation": "writer",
+        "subject": {
+          "object": {
+            "objectType": "blog/user",
+            "objectId": "emilia"
+          }
+        }
+      }
+    },
+    {
+      "operation": "OPERATION_CREATE",
+      "relationship": {
+        "resource": {
+          "objectType": "blog/post",
+          "objectId": "1"
+        },
+        "relation": "reader",
+        "subject": {
+          "object": {
+            "objectType": "blog/user",
+            "objectId": "beatrice"
+          }
+        }
+      }
+    }
+  ]
+}
+EOF
+
+

6. Create the AuthConfig

+

Store the shared token for Authorino to authenticate with the SpiceDB instance in a Secret:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: spicedb
+  labels:
+    app: spicedb
+stringData:
+  grpc-preshared-key: secret
+EOF
+
+

Create the AuthConfig:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: blog-users
+    apiKey:
+      selector:
+        matchLabels:
+          app: talker-api
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+  authorization:
+  - name: authzed
+    authzed:
+      endpoint: spicedb.spicedb.svc.cluster.local:50051
+      insecure: true
+      sharedSecretRef:
+        name: spicedb
+        key: grpc-preshared-key
+      subject:
+        kind:
+          value: blog/user
+        name:
+          valueFrom:
+            authJSON: auth.identity.metadata.annotations.username
+      resource:
+        kind:
+          value: blog/post
+        name:
+          valueFrom:
+            authJSON: context.request.http.path.@extract:{"sep":"/","pos":2}
+      permission:
+        valueFrom:
+          authJSON: context.request.http.method.@replace:{"old":"GET","new":"read"}.@replace:{"old":"POST","new":"write"}
+EOF
+
+
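
(Optional) If you have the zed CLI installed, you can double-check the permissions directly against the SpiceDB instance. A sketch, assuming a port-forward to the gRPC port:

+
kubectl -n spicedb port-forward service/spicedb 50051:50051 2>&1 >/dev/null &
+zed context set local localhost:50051 secret --insecure
+zed permission check blog/post:1 write blog/user:emilia    # true
+zed permission check blog/post:1 write blog/user:beatrice  # false
+
+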

7. Create the API keys

+

For Emilia (writer):

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-writer
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    app: talker-api
+  annotations:
+    username: emilia
+stringData:
+  api_key: IAMEMILIA
+EOF
+
+

For Beatrice (reader):

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-reader
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    app: talker-api
+  annotations:
+    username: beatrice
+stringData:
+  api_key: IAMBEATRICE
+EOF
+
+

8. Consume the API

+

As Emilia, send a GET request:

+
curl -H 'Authorization: APIKEY IAMEMILIA' \
+     -X GET \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i
+# HTTP/1.1 200 OK
+
+

As Emilia, send a POST request:

+
curl -H 'Authorization: APIKEY IAMEMILIA' \
+     -X POST \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i
+# HTTP/1.1 200 OK
+
+

As Beatrice, send a GET request:

+
curl -H 'Authorization: APIKEY IAMBEATRICE' \
+     -X GET \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i
+# HTTP/1.1 200 OK
+
+

As Beatrice, send a POST request:

+
curl -H 'Authorization: APIKEY IAMBEATRICE' \
+     -X POST \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i
+# HTTP/1.1 403 Forbidden
+# x-ext-auth-reason: PERMISSIONSHIP_NO_PERMISSION;token=GhUKEzE2NzU3MDE3MjAwMDAwMDAwMDA=
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-writer
+kubectl delete secret/api-key-reader
+kubectl delete secret/spicedb
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace spicedb
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/caching/index.html b/authorino/docs/user-guides/caching/index.html new file mode 100644 index 00000000..dbee9162 --- /dev/null +++ b/authorino/docs/user-guides/caching/index.html @@ -0,0 +1,2276 @@ + + + + + + + + + + + + + + + + + + + + + + + + Caching - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + +

User guide: Caching

+

Cache auth objects resolved at runtime for any configuration bit of an AuthConfig (i.e. any evaluator), of any phase (identity, metadata, authorization and dynamic response), for easy access in subsequent requests, whenever an arbitrary (user-defined) cache key repeats, until the cache entry expires.

+

This is particularly useful for configuration bits whose evaluation is significantly more expensive than accessing the cache. E.g.:

+
    +
  • Caching of metadata fetched from external sources in general
  • +
  • Caching of previously validated identity access tokens (e.g. for OAuth2 opaque tokens that involve consuming the token introspection endpoint of an external auth server)
  • +
  • Caching of complex Rego policies that involve sending requests to external services
  • +
+

Cases where one will NOT want to enable caching, because the evaluation is relatively cheap compared to accessing and managing the cache: +- Validation of OIDC/JWT access tokens +- OPA/Rego policies that do not involve external requests +- JSON pattern-matching authorization +- Dynamic JSON responses +- Anonymous access

+
+ + Authorino features in this guide: + + + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: anonymous
+    anonymous: {}
+  metadata:
+  - name: cached-metadata
+    http:
+      endpoint: http://talker-api.default.svc.cluster.local:3000/metadata/{context.request.http.path}
+      method: GET
+    cache:
+      key:
+        valueFrom: { authJSON: context.request.http.path }
+      ttl: 60
+  authorization:
+  - name: cached-authz
+    opa:
+      inlineRego: |
+        now = time.now_ns()
+        allow = true
+      allValues: true
+    cache:
+      key:
+        valueFrom: { authJSON: context.request.http.path }
+      ttl: 60
+  response:
+  - name: x-authz-data
+    json:
+      properties:
+      - name: cached-metadata
+        valueFrom: { authJSON: auth.metadata.cached-metadata.uuid }
+      - name: cached-authz
+        valueFrom: { authJSON: auth.authorization.cached-authz.now }
+EOF
+
+

The example above enables caching for the external source of metadata, which in this case, for convenience, is the same upstream API protected by Authorino (i.e. the Talker API), though consumed directly by Authorino, without passing through the proxy. This API generates a random UUID that it injects in the JSON response. This value is different in every request processed by the API.

+

The example also enables caching of returned OPA virtual documents. cached-authz is a trivial Rego policy that always grants access, but generates a timestamp, which Authorino will cache.

+

In both cases, the path of the HTTP request is used as the cache key. I.e., whenever the path repeats, Authorino reuses the values stored previously in each cache table (cached-metadata and cached-authz), respectively saving a request to the external source of metadata and the evaluation of the OPA policy. In both cases, cache entries expire 60 seconds after they were stored.

+

The cached values will be visible in the response returned by the Talker API, in the x-authz-data header injected by Authorino. This way, we can tell when an existing value in the cache was used and when a new one was generated and stored.

+

6. Consume the API

+
    +
  1. To /hello
  2. +
+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# […]
+#  "X-Authz-Data": "{\"cached-authz\":\"1649343067462380300\",\"cached-metadata\":\"92c111cd-a10f-4e86-8bf0-e0cd646c6f79\"}",
+# […]
+
+
    +
  1. To a different path
  2. +
+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/goodbye
+# […]
+#  "X-Authz-Data": "{\"cached-authz\":\"1649343097860450300\",\"cached-metadata\":\"37fce386-1ee8-40a7-aed1-bf8a208f283c\"}",
+# […]
+
+
    +
  1. To /hello again before the cache entry expires (60 seconds from the first request sent to this path)
  2. +
+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# […]
+#  "X-Authz-Data": "{\"cached-authz\":\"1649343067462380300\",\"cached-metadata\":\"92c111cd-a10f-4e86-8bf0-e0cd646c6f79\"}",  <=== same cache-id as before
+# […]
+
+
    +
  1. To /hello again after the cache entry expires (60 seconds from the first request sent to this path)
  2. +
+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# […]
+#  "X-Authz-Data": "{\"cached-authz\":\"1649343135702743800\",\"cached-metadata\":\"e708a3a6-5caf-4028-ab5c-573ad9be7188\"}",  <=== different cache-id
+# […]
+
+
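
To observe the cache expiry without timing requests by hand, here is a convenience loop that prints the injected header every 15 seconds; the cached values should change roughly once per minute (stop with Ctrl+C):

+
while :; do curl -s http://talker-api-authorino.127.0.0.1.nip.io:8000/hello | grep 'X-Authz-Data'; sleep 15; done
+
+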

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/deny-with-redirect-to-login/index.html b/authorino/docs/user-guides/deny-with-redirect-to-login/index.html new file mode 100644 index 00000000..df0e18f1 --- /dev/null +++ b/authorino/docs/user-guides/deny-with-redirect-to-login/index.html @@ -0,0 +1,2398 @@ + + + + + + + + + + + + + + + + + + + + + + + + Redirecting to a login page - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Redirecting to a login page

+

Customize response status code and headers on failed requests to redirect users of a web application protected with Authorino to a login page instead of a 401 Unauthorized.

+
+ + Authorino features in this guide: + + + + Authorino's default response status codes, messages and headers for unauthenticated (`401`) and unauthorized (`403`) requests can be customized with static values and values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json). + + Check out as well the user guides about [HTTP "Basic" Authentication (RFC 7235)](./http-basic-authentication.md) and [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Matrix Quotes web application

+

Matrix Quotes is a static web application that contains quotes from the film The Matrix.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/matrix-quotes-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Matrix Quotes webapp behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/envoy-deploy.yaml
+
+

The bundle also creates an Ingress with host name matrix-quotes-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: matrix-quotes-protection
+spec:
+  hosts:
+  - matrix-quotes-authorino.127.0.0.1.nip.io
+  identity:
+  - name: browser-users
+    apiKey:
+      selector:
+        matchLabels:
+          group: users
+    credentials:
+      in: cookie
+      keySelector: TOKEN
+  - name: http-basic-auth
+    apiKey:
+      selector:
+        matchLabels:
+          group: users
+    credentials:
+      in: authorization_header
+      keySelector: Basic
+  denyWith:
+    unauthenticated:
+      code: 302
+      headers:
+      - name: Location
+        valueFrom:
+          authJSON: http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/login.html?redirect_to={context.request.http.path}
+EOF
+
+

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.

+
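
You can already verify the redirect behavior with curl, before creating any credentials. The expected response is sketched in the comments below; the exact status line may vary:

+
curl http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/neo.html -i
+# HTTP/1.1 302 Found
+# location: http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/login.html?redirect_to=/neo.html
+
+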

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: user-credential-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: users
+stringData:
+  api_key: am9objpw # john:p
+type: Opaque
+EOF
+
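
The value am9objpw is the Base64 encoding of john:p, the username:password pair expected by the HTTP "Basic" authentication scheme. You can verify it with:

+
echo -n 'john:p' | base64
+# am9objpw
+
+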
+

7. Consume the application

+

On a web browser, navigate to http://matrix-quotes-authorino.127.0.0.1.nip.io:8000.

+

Click on the cards to read quotes from characters of the movie. You should be redirected to the login page.

+

Log in using John's credentials: +- Username: john +- Password: p

+

Click again on the cards and check that now you are able to access the inner pages.

+

You can also consume a protected endpoint of the application using HTTP Basic Authentication:

+
curl -u john:p http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/neo.html
+# HTTP/1.1 200 OK
+
+

8. (Optional) Modify the AuthConfig to authenticate with OIDC

+

Setup a Keycloak server

+

Deploy a Keycloak server preloaded with a realm named kuadrant:

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Resolve the local Keycloak domain so it can be accessed from the local host and from inside the cluster with the same host name: (This will be needed to redirect to Keycloak's login page and at the same time validate issued tokens.)

+
echo '127.0.0.1 keycloak' >> /etc/hosts
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl port-forward deployment/keycloak 8080:8080 &
+
+

Create a client:

+
curl -H "Authorization: Bearer $(curl http://keycloak:8080/auth/realms/master/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=admin-cli' -d 'username=admin' -d 'password=p' | jq -r .access_token)" \
+     -H 'Content-type: application/json' \
+     -d '{ "name": "matrix-quotes", "clientId": "matrix-quotes", "publicClient": true, "redirectUris": ["http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/auth*"], "enabled": true }' \
+     http://keycloak:8080/auth/admin/realms/kuadrant/clients
+
+

Reconfigure the Matrix Quotes app to use Keycloak's login page

+
kubectl set env deployment/matrix-quotes KEYCLOAK_REALM=http://keycloak:8080/auth/realms/kuadrant CLIENT_ID=matrix-quotes
+
+

Apply the changes to the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: matrix-quotes-protection
+spec:
+  hosts:
+  - matrix-quotes-authorino.127.0.0.1.nip.io
+  identity:
+  - name: idp-users
+    oidc:
+      endpoint: http://keycloak:8080/auth/realms/kuadrant
+    credentials:
+      in: cookie
+      keySelector: TOKEN
+  denyWith:
+    unauthenticated:
+      code: 302
+      headers:
+      - name: Location
+        valueFrom:
+          authJSON: http://keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/auth?client_id=matrix-quotes&redirect_uri=http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/auth?redirect_to={context.request.http.path}&scope=openid&response_type=code
+EOF
+
+

Consume the application again

+

Refresh the browser window or navigate again to http://matrix-quotes-authorino.127.0.0.1.nip.io:8000.

+

Click on the cards to read quotes from characters of the movie. You should be redirected to the login page, this time served by the Keycloak server.

+

Log in as Jane (a user of the Keycloak realm): +- Username: jane +- Password: p

+

Click again on the cards and check that now you are able to access the inner pages.

+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/user-credential-1
+kubectl delete authconfig/matrix-quotes-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/envoy-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/matrix-quotes-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/index.html b/authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/index.html new file mode 100644 index 00000000..32a1b4d1 --- /dev/null +++ b/authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/index.html @@ -0,0 +1,2532 @@ + + + + + + + + + + + + + + + + + + + + + + + + Edge Authentication Architecture (EAA) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Edge Authentication Architecture (EAA)

+

Edge Authentication Architecture (EAA) is a pattern where, more than extracting the authentication logic and specifics from the application codebase into a proper authN/authZ layer, this layer is pushed to the edge of your cloud network, without nevertheless violating the Zero Trust principle.

+

The very definition of "edge" is subject to discussion, but the underlying idea is that clients (e.g. API clients, IoT devices, etc.) authenticate with a layer that, before moving traffic into the network: +- understands the complexity of all the different methods of authentication supported; +- sometimes performs some token normalization; +- eventually enforces some preliminary authorization policies; and +- possibly filters data bits that are sensitive to privacy concerns (e.g. to comply with local legislation such as GDPR, CCPA, etc.)

+

As a minimum, EAA simplifies authentication between applications and microservices inside the network, and reduces authorization to domain-specific rules and policies, rather than having to deal with all the complexity of supporting every type of client in every node.

+
+ + Authorino features in this guide: + + + + Festival Wristbands are OpenID Connect ID tokens (signed JWTs) issued by Authorino by the end of the Auth Pipeline, for authorized requests. It can be configured to include claims based on static values and values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json). + + Check out as well the user guides about [Token normalization](./token-normalization.md), [Authentication with API keys](./api-key-authentication.md) and [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
  • jwt, to inspect JWTs (optional)
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Create the namespaces

+

For simplicity, this example will set up the edge and internal nodes in different namespaces of the same Kubernetes cluster. They will share a single cluster-wide Authorino instance. In real-life scenarios, it does not have to be like that.

+
kubectl create namespace authorino
+kubectl create namespace edge
+kubectl create namespace internal
+
+

3. Deploy Authorino

+
kubectl -n authorino apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  clusterWide: true
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in cluster-wide reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup the Edge

+

Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl -n edge apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/eaa/envoy-edge-deploy.yaml
+
+

The bundle also creates an Ingress with host name edge-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 9000 into the cluster in order to actually reach the Envoy service:

+
kubectl -n edge port-forward deployment/envoy 9000:9000 &
+
+

Create the AuthConfig

+

Create a required secret, used by Authorino to sign the Festival Wristband tokens:

+
kubectl -n edge apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: wristband-signing-key
+stringData:
+  key.pem: |
+    -----BEGIN EC PRIVATE KEY-----
+    MHcCAQEEIDHvuf81gVlWGo0hmXGTAnA/HVxGuH8vOc7/8jewcVvqoAoGCCqGSM49
+    AwEHoUQDQgAETJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZxJKDysoGwn
+    cnUvHIu23SgW+Ee9lxSmZGhO4eTdQeKxMA==
+    -----END EC PRIVATE KEY-----
+type: Opaque
+EOF
+
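
If you prefer not to reuse the sample key above, an equivalent signing key for the ES256 algorithm can be generated with openssl and pasted into the Secret:

+
openssl ecparam -genkey -name prime256v1 -noout -out /tmp/wristband-signing-key.pem
+
+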
+

Create the config:

+
kubectl -n edge apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: edge-auth
+spec:
+  hosts:
+  - edge-authorino.127.0.0.1.nip.io
+  identity:
+  - name: api-clients
+    apiKey:
+      selector:
+        matchLabels:
+          authorino.kuadrant.io/managed-by: authorino
+      allNamespaces: true
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+    extendedProperties:
+    - name: username
+      valueFrom:
+        authJSON: auth.identity.metadata.annotations.authorino\.kuadrant\.io/username
+  - name: idp-users
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+    extendedProperties:
+    - name: username
+      valueFrom:
+        authJSON: auth.identity.preferred_username
+  response:
+  - name: wristband
+    wrapper: envoyDynamicMetadata
+    wristband:
+      issuer: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband
+      customClaims:
+      - name: username
+        valueFrom:
+          authJSON: auth.identity.username
+      tokenDuration: 300
+      signingKeyRefs:
+        - name: wristband-signing-key
+          algorithm: ES256
+EOF
+
+

5. Setup the internal workload

+

Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl -n internal apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

Setup Envoy

+

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl -n internal apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/eaa/envoy-node-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl -n internal port-forward deployment/envoy 8000:8000 &
+
+

Create the AuthConfig

+
kubectl -n internal apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: edge-authenticated
+    oidc:
+      endpoint: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband
+EOF
+
+

6. Create an API key

+
kubectl -n edge apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+  annotations:
+    authorino.kuadrant.io/username: alice
+    authorino.kuadrant.io/email: alice@host
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

7. Consume the API

+

Using the API key to authenticate

+

Authenticate at the edge:

+
WRISTBAND_TOKEN=$(curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://edge-authorino.127.0.0.1.nip.io:9000/auth -is | tr -d '\r' | sed -En 's/^x-wristband-token: (.*)/\1/p')
+
+

Consume the API:

+
curl -H "Authorization: Bearer $WRISTBAND_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+

Try to consume the API with authentication token that is only accepted in the edge:

+
curl -H "Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="edge-authenticated"
+# x-ext-auth-reason: credential not found
+
+

(Optional) Inspect the wristband token and verify that it contains only the restricted info needed to authenticate and authorize with internal apps.

+
jwt decode $WRISTBAND_TOKEN
+# [...]
+#
+# Token claims
+# ------------
+# {
+#   "exp": 1638452051,
+#   "iat": 1638451751,
+#   "iss": "http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband",
+#   "sub": "02cb51ea0e1c9f3c0960197a2518c8eb4f47e1b9222a968ffc8d4c8e783e4d19",
+#   "username": "alice"
+# }
+
+

Authenticating with the Keycloak server

+

Obtain an access token with the Keycloak server for Jane:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because Keycloak's iss claim added to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

+
ACCESS_TOKEN=$(kubectl -n edge run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint of the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

+

(Optional) Inspect the access token issued by Keycloak and verify how it contains more details about the identity than required to authenticate and authorize with internal apps.

+
jwt decode $ACCESS_TOKEN
+# [...]
+#
+# Token claims
+# ------------
+# { [...]
+#   "email": "jane@kuadrant.io",
+#   "email_verified": true,
+#   "exp": 1638452220,
+#   "family_name": "Smith",
+#   "given_name": "Jane",
+#   "iat": 1638451920,
+#   "iss": "http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant",
+#   "jti": "699f6e49-dea4-4f29-ae2a-929a3a18c94b",
+#   "name": "Jane Smith",
+#   "preferred_username": "jane",
+#   "realm_access": {
+#     "roles": [
+#       "offline_access",
+#       "member",
+#       "admin",
+#       "uma_authorization"
+#     ]
+#   },
+# [...]
+
+

As Jane, obtain a limited wristband token at the edge:

+
WRISTBAND_TOKEN=$(curl -H "Authorization: Bearer $ACCESS_TOKEN" http://edge-authorino.127.0.0.1.nip.io:9000/auth -is | tr -d '\r' | sed -En 's/^x-wristband-token: (.*)/\1/p')
+
+

Consume the API:

+
curl -H "Authorization: Bearer $WRISTBAND_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete namespace edge
+kubectl delete namespace internal
+kubectl delete namespace authorino
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino and Authorino Operator manifests, run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/envoy-jwt-authn-and-authorino/index.html b/authorino/docs/user-guides/envoy-jwt-authn-and-authorino/index.html new file mode 100644 index 00000000..d34c568b --- /dev/null +++ b/authorino/docs/user-guides/envoy-jwt-authn-and-authorino/index.html @@ -0,0 +1,2537 @@ + + + + + + + + + + + + + + + + + + + + + + + + Mixing Envoy built-in filter for auth and Authorino - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Mixing Envoy built-in filter for auth and Authorino

+

Have JWT validation handled by Envoy beforehand and the JWT payload injected into the request to Authorino, to be used in custom authorization policies defined in an AuthConfig.

+

In this user guide, we will set up Envoy and Authorino to protect a service called the Talker API, with JWT authentication handled in Envoy and a more complex authorization policy enforced in Authorino.

+

The policy defines a geofence by which only requests originating in Great Britain (country code: GB) will be accepted, unless the user is bound to a role called 'admin' in the auth server, in which case no geofence is enforced.

+

All requests to the Talker API will be authenticated in Envoy. However, requests to /global will not trigger the external authorization.

+
+ Authorino features in this guide:
+ For further details about Authorino features in general, check the docs.
+ +


+

Requirements

+
    +
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The command below creates the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API and external authorization with Authorino.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    app: authorino
+  name: envoy
+data:
+  envoy.yaml: |
+    static_resources:
+      clusters:
+      - name: talker-api
+        connect_timeout: 0.25s
+        type: strict_dns
+        lb_policy: round_robin
+        load_assignment:
+          cluster_name: talker-api
+          endpoints:
+          - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: talker-api
+                    port_value: 3000
+      - name: keycloak
+        connect_timeout: 0.25s
+        type: logical_dns
+        lb_policy: round_robin
+        load_assignment:
+          cluster_name: keycloak
+          endpoints:
+          - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: keycloak.keycloak.svc.cluster.local
+                    port_value: 8080
+      - name: authorino
+        connect_timeout: 0.25s
+        type: strict_dns
+        lb_policy: round_robin
+        http2_protocol_options: {}
+        load_assignment:
+          cluster_name: authorino
+          endpoints:
+          - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: authorino-authorino-authorization
+                    port_value: 50051
+      listeners:
+      - address:
+          socket_address:
+            address: 0.0.0.0
+            port_value: 8000
+        filter_chains:
+        - filters:
+          - name: envoy.http_connection_manager
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
+              stat_prefix: local
+              route_config:
+                name: local_route
+                virtual_hosts:
+                - name: local_service
+                  domains: ['*']
+                  routes:
+                  - match: { path_separated_prefix: /global }
+                    route: { cluster: talker-api }
+                    typed_per_filter_config:
+                      envoy.filters.http.ext_authz:
+                        "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
+                        disabled: true
+                  - match: { prefix: / }
+                    route: { cluster: talker-api }
+              http_filters:
+              - name: envoy.filters.http.jwt_authn
+                typed_config:
+                  "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
+                  providers:
+                    keycloak:
+                      issuer: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+                      remote_jwks:
+                        http_uri:
+                          uri: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/certs
+                          cluster: keycloak
+                          timeout: 5s
+                        cache_duration:
+                          seconds: 300
+                      payload_in_metadata: verified_jwt
+                  rules:
+                  - match: { prefix: / }
+                    requires: { provider_name: keycloak }
+              - name: envoy.filters.http.ext_authz
+                typed_config:
+                  "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
+                  transport_api_version: V3
+                  failure_mode_allow: false
+                  metadata_context_namespaces:
+                  - envoy.filters.http.jwt_authn
+                  grpc_service:
+                    envoy_grpc:
+                      cluster_name: authorino
+                    timeout: 1s
+              - name: envoy.filters.http.router
+                typed_config:
+                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
+              use_remote_address: true
+    admin:
+      access_log_path: "/tmp/admin_access.log"
+      address:
+        socket_address:
+          address: 0.0.0.0
+          port_value: 8001
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: authorino
+    svc: envoy
+  name: envoy
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: authorino
+      svc: envoy
+  template:
+    metadata:
+      labels:
+        app: authorino
+        svc: envoy
+    spec:
+      containers:
+      - args:
+        - --config-path /usr/local/etc/envoy/envoy.yaml
+        - --service-cluster front-proxy
+        - --log-level info
+        - --component-log-level filter:trace,http:debug,router:debug
+        command:
+        - /usr/local/bin/envoy
+        image: envoyproxy/envoy:v1.22-latest
+        name: envoy
+        ports:
+        - containerPort: 8000
+          name: web
+        - containerPort: 8001
+          name: admin
+        volumeMounts:
+        - mountPath: /usr/local/etc/envoy
+          name: config
+          readOnly: true
+      volumes:
+      - configMap:
+          items:
+          - key: envoy.yaml
+            path: envoy.yaml
+          name: envoy
+        name: config
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: authorino
+  name: envoy
+spec:
+  ports:
+  - name: web
+    port: 8000
+    protocol: TCP
+  selector:
+    app: authorino
+    svc: envoy
+---
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: ingress-wildcard-host
+spec:
+  rules:
+  - host: talker-api-authorino.127.0.0.1.nip.io
+    http:
+      paths:
+      - backend:
+          service:
+            name: envoy
+            port:
+              number: 8000
+        path: /
+        pathType: Prefix
+EOF
+
+

For convenience, an Ingress resource is defined with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward local requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Deploy the IP Location service

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-examples/main/ip-location/ip-location-deploy.yaml
+
+

6. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: jwt
+    plain:
+      authJSON: context.metadata_context.filter_metadata.envoy\.filters\.http\.jwt_authn|verified_jwt
+  metadata:
+  - name: geoinfo
+    http:
+      endpoint: http://ip-location.default.svc.cluster.local:3000/{context.request.http.headers.x-forwarded-for.@extract:{"sep":","}}
+      method: GET
+      headers:
+      - name: Accept
+        value: application/json
+    cache:
+      key:
+        valueFrom: { authJSON: "context.request.http.headers.x-forwarded-for.@extract:{\"sep\":\",\"}" }
+  authorization:
+  - name: geofence
+    when:
+    - selector: auth.identity.realm_access.roles
+      operator: excl
+      value: admin
+    json:
+      rules:
+      - selector: auth.metadata.geoinfo.country_iso_code
+        operator: eq
+        value: "GB"
+  denyWith:
+    unauthorized:
+      message:
+        valueFrom: { authJSON: "The requested resource is not available in {auth.metadata.geoinfo.country_name}" }
+EOF
+
+
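
(Optional) Before moving on, you can preview the kind of JSON the geoinfo metadata source returns: the geofence rule reads country_iso_code from it, and the denyWith message reads country_name. A sketch using the same in-cluster curl pattern as the other commands in this guide; the output below is illustrative:

+
kubectl run ip-check --attach --rm --restart=Never -q --image=curlimages/curl -- http://ip-location.default.svc.cluster.local:3000/79.123.45.67 -s
+# {"country_iso_code":"GB","country_name":"United Kingdom",...}   (illustrative output)
+
+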

7. Obtain a token and consume the API

+

Obtain an access token and consume the API as John (member)

+

Obtain an access token with the Keycloak server for John:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user John, a non-admin (member) user:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint of the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

+
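
(Optional) You can decode John's token to see why the geofence policy applies to him: the 'admin' role is absent from realm_access.roles, so the when condition of the geofence rule holds. A sketch assuming the jwt CLI used elsewhere in these guides (any JWT decoder works); output abbreviated and illustrative:

+
jwt decode $ACCESS_TOKEN
+# [...]
+#   "realm_access": {
+#     "roles": [
+#       "offline_access",
+#       "member",
+#       "uma_authorization"
+#     ]
+#   },
+# [...]
+
+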

As John, consume the API inside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 79.123.45.67' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 200 OK
+
+

As John, consume the API outside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 109.69.200.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 403 Forbidden
+# x-ext-auth-reason: The requested resource is not available in Italy
+
+

As John, consume a path of the API that will cause Envoy to skip external authorization:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 109.69.200.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/global -i
+# HTTP/1.1 200 OK
+
+

Obtain an access token and consume the API as Jane (admin)

+

Obtain an access token with the Keycloak server for Jane, an admin user:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

As Jane, consume the API inside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 79.123.45.67' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 200 OK
+
+

As Jane, consume the API outside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 109.69.200.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 200 OK
+
+

As Jane, consume a path of the API that will cause Envoy to skip external authorization:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 109.69.200.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/global -i
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete ingress/ingress-wildcard-host
+kubectl delete service/envoy
+kubectl delete deployment/envoy
+kubectl delete configmap/envoy
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/external-metadata/index.html b/authorino/docs/user-guides/external-metadata/index.html new file mode 100644 index 00000000..6fbaa032 --- /dev/null +++ b/authorino/docs/user-guides/external-metadata/index.html @@ -0,0 +1,2283 @@ + + + + + + + + + + + + + + + + + + + + + + + + Fetching auth metadata from external sources - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Fetching auth metadata from external sources

+

Get online data from remote HTTP services to enhance authorization rules.

+
+ Authorino features in this guide:
+ You can configure Authorino to fetch additional metadata from external sources at request time, by sending either a GET or a POST request to an HTTP service. The service is expected to return JSON content, which is appended to the Authorization JSON, thus becoming available for use in other configs of the Auth Pipeline, such as authorization policies or custom responses.
+ URL, parameters, and headers of the request to the external source of metadata can be configured, including with dynamic values. Authentication between Authorino and the service can be set as part of these configuration options, or based on a shared authentication token stored in a Kubernetes Secret.
+ Check out as well the user guides about Authentication with API keys and Open Policy Agent (OPA) Rego policies.
+ For further details about Authorino features in general, check the docs.
+ +


+

Requirements

+
    +
  • Kubernetes server
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (the manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward local requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

In this example, we will implement a geofence policy for the API, using OPA and metadata fetching from an external service that returns geolocation JSON data for a given IP address. The policy establishes that only GET requests are allowed, and that the path of the request must be of the form /{country-code}/*, where {country-code} is the 2-character code of the country where the client is located.

+

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+  metadata:
+    - name: geo
+      http:
+        endpoint: http://ip-api.com/json/{context.request.http.headers.x-forwarded-for.@extract:{"sep":","}}?fields=countryCode
+        method: GET
+        headers:
+        - name: Accept
+          value: application/json
+  authorization:
+  - name: geofence
+    opa:
+      inlineRego: |
+        import input.context.request.http
+
+        allow {
+          http.method = "GET"
+          split(http.path, "/") = [_, requested_country, _]
+          lower(requested_country) == lower(object.get(input.auth.metadata.geo, "countryCode", ""))
+        }
+EOF
+
+
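
(Optional) To preview the JSON the geo metadata source returns for a given IP, and from which the Rego policy reads the countryCode property, you can query the service directly (the output is illustrative and depends on the IP):

+
curl "http://ip-api.com/json/79.123.45.67?fields=countryCode"
+# {"countryCode":"GB"}
+
+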

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON, including the description of the @extract string modifier.

+

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

7. Consume the API

+

From an IP address assigned to the United Kingdom of Great Britain and Northern Ireland (country code GB):

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 79.123.45.67' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/gb/hello -i
+# HTTP/1.1 200 OK
+
+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 79.123.45.67' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/it/hello -i
+# HTTP/1.1 403 Forbidden
+
+

From an IP address assigned to Italy (country code IT):

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 109.112.34.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/gb/hello -i
+# HTTP/1.1 403 Forbidden
+
+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 109.112.34.56' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/it/hello -i
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/hello-world/index.html b/authorino/docs/user-guides/hello-world/index.html new file mode 100644 index 00000000..215aac89 --- /dev/null +++ b/authorino/docs/user-guides/hello-world/index.html @@ -0,0 +1,2278 @@ + + + + + + + + + + + + + + + + + + + + + + + + Hello World - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Hello World

+


+

Requirements

+
    +
  • Kubernetes server
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Create the namespace

+
kubectl create namespace hello-world
+# namespace/hello-world created
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+# deployment.apps/talker-api created
+# service/talker-api created
+
+

3. Setup Envoy

+
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/envoy-deploy.yaml
+# configmap/envoy created
+# deployment.apps/envoy created
+# service/envoy created
+
+

Forward requests on port 8000 to the Envoy pod running inside the cluster:

+
kubectl -n hello-world port-forward deployment/envoy 8000:8000 &
+
+

4. Consume the API (unprotected)

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+

5. Protect the API

+

Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

Deploy Authorino

+
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/authorino.yaml
+# authorino.operator.authorino.kuadrant.io/authorino created
+
+

The command above will deploy Authorino as a separate service (in contrast to a sidecar of the Talker API, among other possible architectures). For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

6. Consume the API behind Envoy and Authorino

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 404 Not Found
+# x-ext-auth-reason: Service not found
+
+

Authorino does not know about the talker-api-authorino.127.0.0.1.nip.io host, hence the 404 Not Found. Teach it by applying an AuthConfig.

+
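
The step below applies the AuthConfig manifest from the examples repository. As a rough sketch of what such a resource looks like (the applied manifest is the source of truth; the identity name is inferred from the realm in the 401 response further down, and the API key label selector is illustrative):

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: api-clients
+    apiKey:
+      selector:
+        matchLabels:
+          group: api-clients  # illustrative label selector
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+
+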

7. Apply an AuthConfig

+
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/authconfig.yaml
+# authconfig.authorino.kuadrant.io/talker-api-protection created
+
+

8. Consume the API without credentials

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: APIKEY realm="api-clients"
+# x-ext-auth-reason: credential not found
+
+

Grant access to the API with a tailor-made security scheme

+

Check out other user guides for several AuthN/AuthZ use-cases and instructions to implement them using Authorino. A few examples are:

+ +

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the namespaces created in steps 1 and 5:

+
kubectl delete namespace hello-world
+kubectl delete namespace authorino-operator
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/host-override/index.html b/authorino/docs/user-guides/host-override/index.html new file mode 100644 index 00000000..5829a58b --- /dev/null +++ b/authorino/docs/user-guides/host-override/index.html @@ -0,0 +1,2126 @@ + + + + + + + + + + + + + + + + + + + + + + + + Host override via context extension - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + + + + +

Host override via context extension

+

By default, Authorino uses the host information of the HTTP request (Attributes.Http.Host) to look up an indexed AuthConfig to be enforced. The host info can be overridden by supplying a host entry as a (per-route) context extension (Attributes.ContextExtensions), which takes precedence whenever present.

+

Overriding the host attribute of the HTTP request can be useful to support use cases such as path prefix-based lookup and wildcard subdomain lookup.

+ +

For further details about how Authorino looks up AuthConfigs, check out Host lookup.

+

Example of host override for path prefix-based lookup

+

In this use case, 2 different APIs (i.e. Dogs API and Cats API) are served under the same base domain, and differentiated by the path prefix:
- pets.com/dogs → Dogs API
- pets.com/cats → Cats API

+

Edit the Envoy config to extend the external authorization settings at the level of the routes, with the host value that will be favored by Authorino before the actual host attribute of the HTTP request:

+
virtual_hosts:
+- name: pets-api
+  domains: ['pets.com']
+  routes:
+  - match:
+      prefix: /dogs
+    route:
+      cluster: dogs-api
+    typed_per_filter_config:
+      envoy.filters.http.ext_authz:
+        \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
+        check_settings:
+          context_extensions:
+            host: dogs.pets.com
+  - match:
+      prefix: /cats
+    route:
+      cluster: cats-api
+    typed_per_filter_config:
+      envoy.filters.http.ext_authz:
+        \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
+        check_settings:
+          context_extensions:
+            host: cats.pets.com
+
+

Create the AuthConfig for the Dogs API:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: dogs-api-protection
+spec:
+  hosts:
+  - dogs.pets.com
+
+  identity: [...]
+
+

Create the AuthConfig for the Cats API:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: cats-api-protection
+spec:
+  hosts:
+  - cats.pets.com
+
+  identity: [...]
+
+

Notice that the host subdomains dogs.pets.com and cats.pets.com are not really requested by the API consumers. Rather, users send requests to pets.com/dogs and pets.com/cats. When routing those requests, Envoy makes sure to inject the corresponding context extensions that will induce the right lookup in Authorino.

+
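
To make the effect concrete, two illustrative requests (hypothetical domain, assuming DNS and Envoy are set up for pets.com): both are sent to the same base host, and the injected context extension decides which AuthConfig Authorino enforces:

+
curl http://pets.com/dogs/toys -i  # enforced against dogs-api-protection (host: dogs.pets.com)
+curl http://pets.com/cats/toys -i  # enforced against cats-api-protection (host: cats.pets.com)
+
+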

Example of host override for wildcard subdomain lookup

+

In this use case, a single Pets API serves requests for any subdomain that matches *.pets.com, e.g.:
- dogs.pets.com → Pets API
- cats.pets.com → Pets API

+

Edit the Envoy config to extend the external authorization settings at the level of the virtual host, with the host value that will be favored by Authorino before the actual host attribute of the HTTP request:

+
virtual_hosts:
+- name: pets-api
+  domains: ['*.pets.com']
+  typed_per_filter_config:
+    envoy.filters.http.ext_authz:
+      \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
+      check_settings:
+        context_extensions:
+          host: pets.com
+  routes:
+  - match:
+      prefix: /
+    route:
+      cluster: pets-api
+
+

The host context extension used above can be any value that matches one of the hosts listed in the targeted AuthConfig.

+

Create the AuthConfig for the Pets API:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: pets-api-protection
+spec:
+  hosts:
+  - pets.com
+
+  identity: [...]
+
+

Notice that requests to dogs.pets.com and to cats.pets.com are all routed by Envoy to the same API, with the same external authorization configuration. In all cases, Authorino will look up the indexed AuthConfig associated with pets.com. The same is valid for a request sent, e.g., to birds.pets.com.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/http-basic-authentication/index.html b/authorino/docs/user-guides/http-basic-authentication/index.html new file mode 100644 index 00000000..e5c523d0 --- /dev/null +++ b/authorino/docs/user-guides/http-basic-authentication/index.html @@ -0,0 +1,2307 @@ + + + + + + + + + + + + + + + + + + + + + + + + HTTP "Basic" Authentication (RFC 7235) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: HTTP "Basic" Authentication (RFC 7235)

+

Turn Authorino API key Secrets into HTTP basic auth credentials.

+
+ Authorino features in this guide:
+ HTTP "Basic" Authentication (RFC 7235) is not recommended if you can afford other, more secure methods such as OpenID Connect. To support legacy systems, nonetheless, it is sometimes necessary to implement it.
+ In Authorino, HTTP "Basic" Authentication can be modeled leveraging the API key authentication feature (stored as Kubernetes Secrets with an api_key entry and labeled to match selectors specified in spec.identity.apiKey.selector of the AuthConfig).
+ Check out as well the user guide about Authentication with API keys.
+ For further details about Authorino features in general, check the docs.
+ +


+

Requirements

+
    +
  • Kubernetes server
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (the manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward local requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: http-basic-auth
+    apiKey:
+      selector:
+        matchLabels:
+          group: users
+    credentials:
+      in: authorization_header
+      keySelector: Basic
+  authorization:
+  - name: acl
+    when:
+    - selector: context.request.http.path
+      operator: eq
+      value: /bye
+    json:
+      rules:
+      - selector: context.request.http.headers.authorization.@extract:{"pos":1}|@base64:decode|@extract:{"sep":":"}
+        operator: eq
+        value: john
+EOF
+
+

The config specifies an Access Control List (ACL), by which only the user john is authorized to consume the /bye endpoint of the API.

+
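
To see what the acl rule extracts, you can emulate the selector pipeline in the shell. A sketch: awk '{print $2}' mirrors @extract:{"pos":1}, base64 -d mirrors @base64:decode, and cut -d: -f1 mirrors @extract:{"sep":":"} (the header value is John's, created in the next step):

+
printf 'Basic am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA==' | awk '{print $2}' | base64 -d | cut -d: -f1
+# john
+
+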

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON, including the description of the string modifiers @extract and @base64 used above. Check out as well the common feature Conditions about skipping parts of an AuthConfig in the auth pipeline based on context.

+

6. Create user credentials

+

To create credentials for HTTP "Basic" Authentication, store each username:password, base64-encoded, in the api_key value of the Kubernetes Secret resources. E.g.:

+
printf "john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx" | base64
+# am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA==
+
+
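
The same encoding applies to Jane's credentials, stored in the second Secret below:

+
printf "jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp" | base64
+# amFuZTpkTnNScnNhcHkwbk5Dd210NTM3ZkhGcHl4MGNCc0xFcA==
+
+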

Create credentials for user John:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: basic-auth-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: users
+stringData:
+  api_key: am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA== # john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

Create credentials for user Jane:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: basic-auth-2
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: users
+stringData:
+  api_key: amFuZTpkTnNScnNhcHkwbk5Dd210NTM3ZkhGcHl4MGNCc0xFcA== # jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp
+type: Opaque
+EOF
+
+

7. Consume the API

+

As John (authorized in the ACL):

+
curl -u john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+
curl -u john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx http://talker-api-authorino.127.0.0.1.nip.io:8000/bye
+# HTTP/1.1 200 OK
+
+

As Jane (NOT authorized in the ACL):

+
curl -u jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+
curl -u jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp http://talker-api-authorino.127.0.0.1.nip.io:8000/bye -i
+# HTTP/1.1 403 Forbidden
+
+

With an invalid user/password:

+
curl -u unknown:invalid http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Basic realm="http-basic-auth"
+
+

8. Revoke access to the API

+
kubectl delete secret/basic-auth-1
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/basic-auth-1
+kubectl delete secret/basic-auth-2
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/index.html b/authorino/docs/user-guides/index.html new file mode 100644 index 00000000..5c9184e5 --- /dev/null +++ b/authorino/docs/user-guides/index.html @@ -0,0 +1,2073 @@ + + + + + + + + + + + + + + + + + + + + User guides - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

User guides

+ + + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/injecting-data/index.html b/authorino/docs/user-guides/injecting-data/index.html new file mode 100644 index 00000000..ddaac06d --- /dev/null +++ b/authorino/docs/user-guides/injecting-data/index.html @@ -0,0 +1,2268 @@ + + + + + + + + + + + + + + + + + + + + + + + + Injecting data in the request - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Injecting data in the request

+

Inject HTTP headers with serialized JSON content.

+
+ Authorino features in this guide:
+ Inject serialized custom JSON objects as HTTP request headers. Values can be static or fetched from the Authorization JSON.
+ Check out as well the user guide about Authentication with API keys.
+ For further details about Authorino features in general, check the docs.
+ +


+

Requirements

+
    +
  • Kubernetes server
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (the manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward local requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

The following defines a JSON object to be injected as an added HTTP header into the request, named after the response config x-ext-auth-data. The object includes 3 properties:
1. a static value authorized: true;
2. a dynamic value request-time, from Envoy-supplied contextual data present in the Authorization JSON; and
3. a greeting message greeting-message that interpolates into a static string a dynamic value read from an annotation of the Kubernetes Secret resource that represents the API key used to authenticate.

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+  response:
+  - name: x-ext-auth-data
+    json:
+      properties:
+      - name: authorized
+        value: true
+      - name: request-time
+        valueFrom:
+          authJSON: context.request.time.seconds
+      - name: greeting-message
+        valueFrom:
+          authJSON: Hello, {auth.identity.metadata.annotations.auth-data\/name}!
+EOF
+
+

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.

+

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+  annotations:
+    auth-data/name: Rita
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

7. Consume the API

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# {
+#   "method": "GET",
+#   "path": "/hello",
+#   "query_string": null,
+#   "body": "",
+#   "headers": {
+#     …
+#     "X-Ext-Auth-Data": "{\"authorized\":true,\"greeting-message\":\"Hello, Rita!\",\"request-time\":1637954644}",
+#   },
+#   …
+# }
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/json-pattern-matching-authorization/index.html b/authorino/docs/user-guides/json-pattern-matching-authorization/index.html new file mode 100644 index 00000000..bec54b12 --- /dev/null +++ b/authorino/docs/user-guides/json-pattern-matching-authorization/index.html @@ -0,0 +1,2303 @@ + + + + + + + + + + + + + + + + + + + + + + + + Simple pattern-matching authorization policies - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Simple pattern-matching authorization policies

+

Write simple authorization rules based on JSON patterns matched against Authorino's Authorization JSON; check contextual information of the request, validate JWT claims, cross-check metadata fetched from external sources, etc.

+
+ Authorino features in this guide:
+ Authorino provides a built-in authorization module to check simple pattern-matching rules against the Authorization JSON. This is an alternative to OPA when all you want is to check some simple rules, without complex logic, such as matching the value of a JWT claim.
+ Check out as well the user guide about OpenID Connect Discovery and authentication with JWTs.
+ For further details about Authorino features in general, check the docs.
+ +


+

Requirements

+
    +
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (the manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward local requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

The email-verified-only authorization policy ensures that users consuming the API from a given network (IP range 192.168.1.0/24) must have their emails verified.

+

The email_verified claim is a property of the identity added to the JWT by the OpenID Connect issuer.

+

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  authorization:
+  - name: email-verified-only
+    when:
+    - selector: "context.request.http.headers.x-forwarded-for.@extract:{\"sep\": \",\"}"
+      operator: matches
+      value: 192\\.168\\.1\\.\\d+
+    json:
+      rules:
+      - selector: auth.identity.email_verified
+        operator: eq
+        value: "true"
+EOF
+
+

Check out the docs for information about semantics and operators supported by the JSON pattern-matching authorization feature, as well the common feature JSON paths for reading from the Authorization JSON, including the description of the string modifier @extract used above. Check out as well the common feature Conditions about skipping parts of an AuthConfig in the auth pipeline based on context.

+

6. Obtain an access token and consume the API

+

Obtain an access token and consume the API as Jane (email verified)

+

Obtain an access token with the Keycloak server for Jane:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint of the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

+
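
(Optional) You can decode the token and confirm the claims the AuthConfig relies on, i.e. the iss claim used for the OpenID Connect discovery and the email_verified claim checked by the policy. A sketch assuming the jwt CLI (any JWT decoder works); output abbreviated and illustrative:

+
jwt decode $ACCESS_TOKEN
+# [...]
+#   "email_verified": true,
+#   "iss": "http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant",
+# [...]
+
+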

As Jane, consume the API outside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 123.45.6.78' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

As Jane, consume the API inside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 192.168.1.10' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Obtain an access token and consume the API as Peter (email NOT verified)

+

Obtain an access token with the Keycloak server for Peter:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=peter' -d 'password=p' | jq -r .access_token)
+
+

As Peter, consume the API outside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 123.45.6.78' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

As Peter, consume the API inside the area where the policy applies:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
+     -H 'X-Forwarded-For: 192.168.1.10' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 403 Forbidden
+# x-ext-auth-reason: Unauthorized
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/keycloak-authorization-services/index.html b/authorino/docs/user-guides/keycloak-authorization-services/index.html new file mode 100644 index 00000000..0ebf4287 --- /dev/null +++ b/authorino/docs/user-guides/keycloak-authorization-services/index.html @@ -0,0 +1,2284 @@ + + + + + + + + + + + + + + + + + + + + + + + + Authorization with Keycloak Authorization Services - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Authorization with Keycloak Authorization Services

+

Keycloak provides a powerful set of tools (REST endpoints and administrative UIs), also known as Keycloak Authorization Services, to manage and enforce authorization workflows for multiple access control mechanisms, including discretionary user access control and user-managed permissions.

+

This user guide is an example of how to use Authorino as an adapter to Keycloak Authorization Services while still relying on the reverse-proxy integration pattern, thus without importing an authorization library or rebuilding the application's code.

+
+ + Authorino features in this guide: + + + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Keycloak server
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies an Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

In this example, Authorino will accept access tokens (JWTs) issued by the Keycloak server. These JWTs can be either normal Keycloak ID tokens or Requesting Party Tokens (RPT).

+

RPTs include claims about the permissions of the user regarding protected resources and scopes associated with a Keycloak authorization client that the user can access.

+

When the supplied access token is an RPT, Authorino will just validate whether the user's granted permissions present in the token include the requested resource ID (translated from the path) and scope (inferred from the HTTP method). If the token does not contain a permissions claim (i.e. it is not an RPT), Authorino will negotiate a User-Managed Access (UMA) ticket on behalf of the user and try to obtain an RPT on that UMA ticket.

+

In cases of asynchronous user-managed permission control, the first request to the API using a normal Keycloak ID token is denied by Authorino. The user that owns the resource acknowledges the access request in the Keycloak UI. If access is granted, the new permissions will be reflected in subsequent RPTs obtained by Authorino on behalf of the requesting party.

+

Whenever an RPT with proper permissions is obtained by Authorino, the RPT is supplied back to the API consumer, so it can be used in subsequent requests, thus skipping new negotiations of UMA tickets.
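
For illustration, the claims of a decoded RPT that matter here look roughly like this (a sketch; the actual rsid is a UUID assigned by Keycloak), and the policy below checks that one of the permissions matches the requested resource ID and scope:

+
{
+  "authorization": {
+    "permissions": [
+      { "rsid": "<resource uuid>", "rsname": "greeting-1", "scopes": [ "get" ] }
+    ]
+  }
+}
+
+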

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  authorization:
+  - name: uma
+    opa:
+      inlineRego: |
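+        # Sketch of the flow implemented by this policy: (1) obtain a PAT (protection
+        # API token) with the talker-api client credentials; (2) resolve the Keycloak
+        # resource ID matching the requested path; (3) reuse the supplied access token
+        # as RPT if it already carries permissions, otherwise negotiate a UMA ticket
+        # and exchange it for an RPT; (4) allow if the RPT grants the resource ID and
+        # scope of the request.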
+        pat := http.send({"url":"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token","method": "post","headers":{"Content-Type":"application/x-www-form-urlencoded"},"raw_body":"grant_type=client_credentials"}).body.access_token
+        resource_id := http.send({"url":concat("",["http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=",input.context.request.http.path]),"method":"get","headers":{"Authorization":concat(" ",["Bearer ",pat])}}).body[0]
+        scope := lower(input.context.request.http.method)
+        access_token := trim_prefix(input.context.request.http.headers.authorization, "Bearer ")
+
+        default rpt = ""
+        rpt = access_token { object.get(input.auth.identity, "authorization", {}).permissions }
+        else = rpt_str {
+          ticket := http.send({"url":"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/permission","method":"post","headers":{"Authorization":concat(" ",["Bearer ",pat]),"Content-Type":"application/json"},"raw_body":concat("",["[{\"resource_id\":\"",resource_id,"\",\"resource_scopes\":[\"",scope,"\"]}]"])}).body.ticket
+          rpt_str := object.get(http.send({"url":"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token","method":"post","headers":{"Authorization":concat(" ",["Bearer ",access_token]),"Content-Type":"application/x-www-form-urlencoded"},"raw_body":concat("",["grant_type=urn:ietf:params:oauth:grant-type:uma-ticket&ticket=",ticket,"&submit_request=true"])}).body, "access_token", "")
+        }
+
+        allow {
+          permissions := object.get(io.jwt.decode(rpt)[1], "authorization", { "permissions": [] }).permissions
+          permissions[i]
+          permissions[i].rsid = resource_id
+          permissions[i].scopes[_] = scope
+        }
+      allValues: true
+  response:
+  - name: x-keycloak
+    when:
+    - selector: auth.identity.authorization.permissions
+      operator: eq
+      value: ""
+    json:
+      properties:
+      - name: rpt
+        valueFrom: { authJSON: auth.authorization.uma.rpt }
+EOF
+
+

6. Obtain an access token with the Keycloak server

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for user Jane:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is reachable from within the cluster as well.

+

7. Consume the API

+

As Jane, try to send a GET request to the protected resource /greetings/1, owned by user John.

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 403 Forbidden
+
+

As John, log in to http://localhost:8080/auth/realms/kuadrant/account in the web browser (username: john / password: p), and grant access to the resource greeting-1 for Jane. A pending permission request from Jane should be listed under John's Resources.

+

As Jane, try to consume the protected resource /greetings/1 again:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 200 OK
+#
+# {…
+#   "headers": {…
+#     "X-Keycloak": "{\"rpt\":\"<RPT>", …
+
+

Copy the RPT from the response and repeat the request now using the RPT to authenticate:

+
curl -H "Authorization: Bearer <RPT>" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 200 OK
+
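
Optionally, inspect the permissions granted in the RPT by decoding its payload (illustrative only; the exact fields may vary by Keycloak version, and depending on your jq version you may need to re-add base64 padding):

+
jq -rR 'split(".")[1] | gsub("-"; "+") | gsub("_"; "/") | @base64d | fromjson | .authorization.permissions' <<< "<RPT>"
+# [ { "rsid": "<resource uuid>", "rsname": "greeting-1", "scopes": [ "get" ] } ]
+
+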
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/kubernetes-subjectaccessreview/index.html b/authorino/docs/user-guides/kubernetes-subjectaccessreview/index.html new file mode 100644 index 00000000..491f1a25 --- /dev/null +++ b/authorino/docs/user-guides/kubernetes-subjectaccessreview/index.html @@ -0,0 +1,2392 @@ + + + + + + + + + + + + + + + + + + + + + + + + Kubernetes RBAC for service authorization (SubjectAccessReview API) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Kubernetes RBAC for service authorization (SubjectAccessReview API)

+

Manage permissions in the Kubernetes RBAC and let Authorino check them at request time with the authorization system of the cluster.

+
+ + Authorino features in this guide: + + + + Authorino can delegate authorization decision to the Kubernetes authorization system, allowing permissions to be stored and managed using the Kubernetes Role-Based Access Control (RBAC) for example. The feature is based on the `SubjectAccessReview` API and can be used for `resourceAttributes` (parameters defined in the `AuthConfig`) or `nonResourceAttributes` (inferring HTTP path and verb from the original request). + + Check out as well the user guide about [Authentication with Kubernetes tokens (TokenReview API)](./kubernetes-tokenreview.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Kubernetes user with permission to create TokenRequests (to consume the API from outside the cluster)
  • +
  • yq (to parse your ~/.kube/config file to extract user authentication data)
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies an Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

The AuthConfig below sets all Kubernetes service accounts as trusted users of the API, and relies on the Kubernetes RBAC to enforce authorization via the Kubernetes SubjectAccessReview API for non-resource endpoints:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  - envoy.default.svc.cluster.local
+  identity:
+  - name: service-accounts
+    kubernetes:
+      audiences: ["https://kubernetes.default.svc.cluster.local"]
+  authorization:
+  - name: k8s-rbac
+    kubernetes:
+      user:
+        valueFrom: { authJSON: auth.identity.user.username }
+EOF
+
+

Check out the spec of the Authorino Kubernetes SubjectAccessReview authorization feature for resource-attributes permission checks, where the SubjectAccessReviews issued by Authorino are modeled in terms of common attributes of operations on Kubernetes resources (namespace, API group, kind, name, subresource, verb), as sketched below.

+
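
For illustration, a resource-attributes variant of the policy could look like the sketch below (not used in this guide; field names per the v1beta1 API, and the @case:lower string modifier assumed available for lowercasing the HTTP method):

+
authorization:
+- name: k8s-rbac-resources
+  kubernetes:
+    user:
+      valueFrom: { authJSON: auth.identity.user.username }
+    resourceAttributes:
+      namespace: { value: default }
+      group: { value: apps }
+      resource: { value: deployments }
+      verb: { valueFrom: { authJSON: context.request.http.method.@case:lower } }
+
+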

6. Create roles associated with endpoints of the API

+

Because the k8s-rbac policy defined in the AuthConfig in the previous step is for non-resource access review requests, the corresponding roles and role bindings have to be defined at cluster scope.

+

Create a talker-api-greeter role; users and service accounts bound to this role can consume the non-resource endpoints POST /hello and POST /hi of the API:

+
kubectl apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: talker-api-greeter
+rules:
+- nonResourceURLs: ["/hello"]
+  verbs: ["post"]
+- nonResourceURLs: ["/hi"]
+  verbs: ["post"]
+EOF
+
+

Create a talker-api-speaker role; users and service accounts bound to this role can consume the non-resource endpoints POST /say/* of the API:

+
kubectl apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: talker-api-speaker
+rules:
+- nonResourceURLs: ["/say/*"]
+  verbs: ["post"]
+EOF
+
+

7. Create the ServiceAccounts and permissions to consume the API

+

Create service accounts api-consumer-1 and api-consumer-2:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: api-consumer-1
+EOF
+
+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: api-consumer-2
+EOF
+
+

Bind both service accounts to the talker-api-greeter role:

+
kubectl apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: talker-api-greeter-rolebinding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: talker-api-greeter
+subjects:
+- kind: ServiceAccount
+  name: api-consumer-1
+  namespace: default
+- kind: ServiceAccount
+  name: api-consumer-2
+  namespace: default
+EOF
+
+

Bind service account api-consumer-1 to the talker-api-speaker role:

+
kubectl apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: talker-api-speaker-rolebinding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: talker-api-speaker
+subjects:
+- kind: ServiceAccount
+  name: api-consumer-1
+  namespace: default
+EOF
+
+

8. Consume the API

+

Run a pod that consumes one of the greeting endpoints of the API from inside the cluster, as service account api-consumer-1, bound to the talker-api-greeter and talker-api-speaker cluster roles in the Kubernetes RBAC:

+
kubectl run greeter --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{
+  "apiVersion": "v1",
+  "spec": {
+    "containers": [{
+      "name": "api-consumer", "image": "quay.io/kuadrant/authorino-examples:api-consumer", "command": ["./run"],
+      "args":["--endpoint=http://envoy.default.svc.cluster.local:8000/hi","--method=POST","--interval=0","--token-path=/var/run/secrets/tokens/api-token"],
+      "volumeMounts": [{"mountPath": "/var/run/secrets/tokens","name": "access-token"}]
+    }],
+    "serviceAccountName": "api-consumer-1",
+    "volumes": [{"name": "access-token","projected": {"sources": [{"serviceAccountToken": {"path": "api-token","expirationSeconds": 7200}}]}}]
+  }
+}' -- sh
+# Sending...
+# 200
+
+

Run a pod that sends a POST request to /say/blah from within the cluster, as service account api-consumer-1:

+
kubectl run speaker --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{
+  "apiVersion": "v1",
+  "spec": {
+    "containers": [{
+      "name": "api-consumer", "image": "quay.io/kuadrant/authorino-examples:api-consumer", "command": ["./run"],
+      "args":["--endpoint=http://envoy.default.svc.cluster.local:8000/say/blah","--method=POST","--interval=0","--token-path=/var/run/secrets/tokens/api-token"],
+      "volumeMounts": [{"mountPath": "/var/run/secrets/tokens","name": "access-token"}]
+    }],
+    "serviceAccountName": "api-consumer-1",
+    "volumes": [{"name": "access-token","projected": {"sources": [{"serviceAccountToken": {"path": "api-token","expirationSeconds": 7200}}]}}]
+  }
+}' -- sh
+# Sending...
+# 200
+
+

Run a pod that sends a POST request to /say/blah from within the cluster, as service account api-consumer-2, bound only to the talker-api-greeter cluster role in the Kubernetes RBAC:

+
kubectl run speaker --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{
+  "apiVersion": "v1",
+  "spec": {
+    "containers": [{
+      "name": "api-consumer", "image": "quay.io/kuadrant/authorino-examples:api-consumer", "command": ["./run"],
+      "args":["--endpoint=http://envoy.default.svc.cluster.local:8000/say/blah","--method=POST","--interval=0","--token-path=/var/run/secrets/tokens/api-token"],
+      "volumeMounts": [{"mountPath": "/var/run/secrets/tokens","name": "access-token"}]
+    }],
+    "serviceAccountName": "api-consumer-2",
+    "volumes": [{"name": "access-token","projected": {"sources": [{"serviceAccountToken": {"path": "api-token","expirationSeconds": 7200}}]}}]
+  }
+}' -- sh
+# Sending...
+# 403
+
+
+ Extra: consume the API as service account api-consumer-2 from outside the cluster + +
+ + Obtain a short-lived access token for service account `api-consumer-2`, bound to the `talker-api-greeter` cluster role in the Kubernetes RBAC, using the Kubernetes TokenRequest API: + +
export ACCESS_TOKEN=$(echo '{ "apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest", "spec": { "expirationSeconds": 600 } }' | kubectl create --raw /api/v1/namespaces/default/serviceaccounts/api-consumer-2/token -f - | jq -r .status.token)
+
+ + Consume the API as `api-consumer-2` from outside the cluster: + +
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X POST http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+ +
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X POST http://talker-api-authorino.127.0.0.1.nip.io:8000/say/something -i
+# HTTP/1.1 403 Forbidden
+
+
+ +

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete serviceaccount/api-consumer-1
+kubectl delete serviceaccount/api-consumer-2
+kubectl delete clusterrolebinding/talker-api-greeter-rolebinding
+kubectl delete clusterrolebinding/talker-api-speaker-rolebinding
+kubectl delete clusterrole/talker-api-greeter
+kubectl delete clusterrole/talker-api-speaker
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/kubernetes-tokenreview/index.html b/authorino/docs/user-guides/kubernetes-tokenreview/index.html new file mode 100644 index 00000000..ec708fdf --- /dev/null +++ b/authorino/docs/user-guides/kubernetes-tokenreview/index.html @@ -0,0 +1,2297 @@ + + + + + + + + + + + + + + + + + + + + + + + + Authentication with Kubernetes tokens (TokenReview API) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Authentication with Kubernetes tokens (TokenReview API)

+

Validate Kubernetes Service Account tokens to authenticate requests to your protected hosts.

+
+ + Authorino features in this guide: + + + + Authorino can verify Kubernetes-valid access tokens (using Kubernetes [TokenReview](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1) API). + + These tokens can be either `ServiceAccount` tokens or any valid user access tokens issued to users of the Kubernetes server API. + + The `audiences` claim of the token must include the requested host and port of the protected API (default), or all audiences specified in `spec.identity.kubernetes.audiences` of the `AuthConfig`. + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Kubernetes user with permission to create TokenRequests (to consume the API from outside the cluster)
  • +
  • yq (to parse your ~/.kube/config file to extract user authentication data)
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies an Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  - envoy.default.svc.cluster.local
+  identity:
+  - name: authorized-service-accounts
+    kubernetes:
+      audiences:
+      - talker-api
+EOF
+
+

6. Create a ServiceAccount

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: api-consumer-1
+EOF
+
+

7. Consume the API from outside the cluster

+

Obtain a short-lived access token for the api-consumer-1 ServiceAccount:

+
export ACCESS_TOKEN=$(echo '{ "apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest", "spec": { "audiences": ["talker-api"], "expirationSeconds": 600 } }' | kubectl create --raw /api/v1/namespaces/default/serviceaccounts/api-consumer-1/token -f - | jq -r .status.token)
+
+
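
For illustration, this is roughly the TokenReview request Authorino submits to validate the token; you can issue it manually too, provided your user has permission to create tokenreviews (a sketch, not part of the original guide):

+
echo '{ "apiVersion": "authentication.k8s.io/v1", "kind": "TokenReview", "spec": { "token": "'$ACCESS_TOKEN'", "audiences": ["talker-api"] } }' | kubectl create --raw /apis/authentication.k8s.io/v1/tokenreviews -f - | jq .status
+# { "authenticated": true, "audiences": ["talker-api"], "user": { "username": "system:serviceaccount:default:api-consumer-1", … } }
+
+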

Consume the API with a valid Kubernetes token:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Consume the API with the Kubernetes token expired (10 minutes):

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="authorized-service-accounts"
+# x-ext-auth-reason: Not authenticated
+
+

8. Consume the API from inside the cluster

+

Deploy an application that consumes an endpoint of the Talker API, in a loop, every 10 seconds. The application uses a short-lived service account token mounted inside the container using Kubernetes Service Account Token Volume Projection to authenticate.

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: api-consumer
+spec:
+  containers:
+  - name: api-consumer
+    image: quay.io/kuadrant/authorino-examples:api-consumer
+    command: ["./run"]
+    args:
+      - --endpoint=http://envoy.default.svc.cluster.local:8000/hello
+      - --token-path=/var/run/secrets/tokens/api-token
+      - --interval=10
+    volumeMounts:
+    - mountPath: /var/run/secrets/tokens
+      name: talker-api-access-token
+  serviceAccountName: api-consumer-1
+  volumes:
+  - name: talker-api-access-token
+    projected:
+      sources:
+      - serviceAccountToken:
+          path: api-token
+          expirationSeconds: 7200
+          audience: talker-api
+EOF
+
+

Check the logs of api-consumer:

+
kubectl logs -f api-consumer
+# Sending...
+# 200
+# 200
+# 200
+# 200
+# ...
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete pod/api-consumer
+kubectl delete serviceaccount/api-consumer-1
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/mtls-authentication/index.html b/authorino/docs/user-guides/mtls-authentication/index.html new file mode 100644 index 00000000..e7a223cd --- /dev/null +++ b/authorino/docs/user-guides/mtls-authentication/index.html @@ -0,0 +1,2501 @@ + + + + + + + + + + + + + + + + + + + + + + + + Authentication with X.509 certificates and mTLS - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Authentication with X.509 certificates and Mutual Transport Layer Security (mTLS)

+

Verify client X.509 certificates against trusted root CAs stored in Kubernetes Secrets to authenticate access to APIs protected with Authorino.

+
+ + Authorino features in this guide: + + + + Authorino can verify x509 certificates presented by clients for authentication on the request to the protected APIs, at application level. + + Trusted root Certificate Authorities (CA) are stored as Kubernetes `kubernetes.io/tls` Secrets labeled according to selectors specified in the AuthConfig, watched and cached by Authorino. + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

    +
  • Kubernetes server
  • +
  • cert-manager
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Install cert-manager in the cluster:

+
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy Authorino

+

Create the TLS certificates for the Authorino service:

+
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed "s/\$(AUTHORINO_INSTANCE)/authorino/g;s/\$(NAMESPACE)/default/g" | kubectl apply -f -
+
+

Deploy an Authorino service:

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      certSecretRef:
+        name: authorino-server-cert
+  oidcServer:
+    tls:
+      certSecretRef:
+        name: authorino-oidc-server-cert
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination enabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

3. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

4. Create a CA

+

Create a CA certificate to issue the client certificates that will be used to authenticate clients consuming the Talker API:

+
openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 -subj "/CN=talker-api-ca" -keyout /tmp/ca.key -out /tmp/ca.crt
+
+

Store the CA cert in a Kubernetes Secret, labeled to be discovered by Authorino:

+
kubectl create secret tls talker-api-ca --cert=/tmp/ca.crt --key=/tmp/ca.key
+kubectl label secret talker-api-ca authorino.kuadrant.io/managed-by=authorino app=talker-api
+
+

5. Setup Envoy

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    app: envoy
+  name: envoy
+data:
+  envoy.yaml: |
+    static_resources:
+      listeners:
+      - address:
+          socket_address:
+            address: 0.0.0.0
+            port_value: 8000
+        filter_chains:
+        - transport_socket:
+            name: envoy.transport_sockets.tls
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
+              common_tls_context:
+                tls_certificates:
+                - certificate_chain: {filename: "/etc/ssl/certs/talker-api/tls.crt"}
+                  private_key: {filename: "/etc/ssl/certs/talker-api/tls.key"}
+                validation_context:
+                  trusted_ca:
+                    filename: /etc/ssl/certs/talker-api/tls.crt
+          filters:
+          - name: envoy.http_connection_manager
+            typed_config:
+              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
+              stat_prefix: local
+              route_config:
+                name: local_route
+                virtual_hosts:
+                - name: local_service
+                  domains: ['*']
+                  routes:
+                  - match: { prefix: / }
+                    route: { cluster: talker-api }
+              http_filters:
+              - name: envoy.filters.http.ext_authz
+                typed_config:
+                  "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
+                  transport_api_version: V3
+                  failure_mode_allow: false
+                  include_peer_certificate: true
+                  grpc_service:
+                    envoy_grpc: { cluster_name: authorino }
+                    timeout: 1s
+              - name: envoy.filters.http.router
+                typed_config: {}
+              use_remote_address: true
+      clusters:
+      - name: authorino
+        connect_timeout: 0.25s
+        type: strict_dns
+        lb_policy: round_robin
+        http2_protocol_options: {}
+        load_assignment:
+          cluster_name: authorino
+          endpoints:
+          - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: authorino-authorino-authorization
+                    port_value: 50051
+        transport_socket:
+          name: envoy.transport_sockets.tls
+          typed_config:
+            "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
+            common_tls_context:
+              validation_context:
+                trusted_ca:
+                  filename: /etc/ssl/certs/authorino-ca-cert.crt
+      - name: talker-api
+        connect_timeout: 0.25s
+        type: strict_dns
+        lb_policy: round_robin
+        load_assignment:
+          cluster_name: talker-api
+          endpoints:
+          - lb_endpoints:
+            - endpoint:
+                address:
+                  socket_address:
+                    address: talker-api
+                    port_value: 3000
+    admin:
+      access_log_path: "/tmp/admin_access.log"
+      address:
+        socket_address:
+          address: 0.0.0.0
+          port_value: 8001
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: envoy
+  name: envoy
+spec:
+  selector:
+    matchLabels:
+      app: envoy
+  template:
+    metadata:
+      labels:
+        app: envoy
+    spec:
+      containers:
+      - args:
+        - --config-path /usr/local/etc/envoy/envoy.yaml
+        - --service-cluster front-proxy
+        - --log-level info
+        - --component-log-level filter:trace,http:debug,router:debug
+        command:
+        - /usr/local/bin/envoy
+        image: envoyproxy/envoy:v1.19-latest
+        name: envoy
+        ports:
+        - containerPort: 8000
+          name: web
+        - containerPort: 8001
+          name: admin
+        volumeMounts:
+        - mountPath: /usr/local/etc/envoy
+          name: config
+          readOnly: true
+        - mountPath: /etc/ssl/certs/authorino-ca-cert.crt
+          name: authorino-ca-cert
+          readOnly: true
+          subPath: ca.crt
+        - mountPath: /etc/ssl/certs/talker-api
+          name: talker-api-ca
+          readOnly: true
+      volumes:
+      - configMap:
+          items:
+          - key: envoy.yaml
+            path: envoy.yaml
+          name: envoy
+        name: config
+      - name: authorino-ca-cert
+        secret:
+          defaultMode: 420
+          secretName: authorino-ca-cert
+      - name: talker-api-ca
+        secret:
+          defaultMode: 420
+          secretName: talker-api-ca
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: envoy
+spec:
+  selector:
+    app: envoy
+  ports:
+  - name: web
+    port: 8000
+    protocol: TCP
+---
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: ingress-wildcard-host
+spec:
+  rules:
+  - host: talker-api-authorino.127.0.0.1.nip.io
+    http:
+      paths:
+      - backend:
+          service:
+            name: envoy
+            port: { number: 8000 }
+        path: /
+        pathType: Prefix
+EOF
+
+

The bundle includes an Ingress with host name talker-api-authorino.127.0.0.1.nip.io. If you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

6. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: mtls
+    mtls:
+      selector:
+        matchLabels:
+          app: talker-api
+  authorization:
+  - name: acme
+    json:
+      rules:
+      - selector: auth.identity.Organization
+        operator: incl
+        value: ACME Inc.
+EOF
+
+

7. Consume the API

+

With a TLS certificate signed by the trusted CA:

+
openssl genrsa -out /tmp/aisha.key 2048
+openssl req -new -key /tmp/aisha.key -out /tmp/aisha.csr -subj "/CN=aisha/C=PK/L=Islamabad/O=ACME Inc./OU=Engineering"
+openssl x509 -req -in /tmp/aisha.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -CAcreateserial -out /tmp/aisha.crt -days 1 -sha256
+
+curl -k --cert /tmp/aisha.crt --key /tmp/aisha.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 200 OK
+
+

With a TLS certificate signed by the trusted CA, though missing an authorized Organization:

+
openssl genrsa -out /tmp/john.key 2048
+openssl req -new -key /tmp/john.key -out /tmp/john.csr -subj "/CN=john/C=UK/L=London"
+openssl x509 -req -in /tmp/john.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -CAcreateserial -out /tmp/john.crt -days 1 -sha256
+
+curl -k --cert /tmp/john.crt --key /tmp/john.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 403 Forbidden
+# x-ext-auth-reason: Unauthorized
+
+

8. Try the AuthConfig via raw HTTP authorization interface

+

Expose Authorino's raw HTTP authorization interface to the local host:

+
kubectl port-forward service/authorino-authorino-authorization 5001:5001 &
+
+

With a TLS certificate signed by the trusted CA:

+
curl -k --cert /tmp/aisha.crt --key /tmp/aisha.key -H 'Content-Type: application/json' -d '{}' https://talker-api-authorino.127.0.0.1.nip.io:5001/check -i
+# HTTP/2 200
+
+

With a TLS certificate signed by an unknown authority:

+
openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 -subj "/CN=untrusted" -keyout /tmp/untrusted-ca.key -out /tmp/untrusted-ca.crt
+openssl genrsa -out /tmp/niko.key 2048
+openssl req -new -key /tmp/niko.key -out /tmp/niko.csr -subj "/CN=niko/C=JP/L=Osaka"
+openssl x509 -req -in /tmp/niko.csr -CA /tmp/untrusted-ca.crt -CAkey /tmp/untrusted-ca.key -CAcreateserial -out /tmp/niko.crt -days 1 -sha256
+
+curl -k --cert /tmp/niko.crt --key /tmp/niko.key -H 'Content-Type: application/json' -d '{}' https://talker-api-authorino.127.0.0.1.nip.io:5001/check -i
+# HTTP/2 401
+# www-authenticate: Basic realm="mtls"
+# x-ext-auth-reason: x509: certificate signed by unknown authority
+
+

9. Revoke an entire chain of certificates

+
kubectl delete secret/talker-api-ca
+
+

Even if the deleted root certificate is still cached and accepted at the gateway, Authorino will revoke access at application level immediately.

+

Try with a previously accepted certificate:

+
curl -k --cert /tmp/aisha.crt --key /tmp/aisha.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Basic realm="mtls"
+# x-ext-auth-reason: x509: certificate signed by unknown authority
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete ingress/ingress-wildcard-host
+kubectl delete service/envoy
+kubectl delete deployment/envoy
+kubectl delete configmap/envoy
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

To uninstall the cert-manager, run:

+
kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/oauth2-token-introspection/index.html b/authorino/docs/user-guides/oauth2-token-introspection/index.html new file mode 100644 index 00000000..dd09b21a --- /dev/null +++ b/authorino/docs/user-guides/oauth2-token-introspection/index.html @@ -0,0 +1,2374 @@ + + + + + + + + + + + + + + + + + + + + + + + + OAuth 2.0 token introspection (RFC 7662) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: OAuth 2.0 token introspection (RFC 7662)

+

Introspect OAuth 2.0 access tokens (e.g. opaque tokens) for online user data and token validation at request time.

+
+ + Authorino features in this guide: + + + + Authorino can perform OAuth 2.0 token introspection ([RFC 7662](https://tools.ietf.org/html/rfc7662)) on the access tokens supplied in the requests to protected APIs. This is particularly useful when using opaque tokens, for remotely checking the token validity and resolving the identity object. + + _Important!_ Authorino does **not** implement [OAuth2 grants](https://datatracker.ietf.org/doc/html/rfc6749#section-4) nor [OIDC authentication flows](https://openid.net/specs/openid-connect-core-1_0.html#Authentication). As a common recommendation of good practice, obtaining and refreshing access tokens is for clients to negotiate directly with the auth servers and token issuers. Authorino will only validate those tokens using the parameters provided by the trusted issuer authorities. + + Check out as well the user guides about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Simple pattern-matching authorization policies](./json-pattern-matching-authorization.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • OAuth 2.0 server that implements the token introspection endpoint (RFC 7662) (e.g. Keycloak or a12n-server)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

Deploy an a12n-server server preloaded with all the realm settings required for this guide:

+
kubectl create namespace a12n-server
+kubectl -n a12n-server apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/a12n-server/a12n-server-deploy.yaml
+
+

Forward local requests to the instance of a12n-server running in the cluster:

+
kubectl -n a12n-server port-forward deployment/a12n-server 8531:8531 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies an Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

Create a couple of required Secrets, used by Authorino to authenticate with Keycloak and a12n-server during the introspection requests:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: oauth2-token-introspection-credentials-keycloak
+stringData:
+  clientID: talker-api
+  clientSecret: 523b92b6-625d-4e1e-a313-77e7a8ae4e88
+type: Opaque
+EOF
+
+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: oauth2-token-introspection-credentials-a12n-server
+stringData:
+  clientID: talker-api
+  clientSecret: V6g-2Eq2ALB1_WHAswzoeZofJ_e86RI4tdjClDDDb4g
+type: Opaque
+EOF
+
+

Create the config:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak
+    oauth2:
+      tokenIntrospectionUrl: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect
+      tokenTypeHint: requesting_party_token
+      credentialsRef:
+        name: oauth2-token-introspection-credentials-keycloak
+  - name: a12n-server
+    oauth2:
+      tokenIntrospectionUrl: http://a12n-server.a12n-server.svc.cluster.local:8531/introspect
+      credentialsRef:
+        name: oauth2-token-introspection-credentials-a12n-server
+  authorization:
+  - name: can-read
+    when:
+    - selector: auth.identity.privileges
+      operator: neq
+      value: ""
+    json:
+      rules:
+      - selector: auth.identity.privileges.talker-api
+        operator: incl
+        value: read
+EOF
+
+

On every request, Authorino will try to verify the token remotely with both the Keycloak server and the a12n-server.

+

For authorization, whenever the introspected token data includes a privileges property (returned by a12n-server), Authorino will enforce that only consumers whose privileges.talker-api includes the "read" permission are granted access.
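
For illustration, an introspection response along these lines would be granted access under the can-read policy (a sketch of the relevant fields only):

+
{
+  "active": true,
+  "privileges": {
+    "talker-api": [ "read" ]
+  }
+}
+
+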

+

Check out the docs for information about the common feature Conditions, for skipping parts of an AuthConfig in the auth pipeline based on context.

+

6. Obtain an access token and consume the API

+

Obtain an access token with Keycloak and consume the API

+

Obtain an access token with the Keycloak server for user Jane:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

+
export $(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r '"ACCESS_TOKEN="+.access_token,"REFRESH_TOKEN="+.refresh_token')
+
+

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is reachable from within the cluster as well.

+

As user Jane, consume the API:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Revoke the access token and try to consume the API again:

+
kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/logout -H "Content-Type: application/x-www-form-urlencoded" -d "refresh_token=$REFRESH_TOKEN" -d 'token_type_hint=requesting_party_token' -u demo:
+
+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="keycloak"
+# www-authenticate: Bearer realm="a12n-server"
+# x-ext-auth-reason: {"a12n-server":"token is not active","keycloak":"token is not active"}
+
+

Obtain an access token with a12n-server and consume the API

+

Obtain an access token with the a12n-server server for service account service-account-1:

+
ACCESS_TOKEN=$(curl -d 'grant_type=client_credentials' -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s "http://localhost:8531/token" | jq -r .access_token)
+
+

You can also obtain an access token from within the cluster, in case your a12n-server is not reachable from the outside:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://a12n-server.a12n-server.svc.cluster.local:8531/token -s -d 'grant_type=client_credentials' -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s | jq -r .access_token)
+
+

Verify the issued token is an opaque access token in this case:

+
echo $ACCESS_TOKEN
+
+
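
For illustration, you can also introspect the opaque token manually, mirroring the RFC 7662 request Authorino sends under the hood (a sketch; the exact response fields depend on the a12n-server setup):

+
curl -s -u talker-api:V6g-2Eq2ALB1_WHAswzoeZofJ_e86RI4tdjClDDDb4g -d "token=$ACCESS_TOKEN" http://localhost:8531/introspect | jq
+# { "active": true, "privileges": { "talker-api": [ "read" ] }, … }
+
+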

As service-account-1, consume the API with a valid access token:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Revoke the access token and try to consume the API again:

+
curl -d "token=$ACCESS_TOKEN" -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s "http://localhost:8531/revoke" -i
+
+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="keycloak"
+# www-authenticate: Bearer realm="a12n-server"
+# x-ext-auth-reason: {"a12n-server":"token is not active","keycloak":"token is not active"}
+
+

Consume the API with a missing or invalid access token

+
curl -H "Authorization: Bearer invalid" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="keycloak"
+# www-authenticate: Bearer realm="a12n-server"
+# x-ext-auth-reason: {"a12n-server":"token is not active","keycloak":"token is not active"}
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete secret/oauth2-token-introspection-credentials-keycloak
+kubectl delete secret/oauth2-token-introspection-credentials-a12n-server
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+kubectl delete namespace a12n-server
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc.), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/docs/user-guides/observability/index.html b/authorino/docs/user-guides/observability/index.html
new file mode 100644
index 00000000..57253025
--- /dev/null
+++ b/authorino/docs/user-guides/observability/index.html
@@ -0,0 +1,3990 @@

Observability

+

Metrics

+

Authorino exports metrics at 2 endpoints:

/metrics: Metrics of the controller-runtime about reconciliation (caching) of AuthConfigs and API key Secrets
/server-metrics: Metrics of the external authorization gRPC and OIDC/Festival Wristband validation built-in HTTP servers

The Authorino Operator creates a Kubernetes Service named <authorino-cr-name>-controller-metrics that exposes the endpoints on port 8080. The port number of the metrics endpoints can be modified by setting the --metrics-addr command-line flag on the Authorino instance (default: :8080).

+
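To have a quick look at what is being exported, you can port-forward the metrics service and hit both endpoints. The sketch below assumes the Authorino CR is named authorino (hence a Service named authorino-controller-metrics) and is deployed in the current namespace:

kubectl port-forward service/authorino-controller-metrics 8080:8080 &
+curl http://localhost:8080/metrics
+curl http://localhost:8080/server-metrics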

Main metrics exported by endpoint¹:


Endpoint: /metrics

controller_runtime_reconcile_total: Total number of reconciliations per controller. Labels: controller=authconfig|secret, result=success|error|requeue. Type: counter.
controller_runtime_reconcile_errors_total: Total number of reconciliation errors per controller. Labels: controller=authconfig|secret. Type: counter.
controller_runtime_reconcile_time_seconds: Length of time per reconciliation per controller. Labels: controller=authconfig|secret. Type: histogram.
controller_runtime_max_concurrent_reconciles: Maximum number of concurrent reconciles per controller. Labels: controller=authconfig|secret. Type: gauge.
workqueue_adds_total: Total number of adds handled by workqueue. Labels: name=authconfig|secret. Type: counter.
workqueue_depth: Current depth of workqueue. Labels: name=authconfig|secret. Type: gauge.
workqueue_queue_duration_seconds: How long in seconds an item stays in workqueue before being requested. Labels: name=authconfig|secret. Type: histogram.
workqueue_longest_running_processor_seconds: How many seconds has the longest running processor for workqueue been running. Labels: name=authconfig|secret. Type: gauge.
workqueue_retries_total: Total number of retries handled by workqueue. Labels: name=authconfig|secret. Type: counter.
workqueue_unfinished_work_seconds: How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Labels: name=authconfig|secret. Type: gauge.
workqueue_work_duration_seconds: How long in seconds processing an item from workqueue takes. Labels: name=authconfig|secret. Type: histogram.
rest_client_requests_total: Number of HTTP requests, partitioned by status code, method, and host. Labels: code=200|404, method=GET|PUT|POST. Type: counter.


Endpoint: /server-metrics

auth_server_evaluator_total ²: Total number of evaluations of individual authconfig rule performed by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name. Type: counter.
auth_server_evaluator_cancelled ²: Number of evaluations of individual authconfig rule cancelled by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name. Type: counter.
auth_server_evaluator_ignored ²: Number of evaluations of individual authconfig rule ignored by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name. Type: counter.
auth_server_evaluator_denied ²: Number of denials from individual authconfig rule evaluated by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name. Type: counter.
auth_server_evaluator_duration_seconds ²: Response latency of individual authconfig rule evaluated by the auth server (in seconds). Labels: namespace, authconfig, evaluator_type, evaluator_name. Type: histogram.
auth_server_authconfig_total: Total number of authconfigs enforced by the auth server, partitioned by authconfig. Labels: namespace, authconfig. Type: counter.
auth_server_authconfig_response_status: Response status of authconfigs sent by the auth server, partitioned by authconfig. Labels: namespace, authconfig, status=OK|UNAUTHENTICATED|PERMISSION_DENIED. Type: counter.
auth_server_authconfig_duration_seconds: Response latency of authconfig enforced by the auth server (in seconds). Labels: namespace, authconfig. Type: histogram.
auth_server_response_status: Response status of authconfigs sent by the auth server. Labels: status=OK|UNAUTHENTICATED|PERMISSION_DENIED|NOT_FOUND. Type: counter.
grpc_server_handled_total: Total number of RPCs completed on the server, regardless of success or failure. Labels: grpc_code=OK|Aborted|Canceled|DeadlineExceeded|Internal|ResourceExhausted|Unknown, grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization. Type: counter.
grpc_server_handling_seconds: Response latency (seconds) of gRPC that had been application-level handled by the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization. Type: histogram.
grpc_server_msg_received_total: Total number of RPC stream messages received on the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization. Type: counter.
grpc_server_msg_sent_total: Total number of gRPC stream messages sent by the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization. Type: counter.
grpc_server_started_total: Total number of RPCs started on the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization. Type: counter.
http_server_handled_total: Total number of calls completed on the raw HTTP authorization server, regardless of success or failure. Labels: http_code. Type: counter.
http_server_handling_seconds: Response latency (seconds) of raw HTTP authorization request that had been application-level handled by the server. Type: histogram.
oidc_server_requests_total: Number of get requests received on the OIDC (Festival Wristband) server. Labels: namespace, authconfig, wristband, path=oidc-config|jwks. Type: counter.
oidc_server_response_status: Status of HTTP response sent by the OIDC (Festival Wristband) server. Labels: status=200|404. Type: counter.
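Once scraped by a Prometheus instance, these metrics can be queried like any other series. A minimal sketch, assuming a hypothetical Prometheus server reachable at prometheus.example.com:9090, that charts the per-authconfig rate of PERMISSION_DENIED responses over 5 minutes:

curl -G 'http://prometheus.example.com:9090/api/v1/query' --data-urlencode 'query=sum by (authconfig) (rate(auth_server_authconfig_response_status{status="PERMISSION_DENIED"}[5m]))'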

¹ Both endpoints also export metrics about the Go runtime, such as the number of goroutines (go_goroutines) and threads (go_threads), CPU and memory usage, and GC stats.

+

² Opt-in metrics: the auth_server_evaluator_* metrics require authconfig.spec.(identity|metadata|authorization|response).metrics: true (default: false). This can be enforced for the entire instance (all AuthConfigs and evaluators) by setting the --deep-metrics-enabled command-line flag in the Authorino deployment.

+
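For example, a minimal sketch of opting a single metadata evaluator in to the deep metrics, following the spec path stated above; the evaluator name, endpoint and host below are illustrative:

kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  metadata:
+  - name: geo
+    http:
+      endpoint: http://geo-service.example.com/location  # illustrative external metadata source
+      method: GET
+    metrics: true  # opts this evaluator in to the auth_server_evaluator_* metrics
+EOF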
+ Example of metrics exported at the /metrics endpoint + +
# HELP controller_runtime_active_workers Number of currently used workers per controller
+# TYPE controller_runtime_active_workers gauge
+controller_runtime_active_workers{controller="authconfig"} 0
+controller_runtime_active_workers{controller="secret"} 0
+# HELP controller_runtime_max_concurrent_reconciles Maximum number of concurrent reconciles per controller
+# TYPE controller_runtime_max_concurrent_reconciles gauge
+controller_runtime_max_concurrent_reconciles{controller="authconfig"} 1
+controller_runtime_max_concurrent_reconciles{controller="secret"} 1
+# HELP controller_runtime_reconcile_errors_total Total number of reconciliation errors per controller
+# TYPE controller_runtime_reconcile_errors_total counter
+controller_runtime_reconcile_errors_total{controller="authconfig"} 12
+controller_runtime_reconcile_errors_total{controller="secret"} 0
+# HELP controller_runtime_reconcile_time_seconds Length of time per reconciliation per controller
+# TYPE controller_runtime_reconcile_time_seconds histogram
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.005"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.01"} 11
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.025"} 17
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.05"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.1"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.15"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.2"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.25"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.3"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.35"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.4"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.45"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.5"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.6"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.7"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.8"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="0.9"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="1"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="1.25"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="1.5"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="1.75"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="2"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="2.5"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="3"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="3.5"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="4"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="4.5"} 18
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="5"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="6"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="7"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="8"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="9"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="10"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="15"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="20"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="25"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="30"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="40"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="50"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="60"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="authconfig",le="+Inf"} 19
+controller_runtime_reconcile_time_seconds_sum{controller="authconfig"} 5.171108321999999
+controller_runtime_reconcile_time_seconds_count{controller="authconfig"} 19
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.005"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.01"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.025"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.05"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.1"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.15"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.2"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.25"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.3"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.35"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.4"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.45"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.6"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.7"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.8"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="0.9"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="1"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="1.25"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="1.5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="1.75"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="2"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="2.5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="3"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="3.5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="4"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="4.5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="5"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="6"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="7"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="8"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="9"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="10"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="15"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="20"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="25"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="30"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="40"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="50"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="60"} 1
+controller_runtime_reconcile_time_seconds_bucket{controller="secret",le="+Inf"} 1
+controller_runtime_reconcile_time_seconds_sum{controller="secret"} 0.000138025
+controller_runtime_reconcile_time_seconds_count{controller="secret"} 1
+# HELP controller_runtime_reconcile_total Total number of reconciliations per controller
+# TYPE controller_runtime_reconcile_total counter
+controller_runtime_reconcile_total{controller="authconfig",result="error"} 12
+controller_runtime_reconcile_total{controller="authconfig",result="requeue"} 0
+controller_runtime_reconcile_total{controller="authconfig",result="requeue_after"} 0
+controller_runtime_reconcile_total{controller="authconfig",result="success"} 7
+controller_runtime_reconcile_total{controller="secret",result="error"} 0
+controller_runtime_reconcile_total{controller="secret",result="requeue"} 0
+controller_runtime_reconcile_total{controller="secret",result="requeue_after"} 0
+controller_runtime_reconcile_total{controller="secret",result="success"} 1
+# HELP go_gc_cycles_automatic_gc_cycles_total Count of completed GC cycles generated by the Go runtime.
+# TYPE go_gc_cycles_automatic_gc_cycles_total counter
+go_gc_cycles_automatic_gc_cycles_total 13
+# HELP go_gc_cycles_forced_gc_cycles_total Count of completed GC cycles forced by the application.
+# TYPE go_gc_cycles_forced_gc_cycles_total counter
+go_gc_cycles_forced_gc_cycles_total 0
+# HELP go_gc_cycles_total_gc_cycles_total Count of all completed GC cycles.
+# TYPE go_gc_cycles_total_gc_cycles_total counter
+go_gc_cycles_total_gc_cycles_total 13
+# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 4.5971e-05
+go_gc_duration_seconds{quantile="0.25"} 5.69e-05
+go_gc_duration_seconds{quantile="0.5"} 0.000140699
+go_gc_duration_seconds{quantile="0.75"} 0.000313162
+go_gc_duration_seconds{quantile="1"} 0.001692423
+go_gc_duration_seconds_sum 0.003671076
+go_gc_duration_seconds_count 13
+# HELP go_gc_heap_allocs_by_size_bytes_total Distribution of heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_allocs_by_size_bytes_total histogram
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="8.999999999999998"} 6357
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="16.999999999999996"} 45065
+[...]
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="32768.99999999999"} 128306
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="+Inf"} 128327
+go_gc_heap_allocs_by_size_bytes_total_sum 1.5021512e+07
+go_gc_heap_allocs_by_size_bytes_total_count 128327
+# HELP go_gc_heap_allocs_bytes_total Cumulative sum of memory allocated to the heap by the application.
+# TYPE go_gc_heap_allocs_bytes_total counter
+go_gc_heap_allocs_bytes_total 1.5021512e+07
+# HELP go_gc_heap_allocs_objects_total Cumulative count of heap allocations triggered by the application. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_allocs_objects_total counter
+go_gc_heap_allocs_objects_total 128327
+# HELP go_gc_heap_frees_by_size_bytes_total Distribution of freed heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_frees_by_size_bytes_total histogram
+go_gc_heap_frees_by_size_bytes_total_bucket{le="8.999999999999998"} 3885
+go_gc_heap_frees_by_size_bytes_total_bucket{le="16.999999999999996"} 33418
+[...]
+go_gc_heap_frees_by_size_bytes_total_bucket{le="32768.99999999999"} 96417
+go_gc_heap_frees_by_size_bytes_total_bucket{le="+Inf"} 96425
+go_gc_heap_frees_by_size_bytes_total_sum 9.880944e+06
+go_gc_heap_frees_by_size_bytes_total_count 96425
+# HELP go_gc_heap_frees_bytes_total Cumulative sum of heap memory freed by the garbage collector.
+# TYPE go_gc_heap_frees_bytes_total counter
+go_gc_heap_frees_bytes_total 9.880944e+06
+# HELP go_gc_heap_frees_objects_total Cumulative count of heap allocations whose storage was freed by the garbage collector. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_frees_objects_total counter
+go_gc_heap_frees_objects_total 96425
+# HELP go_gc_heap_goal_bytes Heap size target for the end of the GC cycle.
+# TYPE go_gc_heap_goal_bytes gauge
+go_gc_heap_goal_bytes 9.356624e+06
+# HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory.
+# TYPE go_gc_heap_objects_objects gauge
+go_gc_heap_objects_objects 31902
+# HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size.
+# TYPE go_gc_heap_tiny_allocs_objects_total counter
+go_gc_heap_tiny_allocs_objects_total 11750
+# HELP go_gc_pauses_seconds_total Distribution individual GC-related stop-the-world pause latencies.
+# TYPE go_gc_pauses_seconds_total histogram
+go_gc_pauses_seconds_total_bucket{le="9.999999999999999e-10"} 0
+go_gc_pauses_seconds_total_bucket{le="1.9999999999999997e-09"} 0
+[...]
+go_gc_pauses_seconds_total_bucket{le="206708.18602188796"} 26
+go_gc_pauses_seconds_total_bucket{le="+Inf"} 26
+go_gc_pauses_seconds_total_sum 0.003151488
+go_gc_pauses_seconds_total_count 26
+# HELP go_goroutines Number of goroutines that currently exist.
+# TYPE go_goroutines gauge
+go_goroutines 80
+# HELP go_info Information about the Go environment.
+# TYPE go_info gauge
+go_info{version="go1.18.7"} 1
+# HELP go_memory_classes_heap_free_bytes Memory that is completely free and eligible to be returned to the underlying system, but has not been. This metric is the runtime's estimate of free address space that is backed by physical memory.
+# TYPE go_memory_classes_heap_free_bytes gauge
+go_memory_classes_heap_free_bytes 589824
+# HELP go_memory_classes_heap_objects_bytes Memory occupied by live objects and dead objects that have not yet been marked free by the garbage collector.
+# TYPE go_memory_classes_heap_objects_bytes gauge
+go_memory_classes_heap_objects_bytes 5.140568e+06
+# HELP go_memory_classes_heap_released_bytes Memory that is completely free and has been returned to the underlying system. This metric is the runtime's estimate of free address space that is still mapped into the process, but is not backed by physical memory.
+# TYPE go_memory_classes_heap_released_bytes gauge
+go_memory_classes_heap_released_bytes 4.005888e+06
+# HELP go_memory_classes_heap_stacks_bytes Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use.
+# TYPE go_memory_classes_heap_stacks_bytes gauge
+go_memory_classes_heap_stacks_bytes 786432
+# HELP go_memory_classes_heap_unused_bytes Memory that is reserved for heap objects but is not currently used to hold heap objects.
+# TYPE go_memory_classes_heap_unused_bytes gauge
+go_memory_classes_heap_unused_bytes 2.0602e+06
+# HELP go_memory_classes_metadata_mcache_free_bytes Memory that is reserved for runtime mcache structures, but not in-use.
+# TYPE go_memory_classes_metadata_mcache_free_bytes gauge
+go_memory_classes_metadata_mcache_free_bytes 13984
+# HELP go_memory_classes_metadata_mcache_inuse_bytes Memory that is occupied by runtime mcache structures that are currently being used.
+# TYPE go_memory_classes_metadata_mcache_inuse_bytes gauge
+go_memory_classes_metadata_mcache_inuse_bytes 2400
+# HELP go_memory_classes_metadata_mspan_free_bytes Memory that is reserved for runtime mspan structures, but not in-use.
+# TYPE go_memory_classes_metadata_mspan_free_bytes gauge
+go_memory_classes_metadata_mspan_free_bytes 17104
+# HELP go_memory_classes_metadata_mspan_inuse_bytes Memory that is occupied by runtime mspan structures that are currently being used.
+# TYPE go_memory_classes_metadata_mspan_inuse_bytes gauge
+go_memory_classes_metadata_mspan_inuse_bytes 113968
+# HELP go_memory_classes_metadata_other_bytes Memory that is reserved for or used to hold runtime metadata.
+# TYPE go_memory_classes_metadata_other_bytes gauge
+go_memory_classes_metadata_other_bytes 5.544408e+06
+# HELP go_memory_classes_os_stacks_bytes Stack memory allocated by the underlying operating system.
+# TYPE go_memory_classes_os_stacks_bytes gauge
+go_memory_classes_os_stacks_bytes 0
+# HELP go_memory_classes_other_bytes Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more.
+# TYPE go_memory_classes_other_bytes gauge
+go_memory_classes_other_bytes 537777
+# HELP go_memory_classes_profiling_buckets_bytes Memory that is used by the stack trace hash map used for profiling.
+# TYPE go_memory_classes_profiling_buckets_bytes gauge
+go_memory_classes_profiling_buckets_bytes 1.455487e+06
+# HELP go_memory_classes_total_bytes All memory mapped by the Go runtime into the current process as read-write. Note that this does not include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes.
+# TYPE go_memory_classes_total_bytes gauge
+go_memory_classes_total_bytes 2.026804e+07
+# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
+# TYPE go_memstats_alloc_bytes gauge
+go_memstats_alloc_bytes 5.140568e+06
+# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
+# TYPE go_memstats_alloc_bytes_total counter
+go_memstats_alloc_bytes_total 1.5021512e+07
+# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
+# TYPE go_memstats_buck_hash_sys_bytes gauge
+go_memstats_buck_hash_sys_bytes 1.455487e+06
+# HELP go_memstats_frees_total Total number of frees.
+# TYPE go_memstats_frees_total counter
+go_memstats_frees_total 108175
+# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
+# TYPE go_memstats_gc_cpu_fraction gauge
+go_memstats_gc_cpu_fraction 0
+# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
+# TYPE go_memstats_gc_sys_bytes gauge
+go_memstats_gc_sys_bytes 5.544408e+06
+# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
+# TYPE go_memstats_heap_alloc_bytes gauge
+go_memstats_heap_alloc_bytes 5.140568e+06
+# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
+# TYPE go_memstats_heap_idle_bytes gauge
+go_memstats_heap_idle_bytes 4.595712e+06
+# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
+# TYPE go_memstats_heap_inuse_bytes gauge
+go_memstats_heap_inuse_bytes 7.200768e+06
+# HELP go_memstats_heap_objects Number of allocated objects.
+# TYPE go_memstats_heap_objects gauge
+go_memstats_heap_objects 31902
+# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
+# TYPE go_memstats_heap_released_bytes gauge
+go_memstats_heap_released_bytes 4.005888e+06
+# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
+# TYPE go_memstats_heap_sys_bytes gauge
+go_memstats_heap_sys_bytes 1.179648e+07
+# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
+# TYPE go_memstats_last_gc_time_seconds gauge
+go_memstats_last_gc_time_seconds 1.6461572121033354e+09
+# HELP go_memstats_lookups_total Total number of pointer lookups.
+# TYPE go_memstats_lookups_total counter
+go_memstats_lookups_total 0
+# HELP go_memstats_mallocs_total Total number of mallocs.
+# TYPE go_memstats_mallocs_total counter
+go_memstats_mallocs_total 140077
+# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
+# TYPE go_memstats_mcache_inuse_bytes gauge
+go_memstats_mcache_inuse_bytes 2400
+# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
+# TYPE go_memstats_mcache_sys_bytes gauge
+go_memstats_mcache_sys_bytes 16384
+# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
+# TYPE go_memstats_mspan_inuse_bytes gauge
+go_memstats_mspan_inuse_bytes 113968
+# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
+# TYPE go_memstats_mspan_sys_bytes gauge
+go_memstats_mspan_sys_bytes 131072
+# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
+# TYPE go_memstats_next_gc_bytes gauge
+go_memstats_next_gc_bytes 9.356624e+06
+# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
+# TYPE go_memstats_other_sys_bytes gauge
+go_memstats_other_sys_bytes 537777
+# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
+# TYPE go_memstats_stack_inuse_bytes gauge
+go_memstats_stack_inuse_bytes 786432
+# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
+# TYPE go_memstats_stack_sys_bytes gauge
+go_memstats_stack_sys_bytes 786432
+# HELP go_memstats_sys_bytes Number of bytes obtained from system.
+# TYPE go_memstats_sys_bytes gauge
+go_memstats_sys_bytes 2.026804e+07
+# HELP go_sched_goroutines_goroutines Count of live goroutines.
+# TYPE go_sched_goroutines_goroutines gauge
+go_sched_goroutines_goroutines 80
+# HELP go_sched_latencies_seconds Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running.
+# TYPE go_sched_latencies_seconds histogram
+go_sched_latencies_seconds_bucket{le="9.999999999999999e-10"} 244
+go_sched_latencies_seconds_bucket{le="1.9999999999999997e-09"} 244
+[...]
+go_sched_latencies_seconds_bucket{le="206708.18602188796"} 2336
+go_sched_latencies_seconds_bucket{le="+Inf"} 2336
+go_sched_latencies_seconds_sum 0.18509832400000004
+go_sched_latencies_seconds_count 2336
+# HELP go_threads Number of OS threads created.
+# TYPE go_threads gauge
+go_threads 8
+# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
+# TYPE process_cpu_seconds_total counter
+process_cpu_seconds_total 1.84
+# HELP process_max_fds Maximum number of open file descriptors.
+# TYPE process_max_fds gauge
+process_max_fds 1.048576e+06
+# HELP process_open_fds Number of open file descriptors.
+# TYPE process_open_fds gauge
+process_open_fds 14
+# HELP process_resident_memory_bytes Resident memory size in bytes.
+# TYPE process_resident_memory_bytes gauge
+process_resident_memory_bytes 4.3728896e+07
+# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.64615612779e+09
+# HELP process_virtual_memory_bytes Virtual memory size in bytes.
+# TYPE process_virtual_memory_bytes gauge
+process_virtual_memory_bytes 7.65362176e+08
+# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
+# TYPE process_virtual_memory_max_bytes gauge
+process_virtual_memory_max_bytes 1.8446744073709552e+19
+# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
+# TYPE rest_client_requests_total counter
+rest_client_requests_total{code="200",host="10.96.0.1:443",method="GET"} 114
+rest_client_requests_total{code="200",host="10.96.0.1:443",method="PUT"} 4
+# HELP workqueue_adds_total Total number of adds handled by workqueue
+# TYPE workqueue_adds_total counter
+workqueue_adds_total{name="authconfig"} 19
+workqueue_adds_total{name="secret"} 1
+# HELP workqueue_depth Current depth of workqueue
+# TYPE workqueue_depth gauge
+workqueue_depth{name="authconfig"} 0
+workqueue_depth{name="secret"} 0
+# HELP workqueue_longest_running_processor_seconds How many seconds has the longest running processor for workqueue been running.
+# TYPE workqueue_longest_running_processor_seconds gauge
+workqueue_longest_running_processor_seconds{name="authconfig"} 0
+workqueue_longest_running_processor_seconds{name="secret"} 0
+# HELP workqueue_queue_duration_seconds How long in seconds an item stays in workqueue before being requested
+# TYPE workqueue_queue_duration_seconds histogram
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="1e-08"} 0
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="1e-07"} 0
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="1e-06"} 0
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="9.999999999999999e-06"} 8
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="9.999999999999999e-05"} 17
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="0.001"} 17
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="0.01"} 17
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="0.1"} 18
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="1"} 18
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="10"} 19
+workqueue_queue_duration_seconds_bucket{name="authconfig",le="+Inf"} 19
+workqueue_queue_duration_seconds_sum{name="authconfig"} 4.969016371
+workqueue_queue_duration_seconds_count{name="authconfig"} 19
+workqueue_queue_duration_seconds_bucket{name="secret",le="1e-08"} 0
+workqueue_queue_duration_seconds_bucket{name="secret",le="1e-07"} 0
+workqueue_queue_duration_seconds_bucket{name="secret",le="1e-06"} 0
+workqueue_queue_duration_seconds_bucket{name="secret",le="9.999999999999999e-06"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="9.999999999999999e-05"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="0.001"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="0.01"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="0.1"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="1"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="10"} 1
+workqueue_queue_duration_seconds_bucket{name="secret",le="+Inf"} 1
+workqueue_queue_duration_seconds_sum{name="secret"} 4.67e-06
+workqueue_queue_duration_seconds_count{name="secret"} 1
+# HELP workqueue_retries_total Total number of retries handled by workqueue
+# TYPE workqueue_retries_total counter
+workqueue_retries_total{name="authconfig"} 12
+workqueue_retries_total{name="secret"} 0
+# HELP workqueue_unfinished_work_seconds How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.
+# TYPE workqueue_unfinished_work_seconds gauge
+workqueue_unfinished_work_seconds{name="authconfig"} 0
+workqueue_unfinished_work_seconds{name="secret"} 0
+# HELP workqueue_work_duration_seconds How long in seconds processing an item from workqueue takes.
+# TYPE workqueue_work_duration_seconds histogram
+workqueue_work_duration_seconds_bucket{name="authconfig",le="1e-08"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="1e-07"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="1e-06"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="9.999999999999999e-06"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="9.999999999999999e-05"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="0.001"} 0
+workqueue_work_duration_seconds_bucket{name="authconfig",le="0.01"} 11
+workqueue_work_duration_seconds_bucket{name="authconfig",le="0.1"} 18
+workqueue_work_duration_seconds_bucket{name="authconfig",le="1"} 18
+workqueue_work_duration_seconds_bucket{name="authconfig",le="10"} 19
+workqueue_work_duration_seconds_bucket{name="authconfig",le="+Inf"} 19
+workqueue_work_duration_seconds_sum{name="authconfig"} 5.171738079000001
+workqueue_work_duration_seconds_count{name="authconfig"} 19
+workqueue_work_duration_seconds_bucket{name="secret",le="1e-08"} 0
+workqueue_work_duration_seconds_bucket{name="secret",le="1e-07"} 0
+workqueue_work_duration_seconds_bucket{name="secret",le="1e-06"} 0
+workqueue_work_duration_seconds_bucket{name="secret",le="9.999999999999999e-06"} 0
+workqueue_work_duration_seconds_bucket{name="secret",le="9.999999999999999e-05"} 0
+workqueue_work_duration_seconds_bucket{name="secret",le="0.001"} 1
+workqueue_work_duration_seconds_bucket{name="secret",le="0.01"} 1
+workqueue_work_duration_seconds_bucket{name="secret",le="0.1"} 1
+workqueue_work_duration_seconds_bucket{name="secret",le="1"} 1
+workqueue_work_duration_seconds_bucket{name="secret",le="10"} 1
+workqueue_work_duration_seconds_bucket{name="secret",le="+Inf"} 1
+workqueue_work_duration_seconds_sum{name="secret"} 0.000150956
+workqueue_work_duration_seconds_count{name="secret"} 1
+
+
+ +
+ Example of metrics exported at the /server-metrics endpoint + +
# HELP auth_server_authconfig_duration_seconds Response latency of authconfig enforced by the auth server (in seconds).
+# TYPE auth_server_authconfig_duration_seconds histogram
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.001"} 0
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.051000000000000004"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.101"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.15100000000000002"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.201"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.251"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.301"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.351"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.40099999999999997"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.45099999999999996"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.501"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.551"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.6010000000000001"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.6510000000000001"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.7010000000000002"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.7510000000000002"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.8010000000000003"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.8510000000000003"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.9010000000000004"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="0.9510000000000004"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="edge-auth",namespace="authorino",le="+Inf"} 1
+auth_server_authconfig_duration_seconds_sum{authconfig="edge-auth",namespace="authorino"} 0.001701795
+auth_server_authconfig_duration_seconds_count{authconfig="edge-auth",namespace="authorino"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.001"} 1
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.051000000000000004"} 4
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.101"} 4
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.15100000000000002"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.201"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.251"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.301"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.351"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.40099999999999997"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.45099999999999996"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.501"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.551"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.6010000000000001"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.6510000000000001"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.7010000000000002"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.7510000000000002"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.8010000000000003"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.8510000000000003"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.9010000000000004"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="0.9510000000000004"} 5
+auth_server_authconfig_duration_seconds_bucket{authconfig="talker-api-protection",namespace="authorino",le="+Inf"} 5
+auth_server_authconfig_duration_seconds_sum{authconfig="talker-api-protection",namespace="authorino"} 0.26967658299999997
+auth_server_authconfig_duration_seconds_count{authconfig="talker-api-protection",namespace="authorino"} 5
+# HELP auth_server_authconfig_response_status Response status of authconfigs sent by the auth server, partitioned by authconfig.
+# TYPE auth_server_authconfig_response_status counter
+auth_server_authconfig_response_status{authconfig="edge-auth",namespace="authorino",status="OK"} 1
+auth_server_authconfig_response_status{authconfig="talker-api-protection",namespace="authorino",status="OK"} 2
+auth_server_authconfig_response_status{authconfig="talker-api-protection",namespace="authorino",status="PERMISSION_DENIED"} 2
+auth_server_authconfig_response_status{authconfig="talker-api-protection",namespace="authorino",status="UNAUTHENTICATED"} 1
+# HELP auth_server_authconfig_total Total number of authconfigs enforced by the auth server, partitioned by authconfig.
+# TYPE auth_server_authconfig_total counter
+auth_server_authconfig_total{authconfig="edge-auth",namespace="authorino"} 1
+auth_server_authconfig_total{authconfig="talker-api-protection",namespace="authorino"} 5
+# HELP auth_server_evaluator_duration_seconds Response latency of individual authconfig rule evaluated by the auth server (in seconds).
+# TYPE auth_server_evaluator_duration_seconds histogram
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.001"} 0
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.051000000000000004"} 3
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.101"} 3
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.15100000000000002"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.201"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.251"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.301"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.351"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.40099999999999997"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.45099999999999996"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.501"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.551"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.6010000000000001"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.6510000000000001"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.7010000000000002"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.7510000000000002"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.8010000000000003"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.8510000000000003"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.9010000000000004"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="0.9510000000000004"} 4
+auth_server_evaluator_duration_seconds_bucket{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino",le="+Inf"} 4
+auth_server_evaluator_duration_seconds_sum{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino"} 0.25800055
+auth_server_evaluator_duration_seconds_count{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino"} 4
+# HELP auth_server_evaluator_total Total number of evaluations of individual authconfig rule performed by the auth server.
+# TYPE auth_server_evaluator_total counter
+auth_server_evaluator_total{authconfig="talker-api-protection",evaluator_name="geo",evaluator_type="METADATA_GENERIC_HTTP",namespace="authorino"} 4
+# HELP auth_server_response_status Response status of authconfigs sent by the auth server.
+# TYPE auth_server_response_status counter
+auth_server_response_status{status="NOT_FOUND"} 1
+auth_server_response_status{status="OK"} 3
+auth_server_response_status{status="PERMISSION_DENIED"} 2
+auth_server_response_status{status="UNAUTHENTICATED"} 1
+# HELP go_gc_cycles_automatic_gc_cycles_total Count of completed GC cycles generated by the Go runtime.
+# TYPE go_gc_cycles_automatic_gc_cycles_total counter
+go_gc_cycles_automatic_gc_cycles_total 11
+# HELP go_gc_cycles_forced_gc_cycles_total Count of completed GC cycles forced by the application.
+# TYPE go_gc_cycles_forced_gc_cycles_total counter
+go_gc_cycles_forced_gc_cycles_total 0
+# HELP go_gc_cycles_total_gc_cycles_total Count of all completed GC cycles.
+# TYPE go_gc_cycles_total_gc_cycles_total counter
+go_gc_cycles_total_gc_cycles_total 11
+# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 4.5971e-05
+go_gc_duration_seconds{quantile="0.25"} 5.69e-05
+go_gc_duration_seconds{quantile="0.5"} 0.000158594
+go_gc_duration_seconds{quantile="0.75"} 0.000324091
+go_gc_duration_seconds{quantile="1"} 0.001692423
+go_gc_duration_seconds_sum 0.003546711
+go_gc_duration_seconds_count 11
+# HELP go_gc_heap_allocs_by_size_bytes_total Distribution of heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_allocs_by_size_bytes_total histogram
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="8.999999999999998"} 6261
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="16.999999999999996"} 42477
+[...]
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="32768.99999999999"} 122133
+go_gc_heap_allocs_by_size_bytes_total_bucket{le="+Inf"} 122154
+go_gc_heap_allocs_by_size_bytes_total_sum 1.455944e+07
+go_gc_heap_allocs_by_size_bytes_total_count 122154
+# HELP go_gc_heap_allocs_bytes_total Cumulative sum of memory allocated to the heap by the application.
+# TYPE go_gc_heap_allocs_bytes_total counter
+go_gc_heap_allocs_bytes_total 1.455944e+07
+# HELP go_gc_heap_allocs_objects_total Cumulative count of heap allocations triggered by the application. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_allocs_objects_total counter
+go_gc_heap_allocs_objects_total 122154
+# HELP go_gc_heap_frees_by_size_bytes_total Distribution of freed heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_frees_by_size_bytes_total histogram
+go_gc_heap_frees_by_size_bytes_total_bucket{le="8.999999999999998"} 3789
+go_gc_heap_frees_by_size_bytes_total_bucket{le="16.999999999999996"} 31067
+[...]
+go_gc_heap_frees_by_size_bytes_total_bucket{le="32768.99999999999"} 91013
+go_gc_heap_frees_by_size_bytes_total_bucket{le="+Inf"} 91021
+go_gc_heap_frees_by_size_bytes_total_sum 9.399936e+06
+go_gc_heap_frees_by_size_bytes_total_count 91021
+# HELP go_gc_heap_frees_bytes_total Cumulative sum of heap memory freed by the garbage collector.
+# TYPE go_gc_heap_frees_bytes_total counter
+go_gc_heap_frees_bytes_total 9.399936e+06
+# HELP go_gc_heap_frees_objects_total Cumulative count of heap allocations whose storage was freed by the garbage collector. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+# TYPE go_gc_heap_frees_objects_total counter
+go_gc_heap_frees_objects_total 91021
+# HELP go_gc_heap_goal_bytes Heap size target for the end of the GC cycle.
+# TYPE go_gc_heap_goal_bytes gauge
+go_gc_heap_goal_bytes 9.601744e+06
+# HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory.
+# TYPE go_gc_heap_objects_objects gauge
+go_gc_heap_objects_objects 31133
+# HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size.
+# TYPE go_gc_heap_tiny_allocs_objects_total counter
+go_gc_heap_tiny_allocs_objects_total 9866
+# HELP go_gc_pauses_seconds_total Distribution individual GC-related stop-the-world pause latencies.
+# TYPE go_gc_pauses_seconds_total histogram
+go_gc_pauses_seconds_total_bucket{le="9.999999999999999e-10"} 0
+go_gc_pauses_seconds_total_bucket{le="1.9999999999999997e-09"} 0
+[...]
+go_gc_pauses_seconds_total_bucket{le="206708.18602188796"} 22
+go_gc_pauses_seconds_total_bucket{le="+Inf"} 22
+go_gc_pauses_seconds_total_sum 0.0030393599999999996
+go_gc_pauses_seconds_total_count 22
+# HELP go_goroutines Number of goroutines that currently exist.
+# TYPE go_goroutines gauge
+go_goroutines 79
+# HELP go_info Information about the Go environment.
+# TYPE go_info gauge
+go_info{version="go1.18.7"} 1
+# HELP go_memory_classes_heap_free_bytes Memory that is completely free and eligible to be returned to the underlying system, but has not been. This metric is the runtime's estimate of free address space that is backed by physical memory.
+# TYPE go_memory_classes_heap_free_bytes gauge
+go_memory_classes_heap_free_bytes 630784
+# HELP go_memory_classes_heap_objects_bytes Memory occupied by live objects and dead objects that have not yet been marked free by the garbage collector.
+# TYPE go_memory_classes_heap_objects_bytes gauge
+go_memory_classes_heap_objects_bytes 5.159504e+06
+# HELP go_memory_classes_heap_released_bytes Memory that is completely free and has been returned to the underlying system. This metric is the runtime's estimate of free address space that is still mapped into the process, but is not backed by physical memory.
+# TYPE go_memory_classes_heap_released_bytes gauge
+go_memory_classes_heap_released_bytes 3.858432e+06
+# HELP go_memory_classes_heap_stacks_bytes Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use.
+# TYPE go_memory_classes_heap_stacks_bytes gauge
+go_memory_classes_heap_stacks_bytes 786432
+# HELP go_memory_classes_heap_unused_bytes Memory that is reserved for heap objects but is not currently used to hold heap objects.
+# TYPE go_memory_classes_heap_unused_bytes gauge
+go_memory_classes_heap_unused_bytes 2.14776e+06
+# HELP go_memory_classes_metadata_mcache_free_bytes Memory that is reserved for runtime mcache structures, but not in-use.
+# TYPE go_memory_classes_metadata_mcache_free_bytes gauge
+go_memory_classes_metadata_mcache_free_bytes 13984
+# HELP go_memory_classes_metadata_mcache_inuse_bytes Memory that is occupied by runtime mcache structures that are currently being used.
+# TYPE go_memory_classes_metadata_mcache_inuse_bytes gauge
+go_memory_classes_metadata_mcache_inuse_bytes 2400
+# HELP go_memory_classes_metadata_mspan_free_bytes Memory that is reserved for runtime mspan structures, but not in-use.
+# TYPE go_memory_classes_metadata_mspan_free_bytes gauge
+go_memory_classes_metadata_mspan_free_bytes 16696
+# HELP go_memory_classes_metadata_mspan_inuse_bytes Memory that is occupied by runtime mspan structures that are currently being used.
+# TYPE go_memory_classes_metadata_mspan_inuse_bytes gauge
+go_memory_classes_metadata_mspan_inuse_bytes 114376
+# HELP go_memory_classes_metadata_other_bytes Memory that is reserved for or used to hold runtime metadata.
+# TYPE go_memory_classes_metadata_other_bytes gauge
+go_memory_classes_metadata_other_bytes 5.544408e+06
+# HELP go_memory_classes_os_stacks_bytes Stack memory allocated by the underlying operating system.
+# TYPE go_memory_classes_os_stacks_bytes gauge
+go_memory_classes_os_stacks_bytes 0
+# HELP go_memory_classes_other_bytes Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more.
+# TYPE go_memory_classes_other_bytes gauge
+go_memory_classes_other_bytes 537777
+# HELP go_memory_classes_profiling_buckets_bytes Memory that is used by the stack trace hash map used for profiling.
+# TYPE go_memory_classes_profiling_buckets_bytes gauge
+go_memory_classes_profiling_buckets_bytes 1.455487e+06
+# HELP go_memory_classes_total_bytes All memory mapped by the Go runtime into the current process as read-write. Note that this does not include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes.
+# TYPE go_memory_classes_total_bytes gauge
+go_memory_classes_total_bytes 2.026804e+07
+# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
+# TYPE go_memstats_alloc_bytes gauge
+go_memstats_alloc_bytes 5.159504e+06
+# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
+# TYPE go_memstats_alloc_bytes_total counter
+go_memstats_alloc_bytes_total 1.455944e+07
+# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
+# TYPE go_memstats_buck_hash_sys_bytes gauge
+go_memstats_buck_hash_sys_bytes 1.455487e+06
+# HELP go_memstats_frees_total Total number of frees.
+# TYPE go_memstats_frees_total counter
+go_memstats_frees_total 100887
+# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
+# TYPE go_memstats_gc_cpu_fraction gauge
+go_memstats_gc_cpu_fraction 0
+# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
+# TYPE go_memstats_gc_sys_bytes gauge
+go_memstats_gc_sys_bytes 5.544408e+06
+# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
+# TYPE go_memstats_heap_alloc_bytes gauge
+go_memstats_heap_alloc_bytes 5.159504e+06
+# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
+# TYPE go_memstats_heap_idle_bytes gauge
+go_memstats_heap_idle_bytes 4.489216e+06
+# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
+# TYPE go_memstats_heap_inuse_bytes gauge
+go_memstats_heap_inuse_bytes 7.307264e+06
+# HELP go_memstats_heap_objects Number of allocated objects.
+# TYPE go_memstats_heap_objects gauge
+go_memstats_heap_objects 31133
+# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
+# TYPE go_memstats_heap_released_bytes gauge
+go_memstats_heap_released_bytes 3.858432e+06
+# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
+# TYPE go_memstats_heap_sys_bytes gauge
+go_memstats_heap_sys_bytes 1.179648e+07
+# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
+# TYPE go_memstats_last_gc_time_seconds gauge
+go_memstats_last_gc_time_seconds 1.6461569717723043e+09
+# HELP go_memstats_lookups_total Total number of pointer lookups.
+# TYPE go_memstats_lookups_total counter
+go_memstats_lookups_total 0
+# HELP go_memstats_mallocs_total Total number of mallocs.
+# TYPE go_memstats_mallocs_total counter
+go_memstats_mallocs_total 132020
+# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
+# TYPE go_memstats_mcache_inuse_bytes gauge
+go_memstats_mcache_inuse_bytes 2400
+# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
+# TYPE go_memstats_mcache_sys_bytes gauge
+go_memstats_mcache_sys_bytes 16384
+# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
+# TYPE go_memstats_mspan_inuse_bytes gauge
+go_memstats_mspan_inuse_bytes 114376
+# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
+# TYPE go_memstats_mspan_sys_bytes gauge
+go_memstats_mspan_sys_bytes 131072
+# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
+# TYPE go_memstats_next_gc_bytes gauge
+go_memstats_next_gc_bytes 9.601744e+06
+# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
+# TYPE go_memstats_other_sys_bytes gauge
+go_memstats_other_sys_bytes 537777
+# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
+# TYPE go_memstats_stack_inuse_bytes gauge
+go_memstats_stack_inuse_bytes 786432
+# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
+# TYPE go_memstats_stack_sys_bytes gauge
+go_memstats_stack_sys_bytes 786432
+# HELP go_memstats_sys_bytes Number of bytes obtained from system.
+# TYPE go_memstats_sys_bytes gauge
+go_memstats_sys_bytes 2.026804e+07
+# HELP go_sched_goroutines_goroutines Count of live goroutines.
+# TYPE go_sched_goroutines_goroutines gauge
+go_sched_goroutines_goroutines 79
+# HELP go_sched_latencies_seconds Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running.
+# TYPE go_sched_latencies_seconds histogram
+go_sched_latencies_seconds_bucket{le="9.999999999999999e-10"} 225
+go_sched_latencies_seconds_bucket{le="1.9999999999999997e-09"} 225
+[...]
+go_sched_latencies_seconds_bucket{le="206708.18602188796"} 1916
+go_sched_latencies_seconds_bucket{le="+Inf"} 1916
+go_sched_latencies_seconds_sum 0.18081453600000003
+go_sched_latencies_seconds_count 1916
+# HELP go_threads Number of OS threads created.
+# TYPE go_threads gauge
+go_threads 8
+# HELP grpc_server_handled_total Total number of RPCs completed on the server, regardless of success or failure.
+# TYPE grpc_server_handled_total counter
+grpc_server_handled_total{grpc_code="Aborted",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Aborted",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Aborted",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="AlreadyExists",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="AlreadyExists",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="AlreadyExists",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Canceled",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Canceled",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Canceled",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="DataLoss",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="DataLoss",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="DataLoss",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="DeadlineExceeded",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="DeadlineExceeded",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="DeadlineExceeded",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="FailedPrecondition",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="FailedPrecondition",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="FailedPrecondition",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Internal",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Internal",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Internal",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="InvalidArgument",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="InvalidArgument",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="InvalidArgument",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="NotFound",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="NotFound",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="NotFound",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="OK",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 7
+grpc_server_handled_total{grpc_code="OK",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="OK",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="OutOfRange",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="OutOfRange",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="OutOfRange",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="PermissionDenied",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="PermissionDenied",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="PermissionDenied",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="ResourceExhausted",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="ResourceExhausted",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="ResourceExhausted",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Unauthenticated",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unauthenticated",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unauthenticated",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Unavailable",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unavailable",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unavailable",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Unimplemented",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unimplemented",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unimplemented",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+grpc_server_handled_total{grpc_code="Unknown",grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unknown",grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_handled_total{grpc_code="Unknown",grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+# HELP grpc_server_handling_seconds Histogram of response latency (seconds) of gRPC that had been application-level handled by the server.
+# TYPE grpc_server_handling_seconds histogram
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.005"} 3
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.01"} 3
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.025"} 3
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.05"} 6
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.1"} 6
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.25"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="0.5"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="1"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="2.5"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="5"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="10"} 7
+grpc_server_handling_seconds_bucket{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary",le="+Inf"} 7
+grpc_server_handling_seconds_sum{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 0.277605516
+grpc_server_handling_seconds_count{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 7
+# HELP grpc_server_msg_received_total Total number of RPC stream messages received on the server.
+# TYPE grpc_server_msg_received_total counter
+grpc_server_msg_received_total{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 7
+grpc_server_msg_received_total{grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_msg_received_total{grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+# HELP grpc_server_msg_sent_total Total number of gRPC stream messages sent by the server.
+# TYPE grpc_server_msg_sent_total counter
+grpc_server_msg_sent_total{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 7
+grpc_server_msg_sent_total{grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_msg_sent_total{grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+# HELP grpc_server_started_total Total number of RPCs started on the server.
+# TYPE grpc_server_started_total counter
+grpc_server_started_total{grpc_method="Check",grpc_service="envoy.service.auth.v3.Authorization",grpc_type="unary"} 7
+grpc_server_started_total{grpc_method="Check",grpc_service="grpc.health.v1.Health",grpc_type="unary"} 0
+grpc_server_started_total{grpc_method="Watch",grpc_service="grpc.health.v1.Health",grpc_type="server_stream"} 0
+# HELP oidc_server_requests_total Number of get requests received on the OIDC (Festival Wristband) server.
+# TYPE oidc_server_requests_total counter
+oidc_server_requests_total{authconfig="edge-auth",namespace="authorino",path="/.well-known/openid-configuration",wristband="wristband"} 1
+oidc_server_requests_total{authconfig="edge-auth",namespace="authorino",path="/.well-known/openid-connect/certs",wristband="wristband"} 1
+# HELP oidc_server_response_status Status of HTTP response sent by the OIDC (Festival Wristband) server.
+# TYPE oidc_server_response_status counter
+oidc_server_response_status{status="200"} 2
+# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
+# TYPE process_cpu_seconds_total counter
+process_cpu_seconds_total 1.42
+# HELP process_max_fds Maximum number of open file descriptors.
+# TYPE process_max_fds gauge
+process_max_fds 1.048576e+06
+# HELP process_open_fds Number of open file descriptors.
+# TYPE process_open_fds gauge
+process_open_fds 14
+# HELP process_resident_memory_bytes Resident memory size in bytes.
+# TYPE process_resident_memory_bytes gauge
+process_resident_memory_bytes 4.370432e+07
+# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.64615612779e+09
+# HELP process_virtual_memory_bytes Virtual memory size in bytes.
+# TYPE process_virtual_memory_bytes gauge
+process_virtual_memory_bytes 7.65362176e+08
+# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
+# TYPE process_virtual_memory_max_bytes gauge
+process_virtual_memory_max_bytes 1.8446744073709552e+19
+# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
+# TYPE promhttp_metric_handler_requests_in_flight gauge
+promhttp_metric_handler_requests_in_flight 1
+# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
+# TYPE promhttp_metric_handler_requests_total counter
+promhttp_metric_handler_requests_total{code="200"} 1
+promhttp_metric_handler_requests_total{code="500"} 0
+promhttp_metric_handler_requests_total{code="503"} 0
+
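For reference, a metrics dump like the one above can be scraped straight from the controller's metrics endpoint. A minimal sketch, assuming the default metrics address (:8080) and an illustrative deployment named authorino:

# Port-forward the metrics port of the Authorino controller (names are illustrative)
kubectl port-forward deployment/authorino 8080:8080 &

# Scrape all metrics exported by the controller
curl -s http://localhost:8080/metrics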
+
+ +

Readiness check

+

Authorino exposes two main endpoints for health and readiness checks of the AuthConfig controller:
+- /healthz: Health probe (ping) – reports "ok" if the controller is healthy.
+- /readyz: Readiness probe – reports "ok" if the controller is ready to reconcile AuthConfig-related events.

+

In general, the endpoints return either 200 ("ok", i.e. all checks have passed) or 500 (when one or more checks failed).

+

The default binding network address is :8081, which can be changed by setting the command-line flag --health-probe-addr.

+

The following additional subpath is available and its corresponding check can be aggregated into the response from the main readiness probe:
+- /readyz/authconfigs: Aggregated readiness status of the AuthConfigs – reports "ok" if all AuthConfigs watched by the reconciler have been marked as ready.

+ + + + + + +
Important!
The AuthConfig readiness check within the scope of the aggregated readiness probe endpoint is deactivated by default – i.e. it is an opt-in check. Sending a request to the /readyz endpoint without explicitly opting in to the AuthConfigs check, via the include parameter, will produce a response that disregards the actual status of the watched AuthConfigs – possibly an "ok" message. To read the aggregated status of the watched AuthConfigs, either use the specific endpoint /readyz/authconfigs or opt in to the check in the aggregated endpoint by sending a request to /readyz?include=authconfigs.
+ +

Apart from include to add the aggregated status of the AuthConfigs, the following additional query string parameters are available:
+- verbose=true|false - provides more verbose response messages;
+- exclude=(check name) – to exclude a particular readiness check (for future usage).
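A minimal sketch of exercising these endpoints with curl, assuming the default health probe address (:8081) port-forwarded from an illustrative deployment named authorino:

# Port-forward the health probe port (names are illustrative)
kubectl port-forward deployment/authorino 8081:8081 &

# Basic health and readiness checks
curl http://localhost:8081/healthz
curl http://localhost:8081/readyz

# Opt in to the aggregated AuthConfig readiness check, with verbose output
curl "http://localhost:8081/readyz?include=authconfigs&verbose=true"

# Equivalent dedicated subpath
curl http://localhost:8081/readyz/authconfigs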

+

Logging

+

Authorino outputs log messages to stdout either as structured JSON ("production" mode) or in a more user-friendly, human-readable format ("development" mode), at different levels of logging.

+

Log levels and log modes

+

Authorino outputs 3 levels of log messages (from lowest to highest level):
+1. debug
+2. info (default)
+3. error

+

info logging is restricted to high-level information of the gRPC and HTTP authorization services, limiting messages to incoming request and respective outgoing response logs, with reduced details about the corresponding objects (request payload and authorization result), and without any further detailed logs of the steps in between, except for errors.

+

Only debug logging will include processing details of each Auth Pipeline, such as intermediary requests to validate identities with external auth servers, requests to external sources of auth metadata or authorization policies.

+

To configure the desired log level, set the spec.logLevel field of the Authorino custom resource (or the --log-level command-line flag in the Authorino deployment) to one of the supported values listed above. The default log level is info.

+

Apart from log level, Authorino can output messages to the logs in 2 different formats:
+- production (default): each line is a parseable JSON object with properties {"level":string, "ts":int, "msg":string, "logger":string, extra values...}
+- development: more human-readable outputs, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\t<log-level>\t<logger>\t<message>\t{extra-values-as-json}

+

To configure the desired log mode, set the spec.logMode field of the Authorino custom resource (or the --log-mode command-line flag in the Authorino deployment) to one of the supported values listed above. The default log mode is production.

+

Example of Authorino custom resource with log level debug and log mode production:

+
apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  logLevel: debug
+  logMode: production
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+
+
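Alternatively to applying the full custom resource, the log settings can be toggled in place with kubectl patch – a sketch assuming an Authorino instance named authorino in the current namespace:

# Switch the instance to debug logging; the operator propagates the new flags to the Authorino deployment
kubectl patch authorino authorino --type=merge -p '{"spec":{"logLevel":"debug","logMode":"production"}}'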

Sensitive data output to the logs

+

Authorino will never output HTTP headers and query string parameters to info log messages, as such values usually include sensitive data (e.g. access tokens, API keys and Authorino Festival Wristbands). However, debug log messages may include such sensitive information, and those values are not redacted.

+

Therefore, DO NOT USE debug LOG LEVEL IN PRODUCTION! Instead, use either info or error.
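Because production log lines are parseable JSON objects, standard tooling can help audit what is actually being emitted – a sketch with jq, assuming an illustrative deployment named authorino:

# Surface only error-level messages
kubectl logs deployment/authorino | jq 'select(.level == "error")'

# Verify that no debug messages are being emitted (should print nothing at info level)
kubectl logs deployment/authorino | jq 'select(.level == "debug") | .msg'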

+

Log messages printed by Authorino

+

Below are some log messages printed by Authorino and the corresponding extra values included:

| logger | level | message | extra values |
|--------|-------|---------|--------------|
| authorino | info | "setting instance base logger" | min level=info\|debug, mode=production\|development |
| authorino | info | "booting up authorino" | version |
| authorino | debug | "setting up with options" | auth-config-label-selector, deep-metrics-enabled, enable-leader-election, evaluator-cache-size, ext-auth-grpc-port, ext-auth-http-port, health-probe-addr, log-level, log-mode, max-http-request-body-size, metrics-addr, oidc-http-port, oidc-tls-cert, oidc-tls-cert-key, secret-label-selector, timeout, tls-cert, tls-cert-key, watch-namespace |
| authorino | info | "attempting to acquire leader lease <namespace>/cb88a58a.authorino.kuadrant.io...\n" | |
| authorino | info | "successfully acquired lease <namespace>/cb88a58a.authorino.kuadrant.io\n" | |
| authorino | info | "disabling grpc auth service" | |
| authorino | info | "starting grpc auth service" | port, tls |
| authorino | error | "failed to obtain port for the grpc auth service" | |
| authorino | error | "failed to load tls cert for the grpc auth" | |
| authorino | error | "failed to start grpc auth service" | |
| authorino | info | "disabling http auth service" | |
| authorino | info | "starting http auth service" | port, tls |
| authorino | error | "failed to obtain port for the http auth service" | |
| authorino | error | "failed to start http auth service" | |
| authorino | info | "disabling http oidc service" | |
| authorino | info | "starting http oidc service" | port, tls |
| authorino | error | "failed to obtain port for the http oidc service" | |
| authorino | error | "failed to start http oidc service" | |
| authorino | info | "starting manager" | |
| authorino | error | "unable to start manager" | |
| authorino | error | "unable to create controller" | controller=authconfig\|secret\|authconfigstatusupdate |
| authorino | error | "problem running manager" | |
| authorino | info | "starting status update manager" | |
| authorino | error | "unable to start status update manager" | |
| authorino | error | "problem running status update manager" | |
| authorino.controller-runtime.metrics | info | "metrics server is starting to listen" | addr |
| authorino.controller-runtime.manager | info | "starting metrics server" | path |
| authorino.controller-runtime.manager.events | debug | "Normal" | object={kind=ConfigMap, apiVersion=v1}, reason=LeaderElection, message="authorino-controller-manager-* became leader" |
| authorino.controller-runtime.manager.events | debug | "Normal" | object={kind=Lease, apiVersion=coordination.k8s.io/v1}, reason=LeaderElection, message="authorino-controller-manager-* became leader" |
| authorino.controller-runtime.manager.controller.authconfig | info | "resource reconciled" | authconfig |
| authorino.controller-runtime.manager.controller.authconfig | info | "host already taken" | authconfig, host |
| authorino.controller-runtime.manager.controller.authconfig.statusupdater | debug | "resource status did not change" | authconfig |
| authorino.controller-runtime.manager.controller.authconfig.statusupdater | debug | "resource status changed" | authconfig, authconfig/status |
| authorino.controller-runtime.manager.controller.authconfig.statusupdater | error | "failed to update the resource" | authconfig |
| authorino.controller-runtime.manager.controller.authconfig.statusupdater | info | "resource status updated" | authconfig |
| authorino.controller-runtime.manager.controller.secret | info | "resource reconciled" | |
| authorino.controller-runtime.manager.controller.secret | info | "could not reconcile authconfigs using api key authentication" | |
| authorino.service.oidc | info | "request received" | request id, url, realm, config, path |
| authorino.service.oidc | info | "response sent" | request id |
| authorino.service.oidc | error | "failed to serve oidc request" | |
| authorino.service.auth | info | "incoming authorization request" | request id, object |
| authorino.service.auth | debug | "incoming authorization request" | request id, object |
| authorino.service.auth | info | "outgoing authorization response" | request id, authorized, response, object |
| authorino.service.auth | debug | "outgoing authorization response" | request id, authorized, response, object |
| authorino.service.auth | error | "failed to create dynamic metadata" | request id, object |
| authorino.service.auth.authpipeline | debug | "skipping config" | request id, config, reason |
| authorino.service.auth.authpipeline.identity | debug | "identity validated" | request id, config, object |
| authorino.service.auth.authpipeline.identity | debug | "cannot validate identity" | request id, config, reason |
| authorino.service.auth.authpipeline.identity | error | "failed to extend identity object" | request id, config, object |
| authorino.service.auth.authpipeline.identity.oidc | error | "failed to discovery openid connect configuration" | endpoint |
| authorino.service.auth.authpipeline.identity.oidc | debug | "auto-refresh of openid connect configuration disabled" | endpoint, reason |
| authorino.service.auth.authpipeline.identity.oidc | debug | "openid connect configuration updated" | endpoint |
| authorino.service.auth.authpipeline.identity.oauth2 | debug | "sending token introspection request" | request id, url, data |
| authorino.service.auth.authpipeline.identity.kubernetesauth | debug | "calling kubernetes token review api" | request id, tokenreview |
| authorino.service.auth.authpipeline.identity.apikey | error | "Something went wrong fetching the authorized credentials" | |
| authorino.service.auth.authpipeline.metadata | debug | "fetched auth metadata" | request id, config, object |
| authorino.service.auth.authpipeline.metadata | debug | "cannot fetch metadata" | request id, config, reason |
| authorino.service.auth.authpipeline.metadata.http | debug | "sending request" | request id, method, url, headers |
| authorino.service.auth.authpipeline.metadata.userinfo | debug | "fetching user info" | request id, endpoint |
| authorino.service.auth.authpipeline.metadata.uma | debug | "requesting pat" | request id, url, data, headers |
| authorino.service.auth.authpipeline.metadata.uma | debug | "querying resources by uri" | request id, url |
| authorino.service.auth.authpipeline.metadata.uma | debug | "getting resource data" | request id, url |
| authorino.service.auth.authpipeline.authorization | debug | "evaluating for input" | request id, input |
| authorino.service.auth.authpipeline.authorization | debug | "access granted" | request id, config, object |
| authorino.service.auth.authpipeline.authorization | debug | "access denied" | request id, config, reason |
| authorino.service.auth.authpipeline.authorization.opa | error | "invalid response from policy evaluation" | policy |
| authorino.service.auth.authpipeline.authorization.opa | error | "failed to precompile policy" | policy |
| authorino.service.auth.authpipeline.authorization.opa | error | "failed to download policy from external registry" | policy, endpoint |
| authorino.service.auth.authpipeline.authorization.opa | error | "failed to refresh policy from external registry" | policy, endpoint |
| authorino.service.auth.authpipeline.authorization.opa | debug | "external policy unchanged" | policy, endpoint |
| authorino.service.auth.authpipeline.authorization.opa | debug | "auto-refresh of external policy disabled" | policy, endpoint, reason |
| authorino.service.auth.authpipeline.authorization.opa | info | "policy updated from external registry" | policy, endpoint |
| authorino.service.auth.authpipeline.authorization.kubernetesauthz | debug | "calling kubernetes subject access review api" | request id, subjectaccessreview |
| authorino.service.auth.authpipeline.response | debug | "dynamic response built" | request id, config, object |
| authorino.service.auth.authpipeline.response | debug | "cannot build dynamic response" | request id, config, reason |
| authorino.service.auth.http | debug | "bad request" | request id |
| authorino.service.auth.http | debug | "not found" | request id |
| authorino.service.auth.http | debug | "request body too large" | request id |
| authorino.service.auth.http | debug | "service unavailable" | request id |

Examples

+

The examples below are all with --log-level=debug and --log-mode=production.
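To confirm which flags a running deployment was actually started with, the container arguments can be inspected – a sketch assuming an illustrative deployment named authorino:

# Print the command-line arguments of the Authorino container, e.g. --log-level and --log-mode
kubectl get deployment authorino -o jsonpath='{.spec.template.spec.containers[0].args}'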

+
+ Booting up the service + +
{"level":"info","ts":1669220526.929678,"logger":"authorino","msg":"setting instance base logger","min level":"debug","mode":"production"}
+{"level":"info","ts":1669220526.929718,"logger":"authorino","msg":"booting up authorino","version":"7688cfa32317a49f0461414e741c980e9c05dba3"}
+{"level":"debug","ts":1669220526.9297278,"logger":"authorino","msg":"setting up with options","auth-config-label-selector":"","deep-metrics-enabled":"false","enable-leader-election":"false","evaluator-cache-size":"1","ext-auth-grpc-port":"50051","ext-auth-http-port":"5001","health-probe-addr":":8081","log-level":"debug","log-mode":"production","max-http-request-body-size":"8192","metrics-addr":":8080","oidc-http-port":"8083","oidc-tls-cert":"/etc/ssl/certs/oidc.crt","oidc-tls-cert-key":"/etc/ssl/private/oidc.key","secret-label-selector":"authorino.kuadrant.io/managed-by=authorino","timeout":"0","tls-cert":"/etc/ssl/certs/tls.crt","tls-cert-key":"/etc/ssl/private/tls.key","watch-namespace":"default"}
+{"level":"info","ts":1669220527.9816976,"logger":"authorino.controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
+{"level":"info","ts":1669220527.9823213,"logger":"authorino","msg":"starting grpc auth service","port":50051,"tls":true}
+{"level":"info","ts":1669220527.9823658,"logger":"authorino","msg":"starting http auth service","port":5001,"tls":true}
+{"level":"info","ts":1669220527.9824295,"logger":"authorino","msg":"starting http oidc service","port":8083,"tls":true}
+{"level":"info","ts":1669220527.9825335,"logger":"authorino","msg":"starting manager"}
+{"level":"info","ts":1669220527.982721,"logger":"authorino","msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
+{"level":"info","ts":1669220527.982766,"logger":"authorino","msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
+{"level":"info","ts":1669220527.9829438,"logger":"authorino.controller.secret","msg":"Starting EventSource","reconciler group":"","reconciler kind":"Secret","source":"kind source: *v1.Secret"}
+{"level":"info","ts":1669220527.9829693,"logger":"authorino.controller.secret","msg":"Starting Controller","reconciler group":"","reconciler kind":"Secret"}
+{"level":"info","ts":1669220527.9829714,"logger":"authorino.controller.authconfig","msg":"Starting EventSource","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig","source":"kind source: *v1beta1.AuthConfig"}
+{"level":"info","ts":1669220527.9830208,"logger":"authorino.controller.authconfig","msg":"Starting Controller","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669220528.0834699,"logger":"authorino.controller.authconfig","msg":"Starting workers","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig","worker count":1}
+{"level":"info","ts":1669220528.0836608,"logger":"authorino.controller.secret","msg":"Starting workers","reconciler group":"","reconciler kind":"Secret","worker count":1}
+{"level":"info","ts":1669220529.041266,"logger":"authorino","msg":"starting status update manager"}
+{"level":"info","ts":1669220529.0418258,"logger":"authorino.controller.authconfig","msg":"Starting EventSource","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig","source":"kind source: *v1beta1.AuthConfig"}
+{"level":"info","ts":1669220529.0418813,"logger":"authorino.controller.authconfig","msg":"Starting Controller","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669220529.1432905,"logger":"authorino.controller.authconfig","msg":"Starting workers","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig","worker count":1}
+
+
+ +
+ Reconciling an AuthConfig and 2 related API key secrets + +
{"level":"debug","ts":1669221208.7473805,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status changed","authconfig":"default/talker-api-protection","authconfig/status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-11-23T16:33:28Z","reason":"HostsNotLinked","message":"No hosts linked to the resource"},{"type":"Ready","status":"False","lastTransitionTime":"2022-11-23T16:33:28Z","reason":"Unknown"}],"summary":{"ready":false,"hostsReady":[],"numHostsReady":"0/1","numIdentitySources":1,"numMetadataSources":0,"numAuthorizationPolicies":0,"numResponseItems":0,"festivalWristbandEnabled":false}}}
+{"level":"info","ts":1669221208.7496614,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"default/talker-api-protection"}
+{"level":"info","ts":1669221208.7532616,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"default/talker-api-protection"}
+{"level":"debug","ts":1669221208.7535005,"logger":"authorino.controller.secret","msg":"adding k8s secret to the index","reconciler group":"","reconciler kind":"Secret","name":"api-key-1","namespace":"default","authconfig":"default/talker-api-protection","config":"friends"}
+{"level":"debug","ts":1669221208.7535596,"logger":"authorino.controller.secret.apikey","msg":"api key added","reconciler group":"","reconciler kind":"Secret","name":"api-key-1","namespace":"default"}
+{"level":"info","ts":1669221208.7536132,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"default/api-key-1"}
+{"level":"info","ts":1669221208.753772,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status updated","authconfig":"default/talker-api-protection"}
+{"level":"debug","ts":1669221208.753835,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status changed","authconfig":"default/talker-api-protection","authconfig/status":{"conditions":[{"type":"Available","status":"True","lastTransitionTime":"2022-11-23T16:33:28Z","reason":"HostsLinked"},{"type":"Ready","status":"True","lastTransitionTime":"2022-11-23T16:33:28Z","reason":"Reconciled"}],"summary":{"ready":true,"hostsReady":["talker-api-authorino.127.0.0.1.nip.io"],"numHostsReady":"1/1","numIdentitySources":1,"numMetadataSources":0,"numAuthorizationPolicies":0,"numResponseItems":0,"festivalWristbandEnabled":false}}}
+{"level":"info","ts":1669221208.7571108,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"default/talker-api-protection"}
+{"level":"info","ts":1669221208.7573664,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status updated","authconfig":"default/talker-api-protection"}
+{"level":"debug","ts":1669221208.757429,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status did not change","authconfig":"default/talker-api-protection"}
+{"level":"debug","ts":1669221208.7586699,"logger":"authorino.controller.secret","msg":"adding k8s secret to the index","reconciler group":"","reconciler kind":"Secret","name":"api-key-2","namespace":"default","authconfig":"default/talker-api-protection","config":"friends"}
+{"level":"debug","ts":1669221208.7586884,"logger":"authorino.controller.secret.apikey","msg":"api key added","reconciler group":"","reconciler kind":"Secret","name":"api-key-2","namespace":"default"}
+{"level":"info","ts":1669221208.7586913,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"default/api-key-2"}
+{"level":"debug","ts":1669221208.7597604,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status did not change","authconfig":"default/talker-api-protection"}
+
+
+ +
+ Enforcing an AuthConfig with authentication based on Kubernetes tokens: + +
+
+ - identity: k8s-auth, oidc, oauth2, apikey
+ - metadata: http, oidc userinfo
+ - authorization: opa, k8s-authz
+ - response: wristband
+
{"level":"info","ts":1634830460.1486168,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"8157480586935853928","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":53144}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"http":{"id":"8157480586935853928","method":"GET","path":"/hello","host":"talker-api","scheme":"http"}}}}
+{"level":"debug","ts":1634830460.1491194,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"8157480586935853928","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":53144}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"time":{"seconds":1634830460,"nanos":147259000},"http":{"id":"8157480586935853928","method":"GET","headers":{":authority":"talker-api",":method":"GET",":path":"/hello",":scheme":"http","accept":"*/*","authorization":"Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA","user-agent":"curl/7.65.3","x-envoy-internal":"true","x-forwarded-for":"10.244.0.11","x-forwarded-proto":"http","x-request-id":"4c5d5c97-e15b-46a3-877a-d8188e09e08f"},"path":"/hello","host":"talker-api","scheme":"http","protocol":"HTTP/1.1"}},"context_extensions":{"virtual_host":"local_service"},"metadata_context":{}}}
+{"level":"debug","ts":1634830460.150506,"logger":"authorino.service.auth.authpipeline.identity.kubernetesauth","msg":"calling kubernetes token review api","request id":"8157480586935853928","tokenreview":{"metadata":{"creationTimestamp":null},"spec":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA","audiences":["talker-api"]},"status":{"user":{}}}}
+{"level":"debug","ts":1634830460.1509938,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"8157480586935853928","config":{"Name":"api-keys","ExtendedProperties":[{"Name":"sub","Value":{"Static":null,"Pattern":"auth.identity.metadata.annotations.userid"}}],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":{"AuthCredentials":{"KeySelector":"APIKEY","In":"authorization_header"},"Name":"api-keys","LabelSelectors":{"audience":"talker-api","authorino.kuadrant.io/managed-by":"authorino"}},"KubernetesAuth":null},"reason":"credential not found"}
+{"level":"debug","ts":1634830460.1517606,"logger":"authorino.service.auth.authpipeline.identity.oauth2","msg":"sending token introspection request","request id":"8157480586935853928","url":"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect","data":"token=eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA&token_type_hint=requesting_party_token"}
+{"level":"debug","ts":1634830460.1620777,"logger":"authorino.service.auth.authpipeline.identity","msg":"identity validated","request id":"8157480586935853928","config":{"Name":"k8s-service-accounts","ExtendedProperties":[],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"object":{"aud":["talker-api"],"exp":1634831051,"iat":1634830451,"iss":"https://kubernetes.default.svc.cluster.local","kubernetes.io":{"namespace":"authorino","serviceaccount":{"name":"api-consumer-1","uid":"b40f531c-ecab-4f31-a496-2ebc72add121"}},"nbf":1634830451,"sub":"system:serviceaccount:authorino:api-consumer-1"}}
+{"level":"debug","ts":1634830460.1622565,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"requesting pat","request id":"8157480586935853928","url":"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token","data":"grant_type=client_credentials","headers":{"Content-Type":["application/x-www-form-urlencoded"]}}
+{"level":"debug","ts":1634830460.1670353,"logger":"authorino.service.auth.authpipeline.metadata.http","msg":"sending request","request id":"8157480586935853928","method":"GET","url":"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path=/hello","headers":{"Content-Type":["text/plain"]}}
+{"level":"debug","ts":1634830460.169326,"logger":"authorino.service.auth.authpipeline.metadata","msg":"cannot fetch metadata","request id":"8157480586935853928","config":{"Name":"oidc-userinfo","UserInfo":{"OIDC":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"},"Endpoint":"http://keycloak:8080/auth/realms/kuadrant"}},"UMA":null,"GenericHTTP":null},"reason":"Missing identity for OIDC issuer http://keycloak:8080/auth/realms/kuadrant. Skipping related UserInfo metadata."}
+{"level":"debug","ts":1634830460.1753876,"logger":"authorino.service.auth.authpipeline.metadata","msg":"fetched auth metadata","request id":"8157480586935853928","config":{"Name":"http-metadata","UserInfo":null,"UMA":null,"GenericHTTP":{"Endpoint":"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path={context.request.http.path}","Method":"GET","Parameters":[],"ContentType":"application/x-www-form-urlencoded","SharedSecret":"","AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"object":{"body":"","headers":{"Accept-Encoding":"gzip","Content-Type":"text/plain","Host":"talker-api.default.svc.cluster.local:3000","User-Agent":"Go-http-client/1.1","Version":"HTTP/1.1"},"method":"GET","path":"/metadata","query_string":"encoding=text/plain&original_path=/hello","uuid":"1aa6ac66-3179-4351-b1a7-7f6a761d5b61"}}
+{"level":"debug","ts":1634830460.2331996,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"querying resources by uri","request id":"8157480586935853928","url":"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=/hello"}
+{"level":"debug","ts":1634830460.2495668,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"getting resource data","request id":"8157480586935853928","url":"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set/e20d194c-274c-4845-8c02-0ca413c9bf18"}
+{"level":"debug","ts":1634830460.2927864,"logger":"authorino.service.auth.authpipeline.metadata","msg":"fetched auth metadata","request id":"8157480586935853928","config":{"Name":"uma-resource-registry","UserInfo":null,"UMA":{"Endpoint":"http://keycloak:8080/auth/realms/kuadrant","ClientID":"talker-api","ClientSecret":"523b92b6-625d-4e1e-a313-77e7a8ae4e88"},"GenericHTTP":null},"object":[{"_id":"e20d194c-274c-4845-8c02-0ca413c9bf18","attributes":{},"displayName":"hello","name":"hello","owner":{"id":"57a645a5-fb67-438b-8be5-dfb971666dbc"},"ownerManagedAccess":false,"resource_scopes":[],"uris":["/hi","/hello"]}]}
+{"level":"debug","ts":1634830460.2930083,"logger":"authorino.service.auth.authpipeline.authorization","msg":"evaluating for input","request id":"8157480586935853928","input":{"context":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":53144}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"time":{"seconds":1634830460,"nanos":147259000},"http":{"id":"8157480586935853928","method":"GET","headers":{":authority":"talker-api",":method":"GET",":path":"/hello",":scheme":"http","accept":"*/*","authorization":"Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA","user-agent":"curl/7.65.3","x-envoy-internal":"true","x-forwarded-for":"10.244.0.11","x-forwarded-proto":"http","x-request-id":"4c5d5c97-e15b-46a3-877a-d8188e09e08f"},"path":"/hello","host":"talker-api","scheme":"http","protocol":"HTTP/1.1"}},"context_extensions":{"virtual_host":"local_service"},"metadata_context":{}},"auth":{"identity":{"aud":["talker-api"],"exp":1634831051,"iat":1634830451,"iss":"https://kubernetes.default.svc.cluster.local","kubernetes.io":{"namespace":"authorino","serviceaccount":{"name":"api-consumer-1","uid":"b40f531c-ecab-4f31-a496-2ebc72add121"}},"nbf":1634830451,"sub":"system:serviceaccount:authorino:api-consumer-1"},"metadata":{"http-metadata":{"body":"","headers":{"Accept-Encoding":"gzip","Content-Type":"text/plain","Host":"talker-api.default.svc.cluster.local:3000","User-Agent":"Go-http-client/1.1","Version":"HTTP/1.1"},"method":"GET","path":"/metadata","query_string":"encoding=text/plain&original_path=/hello","uuid":"1aa6ac66-3179-4351-b1a7-7f6a761d5b61"},"uma-resource-registry":[{"_id":"e20d194c-274c-4845-8c02-0ca413c9bf18","attributes":{},"displayName":"hello","name":"hello","owner":{"id":"57a645a5-fb67-438b-8be5-dfb971666dbc"},"ownerManagedAccess":false,"resource_scopes":[],"uris":["/hi","/hello"]}]}}}}
+{"level":"debug","ts":1634830460.2955465,"logger":"authorino.service.auth.authpipeline.authorization.kubernetesauthz","msg":"calling kubernetes subject access review api","request id":"8157480586935853928","subjectaccessreview":{"metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/hello","verb":"get"},"user":"system:serviceaccount:authorino:api-consumer-1"},"status":{"allowed":false}}}
+{"level":"debug","ts":1634830460.2986183,"logger":"authorino.service.auth.authpipeline.authorization","msg":"access granted","request id":"8157480586935853928","config":{"Name":"my-policy","OPA":{"Rego":"fail := input.context.request.http.headers[\"x-ext-auth-mock\"] == \"FAIL\"\nallow { not fail }\n","OPAExternalSource":{"Endpoint":"","SharedSecret":"","AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"JSON":null,"KubernetesAuthz":null},"object":true}
+{"level":"debug","ts":1634830460.3044975,"logger":"authorino.service.auth.authpipeline.authorization","msg":"access granted","request id":"8157480586935853928","config":{"Name":"kubernetes-rbac","OPA":null,"JSON":null,"KubernetesAuthz":{"Conditions":[],"User":{"Static":"","Pattern":"auth.identity.user.username"},"Groups":null,"ResourceAttributes":null}},"object":true}
+{"level":"debug","ts":1634830460.3052874,"logger":"authorino.service.auth.authpipeline.response","msg":"dynamic response built","request id":"8157480586935853928","config":{"Name":"wristband","Wrapper":"httpHeader","WrapperKey":"x-ext-auth-wristband","Wristband":{"Issuer":"https://authorino-oidc.default.svc:8083/default/talker-api-protection/wristband","CustomClaims":[],"TokenDuration":300,"SigningKeys":[{"use":"sig","kty":"EC","kid":"wristband-signing-key","crv":"P-256","alg":"ES256","x":"TJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZw","y":"SSg8rKBsJ3J1LxyLtt0oFvhHvZcUpmRoTuHk3UHisTA","d":"Me-5_zWBWVYajSGZcZMCcD8dXEa4fy85zv_yN7BxW-o"}]},"DynamicJSON":null},"object":"eyJhbGciOiJFUzI1NiIsImtpZCI6IndyaXN0YmFuZC1zaWduaW5nLWtleSIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzQ4MzA3NjAsImlhdCI6MTYzNDgzMDQ2MCwiaXNzIjoiaHR0cHM6Ly9hdXRob3Jpbm8tb2lkYy5hdXRob3Jpbm8uc3ZjOjgwODMvYXV0aG9yaW5vL3RhbGtlci1hcGktcHJvdGVjdGlvbi93cmlzdGJhbmQiLCJzdWIiOiI4NDliMDk0ZDA4MzU0ZjM0MjA4ZGI3MjBmYWZmODlmNmM3NmYyOGY3MTcxOWI4NTQ3ZDk5NWNlNzAwMjU2ZGY4In0.Jn-VB5Q_0EX1ed1ji4KvhO4DlMqZeIl5H0qlukbTyYkp-Pgb4SnPGSbYWp5_uvG8xllsFAA5nuyBIXeba-dbkw"}
+{"level":"info","ts":1634830460.3054585,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"8157480586935853928","authorized":true,"response":"OK"}
+{"level":"debug","ts":1634830460.305476,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"8157480586935853928","authorized":true,"response":"OK"}
+
+
+ +
+ Enforcing an AuthConfig with authentication based on API keys + +
+
+ - identity: k8s-auth, oidc, oauth2, apikey
+ - metadata: http, oidc userinfo
+ - authorization: opa, k8s-authz
+ - response: wristband
+
{"level":"info","ts":1634830413.2425854,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"7199257136822741594","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":52702}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"http":{"id":"7199257136822741594","method":"GET","path":"/hello","host":"talker-api","scheme":"http"}}}}
+{"level":"debug","ts":1634830413.2426975,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"7199257136822741594","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":52702}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"time":{"seconds":1634830413,"nanos":240094000},"http":{"id":"7199257136822741594","method":"GET","headers":{":authority":"talker-api",":method":"GET",":path":"/hello",":scheme":"http","accept":"*/*","authorization":"APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx","user-agent":"curl/7.65.3","x-envoy-internal":"true","x-forwarded-for":"10.244.0.11","x-forwarded-proto":"http","x-request-id":"d38f5e66-bd72-4733-95d1-3179315cdd60"},"path":"/hello","host":"talker-api","scheme":"http","protocol":"HTTP/1.1"}},"context_extensions":{"virtual_host":"local_service"},"metadata_context":{}}}
+{"level":"debug","ts":1634830413.2428744,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"7199257136822741594","config":{"Name":"k8s-service-accounts","ExtendedProperties":[],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"reason":"credential not found"}
+{"level":"debug","ts":1634830413.2434332,"logger":"authorino.service.auth.authpipeline","msg":"skipping config","request id":"7199257136822741594","config":{"Name":"keycloak-jwts","ExtendedProperties":[],"OAuth2":null,"OIDC":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"},"Endpoint":"http://keycloak:8080/auth/realms/kuadrant"},"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":null},"reason":"context canceled"}
+{"level":"debug","ts":1634830413.2479305,"logger":"authorino.service.auth.authpipeline.identity","msg":"identity validated","request id":"7199257136822741594","config":{"Name":"api-keys","ExtendedProperties":[{"Name":"sub","Value":{"Static":null,"Pattern":"auth.identity.metadata.annotations.userid"}}],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":{"AuthCredentials":{"KeySelector":"APIKEY","In":"authorization_header"},"Name":"api-keys","LabelSelectors":{"audience":"talker-api","authorino.kuadrant.io/managed-by":"authorino"}},"KubernetesAuth":null},"object":{"apiVersion":"v1","data":{"api_key":"bmR5QnpyZVV6RjR6cURRc3FTUE1Ia1JocmlFT3RjUng="},"kind":"Secret","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{\"userid\":\"john\"},\"labels\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"},\"name\":\"api-key-1\",\"namespace\":\"authorino\"},\"stringData\":{\"api_key\":\"ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\"},\"type\":\"Opaque\"}\n","userid":"john"},"creationTimestamp":"2021-10-21T14:45:54Z","labels":{"audience":"talker-api","authorino.kuadrant.io/managed-by":"authorino"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:api_key":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:userid":{}},"f:labels":{".":{},"f:audience":{},"f:authorino.kuadrant.io/managed-by":{}}},"f:type":{}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-10-21T14:45:54Z"}],"name":"api-key-1","namespace":"authorino","resourceVersion":"8979","uid":"c369852a-7e1a-43bd-94ca-e2b3f617052e"},"sub":"john","type":"Opaque"}}
+{"level":"debug","ts":1634830413.248768,"logger":"authorino.service.auth.authpipeline.metadata.http","msg":"sending request","request id":"7199257136822741594","method":"GET","url":"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path=/hello","headers":{"Content-Type":["text/plain"]}}
+{"level":"debug","ts":1634830413.2496722,"logger":"authorino.service.auth.authpipeline.metadata","msg":"cannot fetch metadata","request id":"7199257136822741594","config":{"Name":"oidc-userinfo","UserInfo":{"OIDC":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"},"Endpoint":"http://keycloak:8080/auth/realms/kuadrant"}},"UMA":null,"GenericHTTP":null},"reason":"Missing identity for OIDC issuer http://keycloak:8080/auth/realms/kuadrant. Skipping related UserInfo metadata."}
+{"level":"debug","ts":1634830413.2497928,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"requesting pat","request id":"7199257136822741594","url":"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token","data":"grant_type=client_credentials","headers":{"Content-Type":["application/x-www-form-urlencoded"]}}
+{"level":"debug","ts":1634830413.258932,"logger":"authorino.service.auth.authpipeline.metadata","msg":"fetched auth metadata","request id":"7199257136822741594","config":{"Name":"http-metadata","UserInfo":null,"UMA":null,"GenericHTTP":{"Endpoint":"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path={context.request.http.path}","Method":"GET","Parameters":[],"ContentType":"application/x-www-form-urlencoded","SharedSecret":"","AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"object":{"body":"","headers":{"Accept-Encoding":"gzip","Content-Type":"text/plain","Host":"talker-api.default.svc.cluster.local:3000","User-Agent":"Go-http-client/1.1","Version":"HTTP/1.1"},"method":"GET","path":"/metadata","query_string":"encoding=text/plain&original_path=/hello","uuid":"97529f8c-587b-4121-a4db-cd90c63871fd"}}
+{"level":"debug","ts":1634830413.2945344,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"querying resources by uri","request id":"7199257136822741594","url":"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=/hello"}
+{"level":"debug","ts":1634830413.3123596,"logger":"authorino.service.auth.authpipeline.metadata.uma","msg":"getting resource data","request id":"7199257136822741594","url":"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set/e20d194c-274c-4845-8c02-0ca413c9bf18"}
+{"level":"debug","ts":1634830413.3340268,"logger":"authorino.service.auth.authpipeline.metadata","msg":"fetched auth metadata","request id":"7199257136822741594","config":{"Name":"uma-resource-registry","UserInfo":null,"UMA":{"Endpoint":"http://keycloak:8080/auth/realms/kuadrant","ClientID":"talker-api","ClientSecret":"523b92b6-625d-4e1e-a313-77e7a8ae4e88"},"GenericHTTP":null},"object":[{"_id":"e20d194c-274c-4845-8c02-0ca413c9bf18","attributes":{},"displayName":"hello","name":"hello","owner":{"id":"57a645a5-fb67-438b-8be5-dfb971666dbc"},"ownerManagedAccess":false,"resource_scopes":[],"uris":["/hi","/hello"]}]}
+{"level":"debug","ts":1634830413.3367748,"logger":"authorino.service.auth.authpipeline.authorization","msg":"evaluating for input","request id":"7199257136822741594","input":{"context":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":52702}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"time":{"seconds":1634830413,"nanos":240094000},"http":{"id":"7199257136822741594","method":"GET","headers":{":authority":"talker-api",":method":"GET",":path":"/hello",":scheme":"http","accept":"*/*","authorization":"APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx","user-agent":"curl/7.65.3","x-envoy-internal":"true","x-forwarded-for":"10.244.0.11","x-forwarded-proto":"http","x-request-id":"d38f5e66-bd72-4733-95d1-3179315cdd60"},"path":"/hello","host":"talker-api","scheme":"http","protocol":"HTTP/1.1"}},"context_extensions":{"virtual_host":"local_service"},"metadata_context":{}},"auth":{"identity":{"apiVersion":"v1","data":{"api_key":"bmR5QnpyZVV6RjR6cURRc3FTUE1Ia1JocmlFT3RjUng="},"kind":"Secret","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{\"userid\":\"john\"},\"labels\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"},\"name\":\"api-key-1\",\"namespace\":\"authorino\"},\"stringData\":{\"api_key\":\"ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\"},\"type\":\"Opaque\"}\n","userid":"john"},"creationTimestamp":"2021-10-21T14:45:54Z","labels":{"audience":"talker-api","authorino.kuadrant.io/managed-by":"authorino"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:api_key":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:userid":{}},"f:labels":{".":{},"f:audience":{},"f:authorino.kuadrant.io/managed-by":{}}},"f:type":{}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-10-21T14:45:54Z"}],"name":"api-key-1","namespace":"authorino","resourceVersion":"8979","uid":"c369852a-7e1a-43bd-94ca-e2b3f617052e"},"sub":"john","type":"Opaque"},"metadata":{"http-metadata":{"body":"","headers":{"Accept-Encoding":"gzip","Content-Type":"text/plain","Host":"talker-api.default.svc.cluster.local:3000","User-Agent":"Go-http-client/1.1","Version":"HTTP/1.1"},"method":"GET","path":"/metadata","query_string":"encoding=text/plain&original_path=/hello","uuid":"97529f8c-587b-4121-a4db-cd90c63871fd"},"uma-resource-registry":[{"_id":"e20d194c-274c-4845-8c02-0ca413c9bf18","attributes":{},"displayName":"hello","name":"hello","owner":{"id":"57a645a5-fb67-438b-8be5-dfb971666dbc"},"ownerManagedAccess":false,"resource_scopes":[],"uris":["/hi","/hello"]}]}}}}
+{"level":"debug","ts":1634830413.339894,"logger":"authorino.service.auth.authpipeline.authorization","msg":"access granted","request id":"7199257136822741594","config":{"Name":"my-policy","OPA":{"Rego":"fail := input.context.request.http.headers[\"x-ext-auth-mock\"] == \"FAIL\"\nallow { not fail }\n","OPAExternalSource":{"Endpoint":"","SharedSecret":"","AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"JSON":null,"KubernetesAuthz":null},"object":true}
+{"level":"debug","ts":1634830413.3444238,"logger":"authorino.service.auth.authpipeline.authorization.kubernetesauthz","msg":"calling kubernetes subject access review api","request id":"7199257136822741594","subjectaccessreview":{"metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/hello","verb":"get"},"user":"john"},"status":{"allowed":false}}}
+{"level":"debug","ts":1634830413.3547812,"logger":"authorino.service.auth.authpipeline.authorization","msg":"access granted","request id":"7199257136822741594","config":{"Name":"kubernetes-rbac","OPA":null,"JSON":null,"KubernetesAuthz":{"Conditions":[],"User":{"Static":"","Pattern":"auth.identity.user.username"},"Groups":null,"ResourceAttributes":null}},"object":true}
+{"level":"debug","ts":1634830413.3558292,"logger":"authorino.service.auth.authpipeline.response","msg":"dynamic response built","request id":"7199257136822741594","config":{"Name":"wristband","Wrapper":"httpHeader","WrapperKey":"x-ext-auth-wristband","Wristband":{"Issuer":"https://authorino-oidc.default.svc:8083/default/talker-api-protection/wristband","CustomClaims":[],"TokenDuration":300,"SigningKeys":[{"use":"sig","kty":"EC","kid":"wristband-signing-key","crv":"P-256","alg":"ES256","x":"TJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZw","y":"SSg8rKBsJ3J1LxyLtt0oFvhHvZcUpmRoTuHk3UHisTA","d":"Me-5_zWBWVYajSGZcZMCcD8dXEa4fy85zv_yN7BxW-o"}]},"DynamicJSON":null},"object":"eyJhbGciOiJFUzI1NiIsImtpZCI6IndyaXN0YmFuZC1zaWduaW5nLWtleSIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzQ4MzA3MTMsImlhdCI6MTYzNDgzMDQxMywiaXNzIjoiaHR0cHM6Ly9hdXRob3Jpbm8tb2lkYy5hdXRob3Jpbm8uc3ZjOjgwODMvYXV0aG9yaW5vL3RhbGtlci1hcGktcHJvdGVjdGlvbi93cmlzdGJhbmQiLCJzdWIiOiI5NjhiZjViZjk3MDM3NWRiNjE0ZDFhMDgzZTg2NTBhYTVhMGVhMzAyOTdiYmJjMTBlNWVlMWZmYTkxYTYwZmY4In0.7G440sWgi2TIaxrGJf5KWR9UOFpNTjwVYeaJXFLzsLhVNICoMLbYzBAEo4M3ym1jipxxTVeE7anm4qDDc7cnVQ"}
+{"level":"info","ts":1634830413.3569078,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"7199257136822741594","authorized":true,"response":"OK"}
+{"level":"debug","ts":1634830413.3569596,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"7199257136822741594","authorized":true,"response":"OK"}
+
+
+ +
+ Enforcing an AuthConfig with authentication based on API keys (invalid API key) + +
+
- identity: k8s-auth, oidc, oauth2, apikey
- metadata: http, oidc userinfo
- authorization: opa, k8s-authz
- response: wristband
{"level":"info","ts":1634830373.2066543,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"12947265773116138711","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":52288}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"http":{"id":"12947265773116138711","method":"GET","path":"/hello","host":"talker-api","scheme":"http"}}}}
+{"level":"debug","ts":1634830373.2068064,"logger":"authorino.service.auth","msg":"incoming authorization request","request id":"12947265773116138711","object":{"source":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":52288}}}}},"destination":{"address":{"Address":{"SocketAddress":{"address":"127.0.0.1","PortSpecifier":{"PortValue":8000}}}}},"request":{"time":{"seconds":1634830373,"nanos":198329000},"http":{"id":"12947265773116138711","method":"GET","headers":{":authority":"talker-api",":method":"GET",":path":"/hello",":scheme":"http","accept":"*/*","authorization":"APIKEY invalid","user-agent":"curl/7.65.3","x-envoy-internal":"true","x-forwarded-for":"10.244.0.11","x-forwarded-proto":"http","x-request-id":"9e391846-afe4-489a-8716-23a2e1c1aa77"},"path":"/hello","host":"talker-api","scheme":"http","protocol":"HTTP/1.1"}},"context_extensions":{"virtual_host":"local_service"},"metadata_context":{}}}
+{"level":"debug","ts":1634830373.2070816,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"12947265773116138711","config":{"Name":"keycloak-opaque","ExtendedProperties":[],"OAuth2":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"},"TokenIntrospectionUrl":"http://keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect","TokenTypeHint":"requesting_party_token","ClientID":"talker-api","ClientSecret":"523b92b6-625d-4e1e-a313-77e7a8ae4e88"},"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":null},"reason":"credential not found"}
+{"level":"debug","ts":1634830373.207225,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"12947265773116138711","config":{"Name":"api-keys","ExtendedProperties":[{"Name":"sub","Value":{"Static":null,"Pattern":"auth.identity.metadata.annotations.userid"}}],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":{"AuthCredentials":{"KeySelector":"APIKEY","In":"authorization_header"},"Name":"api-keys","LabelSelectors":{"audience":"talker-api","authorino.kuadrant.io/managed-by":"authorino"}},"KubernetesAuth":null},"reason":"the API Key provided is invalid"}
+{"level":"debug","ts":1634830373.2072473,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"12947265773116138711","config":{"Name":"k8s-service-accounts","ExtendedProperties":[],"OAuth2":null,"OIDC":null,"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"}}},"reason":"credential not found"}
+{"level":"debug","ts":1634830373.2072592,"logger":"authorino.service.auth.authpipeline.identity","msg":"cannot validate identity","request id":"12947265773116138711","config":{"Name":"keycloak-jwts","ExtendedProperties":[],"OAuth2":null,"OIDC":{"AuthCredentials":{"KeySelector":"Bearer","In":"authorization_header"},"Endpoint":"http://keycloak:8080/auth/realms/kuadrant"},"MTLS":null,"HMAC":null,"APIKey":null,"KubernetesAuth":null},"reason":"credential not found"}
+{"level":"info","ts":1634830373.2073083,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"12947265773116138711","authorized":false,"response":"UNAUTHENTICATED","object":{"code":16,"status":302,"message":"Redirecting to login"}}
+{"level":"debug","ts":1634830373.2073889,"logger":"authorino.service.auth","msg":"outgoing authorization response","request id":"12947265773116138711","authorized":false,"response":"UNAUTHENTICATED","object":{"code":16,"status":302,"message":"Redirecting to login","headers":[{"Location":"https://my-app.io/login"}]}}
+
+
+ +
+ Deleting an AuthConfig and 2 related API key secrets + + +
{"level":"info","ts":1669221361.5032296,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"default/api-key-1"}
+{"level":"info","ts":1669221361.5057878,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"default/api-key-2"}
+
+
+ +
+ Shutting down the service + +
{"level":"info","ts":1669221635.0135982,"logger":"authorino","msg":"Stopping and waiting for non leader election runnables"}
+{"level":"info","ts":1669221635.0136683,"logger":"authorino","msg":"Stopping and waiting for leader election runnables"}
+{"level":"info","ts":1669221635.0135982,"logger":"authorino","msg":"Stopping and waiting for non leader election runnables"}
+{"level":"info","ts":1669221635.0136883,"logger":"authorino","msg":"Stopping and waiting for leader election runnables"}
+{"level":"info","ts":1669221635.0137057,"logger":"authorino.controller.secret","msg":"Shutdown signal received, waiting for all workers to finish","reconciler group":"","reconciler kind":"Secret"}
+{"level":"info","ts":1669221635.013724,"logger":"authorino.controller.authconfig","msg":"Shutdown signal received, waiting for all workers to finish","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669221635.01375,"logger":"authorino.controller.authconfig","msg":"All workers finished","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669221635.013752,"logger":"authorino.controller.secret","msg":"All workers finished","reconciler group":"","reconciler kind":"Secret"}
+{"level":"info","ts":1669221635.0137632,"logger":"authorino","msg":"Stopping and waiting for caches"}
+{"level":"info","ts":1669221635.013751,"logger":"authorino.controller.authconfig","msg":"Shutdown signal received, waiting for all workers to finish","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669221635.0137684,"logger":"authorino.controller.authconfig","msg":"All workers finished","reconciler group":"authorino.kuadrant.io","reconciler kind":"AuthConfig"}
+{"level":"info","ts":1669221635.0137722,"logger":"authorino","msg":"Stopping and waiting for caches"}
+{"level":"info","ts":1669221635.0138857,"logger":"authorino","msg":"Stopping and waiting for webhooks"}
+{"level":"info","ts":1669221635.0138955,"logger":"authorino","msg":"Wait completed, proceeding to shutdown the manager"}
+{"level":"info","ts":1669221635.0138893,"logger":"authorino","msg":"Stopping and waiting for webhooks"}
+{"level":"info","ts":1669221635.0139785,"logger":"authorino","msg":"Wait completed, proceeding to shutdown the manager"}
+
+
+ +

Tracing

+

Request ID

+

Processes related to the authorization request are identified and linked together by a request ID. The request ID can be:
* generated outside Authorino and passed in the authorization request – this is the case of requests via the gRPC authorization interface, initiated by Envoy;
* generated by Authorino – the case of requests via the raw HTTP authorization interface.

+

Propagation

+

Authorino propagates trace identifiers compatible with the W3C Trace Context format (https://www.w3.org/TR/trace-context/) and user-defined baggage data in the W3C Baggage format (https://www.w3.org/TR/baggage).
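For illustration, a client (or an upstream proxy) can start a trace that Authorino will continue, by sending the standard W3C headers along with the request. The header values below are hypothetical, and the host assumes the Talker API setup used in the user guides:

curl -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
     -H 'baggage: userId=alice' \
     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello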

+

Log tracing

+

Most log messages associated with an authorization request include the request id value. This value can be used to match incoming request and corresponding outgoing response log messages, including the more fine-grained log details in between when the debug log level is enabled.
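For example, a minimal sketch for pulling all log entries of a single authorization request out of the structured logs by its request id (the deployment name and id value below are illustrative):

kubectl logs deployment/authorino | jq -R 'fromjson? | select(."request id" == "7199257136822741594")'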

+

OpenTelemetry integration

+

Integration with an OpenTelemetry collector can be enabled by supplying the --tracing-service-endpoint command-line flag (e.g. authorino server --tracing-service-endpoint=http://jaeger:14268/api/traces).

+

The additional --tracing-service-tag command-line flag allows specifying fixed agent-level key-value tags for the trace signals emitted by Authorino. The flag can be repeated to set multiple tags, as shown in the sketch below.
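Putting the endpoint and tag flags together (tag values below are hypothetical):

authorino server \
  --tracing-service-endpoint=http://jaeger:14268/api/traces \
  --tracing-service-tag=environment=dev \
  --tracing-service-tag=cluster=local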

+

Traces related to authorization requests are additionally tagged with the authorino.request_id attribute.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/oidc-jwt-authentication/index.html b/authorino/docs/user-guides/oidc-jwt-authentication/index.html new file mode 100644 index 00000000..a1b7d2fd --- /dev/null +++ b/authorino/docs/user-guides/oidc-jwt-authentication/index.html @@ -0,0 +1,2241 @@ + + + + + + + + + + + + + + + + + + + + + + + + OpenID Connect Discovery and authentication with JWTs - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: OpenID Connect Discovery and authentication with JWTs

+

Validate JSON Web Tokens (JWT) issued and signed by an OpenID Connect server; leverage OpenID Connect Discovery to automatically fetch JSON Web Key Sets (JWKS).

+
+ + Authorino features in this guide: + + + + Authorino validates JSON Web Tokens (JWT) issued by an OpenID Connect server that implements OpenID Connect Discovery. Authorino fetches the OpenID Connect configuration and JSON Web Key Set (JWKS) from the issuer endpoint, and verifies the JSON Web Signature (JWS) and time validity of the token. + + _Important!_ Authorino does **not** implement [OAuth2 grants](https://datatracker.ietf.org/doc/html/rfc6749#section-4) nor [OIDC authentication flows](https://openid.net/specs/openid-connect-core-1_0.html#Authentication). As a common recommendation of good practice, obtaining and refreshing access tokens is for clients to negotiate directly with the auth servers and token issuers. Authorino will only validate those tokens using the parameters provided by the trusted issuer authorities. + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to, e.g., a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.
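For instance, according to the Authorino Operator docs, the reconciliation space can be switched with the clusterWide field in the CR spec. Treat the snippet below as a sketch and confirm the field name and default in the CRD spec of your Operator version:

kubectl apply -f -<<EOF
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  clusterWide: true  # assumption: watch AuthConfigs and Secrets across all namespaces
  listener:
    tls:
      enabled: false
  oidcServer:
    tls:
      enabled: false
EOF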

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and connecting it to the Authorino instance for external authorization.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+EOF
+
+

6. Obtain an access token with the Keycloak server

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.
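To check the issuer for yourself, you can query the standard OpenID Connect Discovery endpoint from inside the cluster, e.g. using the same kubectl run pattern as below:

kubectl run oidc-config --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/.well-known/openid-configuration -s | jq -r .issuer
# expected: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant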

+

Obtain an access token from within the cluster:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token, and that it is reachable from within the cluster as well.

+

7. Consume the API

+

With a valid access token:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

With missing or invalid access token:

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="keycloak-kuadrant-realm"
+# x-ext-auth-reason: credential not found
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/oidc-rbac/index.html b/authorino/docs/user-guides/oidc-rbac/index.html new file mode 100644 index 00000000..a5668bd7 --- /dev/null +++ b/authorino/docs/user-guides/oidc-rbac/index.html @@ -0,0 +1,2359 @@ + + + + + + + + + + + + + + + + + + + + + + + + OpenID Connect (OIDC) and Role-Based Access Control (RBAC) with Authorino and Keycloak - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: OpenID Connect (OIDC) and Role-Based Access Control (RBAC) with Authorino and Keycloak

+

Combine OpenID Connect (OIDC) authentication and Role-Based Access Control (RBAC) authorization rules leveraging Keycloak and Authorino working together.

+

In this user guide, you will learn via example how to implement a simple Role-Based Access Control (RBAC) system to protect endpoints of an API, with roles assigned to users of an Identity Provider (Keycloak) and carried within the access tokens as JSON Web Token (JWT) claims. Users authenticate with the IdP via OAuth2/OIDC flow and get their access tokens verified and validated by Authorino on every request. Moreover, Authorino reads the role bindings of the user and enforces the proper RBAC rules based upon the context.

+
+ + Authorino features in this guide: + + + + Check out as well the user guides about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Simple pattern-matching authorization policies](./json-pattern-matching-authorization.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to, e.g., a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and connecting it to the Authorino instance for external authorization.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

In this example, the Keycloak realm defines a few users and 2 realm roles: 'member' and 'admin'. When users authenticate to the Keycloak server by any of the supported OAuth2/OIDC flows, Keycloak adds to the access token JWT a claim "realm_access": { "roles": array } that holds the list of roles assigned to the user. Authorino will verify the JWT on requests to the API and read from that claim to enforce the following RBAC rules:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Path            | Method           | Role
----------------|------------------|-------
/resources[/*]  | GET / POST / PUT | member
/resources/{id} | DELETE           | admin
/admin[/*]      | *                | admin
+

Apply the AuthConfig:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+
+  patterns:
+    member-role:
+    - selector: auth.identity.realm_access.roles
+      operator: incl
+      value: member
+    admin-role:
+    - selector: auth.identity.realm_access.roles
+      operator: incl
+      value: admin
+
+  authorization:
+  # RBAC rule: 'member' role required for requests to /resources[/*]
+  - name: rbac-resources-api
+    when:
+    - selector: context.request.http.path
+      operator: matches
+      value: ^/resources(/.*)?$
+    json:
+      rules:
+      - patternRef: member-role
+
+  # RBAC rule: 'admin' role required for DELETE requests to /resources/{id}
+  - name: rbac-delete-resource
+    when:
+    - selector: context.request.http.path
+      operator: matches
+      value: ^/resources/\d+$
+    - selector: context.request.http.method
+      operator: eq
+      value: DELETE
+    json:
+      rules:
+      - patternRef: admin-role
+
+  # RBAC rule: 'admin' role required for requests to /admin[/*]
+  - name: rbac-admin-api
+    when:
+    - selector: context.request.http.path
+      operator: matches
+      value: ^/admin(/.*)?$
+    json:
+      rules:
+      - patternRef: admin-role
+EOF
+
+

6. Obtain an access token and consume the API

+

Obtain an access token and consume the API as John (member)

+

Obtain an access token with the Keycloak server for John:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user John, who is assigned to the 'member' role:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+
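Optionally, inspect the roles carried in the token with a common jq one-liner for decoding JWT payloads (a rough sketch only; depending on your jq version and the token's base64url padding, you may prefer a dedicated JWT tool):

jq -rR 'split(".")[1] | gsub("-";"+") | gsub("_";"/") | @base64d | fromjson | .realm_access' <<< "$ACCESS_TOKEN"
# expected to include "member" in the list of roles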

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token, and that it is reachable from within the cluster as well.

+

As John, send a GET request to /resources:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/resources -i
+# HTTP/1.1 200 OK
+
+

As John, send a DELETE request to /resources/123:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/resources/123 -i
+# HTTP/1.1 403 Forbidden
+
+

As John, send a GET request to /admin/settings:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/admin/settings -i
+# HTTP/1.1 403 Forbidden
+
+

Obtain an access token and consume the API as Jane (member/admin)

+

Obtain an access token from within the cluster for the user Jane, who is assigned to the 'member' and 'admin' roles:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

As Jane, send a GET request to /resources:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/resources -i
+# HTTP/1.1 200 OK
+
+

As Jane, send a DELETE request to /resources/123:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/resources/123 -i
+# HTTP/1.1 200 OK
+
+

As Jane, send a GET request to /admin/settings:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/admin/settings -i
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/oidc-user-info/index.html b/authorino/docs/user-guides/oidc-user-info/index.html new file mode 100644 index 00000000..3497af1e --- /dev/null +++ b/authorino/docs/user-guides/oidc-user-info/index.html @@ -0,0 +1,2254 @@ + + + + + + + + + + + + + + + + + + + + + + + + OpenID Connect UserInfo - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: OpenID Connect UserInfo

+

Fetch user info for OpenID Connect ID tokens at request-time, for extra metadata for your policies and online verification of token validity.

+
+
Authorino features in this guide:

Apart from possibly complementing information of the JWT, fetching OpenID Connect UserInfo at request-time can be particularly useful for remotely checking the state of the session, as opposed to only verifying the JWT/JWS offline. Implementation requires an OpenID Connect issuer ([`spec.identity.oidc`](#openid-connect-oidc-jwtjose-verification-and-validation-identityoidc)) configured in the same `AuthConfig`.

Check out as well the user guide about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md).

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to, e.g., a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and connecting it to the Authorino instance for external authorization.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  metadata:
+  - name: userinfo
+    userInfo:
+      identitySource: keycloak-kuadrant-realm
+  authorization:
+  - name: active-tokens-only
+    json:
+      rules:
+      - selector: "auth.metadata.userinfo.email" # user email expected from the userinfo instead of the jwt
+        operator: neq
+        value: ""
+EOF
+
+

6. Obtain an access token with the Keycloak server

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster:

+
export $(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r '"ACCESS_TOKEN="+.access_token,"REFRESH_TOKEN="+.refresh_token')
+
+
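To preview the metadata Authorino will fetch, you can optionally hit Keycloak's UserInfo endpoint yourself through the local port-forward (standard OIDC endpoint path, shown here only for illustration):

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://localhost:8080/auth/realms/kuadrant/protocol/openid-connect/userinfo | jq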

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token, and that it is reachable from within the cluster as well.

+

7. Consume the API

+

With a valid access token:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Revoke the access token and try to consume the API again:

+
kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/logout -H "Content-Type: application/x-www-form-urlencoded" -d "refresh_token=$REFRESH_TOKEN" -d 'token_type_hint=requesting_party_token' -u demo:
+
+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 403 Forbidden
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/opa-authorization/index.html b/authorino/docs/user-guides/opa-authorization/index.html new file mode 100644 index 00000000..1e570e35 --- /dev/null +++ b/authorino/docs/user-guides/opa-authorization/index.html @@ -0,0 +1,2275 @@ + + + + + + + + + + + + + + + + + + + + + + + + Open Policy Agent (OPA) Rego policies - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Open Policy Agent (OPA) Rego policies

+

Leverage the power of Open Policy Agent (OPA) policies, evaluated against Authorino's Authorization JSON in a built-in runtime compiled together with Authorino; pre-cache policies defined in Rego language inline or fetched from an external policy registry.

+
+
Authorino features in this guide:

Authorino supports [Open Policy Agent](https://www.openpolicyagent.org) policies, either defined inline in [Rego language](https://www.openpolicyagent.org/docs/latest/policy-language) as part of the `AuthConfig` or fetched from an external endpoint, such as an OPA Policy Registry.

Authorino's built-in OPA module precompiles the policies at reconciliation-time and caches them for fast evaluation at request-time, where they receive the Authorization JSON as input.

Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md).

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to, e.g., a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and connecting it to the Authorino instance for external authorization.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

In this example, we will use OPA to implement a read-only policy for requests coming from outside a trusted network (IP range 192.168.1.0/24).

+

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

+

Optional. Set use_remote_address: true in the Envoy route configuration, so the proxy appends its own IP address to the X-Forwarded-For header instead of running in transparent mode. This setting also ensures the real remote address of the client connection is passed in the x-envoy-external-address HTTP header, which can be used to simplify the read-only policy in remote environments.
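For reference, a minimal sketch of where that setting lives in the Envoy config, as an excerpt of the HttpConnectionManager filter with all surrounding fields omitted:

- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    use_remote_address: true  # append the real client address to X-Forwarded-For
    # ... stat_prefix, route_config, http_filters, etc.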

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+  authorization:
+  - name: read-only-outside
+    opa:
+      inlineRego: |
+        ips := split(input.context.request.http.headers["x-forwarded-for"], ",")
+        trusted_network { regex.match(`192\.168\.1\.\d+`, ips[0]) }
+
+        allow { trusted_network }
+        allow { not trusted_network; input.context.request.http.method == "GET" }
+EOF
+
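Optionally, dry-run the policy logic locally with the opa CLI before applying the AuthConfig. The sketch below is illustrative only: the package name and input file are made up for the test (Authorino wraps inline Rego in its own package), and it assumes an OPA version that accepts this pre-1.0 Rego syntax:

cat > policy.rego <<'EOF'
package test

ips := split(input.context.request.http.headers["x-forwarded-for"], ",")
trusted_network { regex.match(`192\.168\.1\.\d+`, ips[0]) }

allow { trusted_network }
allow { not trusted_network; input.context.request.http.method == "GET" }
EOF

echo '{"context":{"request":{"http":{"method":"POST","headers":{"x-forwarded-for":"123.45.6.78"}}}}}' > input.json

opa eval --format pretty --data policy.rego --input input.json 'data.test.allow'
# expected: undefined (a POST from outside the trusted network is denied)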
+

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

7. Consume the API

+

Inside the trusted network:

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 192.168.1.10' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 192.168.1.10' \
+     -X POST \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Outside the trusted network:

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 123.45.6.78' \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \
+     -H 'X-Forwarded-For: 123.45.6.78' \
+     -X POST \
+     http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 403 Forbidden
+# x-ext-auth-reason: Unauthorized
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/authorino/docs/user-guides/passing-credentials/index.html b/authorino/docs/user-guides/passing-credentials/index.html new file mode 100644 index 00000000..4cb1c987 --- /dev/null +++ b/authorino/docs/user-guides/passing-credentials/index.html @@ -0,0 +1,2317 @@ + + + + + + + + + + + + + + + + + + + + + + + + Passing credentials (`Authorization` header, cookie headers and others) - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

User guide: Passing credentials (Authorization header, cookie headers and others)

+

Customize where credentials are supplied in the request by each trusted source of identity.

+
+ + Authorino features in this guide: +
    +
  • Identity verification & authentication → Auth credentials
  • +
  • Identity verification & authentication → API key
  • +
+
+ + Authentication tokens can be supplied in the `Authorization` header, in a custom header, cookie or query string parameter. + + Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). + + For further details about Authorino features in general, check the [docs](./../features.md). +
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to, e.g., a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and connecting it to the Authorino instance for external authorization.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

In this example, member users can authenticate supplying the API key in any of 4 different ways:
- HTTP header Authorization: APIKEY <api-key>
- HTTP header X-API-Key: <api-key>
- Query string parameter api_key=<api-key>
- Cookie Cookie: APIKEY=<api-key>;

+

admin API keys are only accepted in the (default) HTTP header Authorization: Bearer <api-key>.

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: members-authorization-header
+    apiKey:
+      selector:
+        matchLabels:
+          group: members
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY # instead of the default prefix 'Bearer'
+  - name: members-custom-header
+    apiKey:
+      selector:
+        matchLabels:
+          group: members
+    credentials:
+      in: custom_header
+      keySelector: X-API-Key
+  - name: members-query-string-param
+    apiKey:
+      selector:
+        matchLabels:
+          group: members
+    credentials:
+      in: query
+      keySelector: api_key
+  - name: members-cookie
+    apiKey:
+      selector:
+        matchLabels:
+          group: members
+    credentials:
+      in: cookie
+      keySelector: APIKEY
+  - name: admins
+    apiKey:
+      selector:
+        matchLabels:
+          group: admins
+EOF
+
+

6. Create a couple API keys

+

For a member user:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: members
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

For an admin user:

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-2
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: admins
+stringData:
+  api_key: 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY
+type: Opaque
+EOF
+
+

7. Consume the API

+

As member user, passing the API key in the Authorization header:

+
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

As member user, passing the API key in the custom X-API-Key header:

+
curl -H 'X-API-Key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

As member user, passing the API key in the query string parameter api_key:

+
curl "http://talker-api-authorino.127.0.0.1.nip.io:8000/hello?api_key=ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx"
+# HTTP/1.1 200 OK
+
+

As member user, passing the API key in the APIKEY cookie header:

+
curl -H 'Cookie: APIKEY=ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx;foo=bar' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

As admin user:

+
curl -H 'Authorization: Bearer 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello
+# HTTP/1.1 200 OK
+
+

Missing the API key:

+
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: APIKEY realm="members-authorization-header"
+# www-authenticate: X-API-Key realm="members-custom-header"
+# www-authenticate: api_key realm="members-query-string-param"
+# www-authenticate: APIKEY realm="members-cookie"
+# www-authenticate: Bearer realm="admins"
+# x-ext-auth-reason: {"admins":"credential not found","members-authorization-header":"credential not found","members-cookie":"credential not found","members-custom-header":"credential not found","members-query-string-param":"credential not found"}
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete secret/api-key-2
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/docs/user-guides/resource-level-authorization-uma/index.html b/authorino/docs/user-guides/resource-level-authorization-uma/index.html new file mode 100644 index 00000000..2b0961ae --- /dev/null +++ b/authorino/docs/user-guides/resource-level-authorization-uma/index.html

User guide: Resource-level authorization with User-Managed Access (UMA) resource registry

+

Fetch resource metadata relevant for your authorization policies from Keycloak authorization clients, using User-Managed Access (UMA) protocol.

+
Authorino features in this guide:

Check out as well the user guides about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Open Policy Agent (OPA) Rego policies](./opa-authorization.md).

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

Forward local requests to the instance of Keycloak running in the cluster:

+
kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.
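Optionally, wait for the Authorino pods to be ready before proceeding (the authorino-resource label is set by the Authorino Operator on the pods of the instance):

kubectl wait --for=condition=ready --timeout=300s pod -l authorino-resource=authorino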

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

This user guide's implementation for resource-level authorization leverages part of Keycloak's User-Managed Access (UMA) support. Authorino will fetch resource attributes stored in a Keycloak resource server client.

+

The Keycloak server also provides the identities. The sub claim of the Keycloak-issued ID tokens must match the owner of the requested resource, identified by the URI of the request.
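For illustration, these are the two fragments that will have to match, as they appear in the Authorization JSON evaluated by the policy created below (hypothetical, abbreviated values):

# Resolved identity object (claims of the Keycloak-issued JWT):
#   { "sub": "831707be-...", "preferred_username": "john", ... }
# UMA resource representation fetched into the "resource-data" metadata:
#   { "_id": "...", "owner": { "id": "831707be-..." }, ... }
# The request is authorized only when auth.identity.sub equals the resource owner's id.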

+

Create a required secret, used by Authorino to initiate the authentication with the UMA registry.

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: talker-api-uma-credentials
+stringData:
+  clientID: talker-api
+  clientSecret: 523b92b6-625d-4e1e-a313-77e7a8ae4e88
+type: Opaque
+EOF
+
+

Create the config:

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  metadata:
+  - name: resource-data
+    uma:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+      credentialsRef:
+        name: talker-api-uma-credentials
+  authorization:
+  - name: owned-resources
+    opa:
+      inlineRego: |
+        COLLECTIONS = ["greetings"]
+
+        http_request = input.context.request.http
+        http_method = http_request.method
+        requested_path_sections = split(trim_left(trim_right(http_request.path, "/"), "/"), "/")
+
+        get { http_method == "GET" }
+        post { http_method == "POST" }
+        put { http_method == "PUT" }
+        delete { http_method == "DELETE" }
+
+        valid_collection { COLLECTIONS[_] == requested_path_sections[0] }
+
+        collection_endpoint {
+          valid_collection
+          count(requested_path_sections) == 1
+        }
+
+        resource_endpoint {
+          valid_collection
+          some resource_id
+          requested_path_sections[1] = resource_id
+        }
+
+        identity_owns_the_resource {
+          identity := input.auth.identity
+          resource_attrs := object.get(input.auth.metadata, "resource-data", [])[0]
+          resource_owner := object.get(object.get(resource_attrs, "owner", {}), "id", "")
+          resource_owner == identity.sub
+        }
+
+        allow { get;    collection_endpoint }
+        allow { post;   collection_endpoint }
+        allow { get;    resource_endpoint; identity_owns_the_resource }
+        allow { put;    resource_endpoint; identity_owns_the_resource }
+        allow { delete; resource_endpoint; identity_owns_the_resource }
+EOF
+
+

The OPA policy owned-resources above enforces that all users can send GET and POST requests to /greetings, while only resource owners can send GET, PUT and DELETE requests to /greetings/{resource-id}.
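In other words, the policy maps requests to outcomes roughly as follows (illustrative summary derived from the rules above):

# GET    /greetings    -> collection_endpoint -> allowed for any authenticated user
# POST   /greetings    -> collection_endpoint -> allowed for any authenticated user
# GET    /greetings/1  -> resource_endpoint   -> allowed only for the owner of resource 1
# PUT    /greetings/1  -> resource_endpoint   -> allowed only for the owner of resource 1
# DELETE /greetings/1  -> resource_endpoint   -> allowed only for the owner of resource 1
# any other combination -> no 'allow' rule fires -> request denied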

+

6. Obtain access tokens with the Keycloak server and consume the API

+

Obtain an access token as John and consume the API

+

Obtain an access token for user John (owner of the resource /greetings/1 in the UMA registry):

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim that Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+

If, instead, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is reachable from within the cluster as well.
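For example, with the port-forward to the Keycloak deployment started in the beginning of this guide, a direct request could look like the following sketch. Keep in mind that the iss claim of a token obtained this way will be based on localhost:8080, so the oidc endpoint in the AuthConfig would have to be adjusted to match it:

ACCESS_TOKEN=$(curl -s http://localhost:8080/auth/realms/kuadrant/protocol/openid-connect/token -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)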

+

As John, send requests to the API:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings
+# HTTP/1.1 200 OK
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1
+# HTTP/1.1 200 OK
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1
+# HTTP/1.1 200 OK
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2 -i
+# HTTP/1.1 403 Forbidden
+
+

Obtain an access token as Jane and consume the API

+

Obtain an access token for user Jane (owner of the resource /greetings/2 in the UMA registry):

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

As Jane, send requests to the API:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings
+# HTTP/1.1 200 OK
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 403 Forbidden
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 403 Forbidden
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2
+# HTTP/1.1 200 OK
+
+

Obtain an access token as Peter and consume the API

+

Obtain an access token for user Peter (does not own any resource in the UMA registry):

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=peter' -d 'password=p' | jq -r .access_token)
+
+

As Peter, send requests to the API:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings
+# HTTP/1.1 200 OK
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 403 Forbidden
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i
+# HTTP/1.1 403 Forbidden
+
+curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2 -i
+# HTTP/1.1 403 Forbidden
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authconfig/talker-api-protection
+kubectl delete secret/talker-api-uma-credentials
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/docs/user-guides/sharding/index.html b/authorino/docs/user-guides/sharding/index.html new file mode 100644 index 00000000..81bb3eed --- /dev/null +++ b/authorino/docs/user-guides/sharding/index.html

User guide: Reducing the operational space

+

By default, Authorino will watch events related to all AuthConfig custom resources in the reconciliation space (namespace or entire cluster). Instances can be configured, though, to only watch a subset of the resources, thus allowing use cases such as:
- reducing noise and lowering memory usage inside instances meant for a restricted scope (e.g. Authorino deployed as a dedicated sidecar to protect only one host);
- sharding auth config data across multiple instances;
- running multiple environments (e.g. staging, production) inside the same cluster/namespace;
- providing managed instances of Authorino that all watch CRs cluster-wide, yet dedicated to organizations allowed to create and operate their own AuthConfigs across multiple namespaces.
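As a sketch of the (standard Kubernetes) label selector semantics used in the next steps of this guide:

# authorino/environment=staging               -> resources labeled with authorino/environment: staging
# authorino/environment=production,!disabled  -> resources labeled with authorino/environment: production
#                                                AND not carrying a 'disabled' label (whatever its value)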

+
Authorino features in this guide:

Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md).

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy a couple instances of Authorino

+

Deploy an instance of Authorino dedicated to AuthConfigs and API key Secrets labeled with authorino/environment=staging:

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino-staging
+spec:
+  clusterWide: true
+  authConfigLabelSelectors: authorino/environment=staging
+  secretLabelSelectors: authorino/environment=staging
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

Deploy an instance of Authorino dedicated to AuthConfigs and API key Secrets labeled with authorino/environment=production, and NOT labeled disabled:

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino-production
+spec:
+  clusterWide: true
+  authConfigLabelSelectors: authorino/environment=production,!disabled
+  secretLabelSelectors: authorino/environment=production,!disabled
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The commands above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in cluster-wide reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.
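Optionally, check that the pods of both instances are running (the authorino-resource label is set by the Authorino Operator on the pods of each instance, as also used further below in this guide):

kubectl get pods -l 'authorino-resource in (authorino-staging,authorino-production)'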

+

3. Create a namespace for user resources

+
kubectl create namespace myapp
+
+

4. Create AuthConfigs and API key Secrets for both instances

+

Create resources for authorino-staging

+

Create an AuthConfig:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: auth-config-1
+  labels:
+    authorino/environment: staging
+spec:
+  hosts:
+  - my-host.staging.io
+  identity:
+  - name: api-key
+    apiKey:
+      selector:
+        matchLabels:
+          authorino/api-key: "true"
+          authorino/environment: staging
+EOF
+
+

Create an API key Secret:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino/api-key: "true"
+    authorino/environment: staging
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

Verify in the logs that only the authorino-staging instance adds the resources to the index:

+
kubectl logs $(kubectl get pods -l authorino-resource=authorino-staging -o name)
+# {"level":"info","ts":1638382989.8327162,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"myapp/auth-config-1"}
+# {"level":"info","ts":1638382989.837424,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status updated","authconfig/status":"myapp/auth-config-1"}
+# {"level":"info","ts":1638383144.9486837,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"myapp/api-key-1"}
+
+

Create resources for authorino-production

+

Create an AuthConfig:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: auth-config-2
+  labels:
+    authorino/environment: production
+spec:
+  hosts:
+  - my-host.io
+  identity:
+  - name: api-key
+    apiKey:
+      selector:
+        matchLabels:
+          authorino/api-key: "true"
+          authorino/environment: production
+EOF
+
+

Create an API key Secret:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-2
+  labels:
+    authorino/api-key: "true"
+    authorino/environment: production
+stringData:
+  api_key: MUWdeBte7AbSWxl6CcvYNJ+3yEIm5CaL
+type: Opaque
+EOF
+
+

Verify in the logs that only the authorino-production instance adds the resources to the index:

+
kubectl logs $(kubectl get pods -l authorino-resource=authorino-production -o name)
+# {"level":"info","ts":1638383423.86086,"logger":"authorino.controller-runtime.manager.controller.authconfig.statusupdater","msg":"resource status updated","authconfig/status":"myapp/auth-config-2"}
+# {"level":"info","ts":1638383423.8608105,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"myapp/auth-config-2"}
+# {"level":"info","ts":1638383460.3515081,"logger":"authorino.controller-runtime.manager.controller.secret","msg":"resource reconciled","secret":"myapp/api-key-2"}
+
+

5. Remove a resource from scope

+
kubectl -n myapp label authconfig/auth-config-2 disabled=true
+# authconfig.authorino.kuadrant.io/auth-config-2 labeled
+
+

Verify in the logs that the authorino-production instance removes the resource from the index:

+
kubectl logs $(kubectl get pods -l authorino-resource=authorino-production -o name)
+# {"level":"info","ts":1638383515.6428752,"logger":"authorino.controller-runtime.manager.controller.authconfig","msg":"resource reconciled","authconfig":"myapp/auth-config-2"}
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete authorino/authorino-staging
+kubectl delete authorino/authorino-production
+kubectl delete namespace myapp
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/docs/user-guides/token-normalization/index.html b/authorino/docs/user-guides/token-normalization/index.html new file mode 100644 index 00000000..ecdf46a0 --- /dev/null +++ b/authorino/docs/user-guides/token-normalization/index.html

User guide: Token normalization

+

Broadly, the term token normalization in authentication systems usually implies the exchange of an authentication token, as provided by the user in a given format, and/or its associated identity claims, for another freshly issued token/set of claims, of a given (normalized) structure or format.

+

The most typical use-case for token normalization involves accepting tokens issued by multiple trusted sources, often of varied authentication protocols, while ensuring that the possibly different data structures adopted by each of those sources are normalized, thus simplifying the policies and authorization checks that depend on those values. In general, however, any modification to the identity claims can be made for the purpose of normalization.

+

This user guide focuses on the mutation of the identity claims resolved from an authentication token, by converting them to a certain data format and/or extending them, so that required attributes can thereafter be trusted to be present among the claims, in a desired form. To that end, Authorino allows extending resolved identity objects with custom attributes (custom claims) of either static values or values fetched from the Authorization JSON.

+

To not only normalize the identity claims for the purpose of writing simpler authorization checks and policies, but also have Authorino issue a new token in a normalized format, check out the Festival Wristband tokens feature.

+
Authorino features in this guide:

Check out as well the user guides about [Authentication with API keys](./api-key-authentication.md), [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Simple pattern-matching authorization policies](./json-pattern-matching-authorization.md).

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
  • jq, to extract parts of JSON responses
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy the Talker API

+

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+
+

3. Deploy Authorino

+
kubectl apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  listener:
+    tls:
+      enabled: false
+  oidcServer:
+    tls:
+      enabled: false
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

+

4. Setup Envoy

+

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

+

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

+
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+
+

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

+
kubectl port-forward deployment/envoy 8000:8000 &
+
+

5. Create the AuthConfig

+

This example implements a policy whereby only users bound to the admin role can send DELETE requests.

+

The config trusts access tokens issued by a Keycloak realm, as well as API keys labeled specifically for a selected group (friends). The roles of the identities handled by Keycloak are managed in Keycloak, as realm roles. Particularly, users john and peter are bound to the member role, while user jane is bound to the roles member and admin. As for the users authenticating with API key, they are all bound to the admin role.

+

Without normalizing identity claims from these two different sources, the policy would have to handle the differences of data formats with additional ifs-and-elses. Instead, the config here uses the identity.extendedProperties option to ensure a custom roles (Array) claim is always present in the identity object. In the case of Keycloak ID tokens, the value is extracted from the realm_access.roles claim; for API key-resolved objects, the custom claim is set to the static value ["admin"].
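For illustration, after normalization both identity sources resolve to an object carrying a roles attribute (hypothetical, abbreviated objects):

# Identity resolved from a Keycloak token (roles copied from realm_access.roles):
#   { "sub": "...", "preferred_username": "jane", ..., "roles": ["member", "admin"] }
# Identity resolved from an API key (i.e. the Kubernetes Secret; roles set statically):
#   { "metadata": { "name": "api-key-1", ... }, ..., "roles": ["admin"] }
# The authorization rule below can then uniformly check: auth.identity.roles incl admin.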

+
kubectl apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: talker-api-protection
+spec:
+  hosts:
+  - talker-api-authorino.127.0.0.1.nip.io
+  identity:
+  - name: keycloak-kuadrant-realm
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+    extendedProperties:
+    - name: roles
+      valueFrom:
+        authJSON: auth.identity.realm_access.roles
+  - name: api-key-friends
+    apiKey:
+      selector:
+        matchLabels:
+          group: friends
+    credentials:
+      in: authorization_header
+      keySelector: APIKEY
+    extendedProperties:
+    - name: roles
+      value: ["admin"]
+  authorization:
+  - name: only-admins-can-delete
+    when:
+    - selector: context.request.http.method
+      operator: eq
+      value: DELETE
+    json:
+      rules:
+      - selector: auth.identity.roles
+        operator: incl
+        value: admin
+EOF
+
+

6. Create an API key

+
kubectl apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: api-key-1
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    group: friends
+stringData:
+  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx
+type: Opaque
+EOF
+
+

7. Consume the API

+

Obtain an access token and consume the API as Jane (admin)

+

Obtain an access token with the Keycloak server for Jane:

+

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim that Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

+

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)
+
+

If, instead, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is reachable from within the cluster as well.

+

Consume the API as Jane:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+

Obtain an access token and consume the API as John (member)

+

Obtain an access token with the Keycloak server for John:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+

Consume the API as John:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 403 Forbidden
+
+

Consume the API using the API key to authenticate (admin)

+
curl -H "Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i
+# HTTP/1.1 200 OK
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete secret/api-key-1
+kubectl delete authconfig/talker-api-protection
+kubectl delete authorino/authorino
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
+kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/docs/user-guides/validating-webhook/index.html b/authorino/docs/user-guides/validating-webhook/index.html new file mode 100644 index 00000000..4473cf27 --- /dev/null +++ b/authorino/docs/user-guides/validating-webhook/index.html

User guide: Using Authorino as ValidatingWebhook service

+

Authorino provides an interface for raw HTTP external authorization requests. This interface can be used for integrations other than the typical Envoy gRPC protocol, such as (though not limited to) using Authorino as a generic Kubernetes ValidatingWebhook service.

+

The rules to validate a request to the Kubernetes API – typically a POST, PUT or DELETE request targeting a particular Kubernetes resource or collection – according to which the change will be deemed either accepted or rejected, are written in an Authorino AuthConfig custom resource. Authentication and authorization are performed by the Kubernetes API server as usual, with the auth features of Authorino implementing the additional validation within the scope of an AdmissionReview request.
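For reference, the AdmissionReview request body that the Kubernetes API server posts to the webhook service looks roughly like the following (abbreviated sketch); the AuthConfig created later in this guide extracts request.userInfo and request.object out of this body:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "operation": "CREATE",
    "userInfo": { "username": "kubernetes-admin", "groups": ["system:masters", "system:authenticated"] },
    "object": { "apiVersion": "authorino.kuadrant.io/v1beta1", "kind": "AuthConfig", "metadata": { "name": "myapp-protection", "namespace": "myapp" }, "spec": { } }
  }
}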

+

This user guide provides an example of using Authorino as a Kubernetes ValidatingWebhook service that validates requests to CREATE and UPDATE Authorino AuthConfig resources. In other words, we will use Authorino as a validator inside the cluster that decides what is a valid AuthConfig for any application which wants to rely on Authorino to protect itself.

+

The AuthConfig to validate other AuthConfigs will enforce the following rules:
- Authorino features that cannot be used by any application in their security schemes:
  - Anonymous Access
  - Plain identity object extracted from context
  - Kubernetes authentication (TokenReview)
  - Kubernetes authorization (SubjectAccessReview)
  - Festival Wristband tokens
- Authorino features that require a RoleBinding to a specific ClusterRole in the 'authorino' namespace, to be used in an AuthConfig:
  - Authorino API key authentication
- All metadata pulled from external sources must be cached for precisely 5 minutes (300 seconds)

+

For convenience, the same instance of Authorino used to enforce the AuthConfig associated with the validating webhook will also be targeted for the sample AuthConfigs created to test the validation. For using different instances of Authorino for the validating webhook and for protecting applications behind a proxy, check out the section about sharding in the docs. There is also a user guide on the topic, with concrete examples.

+
Authorino features in this guide:

For further details about Authorino features in general, check the [docs](./../features.md).
+ +


+

Requirements

+
    +
  • Kubernetes server
  • +
  • cert-manager
  • +
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • +
+

Create a containerized Kubernetes server locally using Kind:

+
kind create cluster --name authorino-tutorial
+
+

Install cert-manager:

+
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml
+
+

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

+
kubectl create namespace keycloak
+kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+

1. Install the Authorino Operator

+
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
+

2. Deploy Authorino

+

Create the namespace:

+
kubectl create namespace authorino
+
+

Create the TLS certificates:

+
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed "s/\$(AUTHORINO_INSTANCE)/authorino/g;s/\$(NAMESPACE)/authorino/g" | kubectl -n authorino apply -f -
+
+

Create the Authorino instance:

+
kubectl -n authorino apply -f -<<EOF
+apiVersion: operator.authorino.kuadrant.io/v1beta1
+kind: Authorino
+metadata:
+  name: authorino
+spec:
+  clusterWide: true
+  listener:
+    ports:
+      grpc: 50051
+      http: 5001 # for admissionreview requests sent by the kubernetes api server
+    tls:
+      certSecretRef:
+        name: authorino-server-cert
+  oidcServer:
+    tls:
+      certSecretRef:
+        name: authorino-oidc-server-cert
+EOF
+
+

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in cluster-wide reconciliation mode, and with TLS termination enabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

3. Create the AuthConfig and related ClusterRole

Create the AuthConfig:

+
kubectl -n authorino apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: authconfig-validator
+spec:
+  # admissionreview requests will be sent to this host name
+  hosts:
+  - authorino-authorino-authorization.authorino.svc
+
+  # because we're using a single authorino instance for the validating webhook and to protect the user applications,
+  # skip operations related to this one authconfig in the 'authorino' namespace
+  when:
+  - selector: context.request.http.body.@fromstr|request.object.metadata.namespace
+    operator: neq
+    value: authorino
+
+  # kubernetes admissionreviews carry info about the authenticated user
+  identity:
+  - name: k8s-userinfo
+    plain:
+      authJSON: context.request.http.body.@fromstr|request.userInfo
+
+  authorization:
+  - name: features
+    opa:
+      inlineRego: |
+        authconfig = json.unmarshal(input.context.request.http.body).request.object
+
+        forbidden { count(object.get(authconfig.spec, "identity", [])) == 0 }
+        forbidden { authconfig.spec.identity[_].anonymous }
+        forbidden { authconfig.spec.identity[_].kubernetes }
+        forbidden { authconfig.spec.identity[_].plain }
+        forbidden { authconfig.spec.authorization[_].kubernetes }
+        forbidden { authconfig.spec.response[_].wristband }
+
+        apiKey { authconfig.spec.identity[_].apiKey }
+
+        allow { count(authconfig.spec.identity) > 0; not forbidden }
+      allValues: true
+
+  - name: apikey-authn-requires-k8s-role-binding
+    priority: 1
+    when:
+    - selector: auth.authorization.features.apiKey
+      operator: eq
+      value: "true"
+    kubernetes:
+      user:
+        valueFrom: { authJSON: auth.identity.username }
+      resourceAttributes:
+        namespace: { value: authorino }
+        group: { value: authorino.kuadrant.io }
+        resource: { value: authconfigs-with-apikeys }
+        verb: { value: create }
+
+  - name: metadata-cache-ttl
+    priority: 1
+    opa:
+      inlineRego: |
+        invalid_ttl = input.auth.authorization.features.authconfig.spec.metadata[_].cache.ttl != 300
+        allow { not invalid_ttl }
+EOF
+
+

Define a ClusterRole to control the usage of protected features of Authorino:

+
kubectl apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: authorino-apikey
+rules:
+- apiGroups: ["authorino.kuadrant.io"]
+  resources: ["authconfigs-with-apikeys"] # not a real k8s resource
+  verbs: ["create"]
+EOF
+
+
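With the ClusterRole in place, the apikey-authn-requires-k8s-role-binding check in the AuthConfig above effectively asks the Kubernetes API server something equivalent to the following (illustrative kubectl counterpart of the SubjectAccessReview; kubectl will warn that the resource type is unknown, which is expected, since it is not a real resource):

kubectl auth can-i create authconfigs-with-apikeys.authorino.kuadrant.io -n authorino --as kubernetes-admin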

4. Create the ValidatingWebhookConfiguration

+
kubectl -n authorino apply -f -<<EOF
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: authconfig-authz
+  annotations:
+    cert-manager.io/inject-ca-from: authorino/authorino-ca-cert
+webhooks:
+- name: check-authconfig.authorino.kuadrant.io
+  clientConfig:
+    service:
+      namespace: authorino
+      name: authorino-authorino-authorization
+      port: 5001
+      path: /check
+  rules:
+  - apiGroups: ["authorino.kuadrant.io"]
+    apiVersions: ["v1beta1"]
+    resources: ["authconfigs"]
+    operations: ["CREATE", "UPDATE"]
+    scope: Namespaced
+  sideEffects: None
+  admissionReviewVersions: ["v1"]
+EOF
+
+

5. Try it out

+

Create a namespace:

+
kubectl create namespace myapp
+
+

With a valid AuthConfig

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: keycloak
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+EOF
+# authconfig.authorino.kuadrant.io/myapp-protection created
+
+

With forbidden features

+

Anonymous access:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"]}}\n"}},"spec":{"identity":null}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: anonymous-access
+    anonymous: {}
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"anonymous\":{},\"name\":\"anonymous-access\"}]}}\n"}},"spec":{"identity":[{"anonymous":{},"name":"anonymous-access"}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

Kubernetes TokenReview:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: k8s-tokenreview
+    kubernetes:
+      audiences: ["myapp"]
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"kubernetes\":{\"audiences\":[\"myapp\"]},\"name\":\"k8s-tokenreview\"}]}}\n"}},"spec":{"identity":[{"kubernetes":{"audiences":["myapp"]},"name":"k8s-tokenreview"}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

Plain identity extracted from context:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: envoy-jwt-authn
+    plain:
+      authJSON: context.metadata_context.filter_metadata.envoy\.filters\.http\.jwt_authn|verified_jwt
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"name\":\"envoy-jwt-authn\",\"plain\":{\"authJSON\":\"context.metadata_context.filter_metadata.envoy\\\\.filters\\\\.http\\\\.jwt_authn|verified_jwt\"}}]}}\n"}},"spec":{"identity":[{"name":"envoy-jwt-authn","plain":{"authJSON":"context.metadata_context.filter_metadata.envoy\\.filters\\.http\\.jwt_authn|verified_jwt"}}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

Kubernetes SubjectAccessReview:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: keycloak
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  authorization:
+  - name: k8s-subjectaccessreview
+    kubernetes:
+      user:
+        valueFrom: { authJSON: auth.identity.sub }
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"authorization\":[{\"kubernetes\":{\"user\":{\"valueFrom\":{\"authJSON\":\"auth.identity.sub\"}}},\"name\":\"k8s-subjectaccessreview\"}],\"hosts\":[\"myapp.io\"],\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}]}}\n"}},"spec":{"authorization":[{"kubernetes":{"user":{"valueFrom":{"authJSON":"auth.identity.sub"}}},"name":"k8s-subjectaccessreview"}],"identity":[{"name":"keycloak","oidc":{"endpoint":"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant"}}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

Festival Wristband tokens:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: wristband-signing-key
+stringData:
+  key.pem: |
+    -----BEGIN EC PRIVATE KEY-----
+    MHcCAQEEIDHvuf81gVlWGo0hmXGTAnA/HVxGuH8vOc7/8jewcVvqoAoGCCqGSM49
+    AwEHoUQDQgAETJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZxJKDysoGwn
+    cnUvHIu23SgW+Ee9lxSmZGhO4eTdQeKxMA==
+    -----END EC PRIVATE KEY-----
+type: Opaque
+---
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: keycloak
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  response:
+  - name: wristband
+    wristband:
+      issuer: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband
+      signingKeyRefs:
+      - algorithm: ES256
+        name: wristband-signing-key
+EOF
+# secret/wristband-signing-key created
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}],\"response\":[{\"name\":\"wristband\",\"wristband\":{\"issuer\":\"http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband\",\"signingKeyRefs\":[{\"algorithm\":\"ES256\",\"name\":\"wristband-signing-key\"}]}}]}}\n"}},"spec":{"identity":[{"name":"keycloak","oidc":{"endpoint":"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant"}}],"response":[{"name":"wristband","wristband":{"issuer":"http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband","signingKeyRefs":[{"algorithm":"ES256","name":"wristband-signing-key"}]}}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

With features that require additional permissions

+

Before adding the required permissions:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: api-key
+    apiKey:
+      selector:
+        matchLabels: { app: myapp }
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"apiKey\":{\"selector\":{\"matchLabels\":{\"app\":\"myapp\"}}},\"name\":\"api-key\"}]}}\n"}},"spec":{"identity":[{"apiKey":{"selector":{"matchLabels":{"app":"myapp"}}},"name":"api-key"}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Not authorized: unknown reason
+
+

Add the required permissions:

+
kubectl -n authorino apply -f -<<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: authorino-apikey
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: authorino-apikey
+subjects:
+- kind: User
+  name: kubernetes-admin
+EOF
+# rolebinding.rbac.authorization.k8s.io/authorino-apikey created
+
+

After adding the required permissions:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: api-key
+    apiKey:
+      selector:
+        matchLabels: { app: myapp }
+EOF
+# authconfig.authorino.kuadrant.io/myapp-protection configured
+
+

With features that require specific property validation

+

Invalid:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: keycloak
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  metadata:
+  - name: external-source
+    http:
+      endpoint: http://metadata.io
+      method: GET
+    cache:
+      key: { value: global }
+      ttl: 60
+EOF
+# Error from server: error when applying patch:
+# {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"authorino.kuadrant.io/v1beta1\",\"kind\":\"AuthConfig\",\"metadata\":{\"annotations\":{},\"name\":\"myapp-protection\",\"namespace\":\"myapp\"},\"spec\":{\"hosts\":[\"myapp.io\"],\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}],\"metadata\":[{\"cache\":{\"key\":{\"value\":\"global\"},\"ttl\":60},\"http\":{\"endpoint\":\"http://metadata.io\",\"method\":\"GET\"},\"name\":\"external-source\"}]}}\n"}},"spec":{"identity":[{"name":"keycloak","oidc":{"endpoint":"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant"}}],"metadata":[{"cache":{"key":{"value":"global"},"ttl":60},"http":{"endpoint":"http://metadata.io","method":"GET"},"name":"external-source"}]}}
+# to:
+# Resource: "authorino.kuadrant.io/v1beta1, Resource=authconfigs", GroupVersionKind: "authorino.kuadrant.io/v1beta1, Kind=AuthConfig"
+# Name: "myapp-protection", Namespace: "myapp"
+# for: "STDIN": admission webhook "check-authconfig.authorino.kuadrant.io" denied the request: Unauthorized
+
+

Valid:

+
kubectl -n myapp apply -f -<<EOF
+apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: myapp-protection
+spec:
+  hosts:
+  - myapp.io
+  identity:
+  - name: keycloak
+    oidc:
+      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+  metadata:
+  - name: external-source
+    http:
+      endpoint: http://metadata.io
+      method: GET
+    cache:
+      key: { value: global }
+      ttl: 300
+EOF
+# authconfig.authorino.kuadrant.io/myapp-protection configured
+
+

Cleanup

+

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

+
kind delete cluster --name authorino-tutorial
+
+

Otherwise, delete the resources created in each step:

+
kubectl delete namespace myapp
+kubectl delete namespace authorino
+kubectl delete clusterrole/authorino-apikey
+kubectl delete namespace keycloak
+
+

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

+
kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml
+
\ No newline at end of file
diff --git a/authorino/index.html b/authorino/index.html new file mode 100644 index 00000000..82c1d50c --- /dev/null +++ b/authorino/index.html

Authorino

+

Kubernetes-native authorization service for tailor-made Zero Trust API security.

+

A lightweight Envoy external authorization server fully manageable via Kubernetes Custom Resources.
+JWT authentication, API key, mTLS, pattern-matching authz, OPA, K8s SA tokens, K8s RBAC, external metadata fetching, and more, with minimal to no coding at all, and no rebuilding of your applications.

+

Authorino is not about inventing anything new. It's about making the best things about auth out there easy and simple to use. Authorino is multi-tenant, it's cloud-native and it's open source.

+

License +Unit Tests +End-to-end Tests +Smoke Tests

+

Table of contents

+ +

Getting started

+
    +
  1. Deploy with the Authorino Operator
  2. +
  3. Setup Envoy proxy and the external authorization filter
  4. +
  5. Apply an Authorino AuthConfig custom resource
  6. +
  7. Obtain an authentication token and start sending requests
  8. +
+

The full Getting started page of the docs provides details for the steps above, as well as information about requirements and next steps.

+

Or try out our Hello World example.

+

For general information about protecting your service using Authorino, check out the docs.

+

Use-cases

+

The User guides section of the docs gathers several AuthN/AuthZ use-cases as well as the instructions to implement them using Authorino. A few examples are:

+ +

How it works

+

Authorino enables hybrid API security, with usually no code changes required to your application, tailor-made for your own combination of authentication standards and protocols and authorization policies of choice.

+

Authorino implements Envoy Proxy's external authorization gRPC protocol and is part of the Red Hat Kuadrant architecture.

+

Under the hood, Authorino is based on Kubernetes Custom Resource Definitions and the Operator pattern.

+

Bootstrap and configuration:

+
    +
  1. Deploy the service/API to be protected ("Upstream"), Authorino and Envoy
  2. +
  3. Write and apply an Authorino AuthConfig Custom Resource associated to the public host of the service
  4. +
+

Request-time:

+

[Figure: How it works]

+
    +
  1. A user or service account ("Consumer") obtains an access token to consume resources of the Upstream service, and sends a request to the Envoy ingress endpoint
  2. +
  3. The Envoy proxy establishes a fast gRPC connection with Authorino, carrying data of the HTTP request (context info), which causes Authorino to look up an AuthConfig Custom Resource to enforce (pre-cached)
  4. +
  5. Identity verification (authentication) phase - Authorino verifies the identity of the consumer; at least one authentication method/identity provider must succeed
  6. +
  7. External metadata phase - Authorino fetches additional metadata for the authorization from external sources (optional)
  8. +
  9. Policy enforcement (authorization) phase - Authorino takes as input a JSON composed out of context data, resolved identity object and fetched additional metadata from previous phases, and triggers the evaluation of user-defined authorization policies
  10. +
  11. Response (metadata-out) phase – Authorino builds user-defined custom responses (dynamic JSON objects and/or Festival Wristband OIDC tokens), to be supplied back to the client and/or upstream service within added HTTP headers or as Envoy Dynamic Metadata (optional)
  12. +
  13. Callbacks phase – Authorino sends callbacks to specified HTTP endpoints (optional)
  14. +
  15. Authorino and Envoy settle the authorization protocol with either OK/NOK response
  16. +
  17. If authorized, Envoy triggers other HTTP filters in the chain (if any), injecting any dynamic metadata returned by Authorino, and ultimately forwards the request to the Upstream
  18. +
  19. The Upstream serves the requested resource to the consumer
  20. +
+
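As a sketch of the wiring in step 2, the Envoy listener would include an external authorization HTTP filter pointing at Authorino over gRPC. The cluster name below is an assumption for illustration; complete configurations are in the Getting started and Architecture pages:

```yaml
http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    failure_mode_allow: false  # deny requests if Authorino cannot be reached
    grpc_service:
      envoy_grpc:
        cluster_name: authorino  # assumed Envoy cluster pointing to the Authorino service
      timeout: 1s
```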
+ More + + The [Architecture](./docs/architecture.md) section of the docs covers details of protecting your APIs with Envoy and Authorino, including information about topology (centralized gateway, centralized authorization service or sidecars), deployment modes (cluster-wide reconciliation vs. namespaced instances), a specification of Authorino's [`AuthConfig`](./docs/architecture.md#the-authorino-authconfig-custom-resource-definition-crd) Custom Resource Definition (CRD) and more. + + You will also find in that section information about what happens in request-time (aka Authorino's [Auth Pipeline](./docs/architecture.md#the-auth-pipeline-aka-enforcing-protection-in-request-time)) and how to leverage the [Authorization JSON](./docs/architecture.md#the-authorization-json) for writing policies, dynamic responses and other features of Authorino. +
+ +

List of features

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureStage
Identity verification & authenticationJOSE/JWT validation (OpenID Connect)Ready
OAuth 2.0 Token Introspection (opaque tokens)Ready
Kubernetes TokenReview (SA tokens)Ready
OpenShift User-echo endpointIn analysis
API key authenticationReady
mTLS authenticationReady
HMAC authenticationPlanned (#9)
Plain (resolved beforehand and injected in the payload)Ready
Anonymous accessReady
Ad hoc external metadata fetchingOpenID Connect User InfoReady
UMA-protected resource attributesReady
HTTP GET/GET-by-POSTReady
Policy enforcement/authorizationJSON pattern matching (e.g. JWT claims, request attributes checking)Ready
OPA/Rego policies (inline and pull from registry)Ready
Kubernetes SubjectAccessReview (resource and non-resource attributes)Ready
Authzed/SpiceDBReady
Keycloak Authorization Services (UMA-compliant Authorization API)In analysis
Custom responsesFestival Wristbands tokens (token normalization, Edge Authentication Architecture)Ready
JSON injection (header injection, Envoy Dynamic Metadata)Ready
Plain text value (header injection)Ready
Custom response status code/messages (e.g. redirect)Ready
CallbacksHTTP endpointsReady
CachingOpenID Connect and User-Managed Access configsReady
JSON Web Keys (JWKs) and JSON Web Key Sets (JWKS)Ready
Access tokensReady
External metadataReady
Precompiled Rego policiesReady
Policy evaluationReady
Sharding (lookup performance, multitenancy)Ready
+ +

For a detailed description of the features above, refer to the Features page.

+

FAQ

+
+ Do I need to deploy Envoy? + + Authorino is built from the ground up to work well with Envoy. It is strongly recommended that you leverage Envoy alongside Authorino. That said, it is possible to use Authorino without Envoy. + + Authorino implements Envoy's [external authorization](https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/ext_authz) gRPC protocol and therefore will accept any client request that complies. + + Authorino also provides a second interface for [raw HTTP authorization](./docs/architecture.md#raw-http-authorization-interface), suitable for use with Kubernetes ValidatingWebhook and other integrations (e.g. other proxies). + + The only attribute of the authorization request that is strictly required is the host name. (See [Host lookup](./docs/architecture.md#host-lookup) for more information.) The other attributes, such as method, path, headers, etc, may also be required, depending on each `AuthConfig`. In the case of the gRPC [`CheckRequest`](https://pkg.go.dev/github.com/envoyproxy/go-control-plane/envoy/service/auth/v3?utm_source=gopls#CheckRequest) method, the host is supplied in `Attributes.Request.Http.Host` and alternatively in `Attributes.ContextExtensions["host"]`. For raw HTTP authorization requests, the host must be supplied in the `Host` HTTP header. + + Check out [Kuadrant](https://github.com/kuadrant/kuadrant-controller) for easy-to-use Envoy and Authorino deployment & configuration for API management use-cases, using Kubernetes Custom Resources. +
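For the raw HTTP authorization interface mentioned above, a minimal request sketch, assuming an Authorino service exposing the interface on port 5001 at the /check path (service name and token are illustrative; the Host header is the one strictly required attribute):

```sh
curl -i http://authorino-authorino-authorization:5001/check \
  -H 'Host: myapp.example.com' \
  -H 'Authorization: Bearer <access-token>'
```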
+ +
+ Is Authorino an Identity Provider (IdP)? + + No, Authorino is not an Identity Provider (IdP). Nor is it an auth server of any kind, such as an OAuth2 server, an OpenID Connect (OIDC) server, or a Single Sign-On (SSO) server. + + Authorino is not an identity broker either. It can verify access tokens from multiple trusted sources of identity and protocols, but it will not negotiate authentication flows for non-authenticated access requests. Some tricks can nonetheless be done, for example, to [redirect unauthenticated users to a login page](./docs/user-guides/deny-with-redirect-to-login.md). + + For an excellent auth server that checks all the boxes above, check out [Keycloak](https://www.keycloak.org). +
+ +
+ How does Authorino compare to Keycloak? + + Keycloak is a proper auth server and identity provider (IdP). It offers a huge set of features for managing identities, identity sources with multiple user federation options, and a platform for authentication and authorization services. + + Keycloak exposes authenticators that implement protocols such as OpenID Connect. This is a one-time flow that establishes the delegation of power to a client, for a short period of time. To be consistent with Zero Trust security, you want a validator to verify the short-lived tokens in every request that tries to reach your protected service/resource. This step, repeated on every request, can avoid heavy lookups into big tables of tokens and leverage cached authorization policies for fast in-memory evaluation. This is where Authorino comes in. + + Authorino verifies and validates Keycloak-issued ID tokens. OpenID Connect Discovery is used to request and cache JSON Web Key Sets (JWKS), used to verify the signature of the tokens without having to contact the Keycloak server again, or look up a table of credentials. Moreover, users' long-lived credentials stay safe, rather than being spread in hops across the network. + + You can also use Keycloak for storing auth-relevant resource metadata. These can be fetched by Authorino in request-time, to be combined into your authorization policies. See Keycloak Authorization Services and User-Managed Access (UMA) support, as well as Authorino's [UMA external metadata](./docs/features.md#user-managed-access-uma-resource-registry-metadatauma) counterpart. +
+ +
+ Why doesn't Authorino handle OAuth flows? + + It has to do with trust. OAuth grants are supposed to be negotiated directly between whoever owns the long-lived credentials on one hand (users, service accounts), and the trustworthy auth server that receives those credentials – ideally with a minimum number of hops in the middle – and exchanges them for short-lived access tokens, on the other hand. + + There are use-cases for Authorino running at the edge (e.g. Edge Authentication Architecture and token normalization), but in most cases Authorino should be seen as a last-mile component that provides decoupled identity verification and authorization policy enforcement to protected services in request-time. In this sense, the OAuth grant is a pre-flight exchange that happens once, as directly and safely as possible, whereas auth enforcement is kept lightweight and efficient. +
+ +
+ Where does Authorino store users and roles? + + Authorino does not store users, roles, role bindings, access control lists, or any raw authorization data. Authorino handles policies, and even these policies can be stored elsewhere (as opposed to being stated inline inside an Authorino `AuthConfig` CR). + + Authorino evaluates policies for stateless authorization requests. Any additional context is resolved either from the provided payload or from static definitions inside the policies. That includes extracting user information from a JWT or client TLS certificate, requesting user metadata for opaque authentication tokens (e.g. API keys) from the trusted sources actually storing that content, obtaining synchronous HTTP metadata from services, etc. + + In the case of authentication with API keys, as well as its derivative to model HTTP Basic Auth, user data are stored in Kubernetes `Secret`s. The secret's keys, annotations and labels are usually the structures used to organize the data that a policy evaluated in Authorino may later require. Strictly, those are not Authorino data structures. +
+ +
+ Can't I just use Envoy JWT Authentication and RBAC filters? + + Envoy's [JWT Authentication](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/jwt_authn/v3/config.proto.html) works much like Authorino's [JOSE/JWT verification and validation for OpenID Connect](./docs/features.md#openid-connect-oidc-jwtjose-verification-and-validation-identityoidc). In both cases, the JSON Web Key Sets (JWKS) to verify the JWTs are auto-loaded and cached to be used in request-time. Moreover, you can configure details such as where to extract the JWT from the HTTP request (header, param or cookie) and do some cool tricks regarding how dynamic metadata based on JWT claims can be injected into consecutive filters in the chain. + + However, in terms of authorization, while Envoy's implementation essentially allows checking the list of audiences (`aud` JWT claim), Authorino opens up a lot more options, such as pattern-matching rules with operators and conditionals, built-in OPA and other methods of evaluating authorization policies. + + Authorino also allows combining JWT authentication with other types of authentication to support different sources of identity and groups of users such as API keys, Kubernetes tokens, OAuth opaque tokens, etc. + + In summary, Envoy's JWT Authentication and Envoy RBAC filter are excellent solutions for simple use-cases where JWTs from a single issuer are the only authentication method you plan to support and few or no authorization rules suffice. On the other hand, if you need to integrate more identity sources, different types of authentication, authorization policies, etc, you might want to consider Authorino. +
+ +
+ Should I use Authorino if I already have Istio configured? + + Istio is a great solution for managing service meshes. It delivers an excellent platform with an interesting layer of abstraction on top of Envoy proxy's virtual omnipresence within the mesh. + + There are lots of similarities, but also complementarity, between Authorino and Istio, and [Istio Authorization](https://istio.io/latest/docs/concepts/security/#authorization) in particular. + + Istio provides a simple way to enable features that are, in many cases, features of Envoy, such as authorization based on JWTs, authorization based on attributes of the request, and activation of external authorization services, without having to deal with complex Envoy config files. See [Kuadrant](https://github.com/kuadrant/kuadrant-controller) for a similar approach, nonetheless leveraging features of Istio as well. + + Authorino is an Envoy-compatible external authorization service. One can use Authorino with or without Istio. + + In particular, [Istio Authorization Policies](https://istio.io/latest/docs/reference/config/security/authorization-policy/) can be seen, in terms of functionality and expressiveness, as a subset of one type of authorization policies supported by Authorino, the [JSON pattern-matching authorization](./docs/features.md#json-pattern-matching-authorization-rules-authorizationjson) policies. However, while Istio is heavily focused on specific use cases of API Management, offering a relatively limited list of [supported attribute conditions](https://istio.io/latest/docs/reference/config/security/conditions/), Authorino is more generic, allowing you to express authorization rules for a wider spectrum of use cases – ACLs, RBAC, ABAC, etc, pretty much counting on any attribute of the Envoy payload, identity object and external metadata available. + + Authorino also provides built-in OPA authorization, several other methods of authentication and identity verification (e.g. Kubernetes token validation, API key-based authentication, OAuth token introspection, OIDC-discoverable JWT verification, etc), and features like fetching of external metadata (HTTP services, OIDC userinfo, UMA resource data), token normalization, wristband tokens and dynamic responses. These all can be used independently or combined, in a simple and straightforward Kubernetes-native fashion. + + In summary, one might value Authorino when looking for a policy enforcer that offers: + 1. multiple supported methods and protocols for rather hybrid authentication, encompassing future and legacy auth needs; + 2. broader expressiveness and more functionalities for the authorization rules; + 3. authentication and authorization in one single declarative manifest; + 4. capability to fetch auth metadata from external sources on-the-fly; + 5. built-in OPA module; + 6. easy token normalization and/or aiming for Edge Authentication Architecture (EAA). + + The good news is that, if you have Istio configured, then you have Envoy and the whole platform for wiring Authorino up if you want to. 😉 +
+ +
+ Do I have to learn OPA/Rego language to use Authorino? + + No, you do not. However, if you are comfortable with [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/) from Open Policy Agent (OPA), there are some quite interesting things you can do in Authorino, just as you would in any OPA server or OPA plugin, but leveraging Authorino's [built-in OPA module](./docs/features.md#open-policy-agent-opa-rego-policies-authorizationopa) instead. Authorino's OPA module is compiled as part of Authorino's code directly from the Golang packages, and imposes no extra latency to the evaluation of your authorization policies. Even the policies themselves are pre-compiled in reconciliation-time, for fast evaluation afterwards, in request-time. + + On the other hand, if you do not want to learn Rego or in any case would like to combine it with declarative and Kubernetes-native authN/authZ spec for your services, Authorino does complement OPA with at least two other methods for expressing authorization policies – i.e. [JSON pattern-matching authorization rules](./docs/features.md#json-pattern-matching-authorization-rules-authorizationjson) and [Kubernetes SubjectAccessReview](./docs/features.md#kubernetes-subjectaccessreview-authorizationkubernetes), the latter allowing you to rely completely on Kubernetes RBAC. + + You can break down, mix and combine these methods and technologies in as many authorization policies as you want, potentially applying them according to specific conditions. Authorino will trigger the evaluation of concurrent policies in parallel, aborting the context if any of the processes denies access. + + Authorino also packages well-established industry standards and protocols for identity verification (JOSE/JWT validation, OAuth token introspection, Kubernetes TokenReview) and ad-hoc request-time metadata fetching (OIDC userinfo, User-Managed Access (UMA)), and corresponding layers of caching, without which such functionalities would have to be implemented by code. +
+ +
+ Can I use Authorino to protect non-REST APIs? + + Yes, you can. In principle, the API format (REST, gRPC, GraphQL, etc) should not matter for the authN/authZ enforcer. There are a couple of points to consider, though. + + While REST APIs are designed in a way that, in most cases, the information usually needed for the evaluation of authorization policies is available in the metadata of the HTTP request (method, path, headers), other API formats quite often will require processing of the HTTP body. By default, Envoy's external authorization HTTP filter will not forward the body of the request to Authorino; to change that, enable the `with_request_body` option in the Envoy configuration for the external authorization filter. E.g.: +
with_request_body:
+  max_request_bytes: 1024
+  allow_partial_message: true
+  pack_as_bytes: true
+
+ + Additionally, when enabling the request body passed in the payload to Authorino, parsing of the content should be of concern as well. Authorino provides easy access to attributes of the HTTP request, parsed as part of the [Authorization JSON](./docs/architecture.md#the-authorization-json), however the body of the request is passed as a string and should be parsed by the user according to each case. + + Check out Authorino [OPA authorization](./docs/features.md#open-policy-agent-opa-rego-policies-authorizationopa) and the Rego [Encoding](https://www.openpolicyagent.org/docs/latest/policy-reference/#encoding) functions for options to parse serialized JSON, YAML and URL-encoded params. For XML transformation, an external parsing service connected via Authorino's [HTTP GET/GET-by-POST external metadata](./docs/features.md#http-getget-by-post-metadatahttp) might be required. +
+ +
+ Can I run Authorino other than on Kubernetes? + + As of today, no, you cannot, or at least it wouldn't suit production requirements. +
+ +
+ Do I have to be admin of the cluster to install Authorino? + + To install the Authorino Custom Resource Definition (CRD) and to define cluster roles required by the Authorino service, admin privilege to the Kubernetes cluster is required. This step happens only once per cluster and is usually equivalent to installing the [Authorino Operator](https://github.com/kuadrant/authorino-operator). + + Thereafter, deploying instances of the Authorino service and applying `AuthConfig` custom resources to a namespace depend on the permissions set by the cluster administrator – either directly by editing the bindings in the cluster's RBAC, or via options of the operator. In most cases, developers will be granted permissions to create and manage `AuthConfig`s, and sometimes to deploy their own instances of Authorino. +
+ +
+ Is it OK to store AuthN/AuthZ configs as Kubernetes objects? + + Authorino's API checks all the bullets to be [aggregated to the Kubernetes cluster APIs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#should-i-add-a-custom-resource-to-my-kubernetes-cluster), and therefore using Custom Resource Definitions (CRDs) and the [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator) has always been an easy design decision. + + By merging the definitions of service authN/authZ into the control plane, Authorino `AuthConfig` resources can be thought of as extensions of the specs of the desired state of services regarding data flow security. The Authorino custom controllers, built into the authorization service, are the agents that read from that desired state and reconcile the processes operating in the data plane. + + Authorino is declarative and seamless for developers and cluster administrators managing the state of security of the applications running in the server, who are used to tools such as `kubectl`, the Kubernetes UI and its dashboards. Instead of learning about yet another configuration API format, Authorino users can jump straight to applying and editing YAML or JSON structures they already know, in a way that things such as `spec`, `status`, `namespace` and `labels` have the meaning they are expected to have, and docs are as close as `kubectl explain`. Moreover, Authorino does not pile up any other redundant layers of APIs, event-processing, RBAC, transformation and validation webhooks, etc. It is Kubernetes at its best. + + In terms of scale, Authorino `AuthConfig`s should grow proportionally to the number of protected services, virtually limited by nothing but the Kubernetes API data storage, while [namespace division](./docs/architecture.md#cluster-wide-vs-namespaced-instances) and [label selectors](./docs/architecture.md#sharding) help adjust horizontally and keep distributed. + + In other words, there are lots of benefits of using Kubernetes custom resources and custom controllers, and unless you are planning on bursting your server with more services than it can keep record of, it is totally 👍 to store your AuthN/AuthZ configs as cluster API objects. +
+ +
+ Can I use Authorino for rate limiting? + + You can, but you shouldn't. Check out instead [Limitador](https://github.com/kuadrant/limitador), for simple and efficient global rate limiting. Combine it with Authorino and Authorino's support for [Envoy Dynamic Metadata](./docs/features.md#envoy-dynamic-metadata) for authenticated rate limiting. +
+ +

Benchmarks

+

Configuration of the tests (Authorino features):

| Performance test         | Identity | Metadata      | Authorization                            | Response |
|--------------------------|:--------:|:-------------:|:----------------------------------------:|:--------:|
| ReconcileAuthConfig      | OIDC/JWT | UserInfo, UMA | OPA (inline Rego)                        | -        |
| AuthPipeline             | OIDC/JWT | -             | JSON pattern-matching (JWT claim check)  | -        |
| APIKeyAuthn              | API key  | N/A           | N/A                                      | N/A      |
| JSONPatternMatchingAuthz | N/A      | N/A           | JSON pattern-matching                    | N/A      |
| OPAAuthz                 | N/A      | N/A           | OPA (inline Rego)                        | N/A      |

+

Platform: linux/amd64
+CPU: Intel® Xeon® Platinum 8370C 2.80GHz
+Cores: 1, 4, 10

+

Results: +

ReconcileAuthConfig:
+
+        │   sec/op    │     B/op     │  allocs/op  │
+*         1.533m ± 2%   264.4Ki ± 0%   6.470k ± 0%
+*-4       1.381m ± 6%   264.5Ki ± 0%   6.471k ± 0%
+*-10      1.563m ± 5%   270.2Ki ± 0%   6.426k ± 0%
+geomean   1.491m        266.4Ki        6.456k
+
+AuthPipeline:
+
+        │   sec/op    │     B/op     │ allocs/op  │
+*         388.0µ ± 2%   80.70Ki ± 0%   894.0 ± 0%
+*-4       348.4µ ± 5%   80.67Ki ± 2%   894.0 ± 3%
+*-10      356.4µ ± 2%   78.97Ki ± 0%   860.0 ± 0%
+geomean   363.9µ        80.11Ki        882.5
+
+APIKeyAuthn:
+
+        │   sec/op    │    B/op      │ allocs/op  │
+*         3.246µ ± 1%   480.0 ± 0%     6.000 ± 0%
+*-4       3.111µ ± 0%   480.0 ± 0%     6.000 ± 0%
+*-10      3.091µ ± 1%   480.0 ± 0%     6.000 ± 0%
+geomean   3.148µ        480.0          6.000
+
+OPAAuthz vs JSONPatternMatchingAuthz:
+
+        │   OPAAuthz   │      JSONPatternMatchingAuthz       │
+        │    sec/op    │   sec/op     vs base                │
+*         87.469µ ± 1%   1.797µ ± 1%  -97.95% (p=0.000 n=10)
+*-4       95.954µ ± 3%   1.766µ ± 0%  -98.16% (p=0.000 n=10)
+*-10      96.789µ ± 4%   1.763µ ± 0%  -98.18% (p=0.000 n=10)
+geomean    93.31µ        1.775µ       -98.10%
+
+        │   OPAAuthz    │      JSONPatternMatchingAuthz      │
+        │     B/op      │    B/op     vs base                │
+*         28826.00 ± 0%   64.00 ± 0%  -99.78% (p=0.000 n=10)
+*-4       28844.00 ± 0%   64.00 ± 0%  -99.78% (p=0.000 n=10)
+*-10      28862.00 ± 0%   64.00 ± 0%  -99.78% (p=0.000 n=10)
+geomean    28.17Ki        64.00       -99.78%
+
+        │   OPAAuthz   │      JSONPatternMatchingAuthz      │
+        │  allocs/op   │ allocs/op   vs base                │
+*         569.000 ± 0%   2.000 ± 0%  -99.65% (p=0.000 n=10)
+*-4       569.000 ± 0%   2.000 ± 0%  -99.65% (p=0.000 n=10)
+*-10      569.000 ± 0%   2.000 ± 0%  -99.65% (p=0.000 n=10)
+geomean     569.0        2.000       -99.65%
+

+

Contributing

+

If you are interested in contributing to Authorino, please refer to the Developer's guide for info about the stack and requirements, workflow, policies and Code of Conduct.

+

Join us on kuadrant.slack.com for live discussions about the roadmap and more.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000..655b5641 --- /dev/null +++ b/index.html @@ -0,0 +1,2080 @@ + + + + + + + + + + + + + + + + + + + + + + Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Overview

+

Kuadrant brings together Gateway API and Open Cluster Management to help you scale, load-balance and secure your Ingress Gateways as a key part of your application connectivity, in single or multi-cluster environments.

+

Single-cluster

+

Kuadrant can be used to protect ingress gateways based on Gateway API1 with policy enforcement (rate limit and auth) in a Kubernetes cluster.

+
+ Topology + + Single cluster architecture +
+ +

Multi-cluster

+

In the multi-cluster environment2, you can utilize Kuadrant to manage DNS-based north-south connectivity, which can provide global load balancing underpinned by your cluster topology. Kuadrant's multi-cluster functionality also ensures gateway and policy consistency across clusters, focusing on critical aspects like TLS and application health.

+
+ Topology + + Multi cluster architecture +
+ +

Component Documentation

+
    +
  • Kuadrant Operator
    + Install and manage the lifecycle of the Kuadrant deployments and core Kuadrant policies for the data plane.
  • +
  • Authorino
    + Flexible, cloud-native, and lightweight external authorization server to implement identity verification (Kubernetes TokenReview, OIDC, OAuth2, API key, mTLS) and authorization policy rules (Kubernetes SubjectAccessReview, JWT claims, OPA, request pattern-matching, resource metadata, RBAC, ReBAC, ABAC, etc).
  • +
  • Limitador
    + Fast rate-limiter implemented in Rust, which can be used as a library or as a service plugged into the API gateway.
  • +
  • Multicluster Gateway Controller
    + Manage multi-cluster gateways, integrate with DNS providers, TLS providers and OCM (Open Cluster Management).
  • +
+
+
+
    +
  1. +

    Supported implementations: Istio, OpenShift Service Mesh

    +
  2. +
  3. +

    Based on Open Cluster Management

    +
  4. +
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/kuadrant-operator/doc/development/index.html b/kuadrant-operator/doc/development/index.html new file mode 100644 index 00000000..6a790ec3 --- /dev/null +++ b/kuadrant-operator/doc/development/index.html @@ -0,0 +1,2539 @@ + + + + + + + + + + + + + + + + + + + + + + + + Developer's Guide - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Development Guide

+ + + + + + +

Technology stack required for development

+ +

Build

+
make
+
+

Run locally

+

You need an active session open to a Kubernetes cluster.

+

Optionally, run kind and deploy kuadrant deps

+
make local-env-setup
+
+

Then, run the operator locally

+
make run
+
+

Deploy the operator in a deployment object

+
make local-setup
+
+

List of tasks done by the command above:

+
    +
  • Create local cluster using kind
  • +
  • Build kuadrant docker image from the current working directory
  • +
  • Deploy Kuadrant control plane (including istio, authorino and limitador)
  • +
+

TODO: customize with custom authorino and limitador git refs. +Make sure Makefile propagates variable to deploy target

+

Deploy kuadrant operator using OLM

+

You can deploy kuadrant using OLM by running just a few commands. +No need to build any image. The Kuadrant engineering team provides latest and +release version tagged images. They are available in +the Quay.io/Kuadrant image repository.

+

Create kind cluster

+
make kind-create-cluster
+
+

Deploy OLM system

+
make install-olm
+
+

Deploy kuadrant using OLM. The make deploy-catalog target accepts the following variables:

+ + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault value
CATALOG_IMGKuadrant operator catalog image URLquay.io/kuadrant/kuadrant-operator-catalog:latest
+
make deploy-catalog [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]
+
+

Build custom OLM catalog

+

If you want to deploy (using OLM) a custom kuadrant operator, you need to build your own catalog. +Furthermore, if you want to deploy a custom limitador or authorino operator, you also need +to build your own catalog. The kuadrant operator bundle includes the authorino or limitador operator +dependency version, hence using a version other than latest requires a custom kuadrant operator +bundle and a custom catalog including the custom bundle.

+

Build kuadrant operator bundle image

+

The make bundle target accepts the following variables:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault valueNotes
IMGKuadrant operator image URLquay.io/kuadrant/kuadrant-operator:latestTAG var could be used to build this URL, defaults to latest if not provided
VERSIONBundle version0.0.0
LIMITADOR_OPERATOR_BUNDLE_IMGLimitador operator bundle URLquay.io/kuadrant/limitador-operator-bundle:latestLIMITADOR_OPERATOR_VERSION var could be used to build this, defaults to latest if not provided
AUTHORINO_OPERATOR_BUNDLE_IMGAuthorino operator bundle URLquay.io/kuadrant/authorino-operator-bundle:latestAUTHORINO_OPERATOR_VERSION var could be used to build this, defaults to latest if not provided
RELATED_IMAGE_WASMSHIMWASM shim image URLoci://quay.io/kuadrant/wasm-shim:latestWASM_SHIM_VERSION var could be used to build this, defaults to latest if not provided
+
    +
  • Build the bundle manifests
  • +
+
make bundle [IMG=quay.io/kuadrant/kuadrant-operator:latest] \
+            [VERSION=0.0.0] \
+            [LIMITADOR_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest] \
+            [AUTHORINO_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/authorino-operator-bundle:latest] \
+            [RELATED_IMAGE_WASMSHIM=oci://quay.io/kuadrant/wasm-shim:latest]
+
+
    +
  • Build the bundle image from the manifests
  • +
+ + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault value
BUNDLE_IMGKuadrant operator bundle image URLquay.io/kuadrant/kuadrant-operator-bundle:latest
+
make bundle-build [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest]
+
+
    +
  • Push the bundle image to a registry
  • +
+ + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault value
BUNDLE_IMGKuadrant operator bundle image URLquay.io/kuadrant/kuadrant-operator-bundle:latest
+
make bundle-push [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest]
+
+

Frequently, you may need to build a custom kuadrant bundle with the default (latest) Limitador and +Authorino bundles. These are the example commands to build the manifests, build the bundle image +and push it to the registry.

+

In the example, a new kuadrant operator bundle version 0.8.0 will be created that references +the kuadrant operator image quay.io/kuadrant/kuadrant-operator:v0.5.0 and latest Limitador and +Authorino bundles.

+
# manifests
+make bundle IMG=quay.io/kuadrant/kuadrant-operator:v0.5.0 VERSION=0.8.0
+
+# bundle image
+make bundle-build BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:my-bundle
+
+# push bundle image
+make bundle-push BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:my-bundle
+
+

Build custom catalog

+

The catalog's format will be File-based Catalog.

+

Make sure all the required bundles are pushed to the registry. This is required by the opm tool.

+

The make catalog target accepts the following variables:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault value
BUNDLE_IMGKuadrant operator bundle image URLquay.io/kuadrant/kuadrant-operator-bundle:latest
LIMITADOR_OPERATOR_BUNDLE_IMGLimitador operator bundle URLquay.io/kuadrant/limitador-operator-bundle:latest
AUTHORINO_OPERATOR_BUNDLE_IMGAuthorino operator bundle URLquay.io/kuadrant/authorino-operator-bundle:latest
+
make catalog [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest] \
+            [LIMITADOR_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest] \
+            [AUTHORINO_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/authorino-operator-bundle:latest]
+
+
    +
  • Build the catalog image from the manifests
  • +
+ + + + + + + + + + + + + + + +
Makefile VariableDescriptionDefault value
CATALOG_IMGKuadrant operator catalog image URLquay.io/kuadrant/kuadrant-operator-catalog:latest
+
make catalog-build [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]
+
+
    +
  • Push the catalog image to a registry
  • +
+
make catalog-push [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]
+
+

You can try out your custom catalog image following the steps of the +Deploy kuadrant operator using OLM section.

+

Cleaning up

+
make local-cleanup
+
+

Run tests

+

Unittests

+
make test-unit
+
+

Optionally, add the TEST_NAME makefile variable to run a specific test

+
make test-unit TEST_NAME=TestLimitIndexEquals
+
+

or even a subtest

+
make test-unit TEST_NAME=TestLimitIndexEquals/empty_indexes_are_equal
+
+

Integration tests

+

You need an active session open to a Kubernetes cluster.

+

Optionally, run kind and deploy kuadrant deps

+
make local-env-setup
+
+

Run integration tests

+
make test-integration
+
+

All tests

+

You need an active session open to a Kubernetes cluster.

+

Optionally, run kind and deploy kuadrant deps

+
make local-env-setup
+
+

Run all tests

+
make test
+
+

Lint tests

+
make run-lint
+
+

(Un)Install Kuadrant CRDs

+

You need an active session open to a Kubernetes cluster.

+

Remove CRDs

+
make uninstall
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/kuadrant-operator/doc/images/kuadrant-architecture.svg b/kuadrant-operator/doc/images/kuadrant-architecture.svg new file mode 100644 index 00000000..43e68124 --- /dev/null +++ b/kuadrant-operator/doc/images/kuadrant-architecture.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/kuadrant-operator/doc/logging/index.html b/kuadrant-operator/doc/logging/index.html new file mode 100644 index 00000000..6b1a256e --- /dev/null +++ b/kuadrant-operator/doc/logging/index.html @@ -0,0 +1,1988 @@ + + + + + + + + + + + + + + + + + + + + + + + + Logging - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Logging

+

The kuadrant operator outputs 3 levels of log messages (from lowest to highest level):

+
    +
  1. debug
  2. +
  3. info (default)
  4. +
  5. error
  6. +
+

info logging is restricted to high-level information. Actions like creating, deleting or updating Kubernetes resources will be logged with reduced details about the corresponding objects, and without any further detailed logs of the steps in between, except for errors.

+

Only debug logging will include processing details.

+

To configure the desired log level, set the environment variable LOG_LEVEL to one of the supported values listed above. The default log level is info.

+

Apart from log level, the operator can output messages to the logs in 2 different formats:

+
    +
  • production (default): each line is a parseable JSON object with properties {"level":string, "ts":int, "msg":string, "logger":string, extra values...}
  • +
  • development: more human-readable outputs, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\t<log-level>\t<logger>\t<message>\t{extra-values-as-json}
  • +
+

To configure the desired log mode, set the environment variable LOG_MODE to one of the supported values listed above. The default log mode is production.
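For example, a sketch of setting both variables on the operator's Deployment (container name assumed for illustration):

```yaml
# Excerpt of a kuadrant-operator Deployment spec
spec:
  template:
    spec:
      containers:
      - name: manager  # assumed container name
        env:
        - name: LOG_LEVEL
          value: debug        # debug | info | error
        - name: LOG_MODE
          value: development  # production | development
```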

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/kuadrant-operator/doc/proposals/authpolicy-crd/index.html b/kuadrant-operator/doc/proposals/authpolicy-crd/index.html new file mode 100644 index 00000000..1df2bd40 --- /dev/null +++ b/kuadrant-operator/doc/proposals/authpolicy-crd/index.html @@ -0,0 +1,2127 @@ + + + + + + + + + + + + + + + + + + + + AuthPolicy Proposal - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

AuthPolicy Proposal

+

Authors: Rahul Anand (rahanand@redhat.com), Craig Brookes (cbrookes@redhat.com)

+

Introduction

+

Istio offers an AuthorizationPolicy resource which requires it to be applied in the namespace of the workload. This means that all the configuration is completely decoupled from routing logic like hostnames and paths. For the managed gateway scenario, users need to either ask the cluster operator to apply their policies in the gateway's namespace (which is not scalable) or use sidecars/personal gateways for their workloads in their own namespace, which is not optimal.

+

The new Gateway API defines a standard policy attachment mechanism for the hierarchical effect of vendor-specific policies. We believe a new CRD built on Gateway API concepts can solve the use cases of Istio's AuthorizationPolicy while addressing the limitations described above.

+

Goals

+

With targetRef from the policy attachment concept, the goals are: +- Application developers should be able to target an HTTPRoute object in their own namespace. This will define the authorization policy at the hostname/domain/vHost level. +- Cluster operators should be able to target a Gateway object, along with HTTPRoutes, in the gateway's namespace. This will define the policy at the listener level. +- To reduce context sharing between the gateway and the external authorization provider, the action type and auth provider are defaulted to CUSTOM and authorino respectively.

+

Proposed Solution

+

Following is the proposed new CRD that combines policy attachment concepts with Istio's AuthorizationPolicy:

+
apiVersion: kuadrant.io/v1beta1
+kind: AuthPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: # Only takes gateway.networking.k8s.io
+    kind: HTTPRoute | Gateway
+    name: toystore
+  rules:
+    - hosts: ["*.toystore.com"]
+      methods: ["GET", "POST"]
+      paths: ["/admin"]
+  authScheme: # Embedded AuthConfigs
+    hosts: ["admin.toystore.com"]
+    identity:
+    - name: idp-users
+      oidc:
+        endpoint: https://my-idp.com/auth/realm
+    authorization:
+    - name: check-claim
+      json:
+        rules:
+        - selector: auth.identity.group
+          operator: eq
+          value: allowed-users
+status:
+  conditions:
+    - lastTransitionTime: "2022-06-06T11:03:04Z"
+      message: HTTPRoute/Gateway is protected/Error
+      reason: HTTPRouteProtected/GatewayProtected/Error
+      status: "True" | "False"
+      type: Available
+  observedGeneration: 1
+
+

Target Reference

+

The targetRef field is taken from the policy attachment's target reference API. It can only target one resource at a time. Fields included inside: +- Group is the group of the target resource. The only valid option is gateway.networking.k8s.io. +- Kind is the kind of the target resource. The only valid options are HTTPRoute and Gateway. +- Name is the name of the target resource. +- Namespace is the namespace of the referent. Currently only local objects can be referenced, so the value is ignored.

+

Rule objects

+

The rules field describes the requests that will be routed to the external authorization provider (like authorino). It includes: +- hosts: a host is matched over the Host request header, or SNI if TLS is used.

+

Note: Each rule's host in a route level policy must match at least one hostname regex described in the HTTPRoute's hostnames, but Gateway level policies have no such restriction. +

                            targetRef
+       HTTPRoute  ◄─────────────────────────  AuthPolicy
+  hostnames: ["*.toystore.com"]             rules:
+                                           ┌────────────────────────────┐
+                            Rejected Rule: │- hosts: ["*.carstore.com"] │
+                            Regex mismatch │  methods: ["GET", "DELETE"]│
+                                           └────────────────────────────┘
+
+                                           ┌───────────────────────────────┐
+                            Accepted Rule: │- hosts: ["admin.toystore.com"]│
+                            Regex match    │  methods: ["POST", "DELETE"]  │
+                                           └───────────────────────────────┘
+

+
    +
  • paths: a path matches over request path like /admin/.
  • +
  • methods: a method matches over request method like DELETE.
  • +
+

Fields in a rule object are ANDed together but inner fields follow OR semantics. For example, +

hosts: ["*.toystore.com"]
+methods: ["GET", "POST"]
+paths: ["/admin"]
+
+The above rule matches if the host matches *.toystore.com, AND the method is POST OR GET, AND the path is /admin

+

Internally, all the rules in an AuthPolicy are translated into a list of Operations under a single Istio AuthorizationPolicy with the CUSTOM action type and authorino as the external authorization provider.

+
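As a sketch of that translation, the accepted rule from the diagram above could end up in an Istio AuthorizationPolicy shaped roughly like the following (resource name, namespace and provider name are assumptions for illustration):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: toystore             # assumed name
  namespace: istio-system    # assumed gateway namespace
spec:
  action: CUSTOM
  provider:
    name: kuadrant-authorization  # assumed extension provider registered for authorino
  rules:
  - to:
    - operation:
        hosts: ["admin.toystore.com"]
        methods: ["POST", "DELETE"]
```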

AuthScheme object

+

AuthScheme is an embedded form of Authorino's AuthConfig. Applying an AuthPolicy resource with an AuthScheme defined creates an AuthConfig in the Gateway's namespace.

+

Note: Following the hierarchical constraints, spec.AuthScheme.Hosts must match at least one of spec.Hosts for the AuthPolicy to be validated.

+

The example AuthPolicy showed above will create the following AuthConfig:

+
apiVersion: authorino.kuadrant.io/v1beta1
+kind: AuthConfig
+metadata:
+  name: default-toystore-1
+spec:
+  hosts:
+  - "admin.toystore.com"
+  identity:
+    - name: idp-users
+      oidc:
+        endpoint: https://my-idp.com/auth/realm
+  authorization:
+    - name: check-claim
+      json:
+        rules:
+          - selector: auth.identity.group
+            operator: eq
+            value: allowed-users
+
+

The overall control structure between the developer and the kuadrant operator looks like the following: +

+

Checklist

+
    +
  • Issue tracking this proposal: https://github.com/Kuadrant/kuadrant-operator/issues/130
  • +
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/kuadrant-operator/doc/proposals/images/authpolicy-control-structure.png b/kuadrant-operator/doc/proposals/images/authpolicy-control-structure.png new file mode 100644 index 00000000..ee6e621e Binary files /dev/null and b/kuadrant-operator/doc/proposals/images/authpolicy-control-structure.png differ diff --git a/kuadrant-operator/doc/proposals/rlp-target-gateway-resource/index.html b/kuadrant-operator/doc/proposals/rlp-target-gateway-resource/index.html new file mode 100644 index 00000000..664afd41 --- /dev/null +++ b/kuadrant-operator/doc/proposals/rlp-target-gateway-resource/index.html @@ -0,0 +1,2350 @@ + + + + + + + + + + + + + + + + + + + + RLP can target a Gateway resource - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

RLP can target a Gateway resource

+

Previous version: https://hackmd.io/IKEYD6NrSzuGQG1nVhwbcw

+

Based on: https://hackmd.io/_1k6eLCNR2eb9RoSzOZetg

+

Introduction

+

The current RateLimitPolicy CRD already implements a targetRef with a reference to Gateway API's HTTPRoute. This doc captures the design and some implementation details of allowing the targetRef to reference a Gateway API's Gateway.

+

With this HTTPRoute - Gateway hierarchy in place, we are also considering applying Policy Attachment's defaults/overrides approach to the RateLimitPolicy CRD. But for now, it will only be about targeting the Gateway resource.

+

+

When designing Kuadrant's rate limiting and considering Istio/Envoy's rate limiting offering, we hit two limitations (described here). Therefore, without giving up entirely on Envoy's existing RateLimit Filter, we decided to move on and leverage Envoy's Wasm Network Filter, implementing a rate limiting wasm-shim module compliant with Envoy's Rate Limit Service (RLS). This wasm-shim module accepts a PluginConfig struct as its input configuration object.

+

Use Cases targeting a gateway

+

A key use case is being able to provide governance over what service providers can and cannot do when exposing a service via a shared ingress gateway, as well as providing certainty that no service is exposed without the cluster administrator's ability to protect the infrastructure from unplanned load from badly behaving clients, etc.

+

Goals

+

The goal of this document is to define: +* The schema of this PluginConfig struct. +* The kuadrant-operator behavior for filling the PluginConfig struct, having the RateLimitPolicy k8s objects as input. +* The behavior of the wasm-shim, having the PluginConfig struct as input.

+

Envoy's Rate Limit Service Protocol

+

Kuadrant's rate limiting relies on the Rate Limit Service (RLS) +protocol, hence the gateway generates, based on a set of +actions, +a set of descriptors +(one descriptor is a set of descriptor entries). Those descriptors are sent to the external rate limit service provider. +When multiple descriptors are provided, the external service provider will limit on ALL of them and +return an OVER_LIMIT response if any of them are over limit.

+
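For illustration, a sketch of the shape of one such RLS request, with a domain and two descriptors (keys and values borrowed from the PluginConfig examples later in this doc):

```yaml
domain: toystore-app
descriptors:
- entries:
  - key: admin
    value: "yes"
- entries:
  - key: vhaction
    value: "yes"
```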

Schema (CRD) of the RateLimitPolicy

+
---
+apiVersion: kuadrant.io/v1beta1
+kind: RateLimitPolicy
+metadata:
+  name: my-rate-limit-policy
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute / Gateway
+    name: myroute / mygateway
+  rateLimits:
+    - rules:
+        - paths: ["/admin/*"]
+          methods: ["GET"]
+          hosts: ["example.com"]
+      configurations:
+        - actions:
+          - generic_key:
+              descriptor_key: admin
+              descriptor_value: "yes"
+      limits:
+        - conditions: ["admin == yes"]
+          max_value: 500
+          seconds: 30
+          variables: []
+
+

.spec.rateLimits holds a list of rate limit configurations represented by the object RateLimit. +Each RateLimit object represents a complete rate limit configuration. It contains three fields:

+
    +
  • +

    rules (optional): Rules allow matching hosts and/or methods and/or paths. +Matching occurs when at least one rule applies against the incoming request. +If rules are not set, it is equivalent to matching all the requests.

    +
  • +
  • +

    configurations (required): Specifies a set of rate limit configurations that could be applied. +The rate limit configuration object is the equivalent of the +config.route.v3.RateLimit envoy object. +One configuration is, in turn, a list of rate limit actions. +Each action populates a descriptor entry. A vector of descriptor entries compose a descriptor. +Each configuration produces, at most, one descriptor. +Depending on the incoming request, one configuration may or may not produce a rate limit descriptor. +These rate limiting configuration rules provide flexibility to produce multiple descriptors. +For example, you may want to define one generic rate limit descriptor and another descriptor +depending on some header. +If the header does not exist, the second descriptor is not generated, but traffic keeps being rate +limited based on the generic descriptor.

    +
  • +
+
configurations:
+  - actions:
+    - request_headers:
+        header_name: "X-MY-CUSTOM-HEADER"
+        descriptor_key: "custom-header"
+        skip_if_absent: true
+  - actions:
+    - generic_key:
+        descriptor_key: admin
+        descriptor_value: "1"
+
+
    +
  • limits (optional): configuration of the rate limiting service (Limitador). +Check out limitador documentation for more information about the fields of each Limit object.
  • +
+

Note: No namespace/domain is defined. The Kuadrant operator will figure it out.

+

Note: There is no PREAUTH or POSTAUTH stage defined. The rate limiting filter should be placed after the authorization filter to enable authenticated rate limiting. In the future, stages can be implemented.

+

Kuadrant-operator's behavior

+

One HTTPRoute can only be targeted by one rate limit policy.

+

Similarly, one Gateway can only be targeted by one rate limit policy.

+

However, indirectly, one gateway will be affected by multiple rate limit policies. +By design of the Gateway API, one gateway can be referenced by multiple HTTPRoute objects. +Furthermore, one HTTPRoute can reference multiple gateways.

+

The kuadrant operator will aggregate all the rate +limit policies that apply to each gateway, including RLPs targeting HTTPRoutes and Gateways.

+

"VirtualHosting" RateLimitPolicies

+

Rate limit policies are scoped by the domains defined at the referenced HTTPRoute's +hostnames +and Gateway's Listener's Hostname.

+

Multiple HTTPRoutes with the same hostname

+

When there are multiple HTTPRoutes with the same hostname, HTTPRoutes are all admitted and +Envoy merges the routing configuration into the same virtualhost. In these cases, the control plane +has to "merge" the rate limit configuration into a single entry for the wasm filter.

+

Overlapping HTTPRoutes

+

If some RLP targets a route for *.com and another RLP targets another route for api.com, +the control plane does not do any merging. +A request coming for api.com will be rate limited with the rules from the RLP targeting +the route api.com. +Also, a request coming for other.com will be rate limited with the rules from the RLP targeting +the route *.com.

+

examples

+

RLP A -> HTTPRoute A (api.toystore.com) -> Gateway G (*.com)

+

RLP B -> HTTPRoute B (other.toystore.com) -> Gateway G (*.com)

+

RLP H -> HTTPRoute H (*.toystore.com) -> Gateway G (*.com)

+

RLP G -> Gateway G (*.com)

+

Request 1 (api.toystore.com) -> apply RLP A and RLP G

+

Request 2 (other.toystore.com) -> apply RLP B and RLP G

+

Request 3 (unknown.toystore.com) -> apply RLP H and RLP G

+

Request 4 (other.com) -> apply RLP G

+

rate limit domain / limitador namespace

+

The kuadrant operator will set the domain attribute of Envoy's Rate Limit Service (RLS). It will also set the namespace attribute of Limitador's rate limit config. The operator will ensure that the associated actions and rate limits have a common domain/namespace.

+

The value of this domain/namespace seems to be related to the virtualhost for which the rate limit applies.

+
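As a sketch, the limit from the RLP example above could then land in Limitador's configuration under that shared namespace (the namespace value is an assumption for illustration):

```yaml
- namespace: toystore-app  # must match the RLS domain set for the gateway's actions
  max_value: 500
  seconds: 30
  conditions: ["admin == yes"]
  variables: []
```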

Schema of the WASM filter configuration object: the PluginConfig

+

Currently the PluginConfig looks like this:

+
#  The filter’s behaviour in case the rate limiting service does not respond back. When it is set to true, Envoy will not allow traffic in case of communication failure between rate limiting service and the proxy.
+failure_mode_deny: true
+ratelimitpolicies:
+  default/toystore: # rate limit policy {NAMESPACE/NAME}
+    hosts: # HTTPRoute hostnames
+      - '*.toystore.com'
+    rules: # route level actions
+      - operations:
+          - paths:
+              - /admin/toy
+            methods:
+              - POST
+              - DELETE
+        actions:
+          - generic_key:
+              descriptor_value: yes
+              descriptor_key: admin
+    global_actions: # virtualHost level actions
+      - generic_key:
+          descriptor_value: yes
+          descriptor_key: vhaction
+    upstream_cluster: rate-limit-cluster # Limitador address reference
+    domain: toystore-app # RLS protocol domain value
+
+

Proposed new design for the WASM filter configuration object (PluginConfig struct):

+
#  The filter’s behaviour in case the rate limiting service does not respond back. When it is set to true, Envoy will not allow traffic in case of communication failure between rate limiting service and the proxy.
+failure_mode_deny: true
+rate_limit_policies:
+  - name: toystore
+    rate_limit_domain: toystore-app
+    upstream_cluster: rate-limit-cluster
+    hostnames: ["*.toystore.com"]
+    gateway_actions:
+      - rules:
+          - paths: ["/admin/toy"]
+            methods: ["GET"]
+            hosts: ["pets.toystore.com"]
+        configurations:
+          - actions:
+            - generic_key:
+                descriptor_key: admin
+                descriptor_value: "1"
+
+

Update highlights: +* [minor] rate_limit_policies is a list instead of a map indexed by the name/namespace. +* [major] no distinction between "rules" and global actions +* [major] more aligned with RLS: multiple descriptors structured by "rate limit configurations" with matching rules

+

WASM-SHIM

+

WASM filter rate limit policies are not exactly the same as user managed RateLimitPolicy +custom resources. The WASM filter rate limit policies are part of the internal configuration +and therefore not exposed to the end user.

+

At the WASM filter level, there are no route level or gateway level rate limit policies. +The rate limit policies in the wasm plugin configuration may not map 1:1 to +user managed RateLimitPolicy custom resources. WASM rate limit policies have an internal logical +name and a set of hostnames to activate them based on the incoming request's host header.

+

The WASM filter builds a tree-based data structure holding the rate limit policies. +The longest (sub)domain match is used to select the policy to be applied. +Only one policy is applied per invocation.

+

rate limit configurations

+

The WASM filter configuration object contains a list of rate limit configurations +to build a list of Envoy's RLS descriptors. These configurations are defined at

+
rate_limit_policies[*].gateway_actions[*].configurations
+
+

For example:

+
configurations:
+- actions:
+   - generic_key:
+        descriptor_key: admin
+        descriptor_value: "1"
+
+

How to read the policy:

+
    +
  • +

    Each configuration produces, at most, one descriptor. Depending on the incoming request, one configuration may or may not produce a rate limit descriptor.

    +
  • +
  • +

    Each policy configuration has associated, optionally, a set of rules to match. Rules allow matching hosts and/or methods and/or paths. Matching occurs when at least one rule applies against the incoming request. If rules are not set, it is equivalent to matching all the requests.

    +
  • +
  • +

    Each configuration object defines a list of actions. Each action may (or may not) produce a descriptor entry (descriptor list item). If an action cannot append a descriptor entry, no descriptor is generated for the configuration.

    +
  • +
+

Note: The external rate limit service will be called when the gateway_actions object produces at least one non-empty descriptor.

+

example

+

A WASM filter rate limit policy for *.toystore.com. I want some rate limit descriptor configurations +only for api.toystore.com and another set of descriptors for admin.toystore.com. +The wasm filter config would look like this:

+
failure_mode_deny: true
+rate_limit_policies:
+  - name: toystore
+    rate_limit_domain: toystore-app
+    upstream_cluster: rate-limit-cluster
+    hostnames: ["*.toystore.com"]
+    gateway_actions:
+      - configurations:  # no rules. Applies to all *.toystore.com traffic
+          - actions:
+              - generic_key:
+                  descriptor_key: toystore-app
+                  descriptor_value: "1"
+      - rules:
+          - hosts: ["api.toystore.com"]
+        configurations:
+          - actions:
+              - generic_key:
+                  descriptor_key: api
+                  descriptor_value: "1"
+      - rules:
+          - hosts: ["admin.toystore.com"]
+        configurations:
+          - actions:
+              - generic_key:
+                  descriptor_key: admin
+                  descriptor_value: "1"
+
+
* When a request for api.toystore.com hits the filter, the descriptors generated would be:

  descriptor 1: ("toystore-app", "1")
  descriptor 2: ("api", "1")

* When a request for admin.toystore.com hits the filter, the descriptors generated would be:

  descriptor 1: ("toystore-app", "1")
  descriptor 2: ("admin", "1")

* When a request for other.toystore.com hits the filter, the descriptors generated would be:

  descriptor 1: ("toystore-app", "1")
\ No newline at end of file
diff --git a/kuadrant-operator/doc/rate-limiting/index.html b/kuadrant-operator/doc/rate-limiting/index.html
new file mode 100644
index 00000000..3b547e1e
--- /dev/null
+++ b/kuadrant-operator/doc/rate-limiting/index.html
@@ -0,0 +1,2667 @@
[generated page head omitted – title: RateLimitPolicy Overview - Kuadrant Documentation]

Kuadrant Rate Limiting

+

A Kuadrant RateLimitPolicy custom resource, often abbreviated "RLP":

+
1. Allows targeting Gateway API networking resources such as HTTPRoutes and Gateways, using these resources to obtain additional context, i.e., which traffic workload (HTTP attributes, hostnames, user attributes, etc.) to rate limit.
2. Allows specifying which specific subsets of the targeted network resource to apply the limits to.
3. Abstracts the details of the underlying Rate Limit protocol and configuration resources, which have a much broader remit and surface area.
4. Supports cluster operators in setting overrides (soon) and defaults that govern what can be done at the lower levels.
+

How it works

+

Envoy's Rate Limit Service Protocol

+

Kuadrant's Rate Limit implementation relies on Envoy's Rate Limit Service (RLS) protocol. The workflow per request goes:

1. On incoming request, the gateway checks the matching rules for enforcing rate limits, as stated in the RateLimitPolicy custom resources and targeted Gateway API networking objects.
2. If the request matches, the gateway sends one RateLimitRequest to the external rate limiting service ("Limitador").
3. The external rate limiting service responds with a RateLimitResponse back to the gateway, with either an OK or OVER_LIMIT response code.
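For illustration, a sketch of one such exchange; the field names follow Envoy's v3 RLS protos, while the domain and descriptor values are hypothetical:

```yaml
# RateLimitRequest sent by the gateway (illustrative values)
domain: default/toystore          # hypothetical rate limit domain
descriptors:
- entries:
  - key: limit.toystore_api       # hypothetical descriptor entry
    value: "1"
hits_addend: 1                    # how many hits this request counts for
---
# RateLimitResponse returned by Limitador
overall_code: OK                  # or OVER_LIMIT, surfaced to the client as HTTP 429
```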

+

A RateLimitPolicy and its targeted Gateway API networking resource contain all the statements to configure both the ingress gateway and the external rate limiting service.

+

The RateLimitPolicy custom resource

+

Overview

+

The RateLimitPolicy spec includes, basically, two parts:

+
* A reference to an existing Gateway API resource (spec.targetRef)
* Limit definitions (spec.limits)
+

Each limit definition includes:

* A set of rate limits (spec.limits.<limit-name>.rates[])
* (Optional) A set of dynamic counter qualifiers (spec.limits.<limit-name>.counters[])
* (Optional) A set of route selectors, to further qualify the specific routing rules for when to activate the limit (spec.limits.<limit-name>.routeSelectors[])
* (Optional) A set of additional dynamic conditions to activate the limit (spec.limits.<limit-name>.when[])

Check out Kuadrant RFC 0002 to learn more about the Well-known Attributes that can be used to define counter qualifiers (counters) and conditions (when).

High-level example and field definition

+
apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: my-rate-limit-policy
+spec:
+  # reference to an existing networking resource to attach the policy to
+  # it can be a Gateway API HTTPRoute or Gateway resource
+  # it can only refer to objects in the same namespace as the RateLimitPolicy
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute / Gateway
+    name: myroute / mygateway
+
+  # the limits definitions to apply to the network traffic routed through the targeted resource
+  limits:
+    "my_limit":
+      # the rate limits associated with this limit definition
+      # e.g., to specify a 50rps rate limit, add `{ limit: 50, duration: 1, unit: second }`
+      rates: []
+
+      # (optional) counter qualifiers
+      # each dynamic value in the data plane starts a separate counter, combined with each rate limit
+      # e.g., to define a separate rate limit for each user name detected by the auth layer, add `metadata.filter_metadata.envoy\.filters\.http\.ext_authz.username`
+      # check out Kuadrant RFC 0002 (https://github.com/Kuadrant/architecture/blob/main/rfcs/0002-well-known-attributes.md) to learn more about the Well-known Attributes that can be used in this field
+      counters: []
+
+      # (optional) further qualification of the specific HTTPRouteRules within the targeted HTTPRoute that should trigger the limit
+      # each element contains a HTTPRouteMatch object that will be used to select HTTPRouteRules that include at least one identical HTTPRouteMatch
+      # the HTTPRouteMatch part does not have to be fully identical, but what's stated in the selector must be identically stated in the HTTPRouteRule
+      # do not use it on RateLimitPolicies that target a Gateway
+      routeSelectors: []
+
+      # (optional) additional dynamic conditions to trigger the limit.
+      # use it for filtering attributes not supported by HTTPRouteRule or with RateLimitPolicies that target a Gateway
+      # check out Kuadrant RFC 0002 (https://github.com/Kuadrant/architecture/blob/main/rfcs/0002-well-known-attributes.md) to learn more about the Well-known Attributes that can be used in this field
+      when: []
+
+

Using the RateLimitPolicy

+

Targeting a HTTPRoute networking resource

+

When a RLP targets a HTTPRoute, the policy is enforced on all traffic routed according to the rules and hostnames specified in the HTTPRoute, across all Gateways referenced in the spec.parentRefs field of the HTTPRoute.

+

The targeted HTTPRoute's rules and/or hostnames to which the policy must be enforced can be filtered to specific subsets, by specifying the routeSelectors field of the limit definition.

+

Target a HTTPRoute by setting the spec.targetRef field of the RLP as follows:

+
apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: <RLP name>
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: <HTTPRoute Name>
+  limits: {}
+
+

Rate limit policy targeting a HTTPRoute resource

+

Multiple HTTPRoutes with the same hostname

+

When multiple HTTPRoutes state the same hostname, these HTTPRoutes are usually all admitted and merged together by the gateway implementation into the same virtual host configuration of the gateway. Similarly, the Kuadrant control plane will also register all rate limit policies referencing the HTTPRoutes, activating the correct limits across policies according to the routing matching rules of the targeted HTTPRoutes.

+

Hostnames and wildcards

+

If a RLP targets a route defined for *.com and another RLP targets another route for api.com, the Kuadrant control plane will not merge these two RLPs. Rather, it will mimic the behavior of the gateway implementation, by which the "most specific hostname wins", thus enforcing only the corresponding applicable policies and limit definitions.

+

E.g., a request coming for api.com will be rate limited according to the rules from the RLP that targets the route for api.com; while a request for other.com will be rate limited with the rules from the RLP targeting the route for *.com.

+

Example with 3 RLPs and 3 HTTPRoutes:

- RLP A → HTTPRoute A (a.toystore.com)
- RLP B → HTTPRoute B (b.toystore.com)
- RLP W → HTTPRoute W (*.toystore.com)

+

Expected behavior:

- Request to a.toystore.com → RLP A will be enforced
- Request to b.toystore.com → RLP B will be enforced
- Request to other.toystore.com → RLP W will be enforced

+

Targeting a Gateway networking resource

+

When a RLP targets a Gateway, the policy will be enforced on all HTTP traffic hitting the gateway, unless a more specific RLP targeting a matching HTTPRoute exists.

+

Any new HTTPRoute referencing the gateway as parent will be automatically covered by the RLP that targets the Gateway, as will changes to the existing HTTPRoutes.

+

This effectively provides cluster operators with the ability to set defaults that protect the infrastructure against unplanned and malicious network traffic attempts, such as by setting preemptive limits for hostnames and hostname wildcards.

+

Target a Gateway by setting the spec.targetRef field of the RLP as follows:

+
apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: <RLP name>
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: Gateway
+    name: <Gateway Name>
+  limits: {}
+
+

rate limit policy targeting a Gateway resource

+

Overlapping Gateway and HTTPRoute RLPs

+

Gateway-targeted RLPs will serve as a default to protect all traffic routed through the gateway until a more specific HTTPRoute-targeted RLP exists, in which case the HTTPRoute RLP prevails.

+

Example with 4 RLPs, 3 HTTPRoutes and 1 Gateway (plus 2 HTTPRoutes and 2 Gateways without RLPs attached):

- RLP A → HTTPRoute A (a.toystore.com) → Gateway G (*.com)
- RLP B → HTTPRoute B (b.toystore.com) → Gateway G (*.com)
- RLP W → HTTPRoute W (*.toystore.com) → Gateway G (*.com)
- RLP G → Gateway G (*.com)

+

Expected behavior:

- Request to a.toystore.com → RLP A will be enforced
- Request to b.toystore.com → RLP B will be enforced
- Request to other.toystore.com → RLP W will be enforced
- Request to other.com (suppose a route exists) → RLP G will be enforced
- Request to yet-another.net (suppose a route and gateway exist) → No RLP will be enforced

+

Limit definition

+

A limit will be activated whenever a request comes in and the request matches:

- any of the route rules selected by the limit (via routeSelectors or implicit "catch-all" selector), and
- all of the when conditions specified in the limit.

+

A limit can define:

- counters that are qualified based on dynamic values fetched from the request, or
- global counters (implicitly, when no qualified counter is specified)

+

A limit is composed of one or more rate limits.

+

E.g.

+
spec:
+  limits:
+    "toystore-all":
+      rates:
+      - limit: 5000
+        duration: 1
+        unit: second
+
+    "toystore-api-per-username":
+      rates:
+      - limit: 100
+        duration: 1
+        unit: second
+      - limit: 1000
+        duration: 1
+        unit: minute
+      counters:
+      - auth.identity.username
+      routeSelectors:
+        hostnames:
+        - api.toystore.com
+
+    "toystore-admin-unverified-users":
+      rates:
+      - limit: 250
+        duration: 1
+        unit: second
+      routeSelectors:
+        hostnames:
+        - admin.toystore.com
+      when:
+      - selector: auth.identity.email_verified
+        operator: eq
+        value: "false"
+
| Request to | Rate limits enforced |
|---|---|
| api.toystore.com | 100rps/username or 1000rpm/username (whatever happens first) |
| admin.toystore.com | 250rps |
| other.toystore.com | 5000rps |
+

Route selectors

+

The routeSelectors field of the limit definition allows specifying selectors of routes (or parts of a route) that transitively induce a set of conditions for a limit to be enforced. It is defined as a set of route matching rules, where these rules must exist, partially or identically stated, within the HTTPRouteRules of the HTTPRoute that is targeted by the RLP.

+

The field is typed as a list of objects based on a special type derived from Gateway API's HTTPRouteMatch type (the matches subfield of the route selector object), plus an additional hostnames field.
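Schematically, a route selector object looks like the following (values hypothetical):

```yaml
routeSelectors:
- hostnames:   # optional: must match hostnames stated (or inherited) by the targeted HTTPRoute
  - api.toystore.com
  matches:     # optional: objects based on Gateway API's HTTPRouteMatch type
  - path:
      type: PathPrefix
      value: /toys
    method: GET
```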

+

Route selectors matches and the HTTPRoute's HTTPRouteMatches are pairwise compared to select or not select HTTPRouteRules that should activate a limit. To decide whether the route selector selects a HTTPRouteRule or not, for each pair of route selector HTTPRouteMatch and HTTPRoute HTTPRouteMatch:

1. The route selector selects the HTTPRoute's HTTPRouteRule if the HTTPRouteRule contains at least one HTTPRouteMatch that specifies fields that are literally identical to all the fields specified by at least one HTTPRouteMatch of the route selector.
2. A HTTPRouteMatch within a HTTPRouteRule may include other fields that are not specified in a route selector match, and yet the route selector match selects the HTTPRouteRule if all fields of the route selector match are identically included in the HTTPRouteRule's HTTPRouteMatch; the opposite is NOT true.
3. Each field path of a HTTPRouteMatch, as well as each field method of a HTTPRouteMatch, as well as each element of the fields headers and queryParams of a HTTPRouteMatch, is atomic – this is true for the HTTPRouteMatches within a HTTPRouteRule, as well as for HTTPRouteMatches of a route selector.
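For instance, a minimal sketch of rule (2) above, with a hypothetical route rule and route selector:

```yaml
# HTTPRouteRule of the targeted HTTPRoute (hypothetical)
rules:
- matches:
  - path:
      type: PathPrefix
      value: /toys
    method: GET
---
# Route selector in the RLP limit definition: it selects the rule above,
# because its single match (path only) is identically stated within the
# rule's HTTPRouteMatch; a selector stating a different method would not.
routeSelectors:
- matches:
  - path:
      type: PathPrefix
      value: /toys
```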

+

Additionally, at least one hostname specified in a route selector must identically match one of the hostnames specified (or inherited, when omitted) by the targeted HTTPRoute.

+

The semantics of the route selectors makes it possible to assertively relate limit definitions to routing rules, with benefits for identifying the subsets of the network covered by a limit, while preventing unreachable definitions and the overhead of maintaining such rules across multiple resources over time as the underlying network topology evolves. Moreover, because a route selector does not have to be a full copy of the targeted HTTPRouteRule's matches, but only partially identical, repetition is reduced to some degree, and limits that scope across multiple HTTPRouteRules become easier to define (by specifying fewer rules in the selector).

+

A few rules and corner cases to keep in mind while using the RLP's routeSelectors:

1. The golden rule – The route selectors in a RLP are not to be read strictly as the route matching rules that activate a limit, but as selectors of the route rules that activate the limit.
2. Due to (1) above, this can lead to cases, e.g., where a route selector that states matches: [{ method: POST }] selects a HTTPRouteRule that defines matches: [{ method: POST }, { method: GET }], effectively causing the limit to be activated on requests to the HTTP method POST, but also to the HTTP method GET.
3. The requirement for the route selector match to state patterns that are identical to the patterns stated by the HTTPRouteRule (partially or entirely) makes, e.g., a route selector such as matches: { path: { type: PathPrefix, value: /foo } } select a HTTPRouteRule that defines matches: { path: { type: PathPrefix, value: /foo }, method: GET }, but not a HTTPRouteRule that only defines matches: { method: GET }, even though the latter technically includes all HTTP paths; nor does it select a HTTPRouteRule that only defines matches: { path: { type: Exact, value: /foo } }, even though all requests to the exact path /foo are also technically requests to /foo*.
4. The atomicity property of the fields of the route selectors makes, e.g., a route selector such as matches: { path: { value: /foo } } select a HTTPRouteRule that defines matches: { path: { value: /foo } }, but not a HTTPRouteRule that only defines matches: { path: { type: PathPrefix, value: /foo } }. (This case may actually never happen because PathPrefix is the default value for path.type and will be set automatically by the Kubernetes API server.)

+

Due to the nature of route selectors of defining pointers to HTTPRouteRules, the routeSelectors field is not supported in a RLP that targets a Gateway resource.

+

when conditions

+

when conditions can be used to scope a limit (i.e. to filter the traffic to which a limit definition applies) without any coupling to the underlying network topology, i.e. without making direct references to HTTPRouteRules via routeSelectors.

+

The syntax of the when condition selectors complies with Kuadrant's Well-known Attributes (RFC 0002).

+

Use the when conditions to conditionally activate limits based on attributes that cannot be expressed in the HTTPRoutes' spec.hostnames and spec.rules.matches fields, or in general in RLPs that target a Gateway.

+

Examples

+

Check out the following user guides for examples of rate limiting services with Kuadrant:

* Simple Rate Limiting for Application Developers
* Authenticated Rate Limiting for Application Developers
* Gateway Rate Limiting for Cluster Operators
* Authenticated Rate Limiting with JWTs and Kubernetes RBAC

+

Known limitations

+
* One HTTPRoute can only be targeted by one RLP.
* One Gateway can only be targeted by one RLP.
* RLPs can only target HTTPRoutes/Gateways defined within the same namespace as the RLP.
+

Implementation details

+

Driven by limitations related to how Istio injects configuration in the filter chains of the ingress gateways, Kuadrant relies on Envoy's Wasm Network filter in the data plane to manage the integration with the rate limiting service ("Limitador"), instead of the Rate Limit filter.

+

Motivation: Multiple rate limit domains
+The first limitation comes from having only one filter chain per listener. This often leads to one single global rate limiting filter configuration per gateway, and therefore to a shared rate limit domain across applications and policies. Even though, in a rate limit filter, the triggering of rate limit calls, via actions to build so-called "descriptors", can be defined at the level of the virtual host and/or specific route rule, the overall rate limit configuration is only one, i.e., always the same rate limit domain for all calls to Limitador.

+

On the other hand, the possibility to configure and invoke the rate limit service for multiple domains depending on the context makes it possible to isolate groups of policy rules, as well as to optimize performance in the rate limit service, which can rely on the domain for indexing.

+

Motivation: Fine-grained matching rules
+A second limitation of configuring the rate limit filter via Istio, particularly from Gateway API resources, is that rate limit descriptors at the level of a specific HTTP route rule require "named routes" – defined only in an Istio VirtualService resource and referred in an EnvoyFilter one. Because Gateway API HTTPRoute rules lack a "name" property1, as well as the Istio VirtualService resources are only ephemeral data structures handled by Istio in-memory in its implementation of gateway configuration for Gateway API, where the names of individual route rules are auto-generated and not referable by users in a policy23, rate limiting by attributes of the HTTP request (e.g., path, method, headers, etc) would be very limited while depending only on Envoy's Rate Limit filter.

+

Motivated by the desire to support multiple rate limit domains per ingress gateway, as well as fine-grained HTTP route matching rules for rate limiting, Kuadrant implements a wasm-shim that handles the rules to invoke the rate limiting service, complying with Envoy's Rate Limit Service (RLS) protocol.

+

The wasm module integrates with the gateway in the data plane via the Wasm Network filter, and parses a configuration composed from user-defined RateLimitPolicy resources by the Kuadrant control plane. The rate limiting service ("Limitador"), in turn, remains an implementation of Envoy's RLS protocol, capable of being integrated either directly, via the Rate Limit extension, or by Kuadrant, via the wasm module for the Istio Gateway API implementation.

+

As a consequence of this design:

- Users can define fine-grained rate limit rules that match their Gateway and HTTPRoute definitions, including for subsections of these.
- Rate limit definitions are insulated, not leaking across unrelated policies or applications.
- Conditions to activate limits are evaluated in the context of the gateway process, reducing the gRPC calls to the external rate limiting service to only the cases where rate limit counters are known in advance to have to be checked/incremented.
- The rate limiting service can rely on the indexing to look up groups of limit definitions and counters.
- Components remain compliant with industry protocols and flexible for different integration options.

+

A Kuadrant wasm-shim configuration for a composition of RateLimitPolicy custom resources looks like the following and it is generated automatically by the Kuadrant control plane:

+
apiVersion: extensions.istio.io/v1alpha1
+kind: WasmPlugin
+metadata:
+  name: kuadrant-istio-ingressgateway
+  namespace: istio-system
+  
+spec:
+  phase: STATS
+  pluginConfig:
+    failureMode: deny
+    rateLimitPolicies:
+    - domain: istio-system/gw-rlp # allows isolating policy rules and improve performance of the rate limit service
+      hostnames:
+      - '*.website'
+      - '*.io'
+      name: istio-system/gw-rlp
+      rules: # match rules from the gateway and according to conditions specified in the rlp
+      - conditions:
+        - allOf:
+          - operator: startswith
+            selector: request.url_path
+            value: /
+        data:
+        - static: # tells which rate limit definitions and counters to activate
+            key: limit.internet_traffic_all__593de456
+            value: "1"
+      - conditions:
+        - allOf:
+          - operator: startswith
+            selector: request.url_path
+            value: /
+          - operator: endswith
+            selector: request.host
+            value: .io
+        data:
+        - static:
+            key: limit.internet_traffic_apis_per_host__a2b149d2
+            value: "1"
+        - selector:
+            selector: request.host
+      service: kuadrant-rate-limiting-service
+    - domain: default/app-rlp
+      hostnames:
+      - '*.toystore.website'
+      - '*.toystore.io'
+      name: default/app-rlp
+      rules: # matches rules from a httproute and additional specified in the rlp
+      - conditions:
+        - allOf:
+          - operator: startswith
+            selector: request.url_path
+            value: /assets/
+        data:
+        - static:
+            key: limit.toystore_assets_all_domains__8cfb7371
+            value: "1"
+      - conditions:
+        - allOf:
+          - operator: startswith
+            selector: request.url_path
+            value: /v1/
+          - operator: eq
+            selector: request.method
+            value: GET
+          - operator: endswith
+            selector: request.host
+            value: .toystore.website
+          - operator: eq
+            selector: auth.identity.username
+            value: ""
+        - allOf:
+          - operator: startswith
+            selector: request.url_path
+            value: /v1/
+          - operator: eq
+            selector: request.method
+            value: POST
+          - operator: endswith
+            selector: request.host
+            value: .toystore.website
+          - operator: eq
+            selector: auth.identity.username
+            value: ""
+        data:
+        - static:
+            key: limit.toystore_v1_website_unauthenticated__3f9c40c6
+            value: "1"
+      service: kuadrant-rate-limiting-service
+  selector:
+    matchLabels:
+      istio.io/gateway-name: istio-ingressgateway
+  url: oci://quay.io/kuadrant/wasm-shim:v0.3.0
+
+
+
+
[1] https://github.com/kubernetes-sigs/gateway-api/pull/996
[2] https://github.com/istio/istio/issues/36790
[3] https://github.com/istio/istio/issues/37346
\ No newline at end of file
diff --git a/kuadrant-operator/doc/ratelimitpolicy-reference/index.html b/kuadrant-operator/doc/ratelimitpolicy-reference/index.html
new file mode 100644
index 00000000..5be40613
--- /dev/null
+++ b/kuadrant-operator/doc/ratelimitpolicy-reference/index.html
@@ -0,0 +1,2407 @@
[generated page head omitted – title: RateLimitPolicy (v1beta2) - Kuadrant Documentation]

The RateLimitPolicy Custom Resource Definition (CRD)

+ +

RateLimitPolicy

| Field | Type | Required | Description |
|---|---|---|---|
| spec | RateLimitPolicySpec | Yes | The specification for the RateLimitPolicy custom resource |
| status | RateLimitPolicyStatus | No | The status for the custom resource |
+

RateLimitPolicySpec

| Field | Type | Required | Description |
|---|---|---|---|
| targetRef | PolicyTargetReference | Yes | Reference to a Kubernetes resource that the policy attaches to |
| limits | Map<String: Limit> | No | Limit definitions |
+

Limit

| Field | Type | Required | Description |
|---|---|---|---|
| rates | []RateLimit | No | List of rate limits associated with the limit definition |
| counters | []String | No | List of rate limit counter qualifiers. Items must be a valid Well-known attribute. Each distinct value resolved in the data plane starts a separate counter for each rate limit. |
| routeSelectors | []RouteSelector | No | List of selectors of HTTPRouteRules whose matching rules activate the limit. At least one HTTPRouteRule must be selected to activate the limit. If omitted, all HTTPRouteRules of the targeted HTTPRoute activate the limit. Do not use it in policies targeting a Gateway. |
| when | []WhenCondition | No | List of additional dynamic conditions (expressions) to activate the limit. All expressions must evaluate to true for the limit to be applied. Use it for filtering attributes that cannot be expressed in the targeted HTTPRoute's spec.hostnames and spec.rules.matches fields, or when targeting a Gateway. |
+

RateLimit

| Field | Type | Required | Description |
|---|---|---|---|
| limit | Number | Yes | Maximum value allowed within the given period of time (duration) |
| duration | Number | Yes | The period of time in the specified unit that the limit applies |
| unit | String | Yes | Unit of time for the duration of the limit. One-of: "second", "minute", "hour", "day". |
+
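E.g., a rate of 100 requests per 5 minutes:

```yaml
rates:
- limit: 100
  duration: 5
  unit: minute
```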

RouteSelector

| Field | Type | Required | Description |
|---|---|---|---|
| hostnames | []Hostname | No | List of hostnames of the HTTPRoute that activate the limit |
| matches | []HTTPRouteMatch | No | List of selectors of HTTPRouteRules whose matching rules activate the limit |
+

Check out Kuadrant Rate Limiting > Route selectors for the semantics of how route selectors work.

+

WhenCondition

| Field | Type | Required | Description |
|---|---|---|---|
| selector | String | Yes | A valid Well-known attribute whose resolved value in the data plane will be compared to value, using the operator. |
| operator | String | Yes | The binary operator to be applied to the resolved value specified by the selector. One-of: "eq" (equal to), "neq" (not equal to) |
| value | String | Yes | The static value to be compared to the one resolved from the selector. |
+
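E.g., a condition that activates a limit only for requests whose method is not GET (request.method is a Well-known attribute):

```yaml
when:
- selector: request.method
  operator: neq
  value: GET
```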

RateLimitPolicyStatus

| Field | Type | Description |
|---|---|---|
| observedGeneration | Number | Number of the last observed generation of the resource. Use it to check if the status info is up to date with the latest resource spec. |
| conditions | []ConditionSpec | List of conditions that define the status of the resource. |
+

ConditionSpec

+
* The lastTransitionTime field provides a timestamp for when the entity last transitioned from one status to another.
* The message field is a human-readable message indicating details about the transition.
* The reason field is a unique, one-word, CamelCase reason for the condition's last transition.
* The status field is a string, with possible values True, False, and Unknown.
* The type field is a string with the following possible values:
  * Available: the resource has been successfully configured.

| Field | Type | Description |
|---|---|---|
| type | String | Condition Type |
| status | String | Status: True, False, Unknown |
| reason | String | Condition state reason |
| message | String | Condition state description |
| lastTransitionTime | Timestamp | Last transition timestamp |
\ No newline at end of file
diff --git a/kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/index.html b/kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/index.html
new file mode 100644
index 00000000..dd399d82
--- /dev/null
+++ b/kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/index.html
@@ -0,0 +1,2309 @@
[generated page head omitted – title: Authenticated Rate Limiting for Application Developers - Kuadrant Documentation]

Authenticated Rate Limiting for Application Developers

+

This user guide walks you through an example of how to configure authenticated rate limiting for an application using Kuadrant.

+


+

Authenticated rate limiting rate-limits the traffic directed to an application based on attributes of the client user, who is authenticated by some authentication method. A few examples of authenticated rate limiting use cases are:

- User A can send up to 50rps ("requests per second"), while User B can send up to 100rps.
- Each user can send up to 20rpm ("requests per minute").
- Admin users (members of the 'admin' group) can send up to 100rps, while regular users (non-admins) can send up to 20rpm and no more than 5rps.

+


+

In this guide, we will rate limit a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request. The API exposes an endpoint at GET http://api.toystore.com/toy, to mimic an operation of reading toy records.

+

We will define 2 users of the API, which can send requests to the API at different rates, based on their user IDs. The authentication method used is API key.

| User ID | Rate limit |
|---|---|
| alice | 5rp10s ("5 requests every 10 seconds") |
| bob | 2rp10s ("2 requests every 10 seconds") |
+


+

Run the steps ① → ④

+

① Setup

+

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, +where it installs Istio, Kubernetes Gateway API and Kuadrant itself.

+
+

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

+
+

Clone the project:

+
git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator
+
+

Setup the environment:

+
make local-setup
+
+

Request an instance of Kuadrant:

+
kubectl -n kuadrant-system apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: Kuadrant
+metadata:
+  name: kuadrant
+spec: {}
+EOF
+
+

② Deploy the Toy Store API

+

Create the deployment:

+
kubectl apply -f examples/toystore/toystore.yaml
+
+

Create a HTTPRoute to route traffic to the service via Istio Ingress Gateway:

+

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - api.toystore.com
+  rules:
+  - matches:
+    - path:
+        type: Exact
+        value: "/toy"
+      method: GET
+    backendRefs:
+    - name: toystore
+      port: 80
+EOF
+
+

Verify the route works:

+
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 200 OK
+
+
+

Note: If the command above fails to hit the Toy Store API on your environment, try forwarding requests to the service:

+
kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &
+
+
+

③ Enforce authentication on requests to the Toy Store API

+

Create a Kuadrant AuthPolicy to configure the authentication:

+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: AuthPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  rules:
+  - paths: ["/toy"]
+  authScheme:
+    identity:
+    - name: api-key-users
+      apiKey:
+        selector:
+          matchLabels:
+            app: toystore
+        allNamespaces: true
+      credentials:
+        in: authorization_header
+        keySelector: APIKEY
+    response:
+    - name: identity
+      json:
+        properties:
+        - name: userid
+          valueFrom:
+            authJSON: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
+      wrapper: envoyDynamicMetadata
+EOF
+
+
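The response block above injects the resolved user ID into Envoy's dynamic metadata (wrapper: envoyDynamicMetadata). This is what the RateLimitPolicy created in step ④ reads through the metadata.filter_metadata.envoy\.filters\.http\.ext_authz.identity.userid selector to tell Alice's and Bob's counters apart.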

Verify the authentication works by sending a request to the Toy Store API without API key:

+
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: APIKEY realm="api-key-users"
+# x-ext-auth-reason: "credential not found"
+
+

Create API keys for users alice and bob to authenticate:

+
+

Note: Kuadrant stores API keys as Kubernetes Secret resources. User metadata can be stored in the annotations of the resource.

+
+
kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: bob-key
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    app: toystore
+  annotations:
+    secret.kuadrant.io/user-id: bob
+stringData:
+  api_key: IAMBOB
+type: Opaque
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: alice-key
+  labels:
+    authorino.kuadrant.io/managed-by: authorino
+    app: toystore
+  annotations:
+    secret.kuadrant.io/user-id: alice
+stringData:
+  api_key: IAMALICE
+type: Opaque
+EOF
+
+
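With the API keys in place, a request carrying a valid key should now be accepted, e.g.:

```sh
curl -H 'Authorization: APIKEY IAMALICE' -H 'Host: api.toystore.com' http://localhost:9080/toy -i
# HTTP/1.1 200 OK
```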

④ Enforce authenticated rate limiting on requests to the Toy Store API

+

Create a Kuadrant RateLimitPolicy to configure rate limiting:

+

+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    "alice-limit":
+      rates:
+      - limit: 5
+        duration: 10
+        unit: second
+      when:
+      - selector: metadata.filter_metadata.envoy\.filters\.http\.ext_authz.identity.userid
+        operator: eq
+        value: alice
+    "bob-limit":
+      rates:
+      - limit: 2
+        duration: 10
+        unit: second
+      when:
+      - selector: metadata.filter_metadata.envoy\.filters\.http\.ext_authz.identity.userid
+        operator: eq
+        value: bob
+EOF
+
+
+

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

+
+


+

Verify the rate limiting works by sending requests as Alice and Bob.

+

Up to 5 successful (200 OK) requests every 10 seconds allowed for Alice, then 429 Too Many Requests:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Up to 2 successful (200 OK) requests every 10 seconds allowed for Bob, then 429 Too Many Requests:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Cleanup

+
make local-cleanup
+
\ No newline at end of file
diff --git a/kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/index.html b/kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/index.html
new file mode 100644
index 00000000..1ca51228
--- /dev/null
+++ b/kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/index.html
@@ -0,0 +1,2576 @@
[generated page head omitted – title: Authenticated Rate Limiting with JWTs and Kubernetes RBAC - Kuadrant Documentation]

Authenticated Rate Limiting with JWTs and Kubernetes RBAC

+

This user guide walks you through an example of how to use Kuadrant to protect an application with policies to enforce:

- authentication based on OpenID Connect (OIDC) ID tokens (signed JWTs), issued by a Keycloak server;
- an alternative authentication method based on Kubernetes Service Account tokens;
- authorization delegated to the Kubernetes RBAC system;
- rate limiting by user ID.

+


+

In this example, we will protect a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request.

+

The API listens to requests at the hostnames *.toystore.com, where it exposes the endpoints GET /toy*, POST /admin/toy and DELETE /admin/toy, respectively, to mimic operations of reading, creating, and deleting toy records.

+

Any authenticated user/service account can send requests to the Toy Store API, by providing either a valid Keycloak-issued access token or Kubernetes token.

+

Privileges to execute the requested operation (read, create or delete) will be granted according to the following RBAC rules, stored in the Kubernetes authorization system:

| Operation | Endpoint | Required role |
|---|---|---|
| Read | GET /toy* | toystore-reader |
| Create | POST /admin/toy | toystore-writer |
| Delete | DELETE /admin/toy | toystore-writer |
+

Each user will be entitled to a maximum of 5rp10s (5 requests every 10 seconds).

+

Requirements

+ +

Run the guide ① → ⑥

+

① Setup a cluster with Kuadrant

+

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, +where it installs Istio, Kubernetes Gateway API and Kuadrant itself.

+
+

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

+
+

Clone the project:

+
git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator
+
+

Setup the environment:

+
make local-setup
+
+

Request an instance of Kuadrant:

+
kubectl -n kuadrant-system apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: Kuadrant
+metadata:
+  name: kuadrant
+spec: {}
+EOF
+
+

② Deploy the Toy Store API

+

Deploy the application in the default namespace:

+
kubectl apply -f examples/toystore/toystore.yaml
+
+

Route traffic to the application:

+
kubectl apply -f examples/toystore/httproute.yaml
+
+

API lifecycle

+

Lifecycle

+

Try the API unprotected

+
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 200 OK
+
+

It should return 200 OK.

+
+

Note: If the command above fails to hit the Toy Store API on your environment, try forwarding requests to the service:

+
kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &
+
+
+

③ Deploy Keycloak

+

Create the namespace:

+
kubectl create namespace keycloak
+
+

Deploy Keycloak with a bootstrap realm, users, and clients:

+
kubectl apply -n keycloak -f https://raw.githubusercontent.com/Kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
+
+
+

Note: The Keycloak server may take a couple of minutes to be ready.

+
+

④ Enforce authentication and authorization for the Toy Store API

+

Create a Kuadrant AuthPolicy to configure authentication and authorization:

+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: AuthPolicy
+metadata:
+  name: toystore-protection
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  authScheme:
+    identity:
+    - name: keycloak-users
+      oidc:
+        endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant
+    - name: k8s-service-accounts
+      kubernetes:
+        audiences:
+        - https://kubernetes.default.svc.cluster.local
+      extendedProperties:
+      - name: sub
+        valueFrom:
+          authJSON: auth.identity.user.username
+    authorization:
+    - name: k8s-rbac
+      kubernetes:
+        user:
+          valueFrom:
+            authJSON: auth.identity.sub
+    response:
+    - name: identity
+      json:
+        properties:
+        - name: userid
+          valueFrom:
+            authJSON: auth.identity.sub
+      wrapper: envoyDynamicMetadata
+EOF
+
+

Try the API missing authentication

+
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 401 Unauthorized
+# www-authenticate: Bearer realm="keycloak-users"
+# www-authenticate: Bearer realm="k8s-service-accounts"
+# x-ext-auth-reason: {"k8s-service-accounts":"credential not found","keycloak-users":"credential not found"}
+
+

Try the API without permission

+

Obtain an access token with the Keycloak server:

+
ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)
+
+

Send a request to the API as the Keycloak-authenticated user while still missing permissions:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 403 Forbidden
+
+

Create a Kubernetes Service Account to represent a consumer of the API associated with the alternative source of identities k8s-service-accounts:

+
kubectl apply -f - <<EOF
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: client-app-1
+EOF
+
+

Obtain an access token for the client-app-1 service account:

+
SA_TOKEN=$(kubectl create token client-app-1)
+
+

Send a request to the API as the service account while still missing permissions:

+
curl -H "Authorization: Bearer $SA_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 403 Forbidden
+
+

⑤ Grant access to the Toy Store API for user and service account

+

Create the toystore-reader and toystore-writer roles:

+
kubectl apply -f - <<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: toystore-reader
+rules:
+- nonResourceURLs: ["/toy*"]
+  verbs: ["get"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: toystore-writer
+rules:
+- nonResourceURLs: ["/admin/toy"]
+  verbs: ["post", "delete"]
+EOF
+
+

Add permissions to the user and service account:

| User | Kind | Roles |
|---|---|---|
| john | User registered in Keycloak | toystore-reader, toystore-writer |
| client-app-1 | Kubernetes Service Account | toystore-reader |
+
kubectl apply -f - <<EOF
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: toystore-readers
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: toystore-reader
+subjects:
+- kind: User
+  name: $(jq -R -r 'split(".") | .[1] | @base64d | fromjson | .sub' <<< "$ACCESS_TOKEN")
+- kind: ServiceAccount
+  name: client-app-1
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: toystore-writers
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: toystore-writer
+subjects:
+- kind: User
+  name: $(jq -R -r 'split(".") | .[1] | @base64d | fromjson | .sub' <<< "$ACCESS_TOKEN")
+EOF
+
+
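The jq expression in the User subjects above decodes the JWT payload (the second dot-separated segment of the token) and extracts its sub claim, so that the bindings are created for the exact user identifier the AuthPolicy resolves via auth.identity.sub.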
Q: Can I use Roles and RoleBindings instead of ClusterRoles and ClusterRoleBindings?

Yes, you can.

The example above is for non-resource URL Kubernetes roles. For using Roles and RoleBindings instead of ClusterRoles and ClusterRoleBindings, thus more flexible resource-based permissions to protect the API, see the spec for Kubernetes SubjectAccessReview authorization (https://github.com/Kuadrant/authorino/blob/v0.5.0/docs/features.md#kubernetes-subjectaccessreview-authorizationkubernetes) in the Authorino docs.
+ +

Try the API with permission

+

Send requests to the API as the Keycloak-authenticated user:

+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 200 OK
+
+
curl -H "Authorization: Bearer $ACCESS_TOKEN" -H 'Host: api.toystore.com' -X POST http://localhost:9080/admin/toy -i
+# HTTP/1.1 200 OK
+
+

Send requests to the API as the Kubernetes service account:

+
curl -H "Authorization: Bearer $SA_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy -i
+# HTTP/1.1 200 OK
+
+
curl -H "Authorization: Bearer $SA_TOKEN" -H 'Host: api.toystore.com' -X POST http://localhost:9080/admin/toy -i
+# HTTP/1.1 403 Forbidden
+
+

⑥ Enforce rate limiting on requests to the Toy Store API

+

Create a Kuadrant RateLimitPolicy to configure rate limiting:

+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    "per-user":
+      rates:
+      - limit: 5
+        duration: 10
+        unit: second
+      counters:
+      - metadata.filter_metadata.envoy\.filters\.http\.ext_authz.identity.userid
+EOF
+
+
+

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

+
+

Try the API rate limited

+

Each user should be entitled to a maximum of 5 requests every 10 seconds.

+
+

Note: If the tokens have expired, you may need to refresh them first.

+
+

Send requests as the Keycloak-authenticated user:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H "Authorization: Bearer $ACCESS_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Send requests as the Kubernetes service account:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H "Authorization: Bearer $SA_TOKEN" -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Cleanup

+
make local-cleanup
+
\ No newline at end of file
diff --git a/kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/index.html b/kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/index.html
new file mode 100644
index 00000000..9a6531a2
--- /dev/null
+++ b/kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/index.html
@@ -0,0 +1,2275 @@
[generated page head omitted – title: Gateway Rate Limiting for Cluster Operators - Kuadrant Documentation]

Gateway Rate Limiting for Cluster Operators

+

This user guide walks you through an example of how to configure rate limiting for all routes attached to an ingress gateway.

+


+

Run the steps ① → ⑤

+

① Setup

+

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, +where it installs Istio, Kubernetes Gateway API and Kuadrant itself.

+
+

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

+
+

Clone the project:

+
git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator
+
+

Setup the environment:

+
make local-setup
+
+

Request an instance of Kuadrant:

+
kubectl -n kuadrant-system apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: Kuadrant
+metadata:
+  name: kuadrant
+spec: {}
+EOF
+
+

② Create the ingress gateways

+
kubectl -n istio-system apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: external
+  annotations:
+    kuadrant.io/namespace: kuadrant-system
+    networking.istio.io/service-type: ClusterIP
+spec:
+  gatewayClassName: istio
+  listeners:
+  - name: external
+    port: 80
+    protocol: HTTP
+    hostname: '*.io'
+    allowedRoutes:
+      namespaces:
+        from: All
+---
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: internal
+  annotations:
+    kuadrant.io/namespace: kuadrant-system
+    networking.istio.io/service-type: ClusterIP
+spec:
+  gatewayClassName: istio
+  listeners:
+  - name: local
+    port: 80
+    protocol: HTTP
+    hostname: '*.local'
+    allowedRoutes:
+      namespaces:
+        from: All
+EOF
+
+

③ Enforce rate limiting on requests incoming through the external gateway

+
    ┌───────────┐      ┌───────────┐
+    │ (Gateway) │      │ (Gateway) │
+    │  external │      │  internal │
+    │           │      │           │
+    │   *.io    │      │  *.local  │
+    └───────────┘      └───────────┘
+          ▲
+          │
+┌─────────┴─────────┐
+│ (RateLimitPolicy) │
+│       gw-rlp      │
+└───────────────────┘
+
+

Create a Kuadrant RateLimitPolicy to configure rate limiting:

+
kubectl apply -n istio-system -f - <<EOF
+apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: gw-rlp
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: Gateway
+    name: external
+  limits:
+    "global":
+      rates:
+      - limit: 5
+        duration: 10
+        unit: second
+EOF
+
+
+

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

+
+

④ Deploy a sample API to test rate limiting enforced at the level of the gateway

+
                           ┌───────────┐      ┌───────────┐
+┌───────────────────┐      │ (Gateway) │      │ (Gateway) │
+│ (RateLimitPolicy) │      │  external │      │  internal │
+│       gw-rlp      ├─────►│           │      │           │
+└───────────────────┘      │   *.io    │      │  *.local  │
+                           └─────┬─────┘      └─────┬─────┘
+                                 │                  │
+                                 └─────────┬────────┘
+                                           │
+                                 ┌─────────┴────────┐
+                                 │   (HTTPRoute)    │
+                                 │     toystore     │
+                                 │                  │
+                                 │ *.toystore.io    │
+                                 │ *.toystore.local │
+                                 └────────┬─────────┘
+                                          │
+                                   ┌──────┴───────┐
+                                   │   (Service)  │
+                                   │   toystore   │
+                                   └──────────────┘
+
+

Deploy the sample API:

+
kubectl apply -f examples/toystore/toystore.yaml
+
+

Route traffic to the API from both gateways:

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: toystore
+spec:
+  parentRefs:
+  - name: external
+    namespace: istio-system
+  - name: internal
+    namespace: istio-system
+  hostnames:
+  - "*.toystore.io"
+  - "*.toystore.local"
+  rules:
+  - backendRefs:
+    - name: toystore
+      port: 80
+EOF
+
+

⑤ Verify the rate limiting works by sending requests in a loop

+

Expose the gateways at ports 9081 and 9082 of the local host, respectively:

+
kubectl port-forward -n istio-system service/external-istio 9081:80 2>&1 >/dev/null &
+kubectl port-forward -n istio-system service/internal-istio 9082:80 2>&1 >/dev/null &
+
+

Up to 5 successful (200 OK) requests every 10 seconds through the external ingress gateway (*.io), then 429 Too Many Requests:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.io' http://localhost:9081 | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Unlimited successful (200 OK) requests through the internal ingress gateway (*.local):

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.local' http://localhost:9082 | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Cleanup

+
make local-cleanup
+
\ No newline at end of file
diff --git a/kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/index.html b/kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/index.html
new file mode 100644
index 00000000..81231b66
--- /dev/null
+++ b/kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/index.html
@@ -0,0 +1,2200 @@
[generated page head omitted – title: Simple Rate Limiting for Application Developers - Kuadrant Documentation]

Simple Rate Limiting for Application Developers

+

This user guide walks you through an example of how to configure rate limiting for an endpoint of an application using Kuadrant.

+


+

In this guide, we will rate limit a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request. The API listens to requests at the hostname api.toystore.com, where it exposes the endpoints GET /toys* and POST /toys, respectively, to mimic operations of reading and writing toy records.

+

We will rate limit the POST /toys endpoint to a maximum of 5rp10s ("5 requests every 10 seconds").

+


+

Run the steps ① → ③

+

① Setup

+

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, +where it installs Istio, Kubernetes Gateway API and Kuadrant itself.

+
+

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

+
+

Clone the project:

+
git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator
+
+

Setup the environment:

+
make local-setup
+
+

Request an instance of Kuadrant:

+
kubectl -n kuadrant-system apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta1
+kind: Kuadrant
+metadata:
+  name: kuadrant
+spec: {}
+EOF
+
+

② Deploy the Toy Store API

+

Create the deployment:

+
kubectl apply -f examples/toystore/toystore.yaml
+
+

Create a HTTPRoute to route traffic to the service via Istio Ingress Gateway:

+

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: toystore
+spec:
+  parentRefs:
+  - name: istio-ingressgateway
+    namespace: istio-system
+  hostnames:
+  - api.toystore.com
+  rules:
+  - matches:
+    - method: GET
+      path:
+        type: PathPrefix
+        value: "/toys"
+    backendRefs:
+    - name: toystore
+      port: 80
+  - matches: # it has to be a separate HTTPRouteRule so we do not rate limit other endpoints
+    - method: POST
+      path:
+        type: Exact
+        value: "/toys"
+    backendRefs:
+    - name: toystore
+      port: 80
+EOF
+
+

Verify the route works:

+
curl -H 'Host: api.toystore.com' http://localhost:9080/toys -i
+# HTTP/1.1 200 OK
+
+
+

Note: If the command above fails to hit the Toy Store API on your environment, try forwarding requests to the service:

+
kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &
+
+
+

③ Enforce rate limiting on requests to the Toy Store API

+

Create a Kuadrant RateLimitPolicy to configure rate limiting:

+

+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1beta2
+kind: RateLimitPolicy
+metadata:
+  name: toystore
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    "create-toy":
+      rates:
+      - limit: 5
+        duration: 10
+        unit: second
+      routeSelectors:
+      - matches: # selects the 2nd HTTPRouteRule of the targeted route
+        - method: POST
+          path:
+            type: Exact
+            value: "/toys"
+EOF
+
+
+

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

+
+
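One way to watch for it (a sketch; the exact condition names reported in the status may vary across Kuadrant versions):

kubectl get ratelimitpolicy toystore -o jsonpath='{.status.conditions}'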


+

Verify the rate limiting works by sending requests in a loop.

+

Up to 5 successful (200 OK) requests every 10 seconds to POST /toys, then 429 Too Many Requests:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.com' http://localhost:9080/toys -X POST | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Unlimited successful (200 OK) to GET /toys:

+
while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.com' http://localhost:9080/toys | egrep --color "\b(429)\b|$"; sleep 1; done
+
+

Cleanup

+
make local-cleanup
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/kuadrant-operator/index.html b/kuadrant-operator/index.html new file mode 100644 index 00000000..b154abbc --- /dev/null +++ b/kuadrant-operator/index.html @@ -0,0 +1,2503 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Kuadrant Operator

+

Code Style +Testing +codecov +License

+

The Operator to install and manage the lifecycle of the Kuadrant components deployments.

+ + + + +

Overview

+

Kuadrant is a re-architecture of API Management using Cloud Native concepts, separating the components to be less coupled and more reusable, and leveraging the underlying Kubernetes platform. It aims to deliver a smooth experience to providers and consumers of applications & services when it comes to rate limiting, authentication, authorization, discoverability, change management, usage contracts, insights, etc.

+

Kuadrant aims to produce a set of loosely coupled functionalities built directly on top of Kubernetes. Furthermore, it strives to provide only what Kubernetes doesn't offer out of the box; i.e., Kuadrant won't be designing a new Gateway/proxy, but will instead opt to connect with what's there and what's being developed (think Envoy, Istio, Gateway API).

+

Kuadrant is a system of cloud-native k8s components that grows as users’ needs grow.

+
    +
  • From simple protection of a Service (via AuthN) that is used by teammates working on the same cluster, or “sibling” services, up to AuthZ of users using OIDC plus custom policies.
  • +
  • From no rate limiting, to rate limiting for global service protection, on to rate limiting by users/plans
  • +
+

Architecture

+

Kuadrant relies on Istio and the Gateway API +to operate the cluster (Istio's) ingress gateway to provide API management with authentication (authN), +authorization (authZ) and rate limiting capabilities.

+

Kuadrant components

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Component | Description
Control Plane | The control plane takes the customer's desired configuration (declaratively, as Kubernetes custom resources) as input and ensures all components are configured to obey the customer's desired behavior. This repository contains the source code of the Kuadrant control plane
Kuadrant Operator | A Kubernetes Operator to manage the lifecycle of the Kuadrant deployment
Authorino | The AuthN/AuthZ enforcer. Acts as the external Istio authorizer (an Envoy external authorization gRPC service)
Limitador | The external rate limiting service. It exposes a gRPC service implementing the Envoy Rate Limit protocol (v3)
Authorino Operator | A Kubernetes Operator to manage Authorino instances
Limitador Operator | A Kubernetes Operator to manage Limitador instances
+

Provided APIs

+

The kuadrant control plane owns the following Custom Resource Definitions, CRDs:

+ + + + + + + + + + + + + + + + + + + + +
CRD | Description | Example
RateLimitPolicy CRD [doc] [reference] | Enable access control on workloads based on HTTP rate limiting | RateLimitPolicy CR
AuthPolicy CRD | Enable AuthN and AuthZ based access control on workloads | AuthPolicy CR
+

Additionally, Kuadrant provides the following CRDs

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CRD | Owner | Description | Example
Kuadrant CRD | Kuadrant Operator | Represents an instance of Kuadrant | Kuadrant CR
Limitador CRD | Limitador Operator | Represents an instance of Limitador | Limitador CR
Authorino CRD | Authorino Operator | Represents an instance of Authorino | Authorino CR
+

Kuadrant Architecture

+

Getting started

+

Pre-requisites

+ +

Installing Kuadrant

+

Installing Kuadrant is a two-step procedure: first, install the Kuadrant Operator and, second, request a Kuadrant instance by creating a Kuadrant custom resource.

+

1. Install the Kuadrant Operator

+

The Kuadrant Operator is available in public community operator catalogs, such as the Kubernetes OperatorHub.io and the Openshift Container Platform and OKD OperatorHub.

+

Kubernetes

+

The operator is available from OperatorHub.io. Go to the linked page and follow the installation steps, or just run these two commands:

+
# Install Operator Lifecycle Manager (OLM), a tool to help manage the operators running on your cluster.
+
+curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.23.1/install.sh | bash -s v0.23.1
+
+# Install the operator by running the following command:
+
+kubectl create -f https://operatorhub.io/install/kuadrant-operator.yaml
+
+
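You can verify the install completed by listing the ClusterServiceVersions (a sketch; OperatorHub.io installs into the operators namespace by default, which may differ in your setup):

kubectl get csv -n operators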

Openshift

+

The operator is available from the Openshift Console OperatorHub. Follow the installation steps, choosing "Kuadrant Operator" from the catalog:

+

Kuadrant Operator in OperatorHub

+

2. Request a Kuadrant instance

+

Create the namespace:

+
kubectl create namespace kuadrant
+
+

Apply the Kuadrant custom resource:

+
kubectl -n kuadrant apply -f - <<EOF
+---
+apiVersion: kuadrant.io/v1beta1
+kind: Kuadrant
+metadata:
+  name: kuadrant-sample
+spec: {}
+EOF
+
+

Protect your service

+

If you are an API Provider

+
    +
  • Deploy the service/API to be protected ("Upstream")
  • +
  • Expose the service/API using the Kubernetes Gateway API, i.e. an HTTPRoute object.
  • +
  • Write and apply Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the HTTPRoute resource to have your API protected (an AuthPolicy sketch follows this list).
  • +
+
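For instance, a minimal AuthPolicy sketch enforcing API key authentication on an HTTPRoute (the names, label selector and header prefix are illustrative; check the AuthPolicy reference for the full API):

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1beta2
kind: AuthPolicy
metadata:
  name: toystore-auth
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
    authentication:
      "api-key-users":
        apiKey:
          selector:
            matchLabels:
              app: toystore
        credentials:
          authorizationHeader:
            prefix: APIKEY
EOF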

If you are a Cluster Operator

+
    +
  • (Optionally) deploy an Istio ingress gateway using the Gateway resource (a Gateway sketch follows this list).
  • +
  • Write and apply Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the Gateway resource to have your gateway traffic protected.
  • +
+
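As an illustration, a Gateway sketch for an Istio-managed gateway (the names and hostname are placeholders):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external
  namespace: istio-system
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    hostname: '*.example.com'
    allowedRoutes:
      namespaces:
        from: All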

User guides

+

The user guides section of the docs gathers several use cases as well as the instructions to implement them using Kuadrant.

+ +

Kuadrant Rate Limiting

+

Documentation

+

Docs can be found on the Kuadrant website.

+

Contributing

+

The Development guide describes how to build the kuadrant operator and +how to test your changes before submitting a patch or opening a PR.

+

Join us on kuadrant.slack.com +for live discussions about the roadmap and more.

+

Licensing

+

This software is licensed under the Apache 2.0 license.

+

See the LICENSE and NOTICE files that should have been provided along with this software for details.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/doc/development/index.html b/limitador-operator/doc/development/index.html new file mode 100644 index 00000000..72c2e4bd --- /dev/null +++ b/limitador-operator/doc/development/index.html @@ -0,0 +1,2477 @@ + + + + + + + + + + + + + + + + + + + + + + + + Developer's Guide - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Development Guide

+ + + + + + +

Technology stack required for development

+ +

Build

+
make
+
+

Run locally

+

You need an active session open to a Kubernetes cluster.

+

Optionally, run kind with local-env-setup.

+
make local-env-setup
+
+

Then, run the operator locally

+
make run
+
+

Deploy the operator in a deployment object

+
make local-setup
+
+

Deploy the operator using OLM

+

You can deploy the operator using OLM by just running a few commands. No need to build any image: the Kuadrant engineering team provides latest and released-version tagged images. They are available in the Quay.io/Kuadrant image repository.

+

Create kind cluster

+
make kind-create-cluster
+
+

Deploy OLM system

+
make install-olm
+
+

Deploy the operator using OLM. The make deploy-catalog target accepts the following variables:

+ + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value
CATALOG_IMG | Catalog image URL | quay.io/kuadrant/limitador-operator-catalog:latest
+
make deploy-catalog [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]
+
+

Build custom OLM catalog

+

If you want to deploy (using OLM) a custom Limitador operator, you need to build your own catalog.

+

Build operator bundle image

+

The make bundle target accepts the following variables:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value | Notes
IMG | Operator image URL | quay.io/kuadrant/limitador-operator:latest |
VERSION | Bundle version | 0.0.0 |
RELATED_IMAGE_LIMITADOR | Limitador bundle URL | quay.io/kuadrant/limitador:latest | The LIMITADOR_VERSION var could be used to build this URL by providing the tag
+
    +
  • Build the bundle manifests
  • +
+
make bundle [IMG=quay.io/kuadrant/limitador-operator:latest] \
+            [VERSION=0.0.0] \
+            [RELATED_IMAGE_LIMITADOR=quay.io/kuadrant/limitador:latest]
+
+
    +
  • Build the bundle image from the manifests
  • +
+ + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value
BUNDLE_IMG | Operator bundle image URL | quay.io/kuadrant/limitador-operator-bundle:latest
+
make bundle-build [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]
+
+
    +
  • Push the bundle image to a registry
  • +
+ + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value
BUNDLE_IMG | Operator bundle image URL | quay.io/kuadrant/limitador-operator-bundle:latest
+
make bundle-push [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]
+
+

Build custom catalog

+

The catalog format will be File-based Catalog.

+

Make sure all the required bundles are pushed to the registry; this is required by the opm tool.

+

The make catalog target accepts the following variables:

+ + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value
BUNDLE_IMG | Operator bundle image URL | quay.io/kuadrant/limitador-operator-bundle:latest
+
make catalog [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]
+
+
    +
  • Build the catalog image from the manifests
  • +
+ + + + + + + + + + + + + + + +
Makefile Variable | Description | Default value
CATALOG_IMG | Operator catalog image URL | quay.io/kuadrant/limitador-operator-catalog:latest
+
make catalog-build [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]
+
+
    +
  • Push the catalog image to a registry
  • +
+
make catalog-push [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]
+
+

You can try out your custom catalog image following the steps of the +Deploy the operator using OLM section.

+

Cleaning up

+
make local-cleanup
+
+

Run tests

+

Unittests

+
make test-unit
+
+

Optionally, add the TEST_NAME makefile variable to run a specific test

+
make test-unit TEST_NAME=TestConstants
+
+

or even a subtest

+
make test-unit TEST_NAME=TestLimitIndexEquals/empty_indexes_are_equal
+
+

Integration tests

+

Run integration tests

+
make test-integration
+
+

All tests

+

Run all tests

+
make test
+
+

Lint tests

+
make run-lint
+
+

(Un)Install Limitador CRD

+

You need an active session open to a Kubernetes cluster.

+

Remove CRDs

+
make uninstall
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/doc/logging/index.html b/limitador-operator/doc/logging/index.html new file mode 100644 index 00000000..3c9306b4 --- /dev/null +++ b/limitador-operator/doc/logging/index.html @@ -0,0 +1,1984 @@ + + + + + + + + + + + + + + + + + + + + + + + + Logging - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Logging

+

The Limitador operator outputs 3 levels of log messages (from lowest to highest level):
1. debug
2. info (default)
3. error

+

info logging is restricted to high-level information. Actions like creating, deleting or updating kubernetes resources will be logged with reduced details about the corresponding objects, and without any further detailed logs of the steps in between, except for errors.

+

Only debug logging will include processing details.

+

To configure the desired log level, set the environment variable LOG_LEVEL to one of the supported values listed above. Default log level is info.

+

Apart from the log level, the controller can output messages to the logs in 2 different formats:
- production (default): each line is a parseable JSON object with properties {"level":string, "ts":int, "msg":string, "logger":string, extra values...}
- development: more human-readable output, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\t<log-level>\t<logger>\t<message>\t{extra-values-as-json}

+

To configure the desired log mode, set the environment variable LOG_MODE to one of the supported values listed above. The default log mode is production.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/doc/rate-limit-headers/index.html b/limitador-operator/doc/rate-limit-headers/index.html new file mode 100644 index 00000000..9096dcd9 --- /dev/null +++ b/limitador-operator/doc/rate-limit-headers/index.html @@ -0,0 +1,1987 @@ + + + + + + + + + + + + + + + + + + + + + + + + Rate limit headers - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Rate Limit Headers

+

The spec.rateLimitHeaders field enables RateLimit header fields for HTTP, as specified in the Rate Limit Headers Draft:

+
apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  rateLimitHeaders: DRAFT_VERSION_03
+
+

Current valid values are:
* DRAFT_VERSION_03 (ref: https://datatracker.ietf.org/doc/id/draft-polli-ratelimit-headers-03.html)
* NONE

+

By default, when spec.rateLimitHeaders is null, the --rate-limit-headers command line arg is not included in Limitador's deployment.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/doc/resource-requirements/index.html b/limitador-operator/doc/resource-requirements/index.html new file mode 100644 index 00000000..a7d2a1a1 --- /dev/null +++ b/limitador-operator/doc/resource-requirements/index.html @@ -0,0 +1,2056 @@ + + + + + + + + + + + + + + + + + + + + Resource Requirements - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Resource Requirements

+

The default resource requirement for Limitador deployments is specified in Limitador v1alpha1 API reference +and will be applied if the resource requirement is not set in the spec.

+
apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  listener:
+    http:
+      port: 8080
+    grpc:
+      port: 8081
+  limits:
+    - conditions: ["get_toy == 'yes'"]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []  
+
+ + + + + + + + + + + + + + + + + + + + + +
Field | json/yaml field | Type | Required | Default value | Description
ResourceRequirements | resourceRequirements | *corev1.ResourceRequirements | No | {"limits": {"cpu": "500m","memory": "64Mi"},"requests": {"cpu": "250m","memory": "32Mi"}} | Limitador deployment resource requirements
+

Example with resource limits

+

The resource requests and limits for the deployment can be set like the following:

+
apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  listener:
+    http:
+      port: 8080
+    grpc:
+      port: 8081
+  limits:
+    - conditions: ["get_toy == 'yes'"]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []
+  resourceRequirements:
+    limits:
+      cpu: 200m
+      memory: 400Mi
+    requests:
+      cpu: 101m  
+      memory: 201Mi    
+
+

To specify the deployment without resource requests or limits, set the field to an empty struct {}:

apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  listener:
+    http:
+      port: 8080
+    grpc:
+      port: 8081
+  limits:
+    - conditions: [ "get_toy == 'yes'" ]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []
+  resourceRequirements: {}
+

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/doc/storage/index.html b/limitador-operator/doc/storage/index.html new file mode 100644 index 00000000..7868fe4e --- /dev/null +++ b/limitador-operator/doc/storage/index.html @@ -0,0 +1,2142 @@ + + + + + + + + + + + + + + + + + + + + + + + + Storage - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Storage

+

The default storage for Limitador's limits counters is in memory, for which no configuration is needed. In order to configure a Redis data structure store, there are currently 2 alternatives:

+
    +
  • Redis
  • +
  • Redis Cached
  • +
+

For either of those, one should store the URL of the Redis service inside a K8s opaque Secret.

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: redisconfig
+stringData:
+  URL: redis://127.0.0.1/a # Redis URL of its running instance
+type: Opaque
+
+

It's also required to set up Spec.Storage:

+

Redis

+
apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  storage:
+    redis:
+      configSecretRef: # The secret reference storing the URL for Redis
+        name: redisconfig
+        namespace: default # optional
+  limits:
+    - conditions: ["get_toy == 'yes'"]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []
+
+

Redis Cached

+

Options

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Option | Description
ttl | TTL for cached counters in milliseconds [default: 5000]
ratio | Ratio to apply to the TTL from Redis on cached counters [default:
flush-period | Flushing period for counters in milliseconds [default: 1000]
max-cached | Maximum amount of counters cached [default: 10000]
+
apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  storage:
+    redis-cached:
+      configSecretRef: # The secret reference storing the URL for Redis
+        name: redisconfig
+        namespace: default # optional
+      options: # Every option is optional
+        ttl: 1000
+
+  limits:
+    - conditions: ["get_toy == 'yes'"]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador-operator/index.html b/limitador-operator/index.html new file mode 100644 index 00000000..3c25af49 --- /dev/null +++ b/limitador-operator/index.html @@ -0,0 +1,2144 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Limitador Operator

+

License +codecov

+

Overview

+

The Operator to manage Limitador deployments.

+

CustomResourceDefinitions

+
    +
  • Limitador, which defines a desired Limitador deployment.
  • +
+

Limitador CRD

+

Limitador v1alpha1 API reference

+

Example:

+
---
+apiVersion: limitador.kuadrant.io/v1alpha1
+kind: Limitador
+metadata:
+  name: limitador-sample
+spec:
+  listener:
+    http:
+      port: 8080
+    grpc:
+      port: 8081
+  limits:
+    - conditions: ["get_toy == 'yes'"]
+      max_value: 2
+      namespace: toystore-app
+      seconds: 30
+      variables: []
+
+

Features

+ +

Contributing

+

The Development guide describes how to build the operator and +how to test your changes before submitting a patch or opening a PR.

+

Join us on kuadrant.slack.com +for live discussions about the roadmap and more.

+

Licensing

+

This software is licensed under the Apache 2.0 license.

+

See the LICENSE and NOTICE files that should have been provided along with this software for details.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/doc/how-it-works/index.html b/limitador/doc/how-it-works/index.html new file mode 100644 index 00000000..a8420fe5 --- /dev/null +++ b/limitador/doc/how-it-works/index.html @@ -0,0 +1,2084 @@ + + + + + + + + + + + + + + + + + + + + + + + + How it works - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

How it works

+ +

How it works

+

Limitador ensures that the most restrictive limit configuration will apply.

+

Limitador will try to match each incoming descriptor against the conditions and variables of the counters in the same namespace. The namespace for the descriptors is defined by the domain field, whereas for the rate limit configuration the namespace field is used. For each matching counter, the counter is increased and the limits checked.

+

One example to illustrate:

+

Let's say we have 1 rate limit configuration (one counter per config):

+
conditions: ["KEY_A == 'VALUE_A'"]
+max_value: 1
+seconds: 60
+variables: []
+namespace: example.org
+
+

Limitador receives one descriptor with two entries:

+
domain: example.org
+descriptors:
+  - entries:
+    - KEY_A: VALUE_A
+    - OTHER_KEY: OTHER_VALUE
+
+

The counter's condition will match. Then, the counter will be increased and the limit checked. +If the limit is exceeded, the request will be rejected with 429 Too Many Requests, +otherwise accepted.

+

Note that the counter is being activated even though it does not match all the entries of the +descriptor. The same rule applies for the variables field.

+

Currently, the implementation of condition only allows for the equal (==) and not equal (!=) operators. More operators will be implemented based on the use cases for them.

+

The variables field is a list of keys. The matching rule is defined simply as the existence of descriptor entries with the same keys. If variables is variables: [A, B, C], a descriptor matches if it has at least three entries with those same A, B, C keys.

+

A few examples to illustrate.

+

Having the following descriptors:

+
domain: example.org
+descriptors:
+  - entries:
+    - KEY_A: VALUE_A
+    - OTHER_KEY: OTHER_VALUE
+
+

the following counters would not be activated.

+

conditions: ["KEY_B == 'VALUE_B'"]
+max_value: 1
+seconds: 60
+variables: []
+namespace: example.org
+
+Reason: the condition's key (KEY_B) does not exist among the descriptor entries

+

conditions:
+  - "KEY_A == 'VALUE_A'"
+  - "OTHER_KEY == 'WRONG_VALUE'"
+max_value: 1
+seconds: 60
+variables: []
+namespace: example.org
+
+Reason: not all the conditions match

+

conditions: []
+max_value: 1
+seconds: 60
+variables: ["MY_VAR"]
+namespace: example.org
+
+Reason: the MY_VAR variable does not exist among the descriptor entries

+

conditions: ["KEY_B == 'VALUE_B'"]
+max_value: 1
+seconds: 60
+variables: ["MY_VAR"]
+namespace: example.org
+
+Reason: Both variables and conditions must match. In this particular case, only conditions match
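For contrast, a counter that would be activated by the descriptor above (its condition matches, and an entry with the OTHER_KEY variable exists, qualifying the counter per OTHER_KEY's value):

conditions: ["KEY_A == 'VALUE_A'"]
max_value: 1
seconds: 60
variables: ["OTHER_KEY"]
namespace: example.org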

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/doc/migrations/conditions/index.html b/limitador/doc/migrations/conditions/index.html new file mode 100644 index 00000000..df2dadfd --- /dev/null +++ b/limitador/doc/migrations/conditions/index.html @@ -0,0 +1,2032 @@ + + + + + + + + + + + + + + + + + + + + New condition syntax - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

New condition syntax

+

With limitador-server version 1.0.0 (and the limitador crate version 0.3.0), the syntax for conditions within +limit definitions has changed.

+

Changes

+

The new syntax

+

The new syntax formalizes which part of an expression is the identifier and which is the value to test against. Identifiers are simple string values, while string literals are demarcated by single quotes (') or double quotes ("), so that foo == " bar" now makes it explicit that the value is prefixed with a space character.

+

A few remarks (see the examples below):
- Only string values are supported, as that's what they really are
- There is no escape character sequence supported in string literals
- A new operator has been added, !=

+

The issue with the deprecated syntax

+

The previous syntax wouldn't differentiate between values and the identifier, so that foo == bar was valid. In this case foo was the identifier of the variable, while bar was the value to evaluate it against. Whitespace before and after the operator == was equally significant, so that foo == bar would test for a foo variable being equal to bar, where the trailing whitespace after the identifier and the one prefixing the value would have been evaluated.

+

Server binary users

+

The server still allows for the deprecated syntax, but warns about its usage. You can easily migrate your limits file, +using the following command:

+
limitador-server --validate old_limits.yaml > updated_limits.yaml
+
+

This should output Deprecated syntax for conditions corrected! to stderr, while stdout will be the limits using the new syntax. It is recommended that you manually verify the resulting LIMITS_FILE.

+

Crate users

+

A feature lenient_conditions has been added, which lets you use the syntax used in previous version of the crate. +The function limitador::limit::check_deprecated_syntax_usages_and_reset() lets you verify if the deprecated syntax +has been used as limit::Limits are created with their condition strings using the deprecated syntax.

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/doc/server/configuration/index.html b/limitador/doc/server/configuration/index.html new file mode 100644 index 00000000..94c5a24b --- /dev/null +++ b/limitador/doc/server/configuration/index.html @@ -0,0 +1,2520 @@ + + + + + + + + + + + + + + + + + + + + Limitador configuration - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Limitador configuration

+

Command line configuration

+

The preferred way of starting and configuring the Limitador server is using the command line:

+
Rate Limiting Server
+
+Usage: limitador-server [OPTIONS] <LIMITS_FILE> [STORAGE]
+
+STORAGES:
+  memory        Counters are held in Limitador (ephemeral)
+  disk          Counters are held on disk (persistent)
+  redis         Uses Redis to store counters
+  redis_cached  Uses Redis to store counters, with an in-memory cache
+
+Arguments:
+  <LIMITS_FILE>  The limit file to use
+
+Options:
+  -b, --rls-ip <ip>
+          The IP to listen on for RLS [default: 0.0.0.0]
+  -p, --rls-port <port>
+          The port to listen on for RLS [default: 8081]
+  -B, --http-ip <http_ip>
+          The IP to listen on for HTTP [default: 0.0.0.0]
+  -P, --http-port <http_port>
+          The port to listen on for HTTP [default: 8080]
+  -l, --limit-name-in-labels
+          Include the Limit Name in prometheus label
+  -v...
+          Sets the level of verbosity
+      --validate
+          Validates the LIMITS_FILE and exits
+  -H, --rate-limit-headers <rate_limit_headers>
+          Enables rate limit response headers [default: NONE] [possible values: NONE, DRAFT_VERSION_03]
+  -h, --help
+          Print help
+  -V, --version
+          Print version
+
+

The values used are authoritative over any environment variables independently set.

+

Limit definitions

+

The LIMITS_FILE provided is the source of truth for all the limits that will be enforced. The file location will be monitored by the server for any changes and hot reloaded. If the changes are invalid, they will be ignored on hot reload, or the server will fail to start.

+

The LIMITS_FILE's format

+

When starting the server, you point it to a LIMITS_FILE, which is expected to be a yaml file with an array of +limit definitions, with the following format:

+
---
+"$schema": http://json-schema.org/draft-04/schema#
+type: object
+properties:
+  name:
+    type: string
+  namespace:
+    type: string
+  seconds:
+    type: integer
+  max_value:
+    type: integer
+  conditions:
+    type: array
+    items:
+      - type: string
+  variables:
+    type: array
+    items:
+      - type: string
+required:
+  - namespace
+  - seconds
+  - max_value
+  - conditions
+  - variables
+
+

Here is an example of such a limit definition:

+
namespace: example.org
+max_value: 10
+seconds: 60
+conditions:
+  - "req.method == 'GET'"
+variables:
+  - user_id
+
+
    +
  • namespace namespaces the limit, will generally be the domain, see here
  • +
  • seconds is the duration for which the limit applies, in seconds: e.g. 60 is a span of time of one minute
  • +
  • max_value is the actual limit, e.g. 100 would limit to 100 requests
  • +
  • name lets the user optionally name the limit
  • +
  • variables is an array of variables, which once resolved, will be used to qualify counters for the limit, + e.g. api_key to limit per api keys
  • +
  • conditions is an array of conditions, which once evaluated will decide whether to apply the limit or not
  • +
+

condition syntax

+

Each condition is an expression producing a boolean value (true or false). All conditions must evaluate to +true for the limit to be applied on a request.

+

Expressions follow the following syntax: $IDENTIFIER $OP $STRING_LITERAL, where:

+
    +
  • $IDENTIFIER will be used to resolve the value at evaluation time, e.g. role
  • +
  • $OP is an operator, either == or !=
  • +
  • $STRING_LITERAL is a literal string value, " or ' demarcated, e.g. "admin"
  • +
+

So role != "admin" would apply the limit on requests from all users but admins.

+

Counter storages

+

Limitador will load all the limit definitions from the LIMITS_FILE and keep these in memory. To enforce these +limits, Limitador needs to track requests in the form of counters. There would be at least one counter per limit, but +that number grows when variables are used to qualify counters per some arbitrary values.

+

memory

+

As the name implies, Limitador will keep all counters in memory. This yields the best results in terms of latency as well as accuracy. By default, only up to 1000 "concurrent" counters will be kept around, evicting the oldest entries. "Concurrent" in this context means counters that need to exist at the "same time", based on the period of the limit, as "expired" counters are discarded.

+

This storage is ephemeral: if the process is restarted, all the counters are lost, effectively "resetting" all the limits as if no traffic had been rate limited. This can be fine for short-lived limits, less so for longer-lived ones.

+

redis

+

When you want persistence of your counters, such as for disaster recovery or across restarts, using redis will store the counters in a Redis instance at the provided URL. Increments to individual counters are made within Redis itself, providing accuracy over these; races can still occur, though, when multiple Limitador servers are used against a single Redis and "stacked" limits (i.e. over different periods) are in play. Latency is also impacted, as it results in one additional hop to talk to Redis and maintain the counters.

+
Uses Redis to store counters
+
+Usage: limitador-server <LIMITS_FILE> redis <URL>
+
+Arguments:
+  <URL>  Redis URL to use
+
+Options:
+  -h, --help  Print help
+
+

redis_cached

+

In order to avoid some communication overhead to Redis, redis_cached adds an in-memory caching layer within the Limitador servers. This lowers the latency, but sacrifices some accuracy as it will not only cache counters, but also coalesce counter updates to Redis over time. See this configuration option for more information.

+
Uses Redis to store counters, with an in-memory cache
+
+Usage: limitador-server <LIMITS_FILE> redis_cached [OPTIONS] <URL>
+
+Arguments:
+  <URL>  Redis URL to use
+
+Options:
+      --ttl <TTL>             TTL for cached counters in milliseconds [default: 5000]
+      --ratio <ratio>         Ratio to apply to the TTL from Redis on cached counters [default: 10000]
+      --flush-period <flush>  Flushing period for counters in milliseconds [default: 1000]
+      --max-cached <max>      Maximum amount of counters cached [default: 10000]
+  -h, --help                  Print help
+
+

disk

+

Disk storage using RocksDB. Counters are held on disk (persistent).

+
Counters are held on disk (persistent)
+
+Usage: limitador-server <LIMITS_FILE> disk [OPTIONS] <PATH>
+
+Arguments:
+  <PATH>  Path to counter DB
+
+Options:
+      --optimize <OPTIMIZE>  Optimizes either to save disk space or higher throughput [default: throughput] [possible values: throughput, disk]
+  -h, --help                 Print help
+
+

infinispan optional storage - experimental

+

The default binary will not support Infinispan as a storage backend for counters. If you +want to give it a try, you would need to build your own binary of the server using:

+
cargo build --release --features=infinispan
+
+

This will add infinispan to the supported STORAGES.

+
USAGE:
+    limitador-server <LIMITS_FILE> infinispan [OPTIONS] <URL>
+
+ARGS:
+    <URL>    Infinispan URL to use
+
+OPTIONS:
+    -n, --cache-name <cache name>      Name of the cache to store counters in [default: limitador]
+    -c, --consistency <consistency>    The consistency to use to read from the cache [default:
+                                       Strong] [possible values: Strong, Weak]
+    -h, --help                         Print help information
+
+

For an in-depth coverage of the different topologies supported and how they affect the behavior, see the +topologies' document.

+

Configuration using environment variables

+

The Limitador server has some options that can be configured with environment variables. These will override the +default values the server uses. Any argument used when starting the server will prevail over the +environment variables.

+

ENVOY_RLS_HOST

+
    +
  • Host where the Envoy RLS server listens.
  • +
  • Optional. Defaults to "0.0.0.0".
  • +
  • Format: string.
  • +
+

ENVOY_RLS_PORT

+
    +
  • Port where the Envoy RLS server listens.
  • +
  • Optional. Defaults to 8081.
  • +
  • Format: integer.
  • +
+

HTTP_API_HOST

+
    +
  • Host where the HTTP server listens.
  • +
  • Optional. Defaults to "0.0.0.0".
  • +
  • Format: string.
  • +
+

HTTP_API_PORT

+
    +
  • Port where the HTTP API listens.
  • +
  • Optional. Defaults to 8080.
  • +
  • Format: integer.
  • +
+

LIMITS_FILE

+
    +
  • YAML file that contains the limits to create when Limitador boots. If the +limits specified already have counters associated, Limitador will not delete them. +Changes to the file will be picked up by the running server.
  • +
  • Required. No default
  • +
  • Format: string, file path.
  • +
+

LIMIT_NAME_IN_PROMETHEUS_LABELS

+
    +
  • Enables using limit names as labels in Prometheus metrics. This is disabled by +default because for a few limits it should be fine, but it could become a +problem when defining lots of limits. See the caution note in the Prometheus +docs
  • +
  • Optional. Disabled by default.
  • +
  • Format: bool, set to "1" to enable.
  • +
+

REDIS_LOCAL_CACHE_ENABLED

+
    +
  • Enables a storage implementation that uses Redis, but also caches some data in +memory. The idea is to improve throughput and latencies by caching the counters +in memory to reduce the number of accesses to Redis. To achieve that, this mode +sacrifices some rate-limit accuracy. This mode does two things:
      +
    • Batches counter updates. Instead of updating the counters on every +request, it updates them in memory and commits them to Redis in batches. The +flushing interval can be configured with the +REDIS_LOCAL_CACHE_FLUSHING_PERIOD_MS +env. The trade-off is that when running several instances of Limitador, +other instances will not become aware of the counter updates until they're +committed to Redis.
    • +
  • Caches counters. Instead of fetching the value of a counter every time it's needed, the value is cached for a configurable period. The trade-off is that when running several instances of Limitador, an instance will not become aware of the counter updates other instances make while the value is cached. When a counter is already at 0 (limit exceeded), it's cached until it expires in Redis. In this case, no matter what other instances do, we know that the quota will not be reestablished until the key expires in Redis, so rate-limit accuracy is not affected. When a counter still has some quota remaining the situation is different, which is why we can tune how long it will be cached. The formula is as follows: MIN(ttl_in_redis/REDIS_LOCAL_CACHE_TTL_RATIO_CACHED_COUNTERS, REDIS_LOCAL_CACHE_MAX_TTL_CACHED_COUNTERS_MS). For example, let's imagine that the current TTL (time remaining until the limit resets) in Redis for a counter is 10 seconds, we set the ratio to 2, and the max time to 30s. In this case, the counter will be cached for 5s (min(10/2, 30)). During those 5s, Limitador will not fetch the value of that counter from Redis, so it will answer faster, but it will also miss the updates done by other instances, so it can go over the limits in that 5s interval. (A combined invocation sketch follows this list.)
    • +
    +
  • +
  • Optional. Disabled by default.
  • +
  • Format: set to "1" to enable.
  • +
  • Note: "REDIS_URL" needs to be set.
  • +
+
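Putting these together, a sketch of starting the server with the cached Redis mode driven purely by environment variables (assuming env-driven configuration as described in this section; adjust the URL and flushing period to your setup):

REDIS_URL="redis://127.0.0.1:6379" \
REDIS_LOCAL_CACHE_ENABLED=1 \
REDIS_LOCAL_CACHE_FLUSHING_PERIOD_MS=2000 \
limitador-server my_limits.yaml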

REDIS_LOCAL_CACHE_FLUSHING_PERIOD_MS

+
    +
  • Used to configure the local cache when using Redis. See +REDIS_LOCAL_CACHE_ENABLED. This env only applies +when "REDIS_LOCAL_CACHE_ENABLED" == 1.
  • +
  • Optional. Defaults to 1000.
  • +
  • Format: integer. Duration in milliseconds.
  • +
+

REDIS_LOCAL_CACHE_MAX_TTL_CACHED_COUNTERS_MS

+
    +
  • Used to configure the local cache when using Redis. See +REDIS_LOCAL_CACHE_ENABLED. This env only applies +when "REDIS_LOCAL_CACHE_ENABLED" == 1.
  • +
  • Optional. Defaults to 5000.
  • +
  • Format: integer. Duration in milliseconds.
  • +
+

REDIS_LOCAL_CACHE_TTL_RATIO_CACHED_COUNTERS

+
    +
  • Used to configure the local cache when using Redis. See +REDIS_LOCAL_CACHE_ENABLED. This env only applies +when "REDIS_LOCAL_CACHE_ENABLED" == 1.
  • +
  • Optional. Defaults to 10.
  • +
  • Format: integer.
  • +
+

REDIS_URL

+
    +
  • Redis URL. Required only when you want to use Redis to store the limits.
  • +
  • Optional. By default, Limitador stores the limits in memory and does not +require Redis.
  • +
  • Format: string, URL in the format of "redis://127.0.0.1:6379".
  • +
+

RUST_LOG

+
    +
  • Defines the log level.
  • +
  • Optional. Defaults to "error".
  • +
  • Format: enum: "debug", "error", "info", "warn", or "trace".
  • +
+

When built with the infinispan feature - experimental

+

INFINISPAN_CACHE_NAME

+
    +
  • The name of the Infinispan cache that Limitador will use to store limits and + counters. This variable applies only when INFINISPAN_URL is + set.
  • +
  • Optional. By default, Limitador will use a cache called "limitador".
  • +
  • Format: string.
  • +
+

INFINISPAN_COUNTERS_CONSISTENCY

+
    +
  • Defines the consistency mode for the Infinispan counters created by Limitador. + This variable applies only when INFINISPAN_URL is set.
  • +
  • Optional. Defaults to "strong".
  • +
  • Format: enum: "Strong" or "Weak".
  • +
+

INFINISPAN_URL

+
    +
  • Infinispan URL. Required only when you want to use Infinispan to store the + limits.
  • +
  • Optional. By default, Limitador stores the limits in memory and does not + require Infinispan.
  • +
  • Format: URL, in the format of http://username:password@127.0.0.1:11222.
  • +
+

RATE_LIMIT_HEADERS

+
    +
  • Enables rate limit response headers. Only supported by the RLS server.
  • +
  • Optional. Defaults to "NONE".
  • +
  • Must be one of:
  • +
  • "NONE" - Does not add any additional headers to the http response.
  • +
  • "DRAFT_VERSION_03". Adds response headers per https://datatracker.ietf.org/doc/id/draft-polli-ratelimit-headers-03.html
  • +
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/doc/topologies/index.html b/limitador/doc/topologies/index.html new file mode 100644 index 00000000..60c36dfd --- /dev/null +++ b/limitador/doc/topologies/index.html @@ -0,0 +1,2136 @@ + + + + + + + + + + + + + + + + + + + + + + + + Topologies - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Deployment topologies

+

In-memory

+

Redis

+

Redis active-active storage

+

The RedisLabs version of Redis supports active-active +replication. +Limitador is compatible with that deployment mode, but there are a few things to +take into account regarding limit accuracy.

+

Considerations

+

With an active-active deployment, the data needs to be replicated between +instances. An update in an instance takes a short time to be reflected in the +other. That time lag depends mainly on the network speed between the Redis +instances, and it affects the accuracy of the rate-limiting performed by +Limitador because it can go over limits while the updates of the counters are +being replicated.

+

The impact of that greatly depends on the use case. With limits of a few +seconds, and a low number of hits, we could easily go over limits. On the other +hand, if we have defined limits with a high number of hits and a long period, +the effect will be basically negligible. For example, if we define a limit of +one hour, and we know that the data takes around one second to be replicated, +the accuracy loss is going to be negligible.

+

Set up

+

In order to try active-active replication, you can follow this tutorial from +RedisLabs.

+

Disk

+

Disk storage using RocksDB. Counters are held on disk (persistent).

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/index.html b/limitador/index.html new file mode 100644 index 00000000..50ffdd59 --- /dev/null +++ b/limitador/index.html @@ -0,0 +1,2182 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Limitador

+

Limitador GH Workflow +docs.rs +Crates.io +Docker Repository on Quay +codecov

+

Limitador is a generic rate-limiter written in Rust. It can be used as a +library, or as a service. The service exposes HTTP endpoints to apply and observe +limits. Limitador can be used with Envoy because it also exposes a grpc service, on a different +port, that implements the Envoy Rate Limit protocol (v3).

+ +

Limitador is under active development, and its API has not been stabilized yet.

+

Getting started

+ +

Rust library

+

Add this to your Cargo.toml: +

[dependencies]
+limitador = { version = "0.3.0" }
+

+

For more information, see the README of the crate

+

Server

+

Run with Docker (replace v1.0.0 with the version you want):

docker run --rm --net=host -it quay.io/kuadrant/limitador:v1.0.0
+

+

Run locally: +

cargo run --release --bin limitador-server -- --help
+

+

Refer to the help message on how to start up the server. More information is available in the server's README.md

+

Development

+

Build

+
cargo build
+
+

Run the tests

+

Some tests need a Redis deployed at localhost:6379. You can run it in Docker with:

docker run --rm -p 6379:6379 -it redis
+

+

Some tests need an Infinispan deployed at localhost:11222. You can run it in Docker with:

docker run --rm -p 11222:11222 -it -e USER=username -e PASS=password infinispan/server:11.0.9.Final
+

+

Then, run the tests:

+
cargo test --all-features
+
+

or you can run tests disabling the "redis storage" feature: +

cd limitador; cargo test --no-default-features
+

+

License

+

Apache 2.0 License

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/limitador-server/docs/http_server_spec.json b/limitador/limitador-server/docs/http_server_spec.json new file mode 100644 index 00000000..1a2631e8 --- /dev/null +++ b/limitador/limitador-server/docs/http_server_spec.json @@ -0,0 +1,301 @@ +{ + "swagger": "2.0", + "definitions": { + "CheckAndReportInfo": { + "type": "object", + "properties": { + "delta": { + "type": "integer", + "format": "int64" + }, + "namespace": { + "type": "string" + }, + "values": { + "type": "object", + "additionalProperties": { + "type": "string" + } + } + }, + "required": [ + "delta", + "namespace", + "values" + ] + }, + "Counter": { + "type": "object", + "properties": { + "expires_in_seconds": { + "type": "integer", + "format": "int64" + }, + "limit": { + "type": "object", + "properties": { + "conditions": { + "type": "array", + "items": { + "type": "string" + } + }, + "max_value": { + "type": "integer", + "format": "int64" + }, + "name": { + "type": "string" + }, + "namespace": { + "type": "string" + }, + "seconds": { + "type": "integer", + "format": "int64" + }, + "variables": { + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "conditions", + "max_value", + "namespace", + "seconds", + "variables" + ] + }, + "remaining": { + "type": "integer", + "format": "int64" + }, + "set_variables": { + "type": "object", + "additionalProperties": { + "type": "string" + } + } + }, + "required": [ + "limit", + "set_variables" + ] + }, + "Limit": { + "type": "object", + "properties": { + "conditions": { + "type": "array", + "items": { + "type": "string" + } + }, + "max_value": { + "type": "integer", + "format": "int64" + }, + "name": { + "type": "string" + }, + "namespace": { + "type": "string" + }, + "seconds": { + "type": "integer", + "format": "int64" + }, + "variables": { + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "conditions", + "max_value", + "namespace", + "seconds", + "variables" + ] + } + }, + "paths": { + "/check": { + "post": { + "responses": { + "200": { + "description": "OK", + "schema": {} + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "body", + "name": "body", + "required": true, + "schema": { + "$ref": "#/definitions/CheckAndReportInfo" + } + } + ] + } + }, + "/check_and_report": { + "post": { + "responses": { + "200": { + "description": "OK", + "schema": {} + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "body", + "name": "body", + "required": true, + "schema": { + "$ref": "#/definitions/CheckAndReportInfo" + } + } + ] + } + }, + "/counters/{namespace}": { + "get": { + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/Counter" + } + } + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "path", + "name": "namespace", + "required": true, + "type": "string" + } + ] + } + }, + "/limits/{namespace}": { + "get": { + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/Limit" + } + } + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "path", + "name": 
"namespace", + "required": true, + "type": "string" + } + ] + }, + "delete": { + "responses": { + "200": { + "description": "OK", + "schema": {} + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "path", + "name": "namespace", + "required": true, + "type": "string" + } + ] + } + }, + "/report": { + "post": { + "responses": { + "200": { + "description": "OK", + "schema": {} + }, + "429": { + "description": "Too Many Requests" + }, + "500": { + "description": "Internal Server Error" + } + }, + "parameters": [ + { + "in": "body", + "name": "body", + "required": true, + "schema": { + "$ref": "#/definitions/CheckAndReportInfo" + } + } + ] + } + }, + "/status": { + "get": { + "responses": { + "200": { + "description": "OK", + "schema": {} + } + } + } + } + }, + "info": { + "version": "1.0.0", + "title": "Limitador server endpoints" + } +} diff --git a/limitador/limitador-server/docs/sandbox/index.html b/limitador/limitador-server/docs/sandbox/index.html new file mode 100644 index 00000000..0f9a68ec --- /dev/null +++ b/limitador/limitador-server/docs/sandbox/index.html @@ -0,0 +1,2189 @@ + + + + + + + + + + + + + + + + + + + + + + + + Sandbox - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Sandbox

+ +

Testing Environment

+

Requirements

+
    +
  • docker
  • +
  • docker-compose
  • +
+

Setup

+

Clone the project

+
git clone https://github.com/Kuadrant/limitador.git
+cd limitador/limitador-server/sandbox
+
+

Check out make help for all the targets.

+

Deployment options

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Limitador's configuration | Command | Info
In-memory configuration | make deploy-in-memory | Counters are held in Limitador (ephemeral)
Redis | make deploy-redis | Uses Redis to store counters
Redis Cached | make deploy-redis-cached | Uses Redis to store counters, with an in-memory cache
Infinispan | make deploy-infinispan | Uses Infinispan to store counters
+

Limitador's admin HTTP endpoint

+
curl -i http://127.0.0.1:18080/limits/test_namespace
+
+

Downstream traffic

+

Upstream service implemented by httpbin.org

+
curl -i -H "Host: example.com" http://127.0.0.1:18000/get
+
+

Limitador Image

+

By default, the sandbox will run Limitador's limitador-testing:latest image.

+

Building limitador-testing:latest image

+

You can easily build Limitador's image from the current workspace code base with:

+
make build
+
+

The image will be tagged with limitador-testing:latest

+

Using custom Limitador's image

+

The LIMITADOR_IMAGE environment variable overrides the default image. For example:

+
make deploy-in-memory LIMITADOR_IMAGE=quay.io/kuadrant/limitador:latest
+
+

Tear Down

+
make tear-down
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/limitador-server/index.html b/limitador/limitador-server/index.html new file mode 100644 index 00000000..c7b1284b --- /dev/null +++ b/limitador/limitador-server/index.html @@ -0,0 +1,2085 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Limitador (server)

+

Docker Repository on Quay

+

By default, Limitador starts the HTTP server on localhost:8080 and the gRPC service that implements the Envoy Rate Limit protocol on localhost:8081. That can be configured with these ENVs: ENVOY_RLS_HOST, ENVOY_RLS_PORT, HTTP_API_HOST, and HTTP_API_PORT.

+

Or using the command line arguments:

+
Rate Limiting Server
+
+Usage: limitador-server [OPTIONS] <LIMITS_FILE> [STORAGE]
+
+STORAGES:
+  memory        Counters are held in Limitador (ephemeral)
+  disk          Counters are held on disk (persistent)
+  redis         Uses Redis to store counters
+  redis_cached  Uses Redis to store counters, with an in-memory cache
+
+Arguments:
+  <LIMITS_FILE>  The limit file to use
+
+Options:
+  -b, --rls-ip <ip>
+          The IP to listen on for RLS [default: 0.0.0.0]
+  -p, --rls-port <port>
+          The port to listen on for RLS [default: 8081]
+  -B, --http-ip <http_ip>
+          The IP to listen on for HTTP [default: 0.0.0.0]
+  -P, --http-port <http_port>
+          The port to listen on for HTTP [default: 8080]
+  -l, --limit-name-in-labels
+          Include the Limit Name in prometheus label
+  -v...
+          Sets the level of verbosity
+      --validate
+          Validates the LIMITS_FILE and exits
+  -H, --rate-limit-headers <rate_limit_headers>
+          Enables rate limit response headers [default: NONE] [possible values: NONE, DRAFT_VERSION_03]
+  -h, --help
+          Print help
+  -V, --version
+          Print version
+
+

When using environment variables, these will override the defaults, while environment variables are themselves overridden by the command line arguments provided. See the individual STORAGES help for more options relative to each of the storages.

+

The OpenAPI spec of the HTTP service is +here.

+

Limitador has to be started with a YAML file that has some limits defined. There's an example +file that allows 10 requests per minute +and per user_id when the HTTP method is "GET" and 5 when it is a "POST". You can +run it with Docker (replace latest with the version you want): +

docker run --rm --net=host -it -v $(pwd)/examples/limits.yaml:/home/limitador/my_limits.yaml:ro quay.io/kuadrant/limitador:latest limitador-server /home/limitador/my_limits.yaml
+

+

You can also use the YAML file when running locally: +

cargo run --release --bin limitador-server ./examples/limits.yaml
+

+

If you want to use Limitador with Envoy, there's a minimal Envoy config for +testing purposes here. The config +forwards the "userid" header and the request method to Limitador. It assumes +that there's an upstream API deployed on port 1323. You can use +echo, for example.

+

Limitador has several options that can be configured via ENV. This +doc specifies them.

+

Limits storage

+

Limitador can store its limits and counters in memory, on disk, or in Redis. In-memory is faster, but the limits are applied per instance. When using Redis, multiple instances of Limitador can share the same limits, but it's slower.
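For example, to share counters across Limitador instances via Redis, following the storages usage shown above:

limitador-server ./examples/limits.yaml redis redis://127.0.0.1:6379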

+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/limitador/limitador/index.html b/limitador/limitador/index.html new file mode 100644 index 00000000..a801498c --- /dev/null +++ b/limitador/limitador/index.html @@ -0,0 +1,2061 @@ + + + + + + + + + + + + + + + + + + + + + + + + Crate - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Limitador (library)

+

Crates.io +docs.rs

+

An embeddable rate-limiter library supporting in-memory, Redis and Infinispan data stores. +Limitador can also be compiled to WebAssembly.

+

For the complete documentation of the crate's API, please refer to docs.rs

+

Features

+
    +
  • redis_storage: support for using Redis as the data storage backend.
  • +
  • infinispan_storage: support for using Infinispan as the data storage backend.
  • +
  • lenient_conditions: support for the deprecated syntax of Conditions
  • +
  • default: redis_storage.
  • +
+
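For example, to opt into the Infinispan backend alongside the defaults, a Cargo.toml entry along these lines should work (version taken from the WebAssembly example below):

[dependencies]
limitador = { version = "0.3.0", features = ["infinispan_storage"] }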

WebAssembly support

+

To use Limitador in a project that compiles to WASM, there are some features that need to be disabled. Add this to your Cargo.toml instead:

+
[dependencies]
+limitador = { version = "0.3.0", default-features = false }
+

Contributing


Debugging in VS code

+

Introduction

+

The following document will show how to set up debugging for the multicluster gateway controller.

+

There is an included VSCode launch.json.

+

Starting the controller

+

Instead of starting the Gateway Controller via something like:

+
make build-controller install run-controller
+
+

You can now simply hit F5 in VSCode. The controller will launch with the following config:

+
{
+  "version": "0.2.0",
+  "configurations": [
+    {
+      "name": "Debug",
+      "type": "go",
+      "request": "launch",
+      "mode": "auto",
+      "program": "./cmd/controller/main.go",
+      "args": [
+        "--metrics-bind-address=:8080",
+        "--health-probe-bind-address=:8081"
+      ]
+    }
+  ]
+}
+
+

Running Debugger

+

VSCode Debugger 1

+

Debugging Tests

+

VSCode Debugger 2

Binary image assets added in this commit (under multicluster-gateway-controller/docs/demos/dns-policy/): Kuadrant - DNSPolicy (1).png, Kuadrant - DNSPolicy (2).png, Kuadrant - DNSPolicy (3).png, Kuadrant - DNSPolicy.png, default.png.

diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/cleanup.sh b/multicluster-gateway-controller/docs/demos/dns-policy/cleanup.sh
new file mode 100644 index 00000000..b507b42d

#!/bin/bash

kubectl --context kind-mgc-workload-2 delete -f resources/echo-app.yaml
kubectl --context kind-mgc-workload-1 delete -f resources/echo-app.yaml
kubectl --context kind-mgc-control-plane delete -f resources/echo-app.yaml

kubectl delete tlspolicy --all -A
sleep 2
kubectl delete dnspolicy --all -A
sleep 2
kubectl delete dnsrecords --all -A
kubectl delete gateways --all -A

kubectl delete -f resources/gateway_prod-web.yaml
kubectl delete -f ../../../hack/ocm/gatewayclass.yaml
kubectl delete -f resources/placement_http-gateway.yaml
kubectl delete -f resources/managed-cluster-set-binding_gateway-clusters.yaml
kubectl delete -f resources/managed-cluster-set_gateway-clusters.yaml
kubectl --context kind-mgc-control-plane delete -f resources/tlspolicy_prod-web.yaml

kubectl label managedcluster kind-mgc-control-plane ingress-cluster-
kubectl label managedcluster kind-mgc-workload-1 ingress-cluster-
kubectl label managedcluster kind-mgc-workload-2 ingress-cluster-

kubectl label managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-geo-code-
kubectl label managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code-
kubectl label managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code-

kubectl label managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-custom-weight-
kubectl label managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight-
kubectl label managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight-

Kuadrant DNSPolicy Demo

+

Goals

+
    +
  • Show changes in how MGC manages DNS resources through a direct attachment DNS policy
  • +
  • Show changes to the DNS Record structure
  • +
  • Show weighted load balancing strategy and how it can be configured
  • +
  • Show geo load balancing strategy and how it can be configured
  • +
+

Setup

+
# make local-setup OCM_SINGLE=true MGC_WORKLOAD_CLUSTERS_COUNT=2
+
+
./install.sh
+(export $(cat ./controller-config.env | xargs) && export $(cat ./aws-credentials.env | xargs) && make build-controller install run-controller)
+
+

Preamble

+

Three managed clusters labeled as ingress clusters +

kubectl get managedclusters --show-labels
+

+

Show managed zone +

kubectl get managedzones -n multi-cluster-gateways
+

+

Show gateway created on the hub +

kubectl get gateway -n multi-cluster-gateways
+
+Show gateways +
# Check gateways
+kubectl --context kind-mgc-control-plane get gateways -A
+kubectl --context kind-mgc-workload-1 get gateways -A
+kubectl --context kind-mgc-workload-2 get gateways -A
+

+

Show application deployed to each cluster +

curl -k -s -o /dev/null -w "%{http_code}\n" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.200.0'
+curl -k -s -o /dev/null -w "%{http_code}\n" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.201.0'
+curl -k -s -o /dev/null -w "%{http_code}\n" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.202.0'
+

+

Show status of gateway on the hub: +

kubectl get gateway prod-web -n multi-cluster-gateways -o=jsonpath='{.status}'
+

+

DNSPolicy using direct attachment

+

Explain the changes that have been made to DNS reconciliation: it now uses direct policy attachment, and a DNSPolicy must be created and attached to a target Gateway before any DNS updates will be made for that Gateway.

+

Show no dnsrecord +

kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways
+

+

Show no response for host +

# Warning, will cache for 5 mins!!!!!!
+curl -k https://bfa.jm.hcpapps.net
+

+

Show no dnspolicy +

kubectl --context kind-mgc-control-plane get dnspolicy -n multi-cluster-gateways
+

+

Create dnspolicy +

cat resources/dnspolicy_prod-web-default.yaml
+kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-default.yaml -n multi-cluster-gateways
+

+
# Check policy attachment
+kubectl --context kind-mgc-control-plane get gateway prod-web -n multi-cluster-gateways -o=jsonpath='{.metadata.annotations}'
+
+

Show dnsrecord created +

kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways
+

+

Show response for host +

curl -k https://bfa.jm.hcpapps.net
+

+

DNS Record Structure

+

Show the new record structure

+
kubectl get dnsrecord prod-web-api -n multi-cluster-gateways -o=jsonpath='{.spec.endpoints}'
+
+

Weighted loadbalancing by default

+

Show and update default weight in policy (show results in Route53)

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways
+

+

"A DNSPolicy with an empty loadBalancing spec, or with a loadBalancing.weighted.defaultWeight set and nothing else produces a set of records grouped and weighted to produce a Round Robin routing strategy where all target clusters will have an equal chance of being returned in DNS queries."

+

Custom Weighting

+

Edit dnsPolicy and add custom weights: +

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways
+

+
spec:
+  loadBalancing:
+    weighted:
+      custom:
+      - value: AWS
+        weight: 200
+      - value: GCP
+        weight: 10
+      defaultWeight: 100
+
+

Add custom weight labels +

kubectl get managedclusters --show-labels
+kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS
+kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP
+

+

Geo load balancing

+

Edit dnsPolicy and add default geo: +

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways
+

+
spec:
+  loadBalancing:
+    geo:
+      defaultGeo: US
+    weighted:
+      custom:
+      - value: AWS
+        weight: 20
+      - value: GCP
+        weight: 200
+      defaultWeight: 100
+
+

Add geo labels

kubectl get managedclusters --show-labels
+kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=FR
+kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=ES

+

Check out the DNS:

+

https://www.whatsmydns.net/#A/bfa.jm.hcpapps.net

+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy.tape b/multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy.tape new file mode 100644 index 00000000..d32ff520 --- /dev/null +++ b/multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy.tape @@ -0,0 +1,294 @@ +# This is a vhs (https://github.com/charmbracelet/vhs/) tape - for reproducable CLI recordings + +Output dnspolicy.mp4 +Set WindowBar Colorful +Set FontSize 25 +Set Width 1920 +Set Height 1080 +Set Framerate 24 + + + +Set Shell zsh + + +Type "kind get clusters" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl label managedcluster kind-mgc-workload-2 ingress-cluster=true" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl apply -f resources/managed-cluster-set_gateway-clusters.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl apply -f resources/managed-cluster-set-binding_gateway-clusters.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl apply -f resources/placement_http-gateway.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl create -f ../../../hack/ocm/gatewayclass.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl apply -f resources/gateway_prod-web.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl label gateway prod-web 'cluster.open-cluster-management.io/placement'='http-gateway' -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl --context kind-mgc-control-plane apply -f resources/tlspolicy_prod-web.yaml" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl get managedclusters --show-labels" +Sleep 500ms +Enter +Sleep 5s + + +Type "cat resources/echo-app.yaml | more" +Sleep 500ms +Enter +Sleep 10s +Type "q" +Enter + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl --context kind-mgc-control-plane apply -f resources/echo-app.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl --context kind-mgc-workload-1 apply -f resources/echo-app.yaml" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl --context kind-mgc-workload-2 apply -f resources/echo-app.yaml" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.200.0'" +Sleep 500ms +Enter +Sleep 5s + +Type "curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.201.0'" +Sleep 500ms +Enter +Sleep 5s + +Type "curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.202.0'" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl --context kind-mgc-control-plane get gateways -A" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl --context kind-mgc-workload-1 get gateways -A" +Sleep 500ms +Enter +Sleep 5s + +Type "kubectl --context kind-mgc-workload-2 get gateways -A" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl get gateway prod-web -n multi-cluster-gateways -o yaml | yq .status" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl --context kind-mgc-control-plane get 
dnsrecord -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 5s + +Type "cat resources/dnspolicy_prod-web-default.yaml" +Sleep 500ms +Enter +Sleep 10s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-default.yaml -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl get dnsrecord prod-web-api -n multi-cluster-gateways -o json | jq .spec.endpoints" +Sleep 500ms +Enter +Sleep 5s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "cat resources/dnspolicy_prod-web-weighted.yaml" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-weighted.yaml -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 10s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +Type "kubectl label --overwrite managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-custom-weight=AWS" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl get managedclusters --show-labels" +Sleep 500ms +Enter +Sleep 10s + +Hide +Type "clear" +Enter +Show +Sleep 500ms + +# Show AWS + +Type "kubectl label --overwrite managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-geo-code=ES" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=DE" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=US" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl get managedclusters --show-labels" +Sleep 500ms +Enter +Sleep 10s + +Type "cat resources/dnspolicy_prod-web-weighted-geo.yaml" +Sleep 500ms +Enter +Sleep 10s + +Type "kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-weighted-geo.yaml -n multi-cluster-gateways" +Sleep 500ms +Enter +Sleep 10s + + +# https://www.whatsmydns.net/#A/bfa.jm.hcpapps.net +# Bug: Most traffic should go to GCP (WL2, ES) diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/script.sh b/multicluster-gateway-controller/docs/demos/dns-policy/script.sh new file mode 100644 index 00000000..0316fd10 --- /dev/null +++ b/multicluster-gateway-controller/docs/demos/dns-policy/script.sh @@ -0,0 +1,100 @@ +# We use https://github.com/charmbracelet/vhs to record the terminal session +# For this demo, we have already setup 3 kind Kubernetes clusters. +# We used the Kuadrant quickstart script to set these up, and to install Kuadrant components and dependencies. 
+# You can run this too, by running the following: +# export MGC_WORKLOAD_CLUSTERS_COUNT=2; curl https://raw.githubusercontent.com/kuadrant/multicluster-gateway-controller/main/hack/quickstart-setup.sh | bash + + +kind get clusters +# We have got some local kind clusters: two workload clusters, one OCM Hub/Control Plane + +# First, let us label each of these clusters as ingress-clusters which we can place Gateways on +kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true +kubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true +kubectl label managedcluster kind-mgc-workload-2 ingress-cluster=true + + +# Next, create a ManagedClusterSet with OCM, specifiying a label selector to select the clusters we just labelled with ingress-cluster=true +kubectl apply -f resources/managed-cluster-set_gateway-clusters.yaml + +# Now we create a ManagedClusterSetBinding to link the ManagedClusterSet named gateway-clusters to the multi-cluster-gateways namespace +kubectl apply -f resources/managed-cluster-set-binding_gateway-clusters.yaml + +# Create a Placement for the ManagedClusterSet, for our 3 clusters +kubectl apply -f resources/placement_http-gateway.yaml + +# Create a GatewayClass resource, to specify that Gateways of this class will be managed by the Kuadrant multi-cluster gateway controller +kubectl create -f ../../../hack/ocm/gatewayclass.yaml + +# Create a Gateway, called `prod-web`, (bfa.jm.hcpapps.net) +kubectl apply -f resources/gateway_prod-web.yaml +# Associate the `prod-web` Gateway with the Placement we created earlier +kubectl label gateway prod-web "cluster.open-cluster-management.io/placement"="http-gateway" -n multi-cluster-gateways + +# We have already created several OCM resources, such as a ManagedClusterSet for our clusters a Placement for this ManagedClusterSet, and a GatewayClass resource for Kuadrant to utilise our multicluster-gateway-controller +# Create a TLSPolicy +kubectl --context kind-mgc-control-plane apply -f resources/tlspolicy_prod-web.yaml + +# Get our ManagedClusters +kubectl get managedclusters --show-labels + +# We have got an echo app, which we will deploy to each of our managed clusters +cat resources/echo-app.yaml + +# Deploy an echo app to mgc-control-plane, mgc-workload-1 and mgc-workload-2 +kubectl --context kind-mgc-control-plane apply -f resources/echo-app.yaml +kubectl --context kind-mgc-workload-1 apply -f resources/echo-app.yaml +kubectl --context kind-mgc-workload-2 apply -f resources/echo-app.yaml + +# Check the apps +curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.200.0' +curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.201.0' +curl -k -s -o /dev/null -w '%{http_code}\n' https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.202.0' + +# Check the Gateways +kubectl --context kind-mgc-control-plane get gateways -A +kubectl --context kind-mgc-workload-1 get gateways -A +kubectl --context kind-mgc-workload-2 get gateways -A + +# And their status +kubectl get gateway prod-web -n multi-cluster-gateways -o yaml | yq .status + +# Look at a simple, RR DNSPolicy +cat resources/dnspolicy_prod-web-default.yaml + +# Apply it +kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-default.yaml -n multi-cluster-gateways + +# Observe records created +kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways +kubectl get dnsrecord prod-web-api -n 
multi-cluster-gateways -o json | jq .spec.endpoints + +# Setup weighted DNS for specifically labeled clusters +cat resources/dnspolicy_prod-web-weighted.yaml +kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-weighted.yaml -n multi-cluster-gateways + +# Label the managedcluster clusters +kubectl label --overwrite managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-custom-weight=AWS +kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS +kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP + +# Show our labels +kubectl get managedclusters --show-labels + +# Show AWS + +# Next: Geo + Weighted +# Label the cluster geos +kubectl label --overwrite managedcluster kind-mgc-control-plane kuadrant.io/lb-attribute-geo-code=ES +kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=DE +kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=US + +# Show the labels +kubectl get managedclusters --show-labels + +# Show & apply the Geo + Weighted policy +cat resources/dnspolicy_prod-web-weighted-geo.yaml +kubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-weighted-geo.yaml -n multi-cluster-gateways + + +# Show Geo DNS working via https://www.whatsmydns.net/#A/bfa.jm.hcpapps.net diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo-dnschecker.png b/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo-dnschecker.png new file mode 100644 index 00000000..1d88294f Binary files /dev/null and b/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo-dnschecker.png differ diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo.png b/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo.png new file mode 100644 index 00000000..c3cbc03a Binary files /dev/null and b/multicluster-gateway-controller/docs/demos/dns-policy/weighted-geo.png differ diff --git a/multicluster-gateway-controller/docs/demos/dns-policy/weighted.png b/multicluster-gateway-controller/docs/demos/dns-policy/weighted.png new file mode 100644 index 00000000..8e6f7d80 Binary files /dev/null and b/multicluster-gateway-controller/docs/demos/dns-policy/weighted.png differ diff --git a/multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/index.html b/multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/index.html new file mode 100644 index 00000000..91937c95 --- /dev/null +++ b/multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/index.html @@ -0,0 +1,2172 @@ + + + + + + + + + + + + + + + + + + + + + + + + DNS Health Checks - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

DNS Health Checks

+

DNS Health Checks are a crucial tool for ensuring the availability and reliability of your multi-cluster applications. Kuadrant offers a powerful feature known as DNSPolicy, which allows you to configure and verify health checks for DNS endpoints. This guide provides a comprehensive overview of how to set up, utilize, and understand DNS health checks.

+

What are DNS Health Checks?

+

DNS Health Checks are a way to assess the availability and health of DNS endpoints associated with your applications. These checks involve sending periodic requests to the specified endpoints to determine their responsiveness and health status. By configuring these checks via the DNSPolicy, you can ensure that your applications are correctly registered, operational, and serving traffic as expected.

+

Configuration of Health Checks

+
+

Note: By default, health checks occur at 60-second intervals.

+
+

To configure a DNS health check, you need to specify the healthCheck section of the DNSPolicy. The key part of this configuration is the healthCheck section, which includes important properties such as:

+
    +
  • allowInsecureCertificates: Added for development environments, allows health probes to not fail when finding an invalid (e.g. self-signed) certificate.
  • +
  • additionalHeadersRef: This refers to a secret that holds extra headers, often containing important elements like authentication tokens (see the example secret after this list).
  • +
  • endpoint: This is the path where the health checks take place, usually represented as '/healthz' or something similar.
  • +
  • expectedResponses: This setting lets you specify the expected HTTP response codes. If you don't set this, the default values assumed are 200 and 201.
  • +
  • failureThreshold: It's the number of times the health check can fail for the endpoint before it's marked as unhealthy.
  • +
  • interval: This property allows you to specify the time interval between consecutive health checks. The minimum allowed value is 5 seconds.
  • +
  • port: Specific port for the connection to be checked.
  • +
  • protocol: Type of protocol being used, like HTTP or HTTPS. (Required)
  • +
+
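For example, a secret for additionalHeadersRef could be created like this (the secret name probe-headers matches the reference used elsewhere in these docs; the header name and token are placeholders):

kubectl create secret generic probe-headers -n multi-cluster-gateways --from-literal=Authorization="Bearer <token>"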

kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  healthCheck:
+    allowInsecureCertificates: true
+    endpoint: /
+    expectedResponses:
+      - 200
+      - 201
+      - 301
+    failureThreshold: 5
+    port: 443
+    protocol: https
+EOF
+
This configuration sets up a DNS health check by creating DNSHealthCheckProbes for the specified prod-web Gateway endpoints.

+

How to Validate DNS Health Checks

+

After setting up DNS Health Checks to improve application reliability, it is important to verify their effectiveness. This guide provides a simple validation process to ensure that health checks are working properly and improving the operation of your applications.

+
1. Verify Configuration:
   The first step in the validation process is to verify that the probes were created. Notice the label kuadrant.io/gateway=prod-web, which selects only the DNSHealthCheckProbes for the specified prod-web Gateway.

   kubectl get -l kuadrant.io/gateway=prod-web dnshealthcheckprobes -A

2. Monitor Health Status:
   The next step is to monitor the health status of the designated endpoints. This can be done by analyzing logs, generated metrics, or the status of the health check probes. By reviewing this data, you can confirm that endpoints are being actively monitored and that their status is being reported accurately.

The following metrics can be used to check all the attempts and failures for a listener. +

mgc_dns_health_check_failures_total
+mgc_dns_health_check_attempts_total
+

+
1. Test Failure Scenarios:
   To gain a better understanding of how your system responds to failures, you can deliberately create endpoint failures. This can be done by stopping applications running on the endpoint, by blocking traffic, or, for instance, by deliberately omitting the expected 200 response code. This will allow you to see how DNS Health Checks dynamically redirect traffic to healthy endpoints and demonstrate their routing capabilities.

2. Monitor Recovery:
   After inducing failures, it is important to monitor how your system recovers. Make sure that traffic is being redirected correctly and that applications are resuming normal operation.

What Happens When a Health Check Fails

+

A pivotal aspect of DNS Health Checks is understanding what happens when a health check fails. When a health check detects an endpoint as unhealthy, it triggers a series of strategic actions to mitigate potential disruptions:

+
1. The health check probe identifies an endpoint as "unhealthy" once its consecutive failures exceed the failure threshold.

2. The system reacts by immediately removing the unhealthy endpoint from the list of available endpoints; any endpoint that doesn't have at least 1 healthy child will also be removed.

3. This removal causes traffic to be automatically redirected to the remaining healthy endpoints.

4. The health check continues monitoring the endpoint's status. If it becomes healthy again, the endpoint is added back to the list of available endpoints.

Limitations

+
1. Delayed Detection: DNS health checks are not immediate; they depend on the check intervals. Immediate issues might not be detected promptly.

2. No Wildcard Listeners: DNS health checks do not cover wildcard listeners and are unsuitable for dynamic domain resolution. Each endpoint must be explicitly defined.

DNS Policy

+

The DNSPolicy is a GatewayAPI policy that uses Direct Policy Attachment as defined in the policy attachment mechanism standard. +This policy is used to provide dns management for gateway listeners by managing the lifecycle of dns records in external dns providers such as AWS Route53 and Google DNS.

+

Terms

+
    +
  • GatewayAPI: resources that model service networking in Kubernetes.
  • +
  • Gateway: Kubernetes Gateway resource.
  • +
  • ManagedZone: Kuadrant resource representing a Zone Apex in a dns provider.
  • +
  • DNSPolicy: Kuadrant policy for managing gateway dns.
  • +
  • DNSRecord: Kuadrant resource representing a set of records in a managed zone.
  • +
+

DNS Provider Setup

+

A DNSPolicy acts against a target Gateway by processing its listeners for hostnames that it can create dns records for. In order for it to do this, it must know about dns providers, and what domains these dns providers are currently hosting. +This is done through the creation of ManagedZones and dns provider secrets containing the credentials for the dns provider account.

+

If for example a Gateway is created with a listener with a hostname of echo.apps.hcpapps.net: +

apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+  listeners:
+    - allowedRoutes:
+        namespaces:
+          from: All
+      name: api
+      hostname: echo.apps.hcpapps.net
+      port: 80
+      protocol: HTTP
+

+

In order for the DNSPolicy to act upon that listener, a ManagedZone must exist for that hostnames domain.

+

A secret containing the provider credentials must first be created: +

kubectl create secret generic my-aws-credentials --type=kuadrant.io/aws --from-env-file=./aws-credentials.env -n multi-cluster-gateways
+kubectl get secrets my-aws-credentials -n multi-cluster-gateways -o yaml
+apiVersion: v1
+data:
+  AWS_ACCESS_KEY_ID: <AWS_ACCESS_KEY_ID>
+  AWS_REGION: <AWS_REGION>
+  AWS_SECRET_ACCESS_KEY: <AWS_SECRET_ACCESS_KEY>
+kind: Secret
+metadata:
+  name: my-aws-credentials
+  namespace: multi-cluster-gateways
+type: kuadrant.io/aws
+

+

And then a ManagedZone can be added for the desired domain referencing the provider credentials: +

apiVersion: kuadrant.io/v1alpha1
+kind: ManagedZone
+metadata:
+  name: apps.hcpapps.net
+  namespace: multi-cluster-gateways
+spec:
+  domainName: apps.hcpapps.net
+  description: "apps.hcpapps.net managed domain"
+  dnsProviderSecretRef:
+    name: my-aws-credentials
+    namespace: multi-cluster-gateways
+

+

DNSPolicy creation and attachment

+

Once an appropriate ManagedZone is configured for a Gateways listener hostname, we can now create and attach a DNSPolicy to start managing dns for it.

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  healthCheck:
+    allowInsecureCertificates: true
+    additionalHeadersRef:
+      name: probe-headers
+    endpoint: /
+    expectedResponses:
+      - 200
+      - 201
+      - 301
+    failureThreshold: 5
+    port: 80
+    protocol: http
+
+

Target Reference

+

The targetRef field is taken from policy attachment's target reference API. It can only target one resource at a time. Fields included inside:
- Group is the group of the target resource. The only valid option is gateway.networking.k8s.io.
- Kind is the kind of the target resource. The only valid option is Gateway.
- Name is the name of the target resource.
- Namespace is the namespace of the referent. Currently only local objects can be referred to, so the value is ignored.

+

Health Check

+

The health check section is optional, the following fields are available:

+
    +
  • allowInsecureCertificates: Added for development environments, allows health probes to not fail when finding an invalid (e.g. self-signed) certificate.
  • +
  • additionalHeadersRef: A reference to a secret which contains additional headers such as an authentication token
  • +
  • endpoint: The path to specify for these health checks, e.g. /healthz
  • +
  • expectedResponses: Defaults to 200 or 201, this allows other responses to be considered valid
  • +
  • failureThreshold: How many consecutive fails are required to consider this endpoint unhealthy
  • +
  • port: The port to connect to
  • +
  • protocol: The protocol to use for this connection
  • +
+

Checking status of health checks

+

To list all health checks: +

kubectl get dnshealthcheckprobes -A
+
+This will list all probes in the hub cluster, and whether they are currently healthy or not.

+

To find more information on why a specific health check is failing, look at the status of that probe: +

kubectl get dnshealthcheckprobe <name> -n <namespace> -o yaml
+

+

DNSRecord Resources

+

The DNSPolicy will create a DNSRecord resource for each listener hostname with a suitable ManagedZone configured. The DNSPolicy resource uses the status of the Gateway to determine what dns records need to be created based on the clusters it has been placed onto.

+

Given the following Gateway status: +

status:
+  addresses:
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: kind-mgc-workload-1/172.31.201.1
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: kind-mgc-workload-2/172.31.202.1
+  conditions:
+    - lastTransitionTime: "2023-07-24T19:09:54Z"
+      message: Handled by kuadrant.io/mgc-gw-controller
+      observedGeneration: 1
+      reason: Accepted
+      status: "True"
+      type: Accepted
+    - lastTransitionTime: "2023-07-24T19:09:55Z"
+      message: 'gateway placed on clusters [kind-mgc-workload-1 kind-mgc-workload-2] '
+      observedGeneration: 1
+      reason: Programmed
+      status: "True"
+      type: Programmed
+  listeners:
+    - attachedRoutes: 1
+      conditions: []
+      name: kind-mgc-workload-1.api
+      supportedKinds: []
+    - attachedRoutes: 1
+      conditions: []
+      name: kind-mgc-workload-2.api
+      supportedKinds: []        
+

+

The example DNSPolicy shown above would create a DNSRecord like the following: +

apiVersion: kuadrant.io/v1alpha1
+kind: DNSRecord
+metadata:
+  creationTimestamp: "2023-07-24T19:09:56Z"
+  finalizers:
+    - kuadrant.io/dns-record
+  generation: 3
+  labels:
+    kuadrant.io/Gateway-uid: 0877f97c-f3a6-4f30-97f4-e0d7f25cc401
+    kuadrant.io/record-id: echo
+  name: echo.apps.hcpapps.net
+  namespace: multi-cluster-gateways
+  ownerReferences:
+    - apiVersion: gateway.networking.k8s.io/v1beta1
+      kind: Gateway
+      name: echo-app
+      uid: 0877f97c-f3a6-4f30-97f4-e0d7f25cc401
+    - apiVersion: kuadrant.io/v1alpha1
+      blockOwnerDeletion: true
+      controller: true
+      kind: ManagedZone
+      name: apps.hcpapps.net
+      uid: 26a06799-acff-476b-a1a3-c831fd19dcc7
+  resourceVersion: "25464"
+  uid: 365bf57f-10b4-42e8-a8e7-abb6dce93985
+spec:
+  endpoints:
+    - dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+      recordTTL: 60
+      recordType: A
+      targets:
+        - 172.31.202.1
+    - dnsName: default.lb-2903yb.echo.apps.hcpapps.net
+      providerSpecific:
+        - name: weight
+          value: "120"
+      recordTTL: 60
+      recordType: CNAME
+      setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+      targets:
+        - 24osuu.lb-2903yb.echo.apps.hcpapps.net
+    - dnsName: default.lb-2903yb.echo.apps.hcpapps.net
+      providerSpecific:
+        - name: weight
+          value: "120"
+      recordTTL: 60
+      recordType: CNAME
+      setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+      targets:
+        - lrnse3.lb-2903yb.echo.apps.hcpapps.net
+    - dnsName: echo.apps.hcpapps.net
+      recordTTL: 300
+      recordType: CNAME
+      targets:
+        - lb-2903yb.echo.apps.hcpapps.net
+    - dnsName: lb-2903yb.echo.apps.hcpapps.net
+      providerSpecific:
+        - name: geo-country-code
+          value: '*'
+      recordTTL: 300
+      recordType: CNAME
+      setIdentifier: default
+      targets:
+        - default.lb-2903yb.echo.apps.hcpapps.net
+    - dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+      recordTTL: 60
+      recordType: A
+      targets:
+        - 172.31.201.1
+  managedZone:
+    name: apps.hcpapps.net   
+

+

Which results in the following records being created in AWS Route53 (The provider we used in our example ManagedZone above):

+

aws-recordset-list-1

+

The listener hostname is now resolvable through dns:

+
dig echo.apps.hcpapps.net +short
+lb-2903yb.echo.apps.hcpapps.net.
+default.lb-2903yb.echo.apps.hcpapps.net.
+lrnse3.lb-2903yb.echo.apps.hcpapps.net.
+172.31.201.1
+
+

More information about the dns record structure can be found in the DNSRecord structure document.

+

Load Balancing

+

Configuration of DNS Load Balancing features is done through the loadBalancing field in the DNSPolicy spec.

+

The loadBalancing field contains the specification of how dns will be configured in order to provide balancing of load across multiple clusters. Fields included inside:
- weighted describes how weighting will be applied to weighted dns records. Fields included inside:
  - defaultWeight: an arbitrary weight value that will be applied to weighted dns records by default. An integer greater than 0 and no larger than the maximum value accepted by the target dns provider.
  - custom: an array of custom weights to apply when custom attribute values match.
- geo enables the geo routing strategy. Fields included inside:
  - defaultGeo: the geo code to apply to geo dns records by default. The values accepted are determined by the target dns provider.

+

Weighted

+

A DNSPolicy with an empty loadBalancing spec, or with a loadBalancing.weighted.defaultWeight set and nothing else produces a set of records grouped and weighted to produce a Round Robin routing strategy where all target clusters will have an equal chance of being returned in DNS queries.

+

If we apply the following update to the DNSPolicy: +

apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  loadBalancing:
+    weighted:
+      defaultWeight: 100 # <--- New Default Weight being added
+

+

The weight of all records is adjusted to reflect the new defaultWeight value of 100. This will still produce the same Round Robin routing strategy as before since all records still have equal weight values.

+

Custom Weights

+

In order to manipulate how much traffic individual clusters receive, custom weights can be added to the DNSPolicy.

+

If we apply the following update to the DNSPolicy: +

apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  loadBalancing:
+    weighted:
+      defaultWeight: 120
+      custom: # <--- New Custom Weights being added
+        - weight: 255
+          selector:
+            matchLabels:
+              kuadrant.io/lb-attribute-custom-weight: AWS
+        - weight: 10
+          selector:
+            matchLabels:
+              kuadrant.io/lb-attribute-custom-weight: GCP
+

+

And apply custom-weight labels to each of our managed cluster resources:

+
kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS
+kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP
+
+

The DNSRecord for our listener host gets updated, and the weighted records are adjusted to have the new values:

+
kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints
+- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.202.1
+- dnsName: default.lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: weight
+      value: "10" # <--- Weight is updated
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - 24osuu.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: default.lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: weight
+      value: "255" # <--- Weight is updated
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - lrnse3.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: echo.apps.hcpapps.net
+  recordTTL: 300
+  recordType: CNAME
+  targets:
+    - lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: geo-country-code
+      value: '*'
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: default
+  targets:
+    - default.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.201.1
+
+

aws-recordset-list-2

+

In the above scenario the managed cluster kind-mgc-workload-2 (GCP) IP address will be returned far less frequently in DNS queries than kind-mgc-workload-1 (AWS)

+

Geo

+

To enable Geo Load balancing, the loadBalancing.geo.defaultGeo field should be added. This informs the DNSPolicy that we now want to start making use of Geo Location features in our target provider. This will change the single record set group created from default (what is created for weighted-only load balancing) to a geo-specific one based on the value of defaultGeo.

+

If we apply the following update to the DNSPolicy: +

apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  loadBalancing:
+    weighted:
+      defaultWeight: 120
+      custom:
+        - weight: 255
+          selector:
+            matchLabels:
+              kuadrant.io/lb-attribute-custom-weight: AWS
+        - weight: 10
+          selector:
+            matchLabels:
+              kuadrant.io/lb-attribute-custom-weight: GCP
+    geo:
+      defaultGeo: US # <--- New `geo.defaultGeo` added for `US` (United States)
+

+

The DNSRecord for our listener host gets updated, and the default geo is replaced with the one we specified:

+
kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints
+- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.202.1
+- dnsName: echo.apps.hcpapps.net
+  recordTTL: 300
+  recordType: CNAME
+  targets:
+    - lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net # <--- New `us` geo location CNAME is created
+  providerSpecific:
+    - name: geo-country-code
+      value: US
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: US
+  targets:
+    - us.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: geo-country-code
+      value: '*'
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: default
+  targets:
+    - us.lb-2903yb.echo.apps.hcpapps.net # <--- Default catch all CNAME is updated to point to `us` target
+- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.201.1
+- dnsName: us.lb-2903yb.echo.apps.hcpapps.net # <--- Gateway default group is now `us`
+  providerSpecific:
+    - name: weight
+      value: "10"
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - 24osuu.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: us.lb-2903yb.echo.apps.hcpapps.net # <--- Gateway default group is now `us`
+  providerSpecific:
+    - name: weight
+      value: "255"
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - lrnse3.lb-2903yb.echo.apps.hcpapps.net
+
+

aws-recordset-list-3

+

The listener hostname is still resolvable, but now routed through the us record set:

+
dig echo.apps.hcpapps.net +short
+lb-2903yb.echo.apps.hcpapps.net.
+us.lb-2903yb.echo.apps.hcpapps.net. # <--- `us` CNAME now in the chain
+lrnse3.lb-2903yb.echo.apps.hcpapps.net.
+172.31.201.1
+
+

Configuring Cluster Geo Locations

+

The defaultGeo as described above puts all clusters into the same geo group, but for geo to be useful we need to mark our clusters as being in different locations. We can do this by adding geo-code attributes on the ManagedCluster to show which country each cluster is in. The values that can be used are determined by the dns provider (see below).

+

Apply geo-code labels to each of our managed cluster resources: +

kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=US
+kubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=ES
+

+

The above indicates that kind-mgc-workload-1 is located in the US (United States), which is the same as our current default geo, and kind-mgc-workload-2 is in ES (Spain).

+

The DNSRecord for our listener host gets updated, and records are now divided into two groups, us and es: +

kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints
+- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.202.1
+- dnsName: echo.apps.hcpapps.net
+  recordTTL: 300
+  recordType: CNAME
+  targets:
+    - lb-2903yb.echo.apps.hcpapps.net
+- dnsName: es.lb-2903yb.echo.apps.hcpapps.net # <--- kind-mgc-workload-2 target now added to `es` group
+  providerSpecific:
+    - name: weight
+      value: "10"
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - 24osuu.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net # <--- New `es` geo location CNAME is created
+  providerSpecific:
+    - name: geo-country-code
+      value: ES
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: ES
+  targets:
+    - es.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: geo-country-code
+      value: US
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: US
+  targets:
+    - us.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: geo-country-code
+      value: '*'
+  recordTTL: 300
+  recordType: CNAME
+  setIdentifier: default
+  targets:
+    - us.lb-2903yb.echo.apps.hcpapps.net
+- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  recordTTL: 60
+  recordType: A
+  targets:
+    - 172.31.201.1
+- dnsName: us.lb-2903yb.echo.apps.hcpapps.net
+  providerSpecific:
+    - name: weight
+      value: "255"
+  recordTTL: 60
+  recordType: CNAME
+  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net
+  targets:
+    - lrnse3.lb-2903yb.echo.apps.hcpapps.net
+
+aws-recordset-list-4

+

In the above scenario any requests made in Spain will be returned the IP address of kind-mgc-workload-2, and requests made from anywhere else in the world will be returned the IP address of kind-mgc-workload-1. Weighting of records is still enforced between clusters in the same geo group; in the case above, however, it has no effect since there is only one cluster in each group.

+

❗ If an unsupported value is given to a provider, DNS records will not be created. Please choose carefully. For more information on what location is right for your needs, please read that provider's documentation (see links below).

+
Locations supported per DNS provider
Supported       AWS   GCP
Continents      ✅    ❌
Country codes   ✅    ❌
States          ✅    ❌
Regions         ❌    ✅
Continents and country codes supported by AWS Route 53
+

Note: ❗ For more information, please see the official AWS documentation.

+

To see all regions supported by AWS Route 53, please see the official documentation: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-geo.html

+
Regions supported by GCP Cloud DNS
+

To see all regions supported by GCP Cloud DNS, please see the official documentation: https://cloud.google.com/compute/docs/regions-zones

+ + + + + + +

Configuring a DNS Provider

+

In order to be able to interact with supported DNS providers, Kuadrant needs a credential that it can use. This credential is leveraged by the multi-cluster gateway controller in order to create and manage DNS records within zones used by the listeners defined in your gateways.

+

Supported Providers

+

Kuadrant currently supports the following DNS providers:

+
    +
  • AWS route 53 (aws)
  • +
  • Google DNS (gcp)
  • +
+

Configuring an AWS Route 53 provider

+

Kuadrant expects a secret containing a credential. Below is an example for AWS Route 53. It is important to set the secret type to kuadrant.io/aws.

+
apiVersion: v1
+data:
+  AWS_ACCESS_KEY_ID: XXXXX
+  AWS_REGION: XXXXX
+  AWS_SECRET_ACCESS_KEY: XXXXX
+kind: Secret
+metadata:
+  name: aws-credentials
+  namespace: multicluster-gateway-controller-system
+type: kuadrant.io/aws
+
+

IAM permissions required

+

We have tested using the available policy AmazonRoute53FullAccess; however, it should also be possible to restrict the credential down to a particular zone. More info can be found in the AWS docs: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/access-control-managing-permissions.html
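As a sketch, a policy scoped to a single hosted zone might look like the following (the zone ID is a placeholder; the exact action set may need tuning for your setup):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/<ZONE_ID>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetHostedZone"
      ],
      "Resource": "*"
    }
  ]
}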

+

Configuring a Google DNS provider

+

Kuadrant expects a secret containing a credential. Below is an example for Google DNS. It is important to set the secret type to kuadrant.io/gcp.

+
apiVersion: v1
+stringData:
+  GOOGLE: '{"client_id": "00000000-00000000000000.apps.googleusercontent.com","client_secret": "d-FL95Q00000000000000","refresh_token": "00000aaaaa00000000-AAAAAAAAAAAAKFGJFJDFKDK","type": "authorized_user"}'
+  PROJECT_ID: "my-project"
+kind: Secret
+metadata:
+  name: gcp-credentials
+  namespace: multicluster-gateway-controller-system
+type: kuadrant.io/gcp
+
+

Access permissions required

+

https://cloud.google.com/dns/docs/access-control#dns.admin

+

Where to create the secret.

+

It is recommended that you create the secret in the same namespace as your ManagedZones. Now that the credential has been created, we have a DNS provider ready to go and can start using it.

+

Using a credential

+

Once a secret like the one shown above is created, in order for it to be used, it needs to be associated with a ManagedZone.

+

See ManagedZone

+ + + + + + +

Defining a basic DNSPolicy

+

What is a DNSPolicy

+

DNSPolicy is a Custom Resource Definition supported by the Multi-Cluster Gateway Controller (MGC) that follows the +policy attachment model, +which allows users to enable and configure DNS against the Gateway leveraging an existing cloud based DNS provider.

+

This document describes how to enable DNS by creating a basic DNSPolicy

+

Pre-requisites

+
    +
  • A ManagedZone has been created
  • +
  • A Gateway has been created
  • +
  • A HTTPRoute has been created and attached to the Gateway (Note: It's not a +requirement to create the HTTPRoute beforehand, but DNS records will only +be created once a DNSPolicy has been created)
  • +
+
+

See the Multicluster Gateways walkthrough for step by step +instructions on deploying these with a simple application.

+
+

Steps

+

The DNSPolicy will target the existing Multi Cluster Gateway, resulting in the creation of DNS Records for each of the Gateway listeners backed by a managed zone, ensuring traffic reaches the correct gateway instances and is balanced across them. Optional DNS health checks and load balancing can also be configured.

+

In order to enable basic DNS, create a minimal DNSPolicy resource

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: basic-dnspolicy
+  namespace: <Gateway namespace>
+spec:
+  targetRef:
+    name: <Gateway name>
+    group: gateway.networking.k8s.io
+    kind: Gateway     
+
+

Once created, the multi-cluster Gateway Controller will reconcile the DNS records. By default it will set up a round-robin / evenly weighted set of records to ensure a balance of traffic across each provisioned gateway instance. You can see the status by querying the DNSRecord resources.

+
kubectl get dnsrecords -A
+
+

The DNS records will be propagated in a few minutes, and the application will +be available through the defined hosts.
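
One quick way to check propagation from your own machine is a DNS lookup against the listener host (a sketch; <listener-hostname> is a placeholder for the hostname on your Gateway listener):

dig <listener-hostname> +short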

+

Advanced DNS configuration

+

The DNSPolicy supports other optional configuration options like geographic and +weighted load balancing and health checks. For more detailed information about these options, see DNSPolicy reference

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/dnspolicy/managed-zone/index.html b/multicluster-gateway-controller/docs/dnspolicy/managed-zone/index.html
new file mode 100644
index 00000000..565dd6b8
--- /dev/null
+++ b/multicluster-gateway-controller/docs/dnspolicy/managed-zone/index.html
@@ -0,0 +1,2246 @@
+ManagedZone Reference - Kuadrant Documentation

Creating and using a ManagedZone resource.

+

What is a ManagedZone

+

A ManagedZone is a reference to a DNS zone. +By creating a ManagedZone we are instructing the MGC about a domain or subdomain that can be used as a host by any gateways in the same namespace. +These gateways can use a subdomain of the ManagedZone.

+

If a gateway attempts to use a domain as a host, and there is no matching ManagedZone for that host, then that host on that gateway will fail to function.

+

A gateway's host will be matched to any ManagedZone that the host is a subdomain of, i.e. test.api.hcpapps.net will be matched by any ManagedZone (in the same namespace) of: test.api.hcpapps.net, api.hcpapps.net or hcpapps.net.

+

When MGC wants to create the DNS Records for a host, it will create them in the most closely matching ManagedZone. e.g. given the zones hcpapps.net and api.hcpapps.net, the DNS Records for the host test.api.hcpapps.net will be created in the api.hcpapps.net zone.

+

Delegation

+

Delegation allows you to give control of a subdomain of a root domain to MGC while the root domain has its DNS zone elsewhere.

+

In the scenario where a root domain has a zone outside Route53, e.g. external.com, and a ManagedZone for delegated.external.com is required, the following steps can be taken:
- Create the ManagedZone for delegated.external.com and wait until the status is updated with an array of nameservers (e.g. ns1.hcpapps.net, ns2.hcpapps.net).
- Copy these nameservers to your root zone for external.com; you can create an NS record for each nameserver against the delegated.external.com record.

+

For example: +

delegated.external.com. 3600 IN NS ns1.hcpapps.net.
+delegated.external.com. 3600 IN NS ns2.hcpapps.net.
+

+

Now, when MGC creates a DNS record in its Route53 zone for delegated.external.com, it will be resolved correctly.
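
You can verify the delegation is in place by querying the NS records for the subdomain (a sketch, reusing the example names above):

dig NS delegated.external.com +short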

+

Walkthrough

+

There is an existing walkthrough, which involves using a managed zone.

+

Current limitations

+

At the moment the MGC is given credentials to connect to the DNS provider at startup using environment variables. Because of that, MGC is limited to one provider type (Route53), and all zones must be in the same Route53 account.

+

There are plans to make this more customizable and dynamic in the future, work tracked here.

+

Spec of a ManagedZone

+

The ManagedZone is a simple resource with an uncomplicated API; see a sample here.

+

Mandatory fields

+

The ManagedZone spec has 1 required field domainName: +

apiVersion: kuadrant.io/v1alpha1
+kind: ManagedZone
+metadata:
+  name: testmz.hcpapps.net
+spec:
+  domainName: testmz.hcpapps.net
+  dnsProviderSecretRef:
+    name: my-credential
+    namespace: ns
+

+

Secret Ref

+

This is a reference to a secret that contains a credential for accessing the DNS Provider. See DNSProvider for more details.

+

Optional fields

+

The following fields are optional:

+

ID

+

By setting the ID, you are referring to an existing zone in the DNS provider which MGC will use to manage the DNS of this zone. +By leaving the ID empty, MGC will create a zone in the DNS provider, and store the reference in this field.
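
For illustration, a ManagedZone that adopts an existing zone might look like the sketch below, assuming the spec field is spelled id; the zone ID is a placeholder, and the other values reuse the sample above:

apiVersion: kuadrant.io/v1alpha1
+kind: ManagedZone
+metadata:
+  name: testmz.hcpapps.net
+spec:
+  id: Z08224701SVEG4XHW89W0 # placeholder: an existing zone ID in your DNS provider
+  domainName: testmz.hcpapps.net
+  dnsProviderSecretRef:
+    name: my-credential
+    namespace: ns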

+

Description

+

This is simply a human-readable description of this resource (e.g. "Use this zone for the staging environment")

+

ParentManagedZone

+

This allows a zone to be owned by another zone (e.g test.api.domain.com could be owned by api.domain.com), MGC will use this owner relationship to manage the NS values for the subdomain in the parent domain. +Note that for this to work, both the owned and owner zones must be in the Route53 account accessible by MGC.
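
A sketch of that owner relationship, assuming the parent zone is referenced by name via a parentManagedZone field:

apiVersion: kuadrant.io/v1alpha1
+kind: ManagedZone
+metadata:
+  name: test.api.domain.com
+spec:
+  domainName: test.api.domain.com
+  parentManagedZone:
+    name: api.domain.com # assumes a ManagedZone named api.domain.com exists in the same namespace
+  dnsProviderSecretRef:
+    name: my-credential
+    namespace: ns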

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/index.html b/multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/index.html
new file mode 100644
index 00000000..1879e201
--- /dev/null
+++ b/multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/index.html
@@ -0,0 +1,2122 @@
+Kuadrant and Skupper Gateway Resiliency - Kuadrant Documentation

Skupper proof of concept: 2 clusters & gateways, resiliency walkthrough

+

Introduction

+

This walkthrough shows how Skupper can be used to provide service resiliency +across 2 clusters. Each cluster is running a Gateway with a HttpRoute in front +of an application Service. By leveraging Skupper, the application Service can be +exposed (using the skupper cli) from either cluster. If the Service is +unavailable on the local cluster, it will be routed to another cluster that has +exposed that Service.

+

arch

+

Requirements

+
    +
  • Local environment has been set up with a hub and spoke cluster, as per the Multicluster Gateways Walkthrough.
  • +
  • The example multi-cluster Gateway has been deployed to both clusters
  • +
  • The example echo HttpRoute, Service and Deployment have been deployed to both clusters in the default namespace, and the MGC_SUB_DOMAIN env var set in your terminal
  • +
  • Skupper CLI has been installed.
  • +
+

Skupper Setup

+

Continuing on from the previous walkthrough, in the first terminal, T1, install Skupper on the hub & spoke clusters using the following command:

+
make skupper-setup
+
+

In T1 expose the Service in the default namespace:

+
skupper expose deployment/echo --port 8080
+
+

Do the same in the workload cluster T2:

+
skupper expose deployment/echo --port 8080
+
+

Verify the application route can be hit, +taking note of the pod name in the response:

+
curl -k https://$MGC_SUB_DOMAIN
+Request served by <POD_NAME>
+
+

Locate the pod that is currently serving requests. It is either in the hub or spoke cluster. The goal is to scale the deployment down to 0 replicas. Check in both T1 and T2:

+
kubectl get po -n default | grep echo
+
+

Run this command to scale down the deployment in the right cluster:

+
kubectl scale deployment echo --replicas=0 -n default
+
+

Verify the application route can still be hit, +and the pod name matches the one that has not been scaled down.

+
curl -k https://$MGC_SUB_DOMAIN
+
+

You can also force resolve the DNS result to alternate between the 2 Gateway +clusters to verify requests get routed across the Skupper network.

+
curl -k --resolve $MGC_SUB_DOMAIN:443:172.31.200.2 https://$MGC_SUB_DOMAIN
+curl -k --resolve $MGC_SUB_DOMAIN:443:172.31.201.2 https://$MGC_SUB_DOMAIN
+
+

Known Issues

+

If you get an error response no healthy upstream from curl, there may be a +problem with the skupper network or link. Check back on the output from earlier +commands for any indication of problems setting up the network or link. The +skupper router & service controller logs can be checked in the default +namespace in both clusters.
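
For example, a quick way to check those logs (a sketch, assuming the default Skupper deployment names):

kubectl logs deployment/skupper-router -n default
+kubectl logs deployment/skupper-service-controller -n default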

+

You may see an error like below when running the make skupper-setup cmd. +

Error: Failed to create token: Policy validation error: Timed out trying to communicate with the API: context deadline exceeded
+
+This may be a timing issue or a platform-specific problem. Either way, you can try installing a different version of the skupper CLI. This problem was seen on at least one setup when using skupper v1.4.2, but didn't happen after dropping back to 1.3.0.

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/index.html b/multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/index.html
new file mode 100644
index 00000000..c01f3ee7
--- /dev/null
+++ b/multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/index.html
@@ -0,0 +1,2571 @@
+Kuadrant and Submariner Gateway Resiliency - Kuadrant Documentation

Submariner proof of concept 2 clusters & gateways resiliency walkthrough

+

Introduction

+

This walkthrough shows how Submariner can be used to provide service resiliency across 2 clusters. Each cluster is running a Gateway with a HttpRoute in front of an application Service. By leveraging Submariner (and the Multi Cluster Services API), the application Service can be exported (via a ServiceExport resource) from either cluster, and imported (via a ServiceImport resource) to either cluster. This provides a clusterset hostname for the service in either cluster, e.g. echo.default.svc.clusterset.local. The HttpRoute has a backendRef to a Service that points to this hostname. If the Service is unavailable on the local cluster, it will be routed to another cluster that has exported that Service.

+

Requirements

+
    +
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone
  • +
+
+

Note: ❗ this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

+

Note: ❗ replace.this is a placeholder that you will need to replace with your own domain

+
+

Installation and Setup

+

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

+

Open three windows, which we'll refer to throughout this walkthrough as:

+
    +
  • T1 (Hub Cluster)
  • +
  • T2 (Where we'll run our controller locally)
  • +
  • T3 (Workloads cluster)
  • +
+

To setup a local instance with submariner, in T1, create kind clusters by:

+

make local-setup-kind MGC_WORKLOAD_CLUSTERS_COUNT=1
+
+And deploy onto them by running: +
make local-setup-mgc OCM_SINGLE=true SUBMARINER=true MGC_WORKLOAD_CLUSTERS_COUNT=1
+

+

In the hub cluster (T1) we are going to label the control plane managed cluster as an Ingress cluster:

+
kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true
+kubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true
+
+

Next, in T1, create the ManagedClusterSet that uses the ingress label to select clusters:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta2
+kind: ManagedClusterSet
+metadata:
+  name: gateway-clusters
+spec:
+  clusterSelector:
+    labelSelector: 
+      matchLabels:
+        ingress-cluster: "true"
+    selectorType: LabelSelector
+EOF
+
+

Next, in T1 we need to bind this cluster set to our multi-cluster-gateways namespace so that we can use those clusters to place Gateways on:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta2
+kind: ManagedClusterSetBinding
+metadata:
+  name: gateway-clusters
+  namespace: multi-cluster-gateways
+spec:
+  clusterSet: gateway-clusters
+EOF
+
+

Create a placement for our Gateways

+

In order to place our Gateways onto clusters, we need to setup a placement resource. Again, in T1, run:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta1
+kind: Placement
+metadata:
+  name: http-gateway
+  namespace: multi-cluster-gateways
+spec:
+  numberOfClusters: 2
+  clusterSets:
+    - gateway-clusters
+EOF
+
+

Create the Gateway class

+

Lastly, we will set up our multi-cluster GatewayClass. In T1, run:

+
kubectl create -f hack/ocm/gatewayclass.yaml
+
+

Start the Gateway Controller

+

In T2 run the following to start the Gateway Controller:

+
make build-controller install run-controller
+
+

Create a Gateway

+

We will now create a multi-cluster Gateway definition in the hub cluster. In T1, run the following:

+

Important: ❗ Make sure to replace sub.replace.this with a subdomain of your root domain.

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+  listeners:
+  - allowedRoutes:
+      namespaces:
+        from: All
+    name: api
+    hostname: sub.replace.this
+    port: 443
+    protocol: HTTPS
+    tls:
+      mode: Terminate
+      certificateRefs:
+        - name: apps-hcpapps-tls
+          kind: Secret
+EOF
+
+

Enable TLS

+
    +
  1. +

    In T1, create a TLSPolicy and attach it to your Gateway:

    +
    kubectl apply -f - <<EOF
    +apiVersion: kuadrant.io/v1alpha1
    +kind: TLSPolicy
    +metadata:
    +  name: prod-web
    +  namespace: multi-cluster-gateways
    +spec:
    +  targetRef:
    +    name: prod-web
    +    group: gateway.networking.k8s.io
    +    kind: Gateway
    +  issuerRef:
    +    group: cert-manager.io
    +    kind: ClusterIssuer
    +    name: glbc-ca   
    +EOF
    +
    +
  2. +
  3. +

    You should now see a Certificate resource in the hub cluster. In T1, run:

    +

    kubectl get certificates -A
    +
    + you'll see the following:

    +
  4. +
+

NAMESPACE                NAME               READY   SECRET             AGE
multi-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m

+

It is possible to also use a letsencrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

+

Place the Gateway

+

To place the Gateway, we need to add a placement label to Gateway resource to instruct the Gateway controller where we want this Gateway instantiated. In T1, run:

+
kubectl label gateways.gateway.networking.k8s.io prod-web "cluster.open-cluster-management.io/placement"="http-gateway" -n multi-cluster-gateways
+
+

Now on the hub cluster you should find there is a configured Gateway and instantiated Gateway. In T1, run:

+
kubectl get gateways.gateway.networking.k8s.io -A
+
+
kuadrant-multi-cluster-gateways   prod-web   istio                                         172.31.200.0                29s
+multi-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s
+
+

Create and attach a HTTPRoute

+

Let's create a simple echo app with a HTTPRoute and 2 Services (one that routes to the app, and one that uses an externalName) in the first cluster. +Remember to replace the hostnames. Again we are creating this in the single hub cluster for now. In T1, run:

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: my-route
+spec:
+  parentRefs:
+  - kind: Gateway
+    name: prod-web
+    namespace: kuadrant-multi-cluster-gateways
+  hostnames:
+  - "sub.replace.this"  
+  rules:
+  - backendRefs:
+    - name: echo-import-proxy
+      port: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo-import-proxy
+spec:
+  type: ExternalName
+  externalName: echo.default.svc.clusterset.local
+  ports:
+  - port: 8080
+    targetPort: 8080
+    protocol: TCP
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo
+spec:
+  ports:
+    - name: http-port
+      port: 8080
+      targetPort: http-port
+      protocol: TCP
+  selector:
+    app: echo
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: echo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: echo
+  template:
+    metadata:
+      labels:
+        app: echo
+    spec:
+      containers:
+        - name: echo
+          image: docker.io/jmalloc/echo-server
+          ports:
+            - name: http-port
+              containerPort: 8080
+              protocol: TCP   
+EOF
+
+

Enable DNS

+
    +
  1. In T1, create a DNSPolicy and attach it to your Gateway:
  2. +
+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway     
+EOF
+
+

Once this is done, the Kuadrant multi-cluster Gateway controller will pick up that a HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that Gateway for the host defined in that listener.

+

You should now see a DNSRecord with only 1 endpoint added, which corresponds to the address assigned to the Gateway where the HTTPRoute was created. In T1, run:

+
kubectl get dnsrecord -n multi-cluster-gateways -o=yaml
+
+

Introducing the second cluster

+

In T3, targeting the second cluster, go ahead and create the HTTPRoute & 2 Services in the second Gateway cluster.

+
kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/local/kube/workload1.yaml && export KUBECONFIG=$(pwd)/local/kube/workload1.yaml
+
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: my-route
+spec:
+  parentRefs:
+  - kind: Gateway
+    name: prod-web
+    namespace: kuadrant-multi-cluster-gateways
+  hostnames:
+  - "sub.replace.this"  
+  rules:
+  - backendRefs:
+    - name: echo-import-proxy
+      port: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo-import-proxy
+spec:
+  type: ExternalName
+  externalName: echo.default.svc.clusterset.local
+  ports:
+  - port: 8080
+    targetPort: 8080
+    protocol: TCP
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo
+spec:
+  ports:
+    - name: http-port
+      port: 8080
+      targetPort: http-port
+      protocol: TCP
+  selector:
+    app: echo
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: echo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: echo
+  template:
+    metadata:
+      labels:
+        app: echo
+    spec:
+      containers:
+        - name: echo
+          image: docker.io/jmalloc/echo-server
+          ports:
+            - name: http-port
+              containerPort: 8080
+              protocol: TCP   
+EOF
+
+

Now if you move back to the hub context in T1 and take a look at the dnsrecord, you will see we now have two A records configured:

+
kubectl get dnsrecord -n multi-cluster-gateways -o=yaml
+
+

Create the ServiceExports and ServiceImports

+

In T1, export the Apps echo service from cluster 1 to cluster 2, and vice versa.

+
./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig --namespace default echo
+./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig --namespace default echo
+
+

In T1, verify the ServiceExport was created on cluster 1 and cluster 2

+
kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceexport echo
+kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceexport echo
+
+

In T1, verify the ServiceImport was created on both clusters

+
kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceimport echo
+kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceimport echo
+
+

At this point you should get a 200 response. +It might take a minute for dns to propagate internally after importing the services above.

+
curl -Ik https://sub.replace.this
+
+

You can force resolve the IP to either cluster and verify a 200 is returned when routed to both cluster Gateways.

+
curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this
+curl -Ik --resolve sub.replace.this:443:172.31.201.0 https://sub.replace.this
+
+

Testing resiliency

+

In T1, stop the echo pod on cluster 2

+
kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig scale deployment/echo --replicas=0
+
+

Verify a 200 is still returned when routed to either cluster

+
curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this
+curl -Ik --resolve sub.replace.this:443:172.31.201.0 https://sub.replace.this
+
+

Known issues

+

At the time of writing, Istio does not support adding a ServiceImport as a backendRef directly as per the Gateway API proposal - GEP-1748. +This is why the walkthrough uses a Service of type ExternalName to route to the clusterset host instead. +There is an issue questioning the current state of support.

+

The installation of the subctl cli fails on macs with arm architecture. The error is curl: (22) The requested URL returned error: 404. A workaround for this is to download the amd64 darwin release manually from the releases page and extract it to the ./bin directory.

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/index.html b/multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/index.html
new file mode 100644
index 00000000..9016526f
--- /dev/null
+++ b/multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/index.html
@@ -0,0 +1,2394 @@
+Submariner proof of concept with a Hub Gateway & 2 Workload Clusters - Kuadrant Documentation

Submariner proof of concept with a Hub Gateway & 2 Workload Clusters

+

Introduction

+

This walkthrough shows how Submariner can be used to provide service resiliency across 2 clusters with a hub cluster as the Gateway. The hub cluster is running a Gateway with a HttpRoute in front of an application Service. By leveraging Submariner (and the Multi Cluster Services API), the application Service can be exported (via a ServiceExport resource) from the 2 workload clusters, and imported (via a ServiceImport resource) to the hub cluster. This provides a clusterset hostname for the service in the hub cluster, e.g. echo.kuadrant-multi-cluster-gateways.svc.clusterset.local. The HttpRoute has a backendRef to a Service that points to this hostname. If the Service is unavailable in either workload cluster, it will be routed to the other workload cluster.

+

arch

+

Requirements

+
    +
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone
  • +
+
+

Note: ❗ this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

+

Note: ❗ replace.this is a placeholder that you will need to replace with your own domain

+
+

Installation and Setup

+

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

+

Open three windows, which we'll refer to throughout this walkthrough as:

+
    +
  • T1 (Hub Cluster)
  • +
  • T2 (Where we'll run our controller locally)
  • +
  • T3 (Workload cluster 1)
  • +
  • T4 (Workload cluster 2)
  • +
+

To setup a local instance with submariner, in T1, create kind clusters:

+

make local-setup-kind MGC_WORKLOAD_CLUSTERS_COUNT=2
+
+And deploy onto them using: +
make local-setup-mgc OCM_SINGLE=true SUBMARINER=true MGC_WORKLOAD_CLUSTERS_COUNT=2
+

+

In the hub cluster (T1) we are going to label the control plane managed cluster as an Ingress cluster:

+
kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true
+
+

Next, in T1, create the ManagedClusterSet that uses the ingress label to select clusters:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta2
+kind: ManagedClusterSet
+metadata:
+  name: gateway-clusters
+spec:
+  clusterSelector:
+    labelSelector: 
+      matchLabels:
+        ingress-cluster: "true"
+    selectorType: LabelSelector
+EOF
+
+

Next, in T1 we need to bind this cluster set to our multi-cluster-gateways namespace so that we can use that cluster to place a Gateway on:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta2
+kind: ManagedClusterSetBinding
+metadata:
+  name: gateway-clusters
+  namespace: multi-cluster-gateways
+spec:
+  clusterSet: gateway-clusters
+EOF
+
+

Create a placement for our Gateway

+

In order to place our Gateway onto the hub clusters, we need to setup a placement resource. Again, in T1, run:

+
kubectl apply -f - <<EOF
+apiVersion: cluster.open-cluster-management.io/v1beta1
+kind: Placement
+metadata:
+  name: http-gateway
+  namespace: multi-cluster-gateways
+spec:
+  numberOfClusters: 1
+  clusterSets:
+    - gateway-clusters
+EOF
+
+

Create the GatewayClass

+

Lastly, we will set up our multi-cluster GatewayClass. In T1, run:

+
kubectl create -f hack/ocm/gatewayclass.yaml
+
+

Start the Gateway Controller

+

In T2 run the following to start the Gateway Controller:

+
kind export kubeconfig --name=mgc-control-plane --kubeconfig=$(pwd)/local/kube/control-plane.yaml && export KUBECONFIG=$(pwd)/local/kube/control-plane.yaml
+make build-controller install run-controller
+
+

Create a Gateway

+

We will now create a multi-cluster Gateway definition in the hub cluster. In T1, run the following:

+

Important: ❗ Make sure to replace sub.replace.this with a subdomain of your root domain.

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+  listeners:
+  - allowedRoutes:
+      namespaces:
+        from: All
+    name: api
+    hostname: sub.replace.this
+    port: 443
+    protocol: HTTPS
+    tls:
+      mode: Terminate
+      certificateRefs:
+        - name: apps-hcpapps-tls
+          kind: Secret
+EOF
+
+

Enable TLS

+
    +
  1. +

    In T1, create a TLSPolicy and attach it to your Gateway:

    +
    kubectl apply -f - <<EOF
    +apiVersion: kuadrant.io/v1alpha1
    +kind: TLSPolicy
    +metadata:
    +  name: prod-web
    +  namespace: multi-cluster-gateways
    +spec:
    +  targetRef:
    +    name: prod-web
    +    group: gateway.networking.k8s.io
    +    kind: Gateway
    +  issuerRef:
    +    group: cert-manager.io
    +    kind: ClusterIssuer
    +    name: glbc-ca   
    +EOF
    +
    +
  2. +
  3. +

    You should now see a Certificate resource in the hub cluster. In T1, run:

    +

    kubectl get certificates -A
    +
    + you'll see the following:

    +
  4. +
+

NAMESPACE                NAME               READY   SECRET             AGE
multi-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m

+

It is possible to also use a letsencrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

+

Place the gateway

+

To place the Gateway, we need to add a placement label to Gateway resource to instruct the Gateway controller where we want this Gateway instantiated. In T1, run:

+
kubectl label gateways.gateway.networking.k8s.io prod-web "cluster.open-cluster-management.io/placement"="http-gateway" -n multi-cluster-gateways
+
+

Now on the hub cluster you should find there is a configured Gateway and instantiated Gateway. In T1, run:

+
kubectl get gateways.gateway.networking.k8s.io -A
+
+
kuadrant-multi-cluster-gateways   prod-web   istio                                         172.31.200.0                29s
+multi-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s
+
+

Deploy the App to the 2 workload clusters

+

We do this before the HttpRoute is created for the purposes of the walkthrough. If we don't do it in this order, there may be negative DNS caching of the ServiceImport clusterset host, resulting in 503 responses. In T3, targeting the 1st workload cluster, go ahead and create the Service and Deployment.

+
kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/local/kube/workload1.yaml && export KUBECONFIG=$(pwd)/local/kube/workload1.yaml
+kubectl create namespace kuadrant-multi-cluster-gateways
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  ports:
+    - name: http-port
+      port: 8080
+      targetPort: http-port
+      protocol: TCP
+  selector:
+    app: echo
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: echo
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: echo
+  template:
+    metadata:
+      labels:
+        app: echo
+    spec:
+      containers:
+        - name: echo
+          image: docker.io/jmalloc/echo-server
+          ports:
+            - name: http-port
+              containerPort: 8080
+              protocol: TCP   
+EOF
+
+

In T4, targeting the 2nd workload cluster, go ahead and create Service and Deployment there too.

+
kind export kubeconfig --name=mgc-workload-2 --kubeconfig=$(pwd)/local/kube/workload2.yaml && export KUBECONFIG=$(pwd)/local/kube/workload2.yaml
+kubectl create namespace kuadrant-multi-cluster-gateways
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  ports:
+    - name: http-port
+      port: 8080
+      targetPort: http-port
+      protocol: TCP
+  selector:
+    app: echo
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: echo
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: echo
+  template:
+    metadata:
+      labels:
+        app: echo
+    spec:
+      containers:
+        - name: echo
+          image: docker.io/jmalloc/echo-server
+          ports:
+            - name: http-port
+              containerPort: 8080
+              protocol: TCP   
+EOF
+
+

Create the ServiceExports and ServiceImports

+

In T1, export the Apps echo service from the workload clusters.

+
./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig --namespace kuadrant-multi-cluster-gateways echo
+./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-2.kubeconfig --namespace kuadrant-multi-cluster-gateways echo
+
+

In T1, verify the ServiceExport was created on both clusters

+
kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceexport echo -n kuadrant-multi-cluster-gateways
+kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-2.kubeconfig get serviceexport echo -n kuadrant-multi-cluster-gateways
+
+

In T1, verify the ServiceImport was created in the hub

+
kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceimport echo -n kuadrant-multi-cluster-gateways
+
+

Create and attach a HTTPRoute and Service

+

Let's create a HTTPRoute and a Service (that uses an externalName) in the hub cluster. +Remember to replace the hostnames. In T1, run:

+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: my-route
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  parentRefs:
+  - kind: Gateway
+    name: prod-web
+    namespace: kuadrant-multi-cluster-gateways
+  hostnames:
+  - "sub.replace.this"  
+  rules:
+  - backendRefs:
+    - name: echo-import-proxy
+      port: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: echo-import-proxy
+  namespace: kuadrant-multi-cluster-gateways
+spec:
+  type: ExternalName
+  externalName: echo.kuadrant-multi-cluster-gateways.svc.clusterset.local
+  ports:
+  - port: 8080
+    targetPort: 8080
+    protocol: TCP
+EOF
+
+

Enable DNS

+
    +
  1. In T1, create a DNSPolicy and attach it to your Gateway:
  2. +
+
kubectl apply -f - <<EOF
+apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway     
+EOF
+
+

Once this is done, the Kuadrant multi-cluster Gateway controller will pick up that a HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that Gateway for the host defined in that listener.

+

You should now see a DNSRecord with only 1 endpoint added, which corresponds to the address assigned to the Gateway where the HTTPRoute was created. In T1, run:

+
kubectl get dnsrecord -n multi-cluster-gateways -o=yaml
+
+

Verify the HttpRoute works

+

At this point you should get a 200 response. +It might take a minute for dns to propagate internally by submariner after importing the services above.

+
curl -Ik https://sub.replace.this
+
+

If DNS is not resolving for you yet, you may get a 503. +In that case you can force resolve the IP to the hub cluster and verify a 200 is returned.

+
curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this
+
+

Known issues

+

At the time of writing, Istio does not support adding a ServiceImport as a backendRef directly as per the Gateway API proposal - GEP-1748. +This is why the walkthrough uses a Service of type ExternalName to route to the clusterset host instead. +There is an issue questioning the current state of support.

+

The installation of the subctl cli fails on macs with arm architecture. The error is curl: (22) The requested URL returned error: 404. A workaround for this is to download the amd64 darwin release manually from the releases page and extract it to the ./bin directory.

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/index.html b/multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/index.html
new file mode 100644
index 00000000..6af858cb
--- /dev/null
+++ b/multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/index.html
@@ -0,0 +1,2148 @@
+Defining and Distributing Multicluster Gateways with OCM - Kuadrant Documentation

Defining and Distributing Multicluster Gateways with OCM

+ +

Define and Place Gateways

+

In this guide, we will go through defining a Gateway in the OCM hub cluster that can then be distributed to and instantiated on a set of managed spoke clusters.

+

Pre Requisites

+ +

You should start this guide with OCM installed, 1 or more spoke clusters registered with the hub and Kuadrant installed into the hub.

+

Going through the installation will also ensure that a supported GatewayClass is registered in the hub cluster that the Kuadrant multi-cluster gateway controller will handle.

+

Defining a Gateway

+

Once you have Kuadrant installed into the OCM hub cluster, you can begin defining and placing Gateways across your OCM managed infrastructure.

+

To define a Gateway and have it managed by the multi-cluster gateway controller, we need to do the following things

+
    +
  • Create a Gateway API Gateway resource in the Hub cluster
  • +
  • Ensure that gateway resource specifies the correct gateway class so that it will be picked up and managed by the multi-cluster gateway controller
  • +
+

So really there is very little difference from setting up a gateway in a non-OCM hub. The key difference here is that this gateway definition represents a "template" gateway that will then be distributed and provisioned on chosen spoke clusters. The actual provider for this Gateway instance defaults to Istio. This is because Kuadrant also offers APIs that integrate at the gateway provider level, and the gateway provider we currently support is Istio.

+

The Gateway API CRDs will have been installed into your hub as part of the installation of Kuadrant into the hub. Below is an example gateway. More Examples. Assuming you have the correct RBAC permissions and a namespace, the key thing is to define the correct GatewayClass name to use and a listener host.

+
apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster #this needs to be set in your gateway definition
+  listeners:
+  - allowedRoutes:
+      namespaces:
+        from: All
+    name: specific
+    hostname: 'some.domain.example.com'
+    port: 443
+    protocol: HTTP
+
+

Placing a Gateway

+

To place a gateway, we will need to create a Placement resource.

+

Below is an example placement resource. To learn more about placement check out the OCM docs placement

+
 apiVersion: cluster.open-cluster-management.io/v1beta1
+  kind: Placement
+  metadata:
+    name: http-gateway-placement
+    namespace: multi-cluster-gateways
+  spec:
+    clusterSets:
+    - gateway-clusters # defines which ManagedClusterSet to use. https://open-cluster-management.io/concepts/managedclusterset/ 
+    numberOfClusters: 2 #defines how many clusters to select from the chosen clusterSets
+
+

Finally in order to actually have the Gateway instances deployed to your spoke clusters that can start receiving traffic, you need to label the hub gateway with a placement label. In the above example we would add the following label to the gateway.

+
cluster.open-cluster-management.io/placement: http-gateway-placement #this value should match the name of your placement.
+
+

What if you want to use a different gateway provider?

+

While we recommend using Istio as the gateway provider as that is how you will get access to the full suite of policy APIs, it is possible to use another provider if you choose to however this will result in a reduced set of applicable policy objects.

+

If you are only using the DNSPolicy and TLSPolicy resources, you can use these APIs with any Gateway provider. To change the underlying provider, you need to set the gatewayclass param downstreamClass. To do this create the following configmap:

+
apiVersion: v1
+data:
+  params: |
+    {
+      "downstreamClass": "eg" #this is the class for envoy gateway used as an example
+    }
+kind: ConfigMap
+metadata:
+  name: gateway-params
+  namespace: multi-cluster-gateways
+
+

Once this has been created, any gateway created from that gateway class will result in a downstream gateway being provisioned with the configured downstreamClass.
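
For illustration, a GatewayClass can consume these params through the standard Gateway API parametersRef. This is a sketch only: the controllerName below is an assumption, not the exact value from hack/ocm/gatewayclass.yaml:

apiVersion: gateway.networking.k8s.io/v1beta1
+kind: GatewayClass
+metadata:
+  name: kuadrant-multi-cluster-gateway-instance-per-cluster
+spec:
+  controllerName: kuadrant.io/mgc-gw-controller # assumption: use the controller name from your install
+  parametersRef:
+    group: ""
+    kind: ConfigMap
+    name: gateway-params
+    namespace: multi-cluster-gateways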

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/gateways/gateway-deletion/index.html b/multicluster-gateway-controller/docs/gateways/gateway-deletion/index.html
new file mode 100644
index 00000000..7255d1b7
--- /dev/null
+++ b/multicluster-gateway-controller/docs/gateways/gateway-deletion/index.html
@@ -0,0 +1,2071 @@
+Gateway Deletion - Kuadrant Documentation

Gateway Deletion

+ +

Gateway deletion

+

When deleting a gateway, it should ONLY be deleted in the control plane cluster (an example command follows the list below). This will then trigger the following events:

+

Workload cluster(s):

+
    +
  1. The corresponding gateway in the workload clusters will also be deleted.
  2. +
+

Control plane cluster(s):

+
  1. DNS Record deletion:

     Gateways and DNS records have a 1:1 relationship only; when a gateway gets deleted, the corresponding DNS record also gets marked for deletion. This then triggers the DNS record to be removed from the managed zone in the DNS provider (currently only Route 53 is accepted).

  2. Certs and secrets deletion:

     When a gateway is created, a cert is also created for the host in the gateway; this is also removed when the gateway is deleted.
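
Putting this together, deleting the hub gateway is the single command referenced above (a sketch, assuming the prod-web gateway used elsewhere in these docs):

kubectl delete gateways.gateway.networking.k8s.io prod-web -n multi-cluster-gateways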
\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/getting-started/index.html b/multicluster-gateway-controller/docs/getting-started/index.html
new file mode 100644
index 00000000..5bf6219d
--- /dev/null
+++ b/multicluster-gateway-controller/docs/getting-started/index.html
@@ -0,0 +1,2141 @@
+Getting Started - Kuadrant Documentation

Getting Started

+ +

Getting Started

+

Prerequisites

+ +

Config

+

Export environment variables with the keys listed below. Fill in your own values as appropriate. Note that you will need to have created a root domain in AWS using Route 53:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Env Var                     | Example Value             | Description
MGC_ZONE_ROOT_DOMAIN        | jbloggs.hcpapps.net       | Hostname for the root Domain
MGC_AWS_DNS_PUBLIC_ZONE_ID  | Z01234567US0IQE3YLO00     | AWS Route 53 Zone ID for specified MGC_ZONE_ROOT_DOMAIN
MGC_AWS_ACCESS_KEY_ID       | AKIA1234567890000000      | Access Key ID, with access to resources in Route 53
MGC_AWS_SECRET_ACCESS_KEY   | Z01234567US0000000        | Access Secret Access Key, with access to resources in Route 53
MGC_AWS_REGION              | eu-west-1                 | AWS Region
MGC_SUB_DOMAIN              | myapp.jbloggs.hcpapps.net | Subdomain of the root domain used by the example application
+
+

Alternatively, to set defaults, add the above environment variables to your .zshrc or .bash_profile.
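
For example, using the illustrative values from the table above (replace each value with your own):

export MGC_ZONE_ROOT_DOMAIN=jbloggs.hcpapps.net
+export MGC_AWS_DNS_PUBLIC_ZONE_ID=Z01234567US0IQE3YLO00
+export MGC_AWS_ACCESS_KEY_ID=AKIA1234567890000000
+export MGC_AWS_SECRET_ACCESS_KEY=Z01234567US0000000
+export MGC_AWS_REGION=eu-west-1
+export MGC_SUB_DOMAIN=myapp.jbloggs.hcpapps.net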

+
+

Set Up Clusters and Multicluster Gateway Controller

+
 curl https://raw.githubusercontent.com/kuadrant/multicluster-gateway-controller/main/hack/quickstart-setup.sh | bash
+
+

What's Next

+

Now that you have two Kind clusters configured with the Multicluster Gateway Controller installed, you are ready to begin the Multicluster Gateways walkthrough.

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/how-to/metrics-federation/index.html b/multicluster-gateway-controller/docs/how-to/metrics-federation/index.html
new file mode 100644
index 00000000..f32c11f4
--- /dev/null
+++ b/multicluster-gateway-controller/docs/how-to/metrics-federation/index.html
@@ -0,0 +1,2163 @@
+Metrics Federation - Kuadrant Documentation

Metrics Federation (WIP)

+

Introduction

+

This walkthrough shows how to install a metrics federation stack locally and query Istio metrics from the hub.

+
+

Note: ❗ this walkthrough is incomplete. It will be updated as issues from https://github.com/Kuadrant/multicluster-gateway-controller/issues/197 land

+
+

arch

+

Requirements

+
    +
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone
  • +
+
+

Note: ❗ this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

+
+

Installation and Setup

+

To setup a local instance with metrics federation, run:

+
make local-setup OCM_SINGLE=true METRICS_FEDERATION=true MGC_WORKLOAD_CLUSTERS_COUNT=1
+
+

Once complete, you should see something like the below in the output (you may need to scroll)

+
    Connect to Thanos Query UI
+
+        URL : https://thanos-query.172.31.0.2.nip.io
+
+

Open the URL in a browser, accepting the non-CA-signed certificate. In the Thanos UI query box, enter the below query and press 'Execute'

+
sum(rate(container_cpu_usage_seconds_total{namespace="monitoring",container="prometheus"}[5m]))
+
+

You should see a response in the table view. +In the Graph view you should see some data over time as well.

+

arch

+

Istio Metrics

+

Thanos Query UI

+

To query Istio workload metrics, you should first deploy a Gateway & HttpRoute, and send traffic to it. +The easiest way to do this is by following the steps in the OCM Walkthrough. Before going through the walkthrough, there are two things to note: Firstly, you do not need to re-run the make local-setup step, as that should have already been run with the METRICS_FEDERATION flag above. Secondly, you should set METRICS=true when it comes to the step to start and deploy the gateway controller, i.e:

+
make build-controller kind-load-controller deploy-controller METRICS=true
+
+

After completing the OCM walkthrough, use curl to send some traffic to the application

+
while true; do curl -k https://$MGC_SUB_DOMAIN && sleep 5; done
+
+

Open the Thanos Query UI again and try the below query:

+
sum(rate(istio_requests_total{}[5m])) by(destination_workload)
+
+

In the graph view you should see something that looks like the graph below. This shows the rate of requests (per second) for each Istio workload. In this case, there is 1 workload, balanced across 2 clusters.

+

arch

+

To see the rate of requests per cluster (actually per pod across all clusters), the below query can be used. +Over long periods of time, this graph can show traffic load balancing between application instances.

+
sum(rate(istio_requests_total{}[5m])) by(pod)
+
+

arch

+

Grafana UI

+

In the output from local-setup, you should see something like the below (you may need to scroll)

+
    Connect to Grafana Query UI
+
+        URL : https://grafana.172.31.0.2.nip.io
+
+

Open Grafana in a browser, accepting the non CA signed certificate. +The default login is admin/admin.

+

Using the left sidebar in the Grafana UI, navigate to Dashboards > Browse and click on the Istio Workload Dashboard.

+

arch

+

You should be able to see the following layout, which will include data from the curl command you ran in the previous section.

+

arch

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/how-to/metrics-walkthrough/index.html b/multicluster-gateway-controller/docs/how-to/metrics-walkthrough/index.html
new file mode 100644
index 00000000..175f81a3
--- /dev/null
+++ b/multicluster-gateway-controller/docs/how-to/metrics-walkthrough/index.html
@@ -0,0 +1,2125 @@
+Metrics Walkthrough - Kuadrant Documentation

Metrics Walkthrough

+ +

Installation and Configuration of Metrics

+

This document will guide you in installing metrics for your application and provide directions on where to access them. Additionally, it will include dashboards set up to display these metrics.

+

Requirements/prerequisites

+

Prior to commencing the metrics installation process, it is imperative that you have successfully completed the initial getting started guide. For reference, please consult the guide available at the following link: Getting Started Guide.

+

Setting Up Metrics

+

To establish metrics, simply execute the following script in your terminal:

+
    curl https://raw.githubusercontent.com/kuadrant/multicluster-gateway-controller/main/hack/quickstart-metrics.sh | bash
+
+

This script will initiate the setup process for your metrics configuration. +After the script finishes running, you should see something like:

+
Connect to Thanos Query UI
+    URL: https://thanos-query.172.31.0.2.nip.io
+
+Connect to Grafana UI
+    URL: https://grafana.172.31.0.2.nip.io
+
+

You can visit the Grafana dashboard by accessing the provided URL for Grafana UI. (you may need to scroll)

+

Monitoring Operational Status in Grafana Dashboard

+

After setting up metrics, you can monitor the operational status of your system using the Grafana dashboard.

+

To generate traffic to the application, use curl as follows:

+
while true; do curl -k https://$MGC_SUB_DOMAIN && sleep 5; done
+
+

Accessing the Grafana Dashboard

+

To view the operational metrics and status, proceed with the following steps:

+
    +
  1. Access the Grafana dashboard by clicking or entering the provided URL for the Grafana UI in your web browser.
  2. +
+
https://grafana.172.31.0.2.nip.io
+
+
+

Note: The default login credentials for Grafana are admin/admin. You may need to accept the non-CA signed certificate to proceed.

+
+
    +
  1. Navigate to the included Grafana Dashboard
  2. +
+

Using the left sidebar in the Grafana UI, navigate to Dashboards > Browse and select either the Istio Workload Dashboard or MGC SRE Dashboard.

+

arch

+

In Istio Workload Dashboard you should be able to see the following layout, which will include data from the curl command you ran in the previous section.

+

arch

+

The MGC SRE Dashboard displays real-time insights and visualizations of resources managed by the multicluster-gateway-controller e.g. DNSPolicy, TLSPolicy, DNSRecord etc..

+

arch

+

The Grafana dashboard will provide you with real-time insights and visualizations of your gateway's performance and metrics.

+

By utilizing the Grafana dashboard, you can effectively monitor the health and behavior of your system, making informed decisions based on the displayed data. This monitoring capability enables you to proactively identify and address any potential issues to ensure the smooth operation of your environment.

\ No newline at end of file
diff --git a/multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/index.html b/multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/index.html
new file mode 100644
index 00000000..d3d33883
--- /dev/null
+++ b/multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/index.html
@@ -0,0 +1,2527 @@
+Multicluster Gateways Walkthrough - Kuadrant Documentation

Multicluster Gateways Walkthrough

+

Introduction

+

This document will walk you through using Open Cluster Management (OCM) and Kuadrant to configure and deploy a multi-cluster gateway.

+

You will also deploy a simple application that uses that gateway for ingress and protects that application's endpoints with a rate limit policy.

+

We will start with a single cluster and move to multiple clusters to illustrate how a single gateway definition can be used across multiple clusters and highlight the automatic TLS integration and also the automatic DNS load balancing between gateway instances.

+

Requirements

+ +

Open terminal sessions and set cluster context

+

For this walkthrough, we're going to use multiple terminal sessions/windows.

+

Open two windows, which we'll refer to throughout this walkthrough as:

+
    +
  • T1 (Hub Cluster)
  • +
  • T2 (Workloads cluster)
  • +
+

Set the kubecontext for each terminal, refer back to these commands if re-config is needed.

+

In T1 run kind export kubeconfig --name=mgc-control-plane --kubeconfig=$(pwd)/control-plane.yaml && export KUBECONFIG=$(pwd)/control-plane.yaml

+

In T2 run kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/workload1.yaml && export KUBECONFIG=$(pwd)/workload1.yaml

+

export MGC_SUB_DOMAIN in each terminal if you haven't already added it to your .zshrc or .bash_profile.
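
For example (a sketch, reusing the illustrative value from the Getting Started guide):

export MGC_SUB_DOMAIN=myapp.jbloggs.hcpapps.net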

+

Create a gateway

+

Check the managed zone

+
    +
  1. +

    First let's ensure the managedzone is present. In T1, run the following:

    +

    kubectl get managedzone -n multi-cluster-gateways
    +
    You should see the following:

    NAME         DOMAIN NAME        ID                                  RECORD COUNT   NAMESERVERS                                                                                          READY
    mgc-dev-mz   test.hcpapps.net   /hostedzone/Z08224701SVEG4XHW89W0   7              ["ns-1414.awsdns-48.org","ns-1623.awsdns-10.co.uk","ns-684.awsdns-21.net","ns-80.awsdns-10.com"]   True
    +

    +
  2. +
+

You are now ready to begin creating a gateway! 🎉

+
    +
  1. We will now create a multi-cluster gateway definition in the hub cluster. In T1, run the following:
  2. +
+
kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+  listeners:
+  - allowedRoutes:
+      namespaces:
+        from: All
+    name: api
+    hostname: $MGC_SUB_DOMAIN
+    port: 443
+    protocol: HTTPS
+    tls:
+      mode: Terminate
+      certificateRefs:
+        - name: apps-hcpapps-tls
+          kind: Secret
+EOF
+
+

Enable TLS

+
    +
  1. +

    In T1, create a TLSPolicy and attach it to your Gateway:

    +
    kubectl apply -f - <<EOF
    +apiVersion: kuadrant.io/v1alpha1
    +kind: TLSPolicy
    +metadata:
    +  name: prod-web
    +  namespace: multi-cluster-gateways
    +spec:
    +  targetRef:
    +    name: prod-web
    +    group: gateway.networking.k8s.io
    +    kind: Gateway
    +  issuerRef:
    +    group: cert-manager.io
    +    kind: ClusterIssuer
    +    name: glbc-ca   
    +EOF
    +
    +
  2. +
  3. +

    You should now see a Certificate resource in the hub cluster. In T1, run:

    +

    kubectl get certificates -A
    +
    +you'll see the following:

    +
  4. +
+

NAMESPACE                NAME               READY   SECRET             AGE
multi-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m

+

It is possible to also use a letsencrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

+

Place the gateway

+

In the hub cluster there will be a single gateway definition but no actual gateway for handling traffic yet.

+

This is because we haven't placed the gateway yet onto any of our ingress clusters (in this case the hub and ingress cluster are the same)

+
    +
  1. +

    To place the gateway, we need to add a placement label to gateway resource to instruct the gateway controller where we want this gateway instantiated. In T1, run:

    +
    kubectl label gateway prod-web "cluster.open-cluster-management.io/placement"="http-gateway" -n multi-cluster-gateways
    +
    +
  2. +
  3. +

    Now on the hub cluster you should find there is a configured gateway and instantiated gateway. In T1, run:

    +

    kubectl get gateway -A
    +
    +you'll see the following:

    +
    NAMESPACE                         NAME       CLASS                                                 ADDRESS        PROGRAMMED   AGE
    kuadrant-multi-cluster-gateways   prod-web   istio                                                 172.31.200.0                29s
    multi-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s
    +
    +

    The instantiated gateway in this case is handled by Istio and has been assigned the 172.x address. The gateway definition you created earlier lives in the multi-cluster-gateways namespace. As we are in a single cluster, you can see both. Later on we will add another ingress cluster, and in that case you will only see the instantiated gateway.

    +

    Additionally, you should be able to see a secret containing a self-signed certificate.

    +
  4. +
  5. +

    In T1, run:

    +

    kubectl get secrets -n kuadrant-multi-cluster-gateways
    +
    +you'll see the following: +
    NAME               TYPE                DATA   AGE
    +apps-hcpapps-tls   kubernetes.io/tls   3      13m
    +

    +
  6. +
+

The listener is configured to use this TLS secret as well. So now our gateway has been placed, is running in the right locations with the right configuration, and TLS has been set up for the HTTPS listeners.
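As a quick sanity check, a sketch using jsonpath against the gateway definition shown earlier should print the secret name (apps-hcpapps-tls):

```bash
kubectl get gateway prod-web -n multi-cluster-gateways \
  -o jsonpath='{.spec.listeners[0].tls.certificateRefs[0].name}'
```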

+

So what about DNS? How do we bring traffic to these gateways?

+

Create and attach a HTTPRoute

+
    +
  1. +

    In T1, run the following command in the hub cluster; you will see that we currently have no DNSRecord resources.

    +

    kubectl get dnsrecord -A
    +
    +
    No resources found
    +

    +
  2. +
  3. +

    Let's create a simple echo app with an HTTPRoute in one of the gateway clusters. Remember to replace the hostnames. Again, we are creating this in the single hub cluster for now. In T1, run:

    +
    kubectl apply -f - <<EOF
    +apiVersion: gateway.networking.k8s.io/v1beta1
    +kind: HTTPRoute
    +metadata:
    +  name: my-route
    +spec:
    +  parentRefs:
    +  - kind: Gateway
    +    name: prod-web
    +    namespace: kuadrant-multi-cluster-gateways
    +  hostnames:
    +  - "$MGC_SUB_DOMAIN"  
    +  rules:
    +  - backendRefs:
    +    - name: echo
    +      port: 8080
    +---
    +apiVersion: v1
    +kind: Service
    +metadata:
    +  name: echo
    +spec:
    +  ports:
    +    - name: http-port
    +      port: 8080
    +      targetPort: http-port
    +      protocol: TCP
    +  selector:
    +    app: echo     
    +---
    +apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: echo
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: echo
    +  template:
    +    metadata:
    +      labels:
    +        app: echo
    +    spec:
    +      containers:
    +        - name: echo
    +          image: docker.io/jmalloc/echo-server
    +          ports:
    +            - name: http-port
    +              containerPort: 8080
    +              protocol: TCP       
    +EOF
    +
    +
  4. +
+

Enable DNS

+
    +
  1. +

    In T1, create a DNSPolicy and attach it to your Gateway:

    +
    kubectl apply -f - <<EOF
    +apiVersion: kuadrant.io/v1alpha1
    +kind: DNSPolicy
    +metadata:
    +  name: prod-web
    +  namespace: multi-cluster-gateways
    +spec:
    +  targetRef:
    +    name: prod-web
    +    group: gateway.networking.k8s.io
    +    kind: Gateway     
    +EOF
    +
    +
  2. +
+

Once this is done, the Kuadrant multi-cluster gateway controller will pick up that an HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that gateway for the host defined in that listener.

+
    +
  1. +

    You should now see a DNSRecord resource in the hub cluster. In T1, run:

    +

    kubectl get dnsrecord -A
    +
    +
    NAMESPACE                NAME                 READY
    +multi-cluster-gateways   prod-web-api         True
    +

    +
  2. +
  3. +

    You should also be able to see that there is only 1 endpoint added, which corresponds to the address assigned to the gateway where the HTTPRoute was created. In T1, run:

    +
    kubectl get dnsrecord -n multi-cluster-gateways -o=yaml
    +
    +
  4. +
  5. +

    Give DNS a minute or two to update. You should then be able to execute the following and get back the correct A record.

    +

    dig $MGC_SUB_DOMAIN
    +
    You should also be able to curl that endpoint:

    +
    curl -k https://$MGC_SUB_DOMAIN
    +
    +# Request served by echo-XXX-XXX
    +
    +
  6. +
+

Introducing the second cluster

+

So now we have a working gateway with DNS and TLS configured. Let's place this gateway on a second cluster and bring traffic to that gateway as well.

+
    +
  1. +

    First, add the second cluster to the clusterset by running the following in T1:

    +
    kubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true
    +
    +
  2. +
  3. +

    This has added our workload-1 cluster to the ingress clusterset. Next, we need to modify our placement to update numberOfClusters to 2. To patch, in T1, run:

    +
    kubectl patch placement http-gateway -n multi-cluster-gateways --type='json' -p='[{"op": "replace", "path": "/spec/numberOfClusters", "value": 2}]'
    +
    +
  4. +
  5. +

    In the T2 window, execute the following to see the gateway on the workload-1 cluster:

    +

    kubectl get gateways -A
    +
    +You'll see the following +
    NAMESPACE                         NAME       CLASS   ADDRESS        PROGRAMMED   AGE
    +kuadrant-multi-cluster-gateways   prod-web   istio   172.31.201.0                90s
    +

    +

    So now we have a second ingress cluster configured with the same Gateway.

    +
  6. +
  7. +

    In T2, targeting the second cluster, go ahead and create the HTTPRoute in the second gateway cluster.

    +
    +

    ❗ Note: Ensure the MGC_SUB_DOMAIN environment variable has been exported in this terminal session before applying this config.

    +
    +
    kubectl apply -f - <<EOF
    +apiVersion: gateway.networking.k8s.io/v1beta1
    +kind: HTTPRoute
    +metadata:
    +  name: my-route
    +spec:
    +  parentRefs:
    +  - kind: Gateway
    +    name: prod-web
    +    namespace: kuadrant-multi-cluster-gateways
    +  hostnames:
    +  - "$MGC_SUB_DOMAIN"  
    +  rules:
    +  - backendRefs:
    +    - name: echo
    +      port: 8080
    +---
    +apiVersion: v1
    +kind: Service
    +metadata:
    +  name: echo
    +spec:
    +  ports:
    +    - name: http-port
    +      port: 8080
    +      targetPort: http-port
    +      protocol: TCP
    +  selector:
    +    app: echo     
    +---
    +apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: echo
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: echo
    +  template:
    +    metadata:
    +      labels:
    +        app: echo
    +    spec:
    +      containers:
    +        - name: echo
    +          image: docker.io/jmalloc/echo-server
    +          ports:
    +            - name: http-port
    +              containerPort: 8080
    +              protocol: TCP       
    +EOF
    +
    +
  8. +
  9. +

    Now if you move back to the hub context in T1 and take a look at the dnsrecord, you will see we now have two A records configured:

    +
  10. +
+
kubectl get dnsrecord -n multi-cluster-gateways -o=yaml
+
+

Watching DNS changes

+

If you want, you can use watch dig $MGC_SUB_DOMAIN to see the DNS switching between the two addresses.
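A minimal sketch of that:

```bash
# re-runs dig every 2 seconds (watch's default interval) and prints only the answer addresses
watch "dig +short $MGC_SUB_DOMAIN"
```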

+

Follow on Walkthroughs

+

Some good follow-on walkthroughs that build on this walkthrough:

diff --git a/multicluster-gateway-controller/docs/how-to/template/index.html b/multicluster-gateway-controller/docs/how-to/template/index.html new file mode 100644 index 00000000..2432783c --- /dev/null +++ b/multicluster-gateway-controller/docs/how-to/template/index.html

Title

+

Introduction

+

blah blah amazing and wonderful feature blah blah gateway blah blah DNS

+

Requirements

+
    +
  • A computer
  • +
  • Electricity
  • +
  • Kind
  • +
  • AWS Account
  • +
  • Route 53 enabled
  • +
  • Other Walkthroughs
  • +
+

Installation and Setup

1. Clone this repo locally
2. Setup a ./controller-config.env file in the root of the repo with the following key values

+
```bash
# this sets up your default managed zone
AWS_DNS_PUBLIC_ZONE_ID=<AWS ZONE ID>
# this is the domain at the root of your zone (foo.example.com)
ZONE_ROOT_DOMAIN=<replace.this>
LOG_LEVEL=1
```
+
+
    +
  1. +

    Setup a ./aws-credentials.env with credentials to access route 53

    +

    For example:

    +
    AWS_ACCESS_KEY_ID=<access_key_id>
    +AWS_SECRET_ACCESS_KEY=<secret_access_key>
    +AWS_REGION=eu-west-1
    +
    +
  2. +
+

Open terminal sessions

+

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

+

Open three windows, which we'll refer to throughout this walkthrough as:

+
    +
  • T1 (Hub Cluster)
  • +
  • T2 (Where we'll run our controller locally)
  • +
  • T3 (Workloads cluster)
  • +
+

To setup a local instance, in T1, run:

+

Known bugs

+

buzzzzz

+

Follow on Walkthroughs

+

Some good follow-on walkthroughs that build on this walkthrough:

+

Helpful symbols (For dev use)

+

🆘
❗
(for more see https://gist.github.com/rxaviers/7360908)

diff --git a/multicluster-gateway-controller/docs/installation/control-plane-installation/index.html b/multicluster-gateway-controller/docs/installation/control-plane-installation/index.html new file mode 100644 index 00000000..5adcf37a --- /dev/null +++ b/multicluster-gateway-controller/docs/installation/control-plane-installation/index.html

Setting up MGC in Existing OCM Clusters

+

This guide will show you how to install and configure the Multi-Cluster Gateway Controller in preexisting Open Cluster Management-configured clusters.

+

Prerequisites

+
    +
  • A hub cluster running the OCM control plane (v0.11.0 or greater)
  • +
  • Any number of additional spoke clusters that have been configured as OCM ManagedClusters
  • +
  • Kubectl (>= v1.14.0)
  • +
  • Either a preexisting cert-manager installation or the Kustomize and Helm CLIs
  • +
+

Configure OCM with RawFeedbackJsonString Feature Gate

+

All OCM spoke clusters must be configured with the RawFeedbackJsonString feature gate enabled. This can be done in two ways:

+
    +
  1. When running the clusteradm join command that joins the spoke cluster to the hub:
  2. +
+
clusteradm join <snip> --feature-gates=RawFeedbackJsonString=true
+
+
    +
  2. By patching each spoke cluster's klusterlet in an existing OCM install:
  2. +
+
kubectl patch klusterlet klusterlet --type merge --patch '{"spec": {"workConfiguration": {"featureGates": [{"feature": "RawFeedbackJsonString", "mode": "Enable"}]}}}' --context <EACH_SPOKE_CLUSTER>
+
+
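To confirm the feature gate took effect on a spoke, a sketch (the field path comes from the patch above):

```bash
kubectl get klusterlet klusterlet \
  -o jsonpath='{.spec.workConfiguration.featureGates}' --context <EACH_SPOKE_CLUSTER>
```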

Installing MGC

+

First, run the following command in the context of your hub cluster to install the Gateway API CRDs:

+
kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.2"
+
+

We can then add a wait to verify the CRDs have been established:

+

kubectl wait --timeout=5m crd/gatewayclasses.gateway.networking.k8s.io crd/gateways.gateway.networking.k8s.io crd/httproutes.gateway.networking.k8s.io --for=condition=Established
+
+
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io condition met
+customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io condition met
+customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io condition met
+

+

Then run the following command to install the MGC:

+
kubectl apply -k "github.com/kuadrant/multicluster-gateway-controller.git/config/mgc-install-guide?ref=main"
+
+

In addition to the MGC, this will also install the Kuadrant add-on manager and a GatewayClass from which MGC-managed Gateways can be instantiated.

+

After the configuration has been applied, you can verify that the MGC and add-on manager have been installed and are running:

+

kubectl wait --timeout=5m -n multicluster-gateway-controller-system deployment/mgc-controller-manager deployment/mgc-kuadrant-add-on-manager --for=condition=Available
+
+
deployment.apps/mgc-controller-manager condition met
+deployment.apps/mgc-kuadrant-add-on-manager condition met
+

+

We can also verify that the GatewayClass has been accepted by the MGC:

+

kubectl wait --timeout=5m gatewayclass/kuadrant-multi-cluster-gateway-instance-per-cluster --for=condition=Accepted
+
+
gatewayclass.gateway.networking.k8s.io/kuadrant-multi-cluster-gateway-instance-per-cluster condition met
+

+

Creating a ManagedZone

+

To manage the creation of DNS records, MGC uses ManagedZone resources. A ManagedZone can be configured to use DNS Zones on either AWS (Route 53) or GCP. We will now create a ManagedZone on the cluster using AWS credentials.

+

First, export the environment variables detailed here in a terminal session.

+

Next, create a secret containing the AWS credentials. We'll also create a namespace for your MGC configs:

+
cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: multi-cluster-gateways
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mgc-aws-credentials
+  namespace: multi-cluster-gateways
+type: "kuadrant.io/aws"
+stringData:
+  AWS_ACCESS_KEY_ID: ${MGC_AWS_ACCESS_KEY_ID}
+  AWS_SECRET_ACCESS_KEY: ${MGC_AWS_SECRET_ACCESS_KEY}
+  AWS_REGION: ${MGC_AWS_REGION}
+EOF
+
+

A ManagedZone can then be created:

+
cat <<EOF | kubectl apply -f -
+apiVersion: kuadrant.io/v1alpha1
+kind: ManagedZone
+metadata:
+  name: mgc-dev-mz
+  namespace: multi-cluster-gateways
+spec:
+  id: ${MGC_AWS_DNS_PUBLIC_ZONE_ID}
+  domainName: ${MGC_ZONE_ROOT_DOMAIN}
+  description: "Dev Managed Zone"
+  dnsProviderSecretRef:
+    name: mgc-aws-credentials
+    namespace: multi-cluster-gateways
+EOF
+
+

You can now verify that the ManagedZone has been created and is in a ready state:

+

kubectl get managedzone -n multi-cluster-gateways
+
+
NAME         DOMAIN NAME      ID                                  RECORD COUNT   NAMESERVERS                                                                                         READY
+mgc-dev-mz   ef.hcpapps.net   /hostedzone/Z06419551EM30QQYMZN7F   2              ["ns-1547.awsdns-01.co.uk","ns-533.awsdns-02.net","ns-200.awsdns-25.com","ns-1369.awsdns-43.org"]   True
+

+

Creating a Cert Issuer

+

To create a CertIssuer, cert-manager first needs to be installed on your hub cluster. If this has not previously been installed on the cluster you can run the command below to do so:

+
kustomize --load-restrictor LoadRestrictionsNone build "github.com/kuadrant/multicluster-gateway-controller.git/config/mgc-install-guide/cert-manager?ref=main" --enable-helm | kubectl apply -f -
+
+

We will now create a ClusterIssuer to be used with cert-manager. For simplicity, we will create a self-signed cert issuer here, but other issuers can also be configured.

+
cat <<EOF | kubectl apply -f -
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: mgc-ca
+  namespace: cert-manager
+spec:
+  selfSigned: {}
+EOF
+
+

Verify that the ClusterIssuer is ready:

+

kubectl wait --timeout=5m -n cert-manager clusterissuer/mgc-ca --for=condition=Ready
+
+
clusterissuer.cert-manager.io/mgc-ca condition met
+

+

Next Steps

+

Now that you have MGC installed and configured in your hub cluster, you can continue with any of these follow-on guides:

diff --git a/multicluster-gateway-controller/docs/installation/service-protection-installation/index.html b/multicluster-gateway-controller/docs/installation/service-protection-installation/index.html new file mode 100644 index 00000000..785e2aad --- /dev/null +++ b/multicluster-gateway-controller/docs/installation/service-protection-installation/index.html

Installing Kuadrant Service Protection into an existing OCM Managed Cluster

+

Introduction

+

This walkthrough will show you how to install and set up the Kuadrant Operator in an OCM Managed Cluster.

+

Prerequisites

+
    +
  • Access to an Open Cluster Management (>= v0.11.0) Managed Cluster, which has already been bootstrapped and registered with a hub cluster
  • +
  • We have a guide which covers this in detail
  • +
  • Also see:
      +
    • https://open-cluster-management.io/getting-started/quick-start/
    • +
    • https://open-cluster-management.io/concepts/managedcluster/
    • +
    +
  • +
  • OLM will need to be installed into the ManagedCluster where you want to run the Kuadrant Service Protection components
  • +
  • See https://olm.operatorframework.io/docs/getting-started/
  • +
  • Kuadrant uses Istio as a Gateway API provider - this will need to be installed into the data plane clusters
  • +
  • We recommend installing Istio 1.17.0, including Gateway API v0.6.2
  • +
  • kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.2"
  • +
  • See also: https://istio.io/v1.17/blog/2022/getting-started-gtwapi/
  • +
+

Alternatively, if you'd like to quickly get started locally, without having to worry too much about the prerequisites, take a look at our Quickstart Guide. It will get you set up with Kind, OLM, OCM & Kuadrant in a few short steps.

+

Install the Kuadrant OCM Add-On

+

Note: if you've run our Getting Started Guide, you'll be set to run this command as-is.

+

To install the Kuadrant Service Protection components into a ManagedCluster, target your OCM hub cluster with kubectl and run:

+

kubectl apply -k "github.com/kuadrant/multicluster-gateway-controller.git/config/service-protection-install-guide?ref=main" -n <your-managed-cluster>

+

The above command will install the ManagedClusterAddOn resource needed to install the Kuadrant addon into the specified namespace, and install the Kuadrant data-plane components into the open-cluster-management-agent-addon namespace.

+

The Kuadrant addon will install:

+
    +
  • the Kuadrant Operator
  • +
  • Limitador (and its associated operator)
  • +
  • Authorino (and its associated operator)
  • +
+

For more details, see the Kuadrant components installed by the kuadrant-operator: https://github.com/Kuadrant/kuadrant-operator#kuadrant-components

+

Existing Istio installations and changing the default Istio Operator name

+

In the case where you have an existing Istio installation on a cluster, you may encounter an issue where the Kuadrant Operator expects Istio's Operator to be named istiocontrolplane.

+

The istioctl command saves the IstioOperator CR that was used to install Istio in a copy of the CR named installed-state.
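If you want to confirm what the saved CR is called on your cluster, a sketch (assuming the default istio-system namespace):

```bash
kubectl get istiooperator -n istio-system
```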

+

To let the Kuadrant operator use this existing installation, set the following:

+

kubectl annotate managedclusteraddon kuadrant-addon "addon.open-cluster-management.io/values"='{"IstioOperator":"installed-state"}' -n <managed spoke cluster>

+

This will propagate down and update the Kuadrant Operator used by the Kuadrant OCM Addon.
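To confirm the annotation landed, a sketch (the escaped dots are required by kubectl's jsonpath syntax):

```bash
kubectl get managedclusteraddon kuadrant-addon -n <managed spoke cluster> \
  -o jsonpath='{.metadata.annotations.addon\.open-cluster-management\.io/values}'
```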

+

Verify the Kuadrant addon installation

+

To verify the Kuadrant OCM addon has installed correctly, run:

+
kubectl wait --timeout=5m -n kuadrant-system kuadrant/kuadrant-sample --for=condition=Ready
+
+

You should see the namespace kuadrant-system, and the following pods come up:

  • authorino-value
  • authorino-operator-value
  • kuadrant-operator-controller-manager-value
  • limitador-value
  • limitador-operator-controller-manager-value
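A sketch of checking this directly:

```bash
kubectl get pods -n kuadrant-system
```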

+

Further Reading

+

With the Kuadrant data plane components installed, here is some further reading material to help you utilise Authorino and Limitador:

+

Getting started with Authorino
Getting started with Limitador

diff --git a/multicluster-gateway-controller/docs/proposals/DNSPolicy/index.html b/multicluster-gateway-controller/docs/proposals/DNSPolicy/index.html new file mode 100644 index 00000000..cfc5769f --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/DNSPolicy/index.html

DNS Policy

+

Problem

+

Gateway admins need a way to define the DNS policy for a gateway distributed across multiple clusters in order to control how much and which traffic reaches these gateways. Ideally, we would allow them to express the strategy they want to use without needing to get into the details of each provider, and without needing to create and maintain the DNS record structure and individual records for all the different gateways that may be within their infrastructure.

+

Use Cases

+

As a gateway admin, I want to be able to reduce latency for my users by routing traffic based on the GEO location of the client. I want this strategy to automatically expand and adjust as my gateway topology grows and changes.

+

As a gateway admin, I have a discount with a particular cloud provider and want to send more of my traffic to the gateways hosted in that provider's infrastructure, and as I add more gateways I want that balance to remain constant and evolve to include my new gateways.

+

Goals

+
    +
  • Allow definition of a DNS load balancing strategy to decide how traffic should be weighted across multiple gateway instances from the central control plane.
  • +
+

Non-Goals

+
    +
  • Allow different DNS policies for different listeners. Although this may be something we look to support in the future, currently policy attachment does not allow for this type of targeting. This means a DNSPolicy is applied for the whole gateway currently.
  • +
  • Define how health checks should work, this will be part of a separate proposal
  • +
+

Terms

+
    +
  • managed listener: This is a listener with a host backed by a DNS zone managed by the multi-cluster gateway controller
  • +
  • hub cluster: control plane cluster that manages 1 or more spokes
  • +
  • spoke cluster: a cluster managed by the hub control plane cluster. This is where gateways are instantiated
  • +
+

Proposal

+

Provide a control plane DNSPolicy API that uses the idea of direct policy attachment from gateway API that allows a load balancing strategy to be applied to the DNS records structure for any managed listeners being served by the data plane instances of this gateway. +The DNSPolicy also covers health checks that inform the DNS response but that is not covered in this document.

+

Below is a draft API for what we anticipate the DNSPolicy to look like

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  health:
+   ...
+  loadBalancing:
+    weighted:
+     defaultWeight: 10
+     custom: #optional
+     - value: AWS  #optional with both GEO and weighted. With GEO the custom weight is applied to gateways within a Geographic region
+       weight: 10
+     - value: GCP
+       weight: 20
+    GEO: #optional
+      defaultGeo: IE # required with GEO. Chooses a default DNS response when no particular response is defined for a request from an unknown GEO.
+
+

Available Load Balancing Strategies

+

GEO and Weighted load balancing are well understood strategies, and this API effectively allows a complex requirement to be expressed relatively simply and executed by the gateway controller in the chosen DNS provider. Our default policy will execute a "Round Robin" weighted strategy, which reflects the current default behaviour.

+

With the above API we can provide weighted, GEO, and weighted within a GEO. A weighted strategy with a minimum of a default weight is always required and is the simplest type of policy. The multi-cluster gateway controller will set up a default policy when a gateway is discovered (shown below). This policy can be replaced or modified by the user. A weighted strategy can be complemented with a GEO strategy, i.e. they can be used together in order to provide GEO and weighted (within a GEO) load balancing. By defining a GEO section, you are indicating that you want to use a GEO based strategy (how this works is covered below).

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+name: default-policy
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  loadBalancing:
+    weighted: # required
+     defaultWeight: 10  #required, all records created get this weight
+  health:
+   ...   
+
+

In order to provide GEO based DNS and allow customisation of the weighting, we need some additional information to be provided by the gateway / cluster admin about where this gateway has been placed. For example if they want to use GEO based DNS as a strategy, we need to know what GEO identifier(s) to use for each record we create and a default GEO to use as a catch-all. Also, if the desired load balancing approach is to provide custom weighting and no longer simply use Round Robin, we will need a way to identify which records to apply that custom weighting to based on the clusters the gateway is placed on.

+

To solve this we will allow two new attributes to be added to the ManagedCluster resource as labels:

+
   kuadrant.io/lb-attribute-geo-code: "IE"
+   kuadrant.io/lb-attribute-custom-weight: "GCP"
+
+

These two labels allow setting values in the DNSPolicy that will be reflected into DNS records for gateways placed on that cluster, depending on the strategies used (see the first DNSPolicy definition above for how these values are used, or take a look at the examples at the bottom).

+

Example:

apiVersion: cluster.open-cluster-management.io/v1
+kind: ManagedCluster
+metadata:
+ labels:
+   kuadrant.io/lb-attribute-geo-code: "IE"
+   kuadrant.io/lb-attribute-custom-weight: "GCP"
+spec:    
+

+

The attributes provide the key and value we need in order to understand how to define records for a given LB address based on the DNSPolicy targeting the gateway.

+

The kuadrant.io/lb-attribute-geo-code attribute value is provider specific; using an invalid code will result in an error status condition in the DNSRecord resource.

+

DNS Record Structure

+

This is an advanced topic and so is broken out into its own proposal doc DNS Record Structure

+

Custom Weighting

+

Custom weighting will use the associated custom-weight attribute set on the ManagedCluster to decide which records should get a specific weight. The value of this attribute is up to the end user.

+

example:

+
apiVersion: cluster.open-cluster-management.io/v1
+kind: ManagedCluster
+metadata:
+ labels:
+   kuadrant.io/lb-attribute-custom-weight: "GCP"
+
+

The above is then used in the DNSPolicy to set custom weights for the records associated with the target gateway.

+
    - value: GCP
+      weight: 20
+
+

So any gateway targeted by a DNSPolicy with the above definition that is placed on a ManagedCluster with the kuadrant.io/lb-attribute-custom-weight set with a value of GCP will get an A record with a weight of 20

+

Status

+

DNSPolicy should have a ready condition that reflects that the DNSRecords have been created and configured as expected. In the case that there is an invalid policy, the status message should reflect this and indicate to the user that the old DNS has been preserved.

+

We will also want to add a status condition to the gateway status indicating it is affected by this policy. Gateway API recommends the following status condition:

+
- type: gateway.networking.k8s.io/PolicyAffected
+  status: True 
+  message: "DNSPolicy has been applied"
+  reason: PolicyApplied
+  ...
+
+

https://github.com/kubernetes-sigs/gateway-api/pull/2128/files#diff-afe84021d0647e83f420f99f5d18b392abe5ec82d68f03156c7534de9f19a30aR888

+

Example Policies

+

Round Robin (the default policy)

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+name: RoundRobinPolicy
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  loadBalancing:
+    weighted:
+     defaultWeight: 10
+
+

GEO (Round Robin)

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+name: GEODNS
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  loadBalancing:
+    weighted:
+     defaultWeight: 10
+    GEO:
+     defaultGeo: IE
+
+

Custom

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+name: SendMoreToAzure
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  loadBalancing:
+    weighted:
+     defaultWeight: 10
+     custom:
+     - attribute: cloud
+       value: Azure #any record associated with a gateway on a cluster without this value gets the default
+       weight: 30
+
+

GEO with Custom Weights

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+name: GEODNSAndSendMoreToAzure
+spec:
+  targetRef: # defaults to gateway gvk and current namespace
+    name: gateway-name
+  loadBalancing:
+    weighted:
+     defaultWeight: 10
+     custom:
+     - attribute: cloud
+       value: Azure
+       weight: 30
+    GEO:
+      defaultGeo: IE
+
+

Considerations and Limitations

+

You cannot have a different load balancing strategy for each listener within a gateway. So in the following gateway definition

+
spec:
+    gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+    listeners:
+    - allowedRoutes:
+        namespaces:
+          from: All
+      hostname: myapp.hcpapps.net
+      name: api
+      port: 443
+      protocol: HTTPS
+    - allowedRoutes:
+        namespaces:
+          from: All
+      hostname: other.hcpapps.net
+      name: api
+      port: 443
+      protocol: HTTPS      
+
+

The DNS policy targeting this gateway will apply to both myapp.hcpapps.net and other.hcpapps.net

+

However, there is still significant value even with this limitation, and it is something we will likely revisit in the future.

+

Background Docs

+

DNS Provider Support

+

AWS DNS

+

Google DNS

+

Azure DNS

+

Direct Policy Attachment

diff --git a/multicluster-gateway-controller/docs/proposals/DNSRecordStructure/index.html b/multicluster-gateway-controller/docs/proposals/DNSRecordStructure/index.html new file mode 100644 index 00000000..9a8e1e32 --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/DNSRecordStructure/index.html

DNSRecordStructure

+ +

DNSRecord is our kube CRD based API for expressing DNS endpoints. It is managed by the multi-cluster gateway controller based on the desired state expressed in higher level APIs such as the Gateway or a DNSPolicy. In order to provide our feature set, we need to carefully consider how we structure our records and the types of records we need. This document proposes a particular structure based on the requirements and feature set we have.

+

Requirements

+

We want to be able to support Gateway definitions that use the following listener definitions:

+
    +
  • wildcard: *.example.com and fully qualified listener host www.example.com definitions, with the notable exception of the full wildcard *, as we cannot provide any DNS or TLS for something with no defined hostname.
  • +
  • listeners that have an HTTPRoute defined on fewer than all of the clusters where the listener is available, i.e. we don't want to send traffic to clusters where there is no HTTPRoute attached to the listener.
  • +
  • Gateway instances that provide IPs deployed alongside instances on different infra that provide hostnames, causing the address types on each gateway instance to be different (IPAddress or HostAddress).
  • +
  • We want to provide GEO based DNS as a feature of DNSPolicy and so our DNSRecord structure must support this.
  • +
  • We want to offer default weighted and custom weighted DNS as part of DNSPolicy
  • +
  • We want to allow root or apex domain to be used as listener hosts
  • +
+

Diagram

+

https://lucid.app/lucidchart/2f95c9c9-8ddf-4609-af37-48145c02ef7f/edit?viewport_loc=-188%2C-61%2C2400%2C1183%2C0_0&invitationId=inv_d5f35eb7-16a9-40ec-b568-38556de9b568

+

Proposal

+

For each listener defined in a gateway, we will create a set of records with the following rules.

+

Non-apex domains:

+

We will have a generated lb (load balancer) DNS name that we will use as a CNAME for the listener hostname. This DNS name is not intended for use within an HTTPRoute, but is instead just a DNS construct. It will allow us to set up additional CNAME records for that DNS name in the future that are returned based on GEO location. These DNS records will also be CNAMEs pointing to specific gateway DNS names, which will allow us to set up a weighted response. So the first layer of CNAMEs handles balancing based on geo, and the second layer handles balancing based on weighting.

+
                                        shop.example.com
+                                        |             |
+                                      (IE)          (AUS)
+                                CNAME lb.shop..      lb.shop..
+                                    |     |         |      |
+                                 (w 100) (w 200)   (w 100) (w100)
+                                CNAME g1.lb.. g2.lb..   g3.lb..  g4.lb..
+                                A 192..   A 81..  CNAME  aws.lb   A 82..
+
+

When there is no geo strategy defined within the DNSPolicy, we will put everything into a default geo (IE a catch-all record) default.lb-{guid}.{listenerHost} but set the routing policy to GEO allowing us to add more geo based records in the future if the gateway admin decides to move to a geo strategy as their needs grow.

+

To ensure this lb DNS name is unique and does not clash, we will use a short guid as part of the subdomain, so lb-{guid}.{listenerHost}. This guid will be based on the gateway name and gateway namespace in the control plane.
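The exact derivation isn't specified here; purely as an illustration, a stable short identifier could be computed from those two values like so:

```bash
# illustrative only: a short, stable id derived from gateway name + namespace
echo -n "prod-web/multi-cluster-gateways" | sha256sum | cut -c1-6
```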

+

For a geo strategy we will add a geo record with a prefix to the lb subdomain based on the geo code. When there is no geo we will use default as the prefix: {geo-code}.lb-{guid}.{listenerHost}. Finally, for each gateway instance on a target cluster we will add a {spokeClusterName}.lb-{guid}.{listenerHost}

+

To allow for a mix of hostname and IP address types, we will always use a CNAME. So we will create a DNS name for IPAddress with the following structure: {guid}.lb-{guid}.{listenerHost}, where the first guid will be based on the cluster name where the gateway is placed.

+

Apex Domains

+

An apex domain is the domain at the apex or root of a zone. These are handled differently by DNS as they often have NS and SOA records. Generally it is not possible to set up a CNAME for an apex domain (although some providers allow it).

+

If a listener is added to a gateway that is an apex domain, we can only add A records for that domain to keep ourselves compliant with as many providers as possible. If a listener is the apex domain, we will set up A records for that domain (favouring gateways with an IP address or resolving the IP behind a host), but there will be no special balancing/weighting done. Instead, we will expect the owner of that domain to set up an HTTPRoute with a 301 permanent redirect sending users from the apex domain, e.g. example.com, to something like www.example.com, where the www subdomain based listener would use the rules of the non-apex domains and be where advanced geo and weighted strategies are applied (a sketch of such a redirect route follows the example below).

+
    +
  • gateway listener host name : example.com
      +
    • example.com A 81.17.241.20
    • +
    +
  • +
+
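A sketch of such a redirect HTTPRoute, using the Gateway API RequestRedirect filter (the gateway and route names are hypothetical):

```bash
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: apex-redirect          # hypothetical route name
spec:
  parentRefs:
  - kind: Gateway
    name: my-gateway           # hypothetical gateway name
  hostnames:
  - "example.com"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        hostname: www.example.com
        statusCode: 301
EOF
```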

Geo Agnostic (everything is in a default * geo catch all)

+

This is the type of DNS Record structure that would back our default DNSPolicy.

+
    +
  • +

    gateway listener host name : www.example.com

    +

    DNSRecords:

    - www.example.com CNAME lb-1ab1.www.example.com
    - lb-1ab1.www.example.com CNAME geolocation * default.lb-1ab1.www.example.com
    - default.lb-1ab1.www.example.com CNAME weighted 100 1bc1.lb-1ab1.www.example.com
    - default.lb-1ab1.www.example.com CNAME weighted 100 aws.lb.com
    - 1bc1.lb-1ab1.www.example.com A 192.22.2.1

    +
  • +
+

So in the above example, working up from the bottom, we have a mix of hostname and IP based addresses for the gateway instances. We have 2 evenly weighted records that balance between the two available gateways; next we have the geo based record, which is set to a default catch-all as no geo has been specified; and finally we have the actual listener hostname that points at our DNS based load balancer name.

+

DNSRecord Yaml

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSRecord
+metadata:
+  name: {gateway-name}-{listenerName}
+  namespace: multi-cluster-gateways
+spec:
+  dnsName: www.example.com
+  managedZone:
+    name: mgc-dev-mz
+  endpoints:
+    - dnsName: www.example.com
+      recordTTL: 300
+      recordType: CNAME
+      targets:
+        - lb-1ab1.www.example.com
+    - dnsName: lb-1ab1.www.example.com
+      recordTTL: 300
+      recordType: CNAME
+      setIdentifier: mygateway-multicluster-gateways
+      providerSpecific:
+        - name: "geolocation-country-code"
+          value: "*"
+      targets:
+        - default.lb-1ab1.www.example.com
+    - dnsName: default.lb-1ab1.www.example.com
+      recordTTL: 300
+      recordType: CNAME
+      setIdentifier: cluster1
+      providerSpecific:
+        - name: "weight"
+          value: "100"
+      targets:
+        - 1bc1.lb-1ab1.www.example.com
+    - dnsName: default.lb-a1b2.shop.example.com
+      recordTTL: 300
+      recordType: CNAME
+      setIdentifier: cluster2
+      providerSpecific:
+        - name: "weight"
+          value: "100"
+      targets:
+        - aws.lb.com
+    - dnsName: 1bc1.lb-1ab1.www.example.com
+      recordTTL: 60
+      recordType: A
+      targets:
+        - 192.22.2.1
+
+

Geo specific

+

Once the end user selects a geo strategy via the DNSPolicy, we then need to restructure our DNS to add in our geo-specific records. Here the default record is kept in place for any clients still using it (see below).

+

lb short code is {gw name + gw namespace}
gw short code is {cluster name}

+
    +
  • +

    gateway listener host : shop.example.com

    +

    DNSRecords:

    - shop.example.com CNAME lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com CNAME geolocation ireland ie.lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com geolocation australia aus.lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com geolocation default ie.lb-a1b2.shop.example.com (set by the default geo option)
    - ie.lb-a1b2.shop.example.com CNAME weighted 100 ab1.lb-a1b2.shop.example.com
    - ie.lb-a1b2.shop.example.com CNAME weighted 100 aws.lb.com
    - aus.lb-a1b2.shop.example.com CNAME weighted 100 ab2.lb-a1b2.shop.example.com
    - aus.lb-a1b2.shop.example.com CNAME weighted 100 ab3.lb-a1b2.shop.example.com
    - ab1.lb-a1b2.shop.example.com A 192.22.2.1 192.22.2.5
    - ab2.lb-a1b2.shop.example.com A 192.22.2.3
    - ab3.lb-a1b2.shop.example.com A 192.22.2.4

    +
  • +
+

In the above example we move from a default catch-all to a geo specific setup, based on a DNSPolicy that specifies IE as the default geo location. We leave the default subdomain in place to allow for clients that may still be using it, and set up geo specific subdomains that allow us to route traffic based on its origin. In this example we are load balancing across 2 geos and 4 clusters.

+

Wildcards

+

In the examples we have used fully qualified domain names, however sometimes it may be required to use a wildcard subdomain, for example:

+
    +
  • gateway listener host : *.example.com
  • +
+

To support these we need to change the name of the DNSRecord away from the name of the listener, as the k8s resource does not allow * in the name.

+

To do this we will set the dns record resource name to be a combination of {gateway-name}-{listenerName}

+

To keep a record of the host this record is for, we will set a top-level property named dnsName. You can see an example in the DNSRecord above.

+

Pros

+

This setup allows us a powerful set of features and flexibility.

+

Cons

+

With this CNAME based approach we are increasing the number of DNS lookups required to get to an IP, which will increase the cost and add a small amount of latency. To counteract this, we will set a reasonably high TTL for our CNAMEs (at least 5 mins) and for A records (2 mins).

diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/aws/aws/index.html b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/aws/aws/index.html new file mode 100644 index 00000000..2d0ae0ec --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/aws/aws/index.html

Aws

+ +

AWS supports Weighted (Weighted Round Robin) and Geolocation routing policies (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html). Both of these can be configured directly on records in AWS Route 53.

+

GEO Weighted

+

(image: aws-geo-weighted.png)

+

Weighted

+

(image: aws-weighted.png)

diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure-traffic-manager-request.json b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure-traffic-manager-request.json new file mode 100644 index 00000000..8a97dc47
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": { "value": "hcggeotest" },
    "relativeName": { "value": "hcggeotest" },
    "trafficRoutingMethod": { "value": "Geographic" },
    "maxReturn": { "value": 0 }
  }
}
diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/index.html b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/index.html new file mode 100644 index 00000000..9ce0b76a --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/index.html

Azure

+ +

Azure

+

https://portal.azure.com/

+

Azure supports Weighted and Geolocation routing policies, but requires records to alias to a Traffic Manager resource that must also be created in the user's account: https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods

+

Notes:

+
    +
  • A Traffic Manager Profile is created per record set and is created with a routing method (Weighted or Geographic) https://portal.azure.com/#view/Microsoft_Azure_Network/LoadBalancingHubMenuBlade/~/TrafficManagers
  • +
  • Only a single IP can be added to a DNSRecord set. A Traffic Manager profile must be created and aliased from a DNSRecord set for anything that involves more than a single target.
  • +
  • Significantly more resources to manage in order to achieve functionality comparable with Google and AWS.
  • +
  • The modelling of the records is significantly different from AWS Route53, but the current DNSRecord spec could still work. The azure implementation will have to process the endpoint list and create traffic manager policies as required to satisfy the record set.
  • +
+

Given the example DNSRecord here describing a record set for a geolocation routing policy with four clusters, two in each of two regions (North America and Europe), the following Azure resources are required.

+

Three DNSRecords, each aliased to a different traffic manager:

+

dnsrecord-geo-recordset

+
    +
  • dnsrecord-geo-azure-hcpapps-net (dnsrecord-geo.azure.hcpapps.net) aliased to Traffic Manager Profile 1 (dnsrecord-geo-azure-hcpapps-net)
  • +
  • dnsrecord-geo-na.azure-hcpapps-net (dnsrecord-geo.na.azure.hcpapps.net) aliased to Traffic Manager Profile 2 (dnsrecord-geo-na-azure-hcpapps-net)
  • +
  • dnsrecord-geo-eu.azure-hcpapps-net (dnsrecord-geo.eu.azure.hcpapps.net) aliased to Traffic Manager Profile 3 (dnsrecord-geo-eu-azure-hcpapps-net)
  • +
+

Three Traffic Manager Profiles:

+

dnsrecord-geo-traffic-manager-profiles

+
    +
  • Traffic Manager Profile 1 (dnsrecord-geo-azure-hcpapps-net): Geolocation routing policy with two region specific FQDN targets (dnsrecord-geo.eu.azure.hcpapps.net and dnsrecord-geo.na.azure.hcpapps.net).
  • +
  • Traffic Manager Profile 2 (dnsrecord-geo-na-azure-hcpapps-net): Weighted routed policy with two IP address endpoints (172.31.0.1 and 172.31.0.2) with equal weighting.
  • +
  • Traffic Manager Profile 3 (dnsrecord-geo-eu-azure-hcpapps-net): Weighted routed policy with two IP address endpoints (172.31.0.3 and 172.31.0.4) with equal weighting.
  • +
+
dig dnsrecord-geo.azure.hcpapps.net
+
+; <<>> DiG 9.18.12 <<>> dnsrecord-geo.azure.hcpapps.net
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16236
+;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 65494
+;; QUESTION SECTION:
+;dnsrecord-geo.azure.hcpapps.net. IN    A
+
+;; ANSWER SECTION:
+dnsrecord-geo.azure.hcpapps.net. 60 IN  CNAME   dnsrecord-geo-azure-hcpapps-net.trafficmanager.net.
+dnsrecord-geo-azure-hcpapps-net.trafficmanager.net. 60 IN CNAME dnsrecord-geo.eu.azure.hcpapps.net.
+dnsrecord-geo.eu.azure.hcpapps.net. 60 IN A     172.31.0.3
+
+;; Query time: 88 msec
+;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
+;; WHEN: Tue May 30 15:05:07 IST 2023
+;; MSG SIZE  rcvd: 168
+
+ + + + + + +
+
+ + +
+ +
+ +
+ + + +
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-recordset.png b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-recordset.png new file mode 100644 index 00000000..4f56fcc3 Binary files /dev/null and b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-recordset.png differ diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-traffic-manager-profiles.png b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-traffic-manager-profiles.png new file mode 100644 index 00000000..8ec0dbf8 Binary files /dev/null and b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/dnsrecord-geo-traffic-manager-profiles.png differ diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-a-weighted-request.json b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-a-weighted-request.json new file mode 100644 index 00000000..75175e62 --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-a-weighted-request.json @@ -0,0 +1,23 @@ +{ + "name": "dnsrecord-geo.na.google.hcpapps.net.", + "routingPolicy": { + "wrr": { + "item": [ + { + "weight": 60.0, + "rrdata": [ + "172.31.0.1" + ] + }, + { + "weight": 60.0, + "rrdata": [ + "172.31.0.2" + ] + } + ] + } + }, + "ttl": 60, + "type": "A" +} \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-cname-geo-request.json b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-cname-geo-request.json new file mode 100644 index 00000000..9d8d418b --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-cname-geo-request.json @@ -0,0 +1,24 @@ +{ + "name": "dnsrecord-geo.google.hcpapps.net.", + "routingPolicy": { + "geo": { + "item": [ + { + "location": "us-east1", + "rrdata": [ + "dnsrecord-geo.na.google.hcpapps.net." + ] + }, + { + "location": "europe-west1", + "rrdata": [ + "dnsrecord-geo.eu.google.hcpapps.net." 
+ ] + } + ], + "enableFencing": false + } + }, + "ttl": 60, + "type": "CNAME" +} \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-record-list.png b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-record-list.png new file mode 100644 index 00000000..ecab5fbb Binary files /dev/null and b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google-record-list.png differ diff --git a/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/index.html b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/index.html new file mode 100644 index 00000000..a3bbc95a --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/index.html @@ -0,0 +1,2061 @@ + + + + + + + + + + + + + + + + + + + + Google - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Google

+

https://console.cloud.google.com/net-services/dns/zones

+

Google supports Weighted (Weighted Round Robin) and Geolocation routing policies (https://cloud.google.com/dns/docs/zones/manage-routing-policies). Both can be configured directly on records in Google Cloud DNS; no secondary Traffic Management resource is required.

+

Notes:

+
    +
  • Record sets are modelled as a single endpoint with the routing policy embedded. This differs from Route53, where each individual A/CNAME record has its own record entry.
  • +
  • Weight must be an integer between 0 and 10000.
  • +
  • There are no continent-level options for region, only finer-grained regions such as us-east1, europe-west1, etc.
  • +
  • There appears to be no way to set a default region; Google simply routes requests to the nearest supported region.
  • +
  • The current approach used in AWS Route53 for geo routing will work the same way on Google Cloud DNS: a single CNAME record with a geo routing policy that specifies multiple geo-specific A records as targets.
  • +
  • Geo and weighted routing can be combined, as with AWS Route53, allowing traffic within a region to be routed using weightings.
  • +
  • The modelling of the records is slightly different from AWS, but the current DNSRecord spec could still work. The Google implementation of AddRecords will have to group related endpoints in order to build up the required API request; a minimal sketch of this grouping follows the list below. +In this case there would not be a 1:1 mapping between an endpoint in a DNSRecord and the DNS provider, but the DNSRecord contents would be kept consistent across all providers and compatibility with external-dns would be maintained.
  • +
+
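To make that grouping step concrete, here is a rough Go sketch of collecting related endpoints by record name and type, so that a single rrset request (with an embedded routing policy) can be built per name. The Endpoint type, the grouping key and the printed summary are illustrative assumptions, not the actual MGC or external-dns types:

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for an external-dns style endpoint.
type Endpoint struct {
	DNSName    string
	RecordType string
	Targets    []string
}

// groupByName collects related endpoints so that a single Google Cloud DNS
// rrset (with an embedded routing policy) can be built per record name/type.
func groupByName(endpoints []Endpoint) map[string][]Endpoint {
	groups := map[string][]Endpoint{}
	for _, ep := range endpoints {
		key := ep.DNSName + "/" + ep.RecordType
		groups[key] = append(groups[key], ep)
	}
	return groups
}

func main() {
	eps := []Endpoint{
		{DNSName: "dnsrecord-geo.na.google.hcpapps.net.", RecordType: "A", Targets: []string{"172.31.0.1"}},
		{DNSName: "dnsrecord-geo.na.google.hcpapps.net.", RecordType: "A", Targets: []string{"172.31.0.2"}},
	}
	for key, group := range groupByName(eps) {
		// One API request would be issued per group, e.g. a single "wrr"
		// rrset with one routing policy item per grouped endpoint.
		fmt.Printf("%s -> %d routing policy item(s)\n", key, len(group))
	}
}
```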

Example request for Geo CNAME record:

+

POST https://dns.googleapis.com/dns/v1beta2/projects/it-cloud-gcp-rd-midd-san/managedZones/google-hcpapps-net/rrsets +

{
+  "name": "dnsrecord-geo.google.hcpapps.net.",
+  "routingPolicy": {
+    "geo": {
+      "item": [
+        {
+          "location": "us-east1",
+          "rrdata": [
+            "dnsrecord-geo.na.google.hcpapps.net."
+          ]
+        },
+        {
+          "location": "europe-west1",
+          "rrdata": [
+            "dnsrecord-geo.eu.google.hcpapps.net."
+          ]
+        }
+      ],
+      "enableFencing": false
+    }
+  },
+  "ttl": 60,
+  "type": "CNAME"
+}
+

+

Example request for Weighted A record:

+

POST https://dns.googleapis.com/dns/v1beta2/projects/it-cloud-gcp-rd-midd-san/managedZones/google-hcpapps-net/rrsets +

{
+  "name": "dnsrecord-geo.na.google.hcpapps.net.",
+  "routingPolicy": {
+    "wrr": {
+      "item": [
+        {
+          "weight": 60.0,
+          "rrdata": [
+            "172.31.0.1"
+          ]
+        },
+        {
+          "weight": 60.0,
+          "rrdata": [
+            "172.31.0.2"
+          ]
+        }
+      ]
+    }
+  },
+  "ttl": 60,
+  "type": "A"
+}
+

+

Given the example DNSRecord here describing a record set for a geo location routing policy with four clusters, two in each of two regions (North America and Europe), the following resources are required.

+

Three DNSRecords: one CNAME (dnsrecord-geo.google.hcpapps.net) and two A records (dnsrecord-geo.na.google.hcpapps.net and dnsrecord-geo.eu.google.hcpapps.net)

+

(screenshot: the record sets created in Google Cloud DNS)

+
dig dnsrecord-geo.google.hcpapps.net
+
+; <<>> DiG 9.18.12 <<>> dnsrecord-geo.google.hcpapps.net
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22504
+;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 65494
+;; QUESTION SECTION:
+;dnsrecord-geo.google.hcpapps.net. IN   A
+
+;; ANSWER SECTION:
+dnsrecord-geo.google.hcpapps.net. 60 IN CNAME   dnsrecord-geo.eu.google.hcpapps.net.
+dnsrecord-geo.eu.google.hcpapps.net. 60 IN A    172.31.0.4
+
+;; Query time: 33 msec
+;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
+;; WHEN: Tue May 30 15:05:25 IST 2023
+;; MSG SIZE  rcvd: 108
+
+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/index.html b/multicluster-gateway-controller/docs/proposals/index.html new file mode 100644 index 00000000..7dc8d69d --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/index.html @@ -0,0 +1,1971 @@ + + + + + + + + + + + + + + + + + + + + Index - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Index

+ +

Proposals

+

This directory contains proposals accepted into the MGC. The template for adding a proposal is located in this directory. Make a copy of the template and use it to define your own proposal.

+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/index.html b/multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/index.html new file mode 100644 index 00000000..bef2b5ce --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/index.html @@ -0,0 +1,2236 @@ + + + + + + + + + + + + + + + + + + + + + + + + Proposal: Multiple DNS Provider Support - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Multiple DNS Provider Support

+

Authors: Michael Nairn @mikenairn

+

Epic: https://github.com/Kuadrant/multicluster-gateway-controller/issues/189

+

Date: 25th May 2023

+

Job Stories

+
    +
  • As a developer, I want to use MGC with a domain hosted in one of the major cloud DNS providers (Google Cloud DNS, Azure DNS or AWS Route53)
  • +
  • As a developer, I want to use multiple domains with a single instance of MGC, each hosted on different cloud providers
  • +
+

Goals

+
    +
  • Add ManagedZone and DNSRecord support for Google Cloud DNS
  • +
  • Add ManagedZone and DNSRecord support for Azure DNS
  • +
  • Add DNSRecord support for CoreDNS (Default for development environment)
  • +
  • Update ManagedZone and DNSRecord support for AWS Route53
  • +
  • Add support for multiple providers with a single instance of MGC
  • +
+

Non Goals

+
    +
  • Support for every DNS provider
  • +
  • Support for health checks
  • +
+

Current Approach

+

Currently, MGC only supports AWS Route53 as a DNS provider. A single instance of a DNSProvider resource is created per MGC instance, configured with AWS config loaded from the environment. +This provider is loaded into all controllers requiring DNS access (the ManagedZone and DNSRecord reconciliations), allowing a single instance of MGC to operate against a single account on a single DNS provider.

+

Proposed Solution

+

MGC requires three features of any DNS provider in order to offer full support: DNSRecord management, Zone management and DNS health checks. We do not, however, want to be limited to providers that offer all of this functionality, so the minimum a provider must offer to be supported is API access to manage DNS records. +MGC will continue to provide Zone management and DNS health check support on a per-provider basis.

+

Support will be added for AWS (Route53), Google (Google Cloud DNS) and Azure, with an investigation into possibly adding CoreDNS (intended for local dev purposes), with the following proposed initial support:

| Provider         | DNS Records | DNS Zones | DNS Health |
|------------------|-------------|-----------|------------|
| AWS Route53      | X           | X         | X          |
| Google Cloud DNS | X           | X         | -          |
| AzureDNS         | X           | X         | -          |
| CoreDNS          | X           | -         | -          |
+

Add DNSProvider as an API for MGC which contains all the required config for that particular provider, including the credentials. This can be thought of in a similar way to a cert-manager Issuer.
+Update ManagedZone to add a reference to a DNSProvider. This will be a required field on the ManagedZone, and a DNSProvider must exist before a ManagedZone can be created.
+Update all controllers to load the DNSProvider directly from the ManagedZone during reconciliation loops, and remove the single controller-wide instance.
+Add new provider implementations for Google, Azure and CoreDNS (a rough sketch of the constructor convention is shown below):
+  * All provider constructors should accept a single struct containing all required config for that particular provider.
+  * Providers must be configured from credentials passed in the config and must not rely on environment variables.

+
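As a rough illustration of the constructor convention proposed above, the sketch below shows a provider built purely from a passed-in config struct, never from the environment. The Provider interface, Config struct, DNSRecord placeholder and NewGoogleProvider name are all hypothetical, not the actual MGC API:

```go
package provider

import (
	"context"
	"errors"
)

// Config carries everything a provider needs, including credentials,
// so that no provider has to read from the environment.
type Config struct {
	Credentials []byte            // e.g. a service account key taken from a Secret
	ProjectID   string            // provider-specific settings
	Extra       map[string]string // any further provider-specific options
}

// DNSRecord is a placeholder for the MGC DNSRecord type.
type DNSRecord struct{}

// Provider is the minimal capability MGC requires: managing DNS records.
type Provider interface {
	Ensure(ctx context.Context, record *DNSRecord) error
	Delete(ctx context.Context, record *DNSRecord) error
}

// NewGoogleProvider constructs a provider purely from the passed config.
func NewGoogleProvider(cfg Config) (Provider, error) {
	if len(cfg.Credentials) == 0 {
		return nil, errors.New("credentials must be supplied in the config")
	}
	return &googleProvider{cfg: cfg}, nil
}

type googleProvider struct{ cfg Config }

func (g *googleProvider) Ensure(ctx context.Context, r *DNSRecord) error { return nil } // elided
func (g *googleProvider) Delete(ctx context.Context, r *DNSRecord) error { return nil } // elided
```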

Other Solutions investigated

+

Investigation was carried out into the suitability of External DNS (https://github.com/kubernetes-sigs/external-dns) as the sole means of managing DNS resources. +Unfortunately, while External DNS does offer basic DNS record management with a wide range of providers, there were too many missing features, making it unsuitable for integration at this time.

+

External DNS as a separate controller

+

Run External DNS, as intended, as a separate controller alongside MGC, and pass all responsibility for reconciling DNSRecord resources to it. All DNSRecord reconciliation would be removed from MGC.

+

Issues:

+
    +
  • A single instance of External DNS only works with a single provider and a single set of credentials. As it stands, in order to support more than a single provider, more than one External DNS instance would need to be created, one for each provider/account pair.
  • +
  • Geo and Weighted routing policies are not implemented for any provider other than AWS Route53.
  • +
  • Only basic DNS record management is supported (A, CNAME, NS records, etc.), with no support for managed zones or health checks.
  • +
+

External DNS as a module dependency

+

Add External DNS as a module dependency in order to make use of its DNS providers, but continue to reconcile DNSRecords in MGC.

+

Issues:

+
    +
  • External DNS providers all create clients using the current environment. Extensive refactoring would be required to modify each provider so that it could optionally be constructed using static credentials.
  • +
  • Clients are all internal, making it impossible, without modification, to use the upstream code to extend the provider behaviour with additional functionality such as managed zone creation.
  • +
+

Checklist

+
    +
  • [ ] An epic has been created and linked to
  • +
  • [ ] Reviewers have been added. It is important that the right reviewers are selected.
  • +
+ + + + + + +
+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/index.html b/multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/index.html new file mode 100644 index 00000000..f69103aa --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/index.html @@ -0,0 +1,2282 @@ + + + + + + + + + + + + + + + + + + + + Provider agnostic DNS Health checks - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Provider agnostic DNS Health checks

+

Introduction

+

The MGC has the ability to extend the DNS configuration of the gateway with the DNSPolicy resource. This resource allows +users to configure health checks. As a result of configuring health checks, the controller creates the health checks in +Route53, attaching them to the related DNS records. This has the benefit of automatically disabling an endpoint when it +becomes unhealthy, and re-enabling it when it becomes healthy again.

+

This feature has a few shortfalls:
+1. It's tightly coupled with Route53. If other DNS providers are to be supported, they must either provide a similar feature, or health checks will not be supported.
+2. It lacks the ability to reach endpoints in private networks.
+3. It requires using the gateway controller to implement, maintain and test multiple providers.

+

This document describes a proposal to extend the current health check implementation to overcome these shortfalls.

+

Goals

+
    +
  • Ability to configure health checks in the DNSPolicy associated to a Gateway
  • +
  • DNS records are disabled when the associated health check fails
  • +
  • Current status of the defined health checks is visible to the end user
  • +
+

Non Goals

+
    +
  • Ability for the health checks to reach endpoints in separate private networks
  • +
  • Transparently keep support for other health check providers like Route53
  • +
  • Having health checks for wildcard listeners
  • +
+

Use-cases

+
    +
  • As a gateway administrator, I would like to define a health check that each service sitting behind a particular +listener across the production clusters has to implement, so that we can automatically respond to, fail over from and +mitigate a failing instance of the service
  • +
+

Proposal

+

Initially, this functionality will be added to the existing MGC and executed within that component. It will be created +with the knowledge that it may need to be made into an external component in the future.

+

DNSPolicy resource

+

The presence of the healthCheck section means that for every DNS endpoint (that is, either an A record or a CNAME to an external host), +a health check is created based on the health check configuration in the DNSPolicy (see the sketch below).

+
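A minimal sketch of that rule, using a simplified endpoint type (not the actual DNSRecord endpoint type) and assuming "external" means any CNAME target that is not one of our own managed hosts:

```go
package health

// Endpoint is a simplified stand-in for a DNSRecord endpoint.
type Endpoint struct {
	DNSName    string
	RecordType string // "A", "CNAME", ...
	Target     string // CNAME target, if any
}

// needsProbe returns true for endpoints a DNSPolicy health check applies to:
// every A record, and every CNAME that points at an external (unmanaged) host.
func needsProbe(ep Endpoint, managedHosts map[string]bool) bool {
	switch ep.RecordType {
	case "A":
		return true
	case "CNAME":
		return !managedHosts[ep.Target] // probe only CNAMEs to external hosts
	default:
		return false
	}
}
```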

A failureThreshold field will be added to the health spec, allowing users to configure a number of consecutive health +check failures that must be observed before the endpoint is considered unhealthy.

+

Example DNS Policy with a defined health check. +

apiVersion: kuadrant.io/v1alpha1
+kind: DNSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  healthCheck:
+    endpoint: /health
+    failureThreshold: 5
+    port: 443
+    protocol: https
+    additionalHeaders: <SecretRef>
+    expectedResponses:
+      - 200
+      - 301
+      - 302
+      - 407
+    AllowInsecureCertificates: true
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: Gateway
+    name: prod-web
+    namespace: multi-cluster-gateways
+

+

DNSHealthCheckProbe resource

+

The DNSHealthCheckProbe resource configures a health probe in the controller to perform the health checks against an +identified final A or CNAME endpoint. When created by the controller as a result of a DNSPolicy, this will have an +owner ref to the DNSPolicy that caused it to be created.

+
apiVersion: kuadrant.io/v1alpha1
+kind: DNSHealthCheckProbe
+metadata:
+  name: example-probe
+spec:
+  port: "..."
+  host: “...”
+  address: "..."
+  path: "..."
+  protocol: "..."
+  interval: "..."
+  additionalHeaders: <SecretRef>
+  expectedResponses:
+  - 200
+  - 201
+  - 301
+  AllowInsecureCertificate: true
+status:
+  healthy: true
+  consecutiveFailures: 0
+  reason: ""
+  lastCheck: "..."
+
+

Spec Fields Definition

+
    +
  • Port The port to use
  • +
  • Address The address to connect to (e.g. the IP address or hostname of a cluster's load balancer)
  • +
  • Host The host to request in the Host header
  • +
  • Path The path to request
  • +
  • Protocol The protocol to use for this request
  • +
  • Interval How frequently this check would ideally be executed.
  • +
  • AdditionalHeaders Optional secret ref which contains k/v: headers and their values that can be specified to ensure the health check is successful.
  • +
  • ExpectedResponses Optional HTTP response codes that should be considered healthy (defaults are 200 and 201).
  • +
  • AllowInsecureCertificate Optional flag to allow using invalid (e.g. self-signed) certificates, default is false.
  • +
+

The reconciliation of this resource results in the configuration of a health probe, which targets the endpoint and +updates the status. The status is propagated to the providerSpecific status of the equivalent endpoint in the DNSRecord.

+

Changes to current controllers

+

In order to support this new feature, the following changes in the behaviour of the controllers are proposed.

+

DNSPolicy controller

+

Currently, the reconciliation loop of this controller creates health checks in the configured DNS provider (at present, Route53) +based on the spec of the DNSPolicy, separately from the reconciliation of the DNSRecords. +The proposed change is to reconcile health check probe CRs based on the combination of DNSRecords and DNSPolicies.

+

Instead of Route53 health checks, the controller will create DNSHealthCheckProbe resources.

+

DNSRecord controller

+

When reconciling a DNSRecord, the DNSRecord reconciler will retrieve the relevant DNSHealthCheckProbe CRs and consult +their status when determining what value to assign to a particular endpoint's weight (a minimal sketch follows).

+
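A minimal sketch of that consultation, assuming a hypothetical probe status shape and a simple policy of dropping an unhealthy endpoint's weight to zero; the real reconciler may apply a different strategy:

```go
package health

// ProbeStatus mirrors the status block of a DNSHealthCheckProbe.
type ProbeStatus struct {
	Healthy             bool
	ConsecutiveFailures int
}

// endpointWeight returns the weight to assign to an endpoint given the
// status of its probe. A nil status means no probe exists for this endpoint.
func endpointWeight(status *ProbeStatus, failureThreshold, defaultWeight int) int {
	if status == nil {
		return defaultWeight // no health check configured
	}
	if !status.Healthy || status.ConsecutiveFailures >= failureThreshold {
		return 0 // stop routing traffic to an unhealthy endpoint
	}
	return defaultWeight
}
```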

DNS Record Structure Diagram:

+

https://lucid.app/lucidchart/2f95c9c9-8ddf-4609-af37-48145c02ef7f/edit?viewport_loc=-188%2C-61%2C2400%2C1183%2C0_0&invitationId=inv_d5f35eb7-16a9-40ec-b568-38556de9b568
+
+How

+

Removing unhealthy Endpoints

+

When a DNS health check probe is failing, it will update the DNS Record CR with a custom field on that endpoint to mark it as failing.

+

There are then 3 scenarios which we need to consider:
+1. All endpoints are healthy
+2. All endpoints are unhealthy
+3. Some endpoints are healthy and some are unhealthy.

+

In cases 1 and 2, the result should be the same: all records are published to the DNS provider.

+

When scenario 3 is encountered, the following process should be followed:

+
For each gateway IP or CNAME: this should be omitted if unhealthy.
+For each managed gateway CNAME: This should be omitted if all child records are unhealthy.
+For each GEO CNAME: This should be omitted if all the managed gateway CNAMEs have been omitted.
+Load balancer CNAME: This should never be omitted.
+
+

If we consider the DNS record to be a hierarchy of parents and children, then whenever any parent has no healthy +children, that parent is also considered unhealthy. No unhealthy elements are to be included in the DNS Record (a sketch of +this bottom-up pruning is shown below).

+
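A minimal sketch of that bottom-up rule over a hypothetical record hierarchy. Note that it deliberately ignores the special cases above (publishing everything when all endpoints are unhealthy, and never omitting the load balancer CNAME), which would be handled around this pruning step:

```go
package health

// node is a hypothetical view of a DNS record hierarchy: the load balancer
// CNAME at the root, geo CNAMEs below it, then managed gateway CNAMEs, then
// the gateway A/CNAME leaves.
type node struct {
	name     string
	healthy  bool // only meaningful for leaves
	children []*node
}

// prune returns false if this node should be omitted from the DNS record.
// A leaf is kept if its probe reports healthy; a parent is kept only if at
// least one of its children survives pruning.
func prune(n *node) bool {
	if len(n.children) == 0 {
		return n.healthy
	}
	kept := n.children[:0]
	for _, c := range n.children {
		if prune(c) {
			kept = append(kept, c)
		}
	}
	n.children = kept
	return len(kept) > 0
}
```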

Removal Process

+

When removing DNS records, we will want to avoid any NXDOMAIN responses from the DNS service as this will cause the +resolver to cache this missed domain for a while (30 minutes or more). The NXDOMAIN response is triggered when the +resolver attempts to resolve a host that does not have any records in the zone file.

+

The situation that would cause this to occur is when we have removed a record but still refer to it from other +records.

+

As we wish to avoid any NXDOMAIN responses from the nameserver (which would cause the resolver to cache the missed response), +we will need to ensure that any time a DNS Record (CNAME or A) is removed, we also remove any records that refer to the +removed record (e.g. when the gateway A record is removed, we will need to remove the managed gateway CNAME that +refers to that A record).

+
Removal Example
+

Given the following DNS Records (simplified hosts used in example): +

01 host.example.com. 300 IN CNAME lb.hcpapps.net.
+02 lb.hcpapps.net. 60 IN CNAME default-geo.hcpapps.net.
+03 default-geo.hcpapps.net. 120 IN CNAME cluster1.hcpapps.net.
+04 default-geo.hcpapps.net. 120 IN CNAME cluster2.hcpapps.net.
+05 cluster1.hcpapps.net. 300 IN CNAME cluster1-gw1.hcpapps.net.
+06 cluster1.hcpapps.net. 300 IN CNAME cluster1-gw2.hcpapps.net.
+07 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw1.hcpapps.net.
+08 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw2.hcpapps.net.
+09 cluster1-gw1.hcpapps.net. 60 IN CNAME cluster1-gw1.aws.com.
+10 cluster1-gw2.hcpapps.net. 60 IN CNAME cluster1-gw2.aws.com.
+11 cluster2-gw1.hcpapps.net. 60 IN CNAME cluster2-gw1.aws.com.
+12 cluster2-gw2.hcpapps.net. 60 IN CNAME cluster2-gw2.aws.com.
+
+
+Cases:
+- Record 09 becomes unhealthy: remove records 09 and 05.
+- Records 09 and 10 become unhealthy: remove records 09, 10, 05, 06 and 03.

+

Further reading

+

Domain Names RFC: https://datatracker.ietf.org/doc/html/rfc1034

+

Executing the probes

+

A DNSHealthCheckProbe CR controller will be added to MGC. This controller will create an instance of a +HealthMonitor; the HealthMonitor ensures that each DNSHealthCheckProbe CR has a matching ProbeQueuer object running. +It will also handle both updating the ProbeQueuer when the CR is updated and removing ProbeQueuers when a +DNSHealthCheckProbe is removed.

+

The ProbeQueuer will add a health check request to a queue on a configured interval; this queue is consumed by a +ProbeWorker. ProbeQueuers work on their own goroutines (see the sketch below).

+

The ProbeWorker is responsible for actually executing the probe and updating the DNSHealthCheckProbe CR status. The +ProbeWorker also executes on its own goroutine.
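A rough sketch of that split, with one goroutine enqueueing requests on an interval and another consuming them; the probeRequest type and the execute callback are stand-ins for the real probe execution and status update logic:

```go
package health

import (
	"context"
	"time"
)

// probeRequest is a hypothetical unit of work describing one health check.
type probeRequest struct {
	Host string
	Path string
}

// runQueuer enqueues a probe request on every tick until ctx is cancelled.
// One queuer goroutine runs per DNSHealthCheckProbe CR.
func runQueuer(ctx context.Context, interval time.Duration, req probeRequest, queue chan<- probeRequest) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			queue <- req
		}
	}
}

// runWorker consumes the queue and executes each probe; the result would be
// written to the matching DNSHealthCheckProbe status (elided here).
func runWorker(ctx context.Context, queue <-chan probeRequest, execute func(probeRequest) error) {
	for {
		select {
		case <-ctx.Done():
			return
		case req := <-queue:
			_ = execute(req) // status update elided
		}
	}
}
```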

+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/status-aggregation/index.html b/multicluster-gateway-controller/docs/proposals/status-aggregation/index.html new file mode 100644 index 00000000..89e771d6 --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/status-aggregation/index.html @@ -0,0 +1,2198 @@ + + + + + + + + + + + + + + + + + + + + + + Proposal: Aggregation of Status Conditions - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Proposal: Aggregation of Status Conditions

+

Background

+

Status conditions are used to represent the current state of a resource and provide information about any problems or issues that might be affecting it. They are defined as an array of Condition objects within the status section of a resource's YAML definition.

+

Problem Statement

+

When multiple instances of a resource (e.g. a Gateway) are running across multiple clusters, it can be difficult to know the current state of each instance without checking each one individually. This can be time-consuming and error-prone, especially when there are a large number of clusters or resources.

+

Proposal

+

To solve this problem, I'm proposing we leverage the status block in the control plane instance of that resource, aggregating the statuses to convey the necessary information.

+

Status Conditions

+

For example, if the Ready status condition type of a Gateway is True for all instances of the Gateway resource across all clusters, then the Gateway in the control plane will have the Ready status condition type also set to True.

+
status:
+  conditions:
+  - type: Ready
+    status: True
+    message: All listeners are valid
+
+

If the Ready status condition type of some instances is not True, the Ready status condition type of the Gateway in the control plane will be False.

+
status:
+  conditions:
+  - type: Ready
+    status: False
+
+

In addition, if the Ready status condition type is False, the Gateway in the control plane should include a status message for each Gateway instance where Ready is False. This message would indicate the reason why the condition is not true for each Gateway.

+
status:
+  conditions:
+  - type: Ready
+    status: False
+    message: "gateway-1 Listener certificate is expired; gateway-3 No listener configured for port 80"
+
+

In this example, the Ready status condition type is False because two of the three Gateway instances (gateway-1 and gateway-3) have issues with their listeners. For gateway-1, the reason for the False condition is that the listener certificate is expired, and for gateway-3, the reason is that no listener is configured for port 80. These reasons are included as status messages in the Gateway resource in the control plane.

+

As there may be different reasons for the condition being False across different clusters, it doesn't make sense to aggregate the reason field. The reason field is intended to be a programmatic identifier, while the message field allows for a human-readable message, i.e. a semicolon-separated list of messages (a sketch of this aggregation follows).

+
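A minimal sketch of that aggregation, using a simplified Condition type rather than the actual metav1.Condition:

```go
package status

import (
	"sort"
	"strings"
)

// Condition is a simplified stand-in for metav1.Condition.
type Condition struct {
	Type    string
	Status  string // "True" or "False"
	Message string
}

// aggregateReady folds the Ready conditions reported for each spoke Gateway
// instance into a single hub condition. perInstance maps an instance name
// (e.g. "gateway-1") to the Ready condition observed there.
func aggregateReady(perInstance map[string]Condition) Condition {
	names := make([]string, 0, len(perInstance))
	for name := range perInstance {
		names = append(names, name)
	}
	sort.Strings(names) // stable ordering for the joined message

	var messages []string
	for _, name := range names {
		if c := perInstance[name]; c.Status != "True" {
			messages = append(messages, name+" "+c.Message)
		}
	}
	if len(messages) == 0 {
		return Condition{Type: "Ready", Status: "True", Message: "All listeners are valid"}
	}
	// The reason field is deliberately not aggregated; only the
	// human-readable messages are joined with semicolons.
	return Condition{Type: "Ready", Status: "False", Message: strings.Join(messages, "; ")}
}
```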

The lastTransitionTime and observedGeneration fields will behave as normal for the resource in the control plane.

+

Addresses and Listeners status

+

The Gateway status can include information about addresses, like load balancer IP Addresses assigned to the Gateway, +and listeners, such as the number of attached routes for each listener. +This information is useful at the control plane level. +For example, a DNS Record should only exist as long as there is at least 1 attached route for a listener. +It can also be more complicated than that when it comes to multi-cluster gateways. +A DNS Record should only include the IP Addresses of the Gateway instances where the listener has at least 1 attached route. +This is important when the initial setup of DNS Records happens as applications start. +It doesn't make sense to route traffic to a Gateway where a listener isn't ready/attached yet. +It also comes into play when a Gateway is displaced, either due to a changing placement decision or removal.

+

In summary, the IP Addresses and the number of attached routes per listener per Gateway instance are needed in the control plane to manage DNS effectively. +This proposal adds that information to the hub Gateway status block. +This will ensure a decoupling of the DNS logic from the underlying resource/status syncing implementation (i.e. ManifestWork status feedback rules).

+

First, here are 2 instances of a multi-cluster Gateway in 2 separate spoke clusters. +The YAML is shortened to highlight the status block.

+
apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: gateway
+status:
+  addresses:
+  - type: IPAddress
+    value: 172.31.200.0
+  - type: IPAddress
+    value: 172.31.201.0
+  listeners:
+  - attachedRoutes: 0
+    conditions:
+    name: api
+  - attachedRoutes: 1
+    conditions:
+    name: web
+---
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: gateway
+status:
+  addresses:
+  - type: IPAddress
+    value: 172.31.202.0
+  - type: IPAddress
+    value: 172.31.203.0
+  listeners:
+  - attachedRoutes: 1
+    name: api
+  - attachedRoutes: 1
+    name: web
+
+

And here is the proposed status aggregation in the hub Gateway:

+
apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: gateway
+status:
+  addresses:
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: cluster_1/172.31.200.0
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: cluster_1/172.31.201.0
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: cluster_2/172.31.202.0
+    - type: kuadrant.io/MultiClusterIPAddress
+      value: cluster_2/172.31.203.0
+  listeners:
+    - attachedRoutes: 0
+      name: cluster_1.api
+    - attachedRoutes: 1
+      name: cluster_1.web
+    - attachedRoutes: 1
+      name: cluster_2.api
+    - attachedRoutes: 1
+      name: cluster_2.web
+
+

The MultiCluster Gateway Controller will use a custom implementation of the addresses and listeners fields. +The address type is of type AddressType, where the type is a domain-prefixed string identifier. +The value can be split on the forward slash, /, to give the cluster name and the underlying Gateway IPAddress value of type IPAddress. +Both the IPAddress and Hostname types will be supported. +The type strings for these will be kuadrant.io/MultiClusterIPAddress and kuadrant.io/MultiClusterHostname.

+

The listener name is of type SectionName, with validation on allowed characters and a max length of 253. +The name can be split on the period, ., to give the cluster name and the underlying listener name. +As there are limits on the character length of the name field, this puts a restriction on the length of the cluster names and listener names used, to ensure proper operation of this status aggregation. +If the validation fails, a status condition showing a validation error should be included in the hub Gateway status block (a sketch of this parsing and validation follows).
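A minimal sketch of how the hub side could unpack these aggregated values, assuming the cluster/value and cluster.listener encodings described above (helper names are illustrative):

```go
package status

import (
	"fmt"
	"strings"
)

// splitMultiClusterAddress unpacks a value such as "cluster_1/172.31.200.0"
// into the cluster name and the underlying IPAddress/Hostname value.
func splitMultiClusterAddress(value string) (cluster, addr string, err error) {
	parts := strings.SplitN(value, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("not a multi-cluster address: %q", value)
	}
	return parts[0], parts[1], nil
}

// splitMultiClusterListener unpacks a name such as "cluster_1.api" into the
// cluster name and the underlying listener name, enforcing the SectionName
// length limit. A failure here would surface as a validation error condition
// on the hub Gateway status block.
func splitMultiClusterListener(name string) (cluster, listener string, err error) {
	if len(name) > 253 {
		return "", "", fmt.Errorf("listener name %q exceeds 253 characters", name)
	}
	parts := strings.SplitN(name, ".", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("not a multi-cluster listener name: %q", name)
	}
	return parts[0], parts[1], nil
}
```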

+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/proposals/template/index.html b/multicluster-gateway-controller/docs/proposals/template/index.html new file mode 100644 index 00000000..fc84b79c --- /dev/null +++ b/multicluster-gateway-controller/docs/proposals/template/index.html @@ -0,0 +1,2032 @@ + + + + + + + + + + + + + + + + + + + + Proposal Template - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + +

Proposal Template

+

Authors: {authors names} +Epic: {Issue of type epic this relates to} +Date: {date proposed}

+

Job Stories

+

{ A bullet point list of stories this proposal solves}

+

Goals

+

{A bullet point list of the goals this will achieve}

+

Non Goals

+

{A bullet point list of goals that this will not achieve, i.e. scoping}

+

Current Approach

+

{outline the current approach if any}

+

Proposed Solution

+

{outline the proposed solution, links to diagrams and PRs can go here along with the details of your solution}

+

Testing

+

{outline any testing considerations. Does this need some form of load/performance test. Are there any considerations when thinking about an e2e test}

+

Checklist

+
    +
  • [ ] An epic has been created and linked to
  • +
  • [ ] Reviewers have been added. It is important that the right reviewers are selected.
  • +
+ + + + + + +
+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/tlspolicy/tls-policy/index.html b/multicluster-gateway-controller/docs/tlspolicy/tls-policy/index.html new file mode 100644 index 00000000..a53d3e19 --- /dev/null +++ b/multicluster-gateway-controller/docs/tlspolicy/tls-policy/index.html @@ -0,0 +1,2250 @@ + + + + + + + + + + + + + + + + + + + + + + + + TLSPpolicy Reference - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

TLS Policy

+

The TLSPolicy is a GatewayAPI policy that uses Direct Policy Attachment as defined in the policy attachment mechanism standard. +This policy is used to provide TLS for gateway listeners by managing the lifecycle of TLS certificates using CertManager; it is a policy implementation for securing gateway resources.

+

Terms

+
    +
  • GatewayAPI: resources that model service networking in Kubernetes.
  • +
  • Gateway: Kubernetes Gateway resource.
  • +
  • CertManager: X.509 certificate management for Kubernetes and OpenShift.
  • +
  • TLSPolicy: Kuadrant policy for managing TLS certificates with CertManager.
  • +
+

TLS Provider Setup

+

A TLSPolicy acts against a target Gateway by processing its listeners for appropriately configured TLS sections (a sketch of this selection follows the example below).

+

If for example a Gateway is created with a listener with a hostname of echo.apps.hcpapps.net: +

apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster
+  listeners:
+    - allowedRoutes:
+        namespaces:
+          from: All
+      name: api
+      hostname: echo.apps.hcpapps.net
+      port: 443
+      protocol: HTTPS
+      tls:
+        mode: Terminate
+        certificateRefs:
+          - name: apps-hcpapps-tls
+            kind: Secret
+

+
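To illustrate what an "appropriately configured" TLS section could mean in practice, here is a minimal sketch that selects HTTPS listeners which terminate TLS and reference a certificate Secret. The Listener type is a simplified stand-in for the Gateway API type, and the exact criteria are assumptions:

```go
package tlspolicy

// Listener is a simplified stand-in for the Gateway API Listener type.
type Listener struct {
	Name     string
	Hostname string
	Protocol string   // e.g. "HTTPS"
	TLSMode  string   // e.g. "Terminate"
	CertRefs []string // referenced certificate Secret names
}

// tlsListeners returns the listeners a TLSPolicy would manage certificates
// for: HTTPS listeners with a hostname that terminate TLS and reference a
// certificate Secret (such as the "api" listener in the example above).
func tlsListeners(listeners []Listener) []Listener {
	var out []Listener
	for _, l := range listeners {
		if l.Protocol == "HTTPS" && l.TLSMode == "Terminate" && len(l.CertRefs) > 0 && l.Hostname != "" {
			out = append(out, l)
		}
	}
	return out
}
```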

TLSPolicy creation and attachment

+

The TLSPolicy requires a reference to an existing CertManager Issuer (https://cert-manager.io/docs/configuration/). +If we create a self-signed cluster issuer with the following:

+
apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: selfsigned-cluster-issuer
+spec:
+  selfSigned: {}
+
+

We can then create and attach a TLSPolicy to start managing TLS certificates for it:

+
apiVersion: kuadrant.io/v1alpha1
+kind: TLSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  issuerRef:
+    group: cert-manager.io
+    kind: ClusterIssuer
+    name: selfsigned-cluster-issuer
+
+

Target Reference

+

The targetRef field is taken from policy attachment's target reference API. It can only target one resource at a time. Fields included inside: +- Group is the group of the target resource. The only valid option is gateway.networking.k8s.io. +- Kind is the kind of the target resource. The only valid option is Gateway. +- Name is the name of the target resource. +- Namespace is the namespace of the referent. Currently only local objects can be referred to, so the value is ignored.

+

Issuer Reference

+

The issuerRef field is required and is a reference to a CertManager Issuer (https://cert-manager.io/docs/configuration/). Fields included inside: +- Group is the group of the target resource. The only valid option is cert-manager.io. +- Kind is the kind of issuer. The only valid options are Issuer and ClusterIssuer. +- Name is the name of the target issuer.

+

The example TLSPolicy shown above would create a CertManager Certificate like the following: +

apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  labels:
+    gateway: prod-web
+    gateway-namespace: multi-cluster-gateways
+    kuadrant.io/tlspolicy: prod-web
+    kuadrant.io/tlspolicy-namespace: multi-cluster-gateways
+  name: apps-hcpapps-tls
+  namespace: multi-cluster-gateways
+spec:
+  dnsNames:
+  - echo.apps.hcpapps.net
+  issuerRef:
+    group: cert-manager.io
+    kind: ClusterIssuer
+    name: selfsigned-cluster-issuer
+  secretName: apps-hcpapps-tls
+  secretTemplate:
+    labels:
+      gateway: prod-web
+      gateway-namespace: multi-cluster-gateways
+      kuadrant.io/tlspolicy: prod-web
+      kuadrant.io/tlspolicy-namespace: multi-cluster-gateways
+  usages:
+  - digital signature
+  - key encipherment
+

+

And valid TLS secrets are generated and synced out to the workload clusters:

+
kubectl get secrets -A | grep apps-hcpapps-tls
+kuadrant-multi-cluster-gateways   apps-hcpapps-tls                    kubernetes.io/tls               3      6m42s
+multi-cluster-gateways            apps-hcpapps-tls                    kubernetes.io/tls               3      7m12s
+
+

Let's Encrypt Issuer for Route53 hosted domain

+

Any type of Issuer that is supported by CertManager can be referenced in the TLSPolicy. The following shows how you would create a TLSPolicy that uses Let's Encrypt to create production certs for a domain hosted in AWS Route53.

+

Create a secret containing AWS access key and secret: +

kubectl create secret generic mgc-aws-credentials --from-literal=AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> --from-literal=AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> -n multi-cluster-gateways
+

+

Create a new Issuer: +

apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: le-production
+spec:
+  acme:
+    email: <YOUR EMAIL>
+    preferredChain: ""
+    privateKeySecretRef:
+      name: le-production
+    server: https://acme-v02.api.letsencrypt.org/directory
+    solvers:
+      - dns01:
+          route53:
+            hostedZoneID: <YOUR HOSTED ZONE ID>
+            region: us-east-1
+            accessKeyID: <AWS_ACCESS_KEY_ID>
+            secretAccessKeySecretRef:
+              key: AWS_SECRET_ACCESS_KEY
+              name: mgc-aws-credentials
+

+

Create a TLSPolicy: +

apiVersion: kuadrant.io/v1alpha1
+kind: TLSPolicy
+metadata:
+  name: prod-web
+  namespace: multi-cluster-gateways
+spec:
+  targetRef:
+    name: prod-web
+    group: gateway.networking.k8s.io
+    kind: Gateway
+  issuerRef:
+    group: cert-manager.io
+    kind: Issuer
+    name: le-production
+

+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/docs/versioning/olm/index.html b/multicluster-gateway-controller/docs/versioning/olm/index.html new file mode 100644 index 00000000..7829acab --- /dev/null +++ b/multicluster-gateway-controller/docs/versioning/olm/index.html @@ -0,0 +1,2076 @@ + + + + + + + + + + + + + + + + + + + + Olm - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

OLM

+ +

How to create an MGC OLM bundle and catalog, and how to install MGC via OLM

+

❗ NOTE: You can supply different env vars to the following make commands. These include:

+
* Version using the env var VERSION 
+* Tag via the env var IMAGE_TAG for tags not following the semantic format.
+* Image registry via the env var REGISTRY
+* Registry org via the env var ORG
+
+For example:
+
+

make bundle-build-push VERSION=2.0.1
+make catalog-build-push IMAGE_TAG=asdf

+

Creating the bundle

+
    +
  1. To generate, build and push the OLM bundle manifests for MGC, run the following make target: +
    make bundle-build-push
    +
  2. +
+

Creating the catalog

+
    +
  1. Build and push the catalog image +
    make catalog-build-push
    +
  2. +
+

Installing the operator via OLM catalog

+
    +
  1. +

    Create a namespace: +

       cat <<EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Namespace
    +metadata:
    +  name: multi-cluster-gateways-system
    +EOF
    +

    +
  2. +
  3. +

    Create a catalog source: +

       cat <<EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1alpha1
    +kind: CatalogSource
    +metadata:
    +  name: mgc-catalog
    +  namespace: olm
    +spec:
    +  sourceType: grpc
    +  image: quay.io/kuadrant/multicluster-gateway-controller-catalog:v6.5.4
    +  grpcPodConfig:
    +    securityContextConfig: restricted
    +  displayName: mgc-catalog
    +  publisher: Red Hat
    +EOF
    +

    Create a subscription: +

       cat <<EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1alpha1
    +kind: Subscription
    +metadata:
    +  name: multicluster-gateway-controller
    +  namespace: multi-cluster-gateways-system
    +spec:
    +  channel: alpha
    +  name: multicluster-gateway-controller
    +  source: mgc-catalog
    +  sourceNamespace: olm
    +  installPlanApproval: Automatic
    +EOF
    +

    +
  4. +
  5. Create an operator group: +

       cat <<EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1
    +kind: OperatorGroup
    +metadata:
    +  name: og-mgc
    +  namespace: multi-cluster-gateways-system
    +EOF
    +

    For more information on each of these OLM resources, please see the official docs.
  6. +
+ + + + + + +
+ + + + + + + + + \ No newline at end of file diff --git a/multicluster-gateway-controller/index.html b/multicluster-gateway-controller/index.html new file mode 100644 index 00000000..4a24536c --- /dev/null +++ b/multicluster-gateway-controller/index.html @@ -0,0 +1,2246 @@ + + + + + + + + + + + + + + + + + + + + + + + + Overview - Kuadrant Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

multicluster-gateway-controller

+

Description:

+

The multi-cluster gateway controller leverages the Gateway API standard and Open Cluster Management to provide multi-cluster connectivity and global load balancing.

+

Key Features:

+
    +
  • Central Gateway Definition that can then be distributed to multiple clusters
  • +
  • Automatic TLS and cert distribution for HTTPS based listeners
  • +
  • DNSPolicy to decide how north-south traffic should be balanced and reach the gateways
  • +
  • Health checks to detect and take remedial action against unhealthy endpoints
  • +
  • Cloud DNS provider integrations (AWS Route 53), with new ones being added (Google Cloud DNS)
  • +
+

When deploying the multicluster gateway controller using the make targets, the following will be created:
+* Kind cluster(s)
+* Gateway API CRDs in the control plane cluster
+* Ingress controller
+* Cert manager
+* ArgoCD instance
+* K8s Dashboard
+* LetsEncrypt certs

+

Prerequisites:

+
    +
  • AWS or GCP
  • +
  • Various dependencies installed into $(pwd)/bin e.g. kind, yq etc.
  • +
  • Run make dependencies
  • +
  • openssl>=3
      +
    • On macOS a later version is available with brew install openssl. You'll need to update your PATH as macOS provides an older version via libressl as well
    • +
    • On Fedora use dnf install openssl
    • +
    +
  • +
  • go >= 1.20
  • +
+

1. Running the controller in the cluster:

+
    +
  1. +

    Set up your DNS Provider by following these steps

    +
  2. +
  3. +

    Set up your local environment +

    make local-setup MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER>
    +

    +
  4. +
  5. +

    Build the controller image and load it into the control plane +

    kubectl config use-context kind-mgc-control-plane
    +make kind-load-controller
    +

    +
  6. +
  7. +

    Deploy the controller to the control plane cluster +

    make deploy-controller
    +

    +
  8. +
  9. +

    (Optional) View the logs of the deployed controller +

    kubectl logs -f $(kubectl get pods -n multi-cluster-gateways | grep "mgc-" | awk '{print $1}') -n multi-cluster-gateways
    +

    +
  10. +
+

2. Running the controller locally:

+
    +
  1. +

    Set up your DNS Provider by following these steps

    +
  2. +
  3. +

    Set up your local environment

    +
    make local-setup MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER>
    +
    +
  4. +
  5. +

    Run the controller locally: +

    kubectl config use-context kind-mgc-control-plane 
    +make build-controller install run-controller
    +

    +
  6. +
+

3. Running the agent in the cluster:

+
    +
  1. +

    Build the agent image and load it into the workload cluster +

    kubectl config use-context kind-mgc-workload-1 
    +make kind-load-agent
    +

    +
  2. +
  3. +

    Deploy the agent to the workload cluster +

    make deploy-agent
    +

    +
  4. +
+

4. Running the agent locally

+
    +
  1. Target the workload cluster you wish to run on: +
    export KUBECONFIG=./tmp/kubeconfigs/mgc-workload-1.kubeconfig
    +
  2. +
  3. Run the agent locally: +
    make build-agent run-agent
    +
  4. +
+

5. Clean up local environment

+

In any terminal window, target the control plane cluster with: +

kubectl config use-context kind-mgc-control-plane 
+
+If you want to wipe everything clean, consider using: +
make local-cleanup # Remove kind clusters created locally and cleanup any generated local files.
+
+If the intention is to clean up the kind clusters and prepare them for re-installation, consider using: +
make local-cleanup-mgc MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER> # prepares clusters for make local-setup-mgc
+

+

License

+

Copyright 2022 Red Hat.

+

Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at

+
http://www.apache.org/licenses/LICENSE-2.0
+
+

Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License.

+ + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..507e4b11 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

Kuadrant brings together Gateway API and Open Cluster Management to help you scale, load-balance and secure your Ingress Gateways as a key part of your application connectivity, in the single or multi-cluster environment.

"},{"location":"#single-cluster","title":"Single-cluster","text":"

Kuadrant can be used to protect ingress gateways based on Gateway API1 with policy enforcement (rate limit and auth) in a Kubernetes cluster.

Topology"},{"location":"#multi-cluster","title":"Multi-cluster","text":"

In the multi-cluster environment2, you can utilize Kuadrant to manage DNS-based north-south connectivity, which can provide global load balancing underpinned by your cluster topology. Kuadrant's multi-cluster functionality also ensures gateway and policy consistency across clusters, focusing on critical aspects like TLS and application health.

Topology"},{"location":"#component-documentation","title":"Component Documentation","text":"
  • Kuadrant Operator Install and manage the lifecycle of the Kuadrant deployments and core Kuadrant policies for the data plane.
  • Authorino Flexible, cloud-native, and lightweight external authorization server to implement identity verification (Kubernetes TokenReview, OIDC, OAuth2, API key, mTLS) and authorization policy rules (Kubernetes SubjectAccessReview, JWT claims, OPA, request pattern-matching, resource metadata, RBAC, ReBAC, ABAC, etc).
  • Limitador Fast rate-limiter implemented in Rust, that can be used as a library, or as a service plugged in to the API gateway.
  • Multicluster Gateway Controller Manage multi-cluster gateways, integrate with DNS providers, TLS providers and OCM (Open Cluster Management).
  1. Supported implementations: Istio, OpenShift Service Mesh. \u21a9

  2. Based on Open Cluster Management.\u00a0\u21a9

"},{"location":"kuadrant-operator/","title":"Kuadrant Operator","text":"

The Operator to install and manage the lifecycle of the Kuadrant components deployments.

  • Overview
  • Architecture
    • Kuadrant components
    • Provided APIs
  • Getting started
    • Pre-requisites
    • Installing Kuadrant
    • Protect Your Service
    • If you are an API Provider
    • If you are a Cluster Operator
  • User guides
  • Kuadrant Rate Limiting
  • Documentation
  • Contributing
  • Licensing
"},{"location":"kuadrant-operator/#overview","title":"Overview","text":"

Kuadrant is a re-architecture of API Management using Cloud Native concepts and separating the components to be less coupled, more reusable and leverage the underlying kubernetes platform. It aims to deliver a smooth experience to providers and consumers of applications & services when it comes to rate limiting, authentication, authorization, discoverability, change management, usage contracts, insights, etc.

Kuadrant aims to produce a set of loosely coupled functionalities built directly on top of Kubernetes. Furthermore, it only strives to provide what Kubernetes doesn\u2019t offer out of the box, i.e. Kuadrant won\u2019t be designing a new Gateway/proxy, instead it will opt to connect with what\u2019s there and what\u2019s being developed (think Envoy, Istio, GatewayAPI).

Kuadrant is a system of cloud-native k8s components that grows as users\u2019 needs grow.

  • From simple protection of a Service (via AuthN) that is used by teammates working on the same cluster, or \u201csibling\u201d services, up to AuthZ of users using OIDC plus custom policies.
  • From no rate-limiting to rate-limiting for global service protection on to rate-limiting by users/plans
"},{"location":"kuadrant-operator/#architecture","title":"Architecture","text":"

Kuadrant relies on Istio and the Gateway API to operate the cluster (Istio's) ingress gateway to provide API management with authentication (authN), authorization (authZ) and rate limiting capabilities.

"},{"location":"kuadrant-operator/#kuadrant-components","title":"Kuadrant components","text":"CRD Description Control Plane The control plane takes the customer desired configuration (declaratively as kubernetes custom resources) as input and ensures all components are configured to obey customer's desired behavior. This repository contains the source code of the kuadrant control plane Kuadrant Operator A Kubernetes Operator to manage the lifecycle of the kuadrant deployment Authorino The AuthN/AuthZ enforcer. As the external istio authorizer (envoy external authorization serving gRPC service) Limitador The external rate limiting service. It exposes a gRPC service implementing the Envoy Rate Limit protocol (v3) Authorino Operator A Kubernetes Operator to manage Authorino instances Limitador Operator A Kubernetes Operator to manage Limitador instances"},{"location":"kuadrant-operator/#provided-apis","title":"Provided APIs","text":"

The kuadrant control plane owns the following Custom Resource Definitions, CRDs:

CRD Description Example RateLimitPolicy CRD [doc] [reference] Enable access control on workloads based on HTTP rate limiting RateLimitPolicy CR AuthPolicy CRD Enable AuthN and AuthZ based access control on workloads AuthPolicy CR

Additionally, Kuadrant provides the following CRDs

CRD Owner Description Example Kuadrant CRD Kuadrant Operator Represents an instance of kuadrant Kuadrant CR Limitador CRD Limitador Operator Represents an instance of Limitador Limitador CR Authorino CRD Authorino Operator Represents an instance of Authorino Authorino CR

"},{"location":"kuadrant-operator/#getting-started","title":"Getting started","text":""},{"location":"kuadrant-operator/#pre-requisites","title":"Pre-requisites","text":"
  • Istio is installed in the cluster. Otherwise, refer to the Istio getting started guide.
  • Kubernetes Gateway API is installed in the cluster. Otherwise, configure Istio to expose a service using the Kubernetes Gateway API.
"},{"location":"kuadrant-operator/#installing-kuadrant","title":"Installing Kuadrant","text":"

Installing Kuadrant is a two-step procedure. Firstly, install the Kuadrant Operator and secondly, request a Kuadrant instance by creating a Kuadrant custom resource.

"},{"location":"kuadrant-operator/#1-install-the-kuadrant-operator","title":"1. Install the Kuadrant Operator","text":"

The Kuadrant Operator is available in public community operator catalogs, such as the Kubernetes OperatorHub.io and the Openshift Container Platform and OKD OperatorHub.

Kubernetes

The operator is available from OperatorHub.io. Just go to the linked page and follow installation steps (or just run these two commands):

# Install Operator Lifecycle Manager (OLM), a tool to help manage the operators running on your cluster.\ncurl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.23.1/install.sh | bash -s v0.23.1\n\n# Install the operator by running the following command:\nkubectl create -f https://operatorhub.io/install/kuadrant-operator.yaml\n

Openshift

The operator is available from the Openshift Console OperatorHub. Just follow installation steps choosing the \"Kuadrant Operator\" from the catalog:

"},{"location":"kuadrant-operator/#2-request-a-kuadrant-instance","title":"2. Request a Kuadrant instance","text":"

Create the namespace:

kubectl create namespace kuadrant\n

Apply the Kuadrant custom resource:

kubectl -n kuadrant apply -f - <<EOF\n---\napiVersion: kuadrant.io/v1beta1\nkind: Kuadrant\nmetadata:\n  name: kuadrant-sample\nspec: {}\nEOF\n
"},{"location":"kuadrant-operator/#protect-your-service","title":"Protect your service","text":""},{"location":"kuadrant-operator/#if-you-are-an-api-provider","title":"If you are an API Provider","text":"
  • Deploy the service/API to be protected (\"Upstream\")
  • Expose the service/API using the kubernetes Gateway API, ie HTTPRoute object.
  • Write and apply the Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the HTTPRoute resource to have your API protected.
"},{"location":"kuadrant-operator/#if-you-are-a-cluster-operator","title":"If you are a Cluster Operator","text":"
  • (Optionally) deploy istio ingress gateway using the Gateway resource.
  • Write and apply the Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the Gateway resource to have your gateway traffic protected.
"},{"location":"kuadrant-operator/#user-guides","title":"User guides","text":"

The user guides section of the docs gathers several use-cases as well as the instructions to implement them using kuadrant.

  • Simple Rate Limiting for Application Developers
  • Authenticated Rate Limiting for Application Developers
  • Gateway Rate Limiting for Cluster Operators
  • Authenticated Rate Limiting with JWTs and Kubernetes RBAC
"},{"location":"kuadrant-operator/#kuadrant-rate-limiting","title":"Kuadrant Rate Limiting","text":""},{"location":"kuadrant-operator/#documentation","title":"Documentation","text":"

Docs can be found on the Kuadrant website.

"},{"location":"kuadrant-operator/#contributing","title":"Contributing","text":"

The Development guide describes how to build the kuadrant operator and how to test your changes before submitting a patch or opening a PR.

Join us on kuadrant.slack.com for live discussions about the roadmap and more.

"},{"location":"kuadrant-operator/#licensing","title":"Licensing","text":"

This software is licensed under the Apache 2.0 license.

See the LICENSE and NOTICE files that should have been provided along with this software for details.

"},{"location":"kuadrant-operator/doc/development/","title":"Development Guide","text":"
  • Technology stack required for development
  • Build
  • Run locally
  • Deploy the operator in a deployment object
  • Deploy kuadrant operator using OLM
  • Build custom OLM catalog
    • Build kuadrant operator bundle image
    • Build custom catalog
  • Cleaning up
  • Run tests
    • Unit tests
    • Integration tests
    • All tests
    • Lint tests
  • (Un)Install Kuadrant CRDs
"},{"location":"kuadrant-operator/doc/development/#technology-stack-required-for-development","title":"Technology stack required for development","text":"
  • operator-sdk version v1.28.1
  • kind version v0.20.0
  • git
  • go version 1.20+
  • kubernetes version v1.19+
  • kubectl version v1.19+
"},{"location":"kuadrant-operator/doc/development/#build","title":"Build","text":"
make\n
"},{"location":"kuadrant-operator/doc/development/#run-locally","title":"Run locally","text":"

You need an active session open to a kubernetes cluster.

Optionally, run kind and deploy kuadrant deps

make local-env-setup\n

Then, run the operator locally

make run\n
"},{"location":"kuadrant-operator/doc/development/#deploy-the-operator-in-a-deployment-object","title":"Deploy the operator in a deployment object","text":"
make local-setup\n

List of tasks done by the command above:

  • Create local cluster using kind
  • Build kuadrant docker image from the current working directory
  • Deploy Kuadrant control plane (including istio, authorino and limitador)

TODO: customize with custom authorino and limitador git refs. Make sure Makefile propagates variable to deploy target

"},{"location":"kuadrant-operator/doc/development/#deploy-kuadrant-operator-using-olm","title":"Deploy kuadrant operator using OLM","text":"

You can deploy Kuadrant using OLM just by running a few commands. No need to build any image. The Kuadrant engineering team provides latest and release-version tagged images. They are available in the Quay.io/Kuadrant image repository.

Create kind cluster

make kind-create-cluster\n

Deploy OLM system

make install-olm\n

Deploy kuadrant using OLM. The make deploy-catalog target accepts the following variables:

| Makefile Variable | Description | Default value |
|---|---|---|
| CATALOG_IMG | Kuadrant operator catalog image URL | quay.io/kuadrant/kuadrant-operator-catalog:latest |
make deploy-catalog [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]\n
"},{"location":"kuadrant-operator/doc/development/#build-custom-olm-catalog","title":"Build custom OLM catalog","text":"

If you want to deploy (using OLM) a custom kuadrant operator, you need to build your own catalog. Furthermore, if you want to deploy a custom limitador or authorino operator, you also need to build your own catalog. The kuadrant operator bundle includes the authorino and limitador operator dependency versions; hence, using a version other than latest requires a custom kuadrant operator bundle and a custom catalog including that custom bundle.

"},{"location":"kuadrant-operator/doc/development/#build-kuadrant-operator-bundle-image","title":"Build kuadrant operator bundle image","text":"

The make bundle target accepts the following variables:

| Makefile Variable | Description | Default value | Notes |
|---|---|---|---|
| IMG | Kuadrant operator image URL | quay.io/kuadrant/kuadrant-operator:latest | The TAG var could be used to build this URL; defaults to latest if not provided |
| VERSION | Bundle version | 0.0.0 | |
| LIMITADOR_OPERATOR_BUNDLE_IMG | Limitador operator bundle URL | quay.io/kuadrant/limitador-operator-bundle:latest | The LIMITADOR_OPERATOR_VERSION var could be used to build this; defaults to latest if not provided |
| AUTHORINO_OPERATOR_BUNDLE_IMG | Authorino operator bundle URL | quay.io/kuadrant/authorino-operator-bundle:latest | The AUTHORINO_OPERATOR_VERSION var could be used to build this; defaults to latest if not provided |
| RELATED_IMAGE_WASMSHIM | WASM shim image URL | oci://quay.io/kuadrant/wasm-shim:latest | The WASM_SHIM_VERSION var could be used to build this; defaults to latest if not provided |
  • Build the bundle manifests
make bundle [IMG=quay.io/kuadrant/kuadrant-operator:latest] \\\n[VERSION=0.0.0] \\\n[LIMITADOR_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest] \\\n[AUTHORINO_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/authorino-operator-bundle:latest] \\\n[RELATED_IMAGE_WASMSHIM=oci://quay.io/kuadrant/wasm-shim:latest]\n
  • Build the bundle image from the manifests
| Makefile Variable | Description | Default value |
|---|---|---|
| BUNDLE_IMG | Kuadrant operator bundle image URL | quay.io/kuadrant/kuadrant-operator-bundle:latest |
make bundle-build [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest]\n
  • Push the bundle image to a registry
| Makefile Variable | Description | Default value |
|---|---|---|
| BUNDLE_IMG | Kuadrant operator bundle image URL | quay.io/kuadrant/kuadrant-operator-bundle:latest |
make bundle-push [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest]\n

Frequently, you may need to build a custom kuadrant operator bundle with the default (latest) Limitador and Authorino bundles. The following example commands build the manifests, build the bundle image, and push it to the registry.

In the example, a new kuadrant operator bundle version 0.8.0 will be created that references the kuadrant operator image quay.io/kuadrant/kuadrant-operator:v0.5.0 and the latest Limitador and Authorino bundles.

# manifests\nmake bundle IMG=quay.io/kuadrant/kuadrant-operator:v0.5.0 VERSION=0.8.0\n\n# bundle image\nmake bundle-build BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:my-bundle\n\n# push bundle image\nmake bundle-push BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:my-bundle\n
"},{"location":"kuadrant-operator/doc/development/#build-custom-catalog","title":"Build custom catalog","text":"

The catalog's format will be File-based Catalog.

Make sure all the required bundles have been pushed to the registry; this is required by the opm tool.

The make catalog target accepts the following variables:

| Makefile Variable | Description | Default value |
|---|---|---|
| BUNDLE_IMG | Kuadrant operator bundle image URL | quay.io/kuadrant/kuadrant-operator-bundle:latest |
| LIMITADOR_OPERATOR_BUNDLE_IMG | Limitador operator bundle URL | quay.io/kuadrant/limitador-operator-bundle:latest |
| AUTHORINO_OPERATOR_BUNDLE_IMG | Authorino operator bundle URL | quay.io/kuadrant/authorino-operator-bundle:latest |
make catalog [BUNDLE_IMG=quay.io/kuadrant/kuadrant-operator-bundle:latest] \\\n[LIMITADOR_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest] \\\n[AUTHORINO_OPERATOR_BUNDLE_IMG=quay.io/kuadrant/authorino-operator-bundle:latest]\n
  • Build the catalog image from the manifests
| Makefile Variable | Description | Default value |
|---|---|---|
| CATALOG_IMG | Kuadrant operator catalog image URL | quay.io/kuadrant/kuadrant-operator-catalog:latest |
make catalog-build [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]\n
  • Push the catalog image to a registry
make catalog-push [CATALOG_IMG=quay.io/kuadrant/kuadrant-operator-catalog:latest]\n

You can try out your custom catalog image following the steps of the Deploy kuadrant operator using OLM section.

"},{"location":"kuadrant-operator/doc/development/#cleaning-up","title":"Cleaning up","text":"
make local-cleanup\n
"},{"location":"kuadrant-operator/doc/development/#run-tests","title":"Run tests","text":""},{"location":"kuadrant-operator/doc/development/#unittests","title":"Unittests","text":"
make test-unit\n

Optionally, set the TEST_NAME makefile variable to run a specific test

make test-unit TEST_NAME=TestLimitIndexEquals\n

or even a specific subtest

make test-unit TEST_NAME=TestLimitIndexEquals/empty_indexes_are_equal\n
"},{"location":"kuadrant-operator/doc/development/#integration-tests","title":"Integration tests","text":"

You need an active session open to a kubernetes cluster.

Optionally, run kind and deploy kuadrant deps

make local-env-setup\n

Run integration tests

make test-integration\n
"},{"location":"kuadrant-operator/doc/development/#all-tests","title":"All tests","text":"

You need an active session open to a kubernetes cluster.

Optionally, run kind and deploy kuadrant deps

make local-env-setup\n

Run all tests

make test\n
"},{"location":"kuadrant-operator/doc/development/#lint-tests","title":"Lint tests","text":"
make run-lint\n
"},{"location":"kuadrant-operator/doc/development/#uninstall-kuadrant-crds","title":"(Un)Install Kuadrant CRDs","text":"

You need an active session open to a kubernetes cluster.

Remove CRDs

make uninstall\n
"},{"location":"kuadrant-operator/doc/logging/","title":"Logging","text":"

The kuadrant operator outputs 3 levels of log messages (from lowest to highest level):

  1. debug
  2. info (default)
  3. error

info logging is restricted to high-level information. Actions like creating, deleting or updating kubernetes resources will be logged with reduced details about the corresponding objects, and without any further detailed logs of the steps in between, except for errors.

Only debug logging will include processing details.

To configure the desired log level, set the environment variable LOG_LEVEL to one of the supported values listed above. Default log level is info.

Apart from log level, the operator can output messages to the logs in 2 different formats:

  • production (default): each line is a parseable JSON object with properties {\"level\":string, \"ts\":int, \"msg\":string, \"logger\":string, extra values...}
  • development: more human-readable outputs, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\\t<log-level>\\t<logger>\\t<message>\\t{extra-values-as-json}

To configure the desired log mode, set the environment variable LOG_MODE to one of the supported values listed above. The default log mode is production.
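
For example, to run the operator with debug logging in the development format, both environment variables could be set on the operator's container (a minimal sketch; this Deployment excerpt is illustrative):

env:\n- name: LOG_LEVEL\n  value: debug\n- name: LOG_MODE\n  value: development\n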

"},{"location":"kuadrant-operator/doc/rate-limiting/","title":"Kuadrant Rate Limiting","text":"

A Kuadrant RateLimitPolicy custom resource, often abbreviated \"RLP\":

  1. Targets Gateway API networking resources, such as HTTPRoutes and Gateways, using these resources to obtain additional context, i.e., which traffic workload (HTTP attributes, hostnames, user attributes, etc.) to rate limit.
  2. Allows specifying which specific subsets of the targeted network resource to apply the limits to.
  3. Abstracts the details of the underlying Rate Limit protocol and configuration resources, which have a much broader remit and surface area.
  4. Enables cluster operators to set overrides (soon) and defaults that govern what can be done at the lower levels.
"},{"location":"kuadrant-operator/doc/rate-limiting/#how-it-works","title":"How it works","text":""},{"location":"kuadrant-operator/doc/rate-limiting/#envoys-rate-limit-service-protocol","title":"Envoy's Rate Limit Service Protocol","text":"

Kuadrant's Rate Limit implementation relies on Envoy's Rate Limit Service (RLS) protocol. The workflow per request goes:

  1. On incoming request, the gateway checks the matching rules for enforcing rate limits, as stated in the RateLimitPolicy custom resources and targeted Gateway API networking objects.
  2. If the request matches, the gateway sends one RateLimitRequest to the external rate limiting service (\"Limitador\").
  3. The external rate limiting service responds with a RateLimitResponse back to the gateway with either an OK or OVER_LIMIT response code.
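
For illustration, below is a minimal sketch of the RLS RateLimitRequest message the gateway sends, rendered as YAML (the domain and the descriptor key are hypothetical; the actual message is the gRPC RateLimitRequest type of the RLS protocol):

domain: default/toystore # namespace/name of the originating RateLimitPolicy\ndescriptors:\n- entries: # one descriptor, composed of key-value entries\n  - key: limit.toystore_api__3dca4d45\n    value: \"1\"\n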

A RateLimitPolicy and its targeted Gateway API networking resource contain all the statements to configure both the ingress gateway and the external rate limiting service.

"},{"location":"kuadrant-operator/doc/rate-limiting/#the-ratelimitpolicy-custom-resource","title":"The RateLimitPolicy custom resource","text":""},{"location":"kuadrant-operator/doc/rate-limiting/#overview","title":"Overview","text":"

The RateLimitPolicy spec includes, basically, two parts:

  • A reference to an existing Gateway API resource (spec.targetRef)
  • Limit definitions (spec.limits)

Each limit definition includes:

  • A set of rate limits (spec.limits.<limit-name>.rates[])
  • (Optional) A set of dynamic counter qualifiers (spec.limits.<limit-name>.counters[])
  • (Optional) A set of route selectors, to further qualify the specific routing rules for which the limit is activated (spec.limits.<limit-name>.routeSelectors[])
  • (Optional) A set of additional dynamic conditions to activate the limit (spec.limits.<limit-name>.when[])

Check out Kuadrant RFC 0002 to learn more about the Well-known Attributes that can be used to define counter qualifiers (counters) and conditions (when)."},{"location":"kuadrant-operator/doc/rate-limiting/#high-level-example-and-field-definition","title":"High-level example and field definition","text":"
apiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\nname: my-rate-limit-policy\nspec:\n# reference to an existing networking resource to attach the policy to\n# it can be a Gateway API HTTPRoute or Gateway resource\n# it can only refer to objects in the same namespace as the RateLimitPolicy\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute / Gateway\nname: myroute / mygateway\n# the limits definitions to apply to the network traffic routed through the targeted resource\nlimits:\n\"my_limit\":\n# the rate limits associated with this limit definition\n# e.g., to specify a 50rps rate limit, add `{ limit: 50, duration: 1, unit: second }`\nrates: [\u2026]\n# (optional) counter qualifiers\n# each dynamic value in the data plane starts a separate counter, combined with each rate limit\n# e.g., to define a separate rate limit for each user name detected by the auth layer, add `metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.username`\n# check out Kuadrant RFC 0002 (https://github.com/Kuadrant/architecture/blob/main/rfcs/0002-well-known-attributes.md) to learn more about the Well-known Attributes that can be used in this field\ncounters: [\u2026]\n# (optional) further qualification of the specific HTTPRouteRules within the targeted HTTPRoute that should trigger the limit\n# each element contains a HTTPRouteMatch object that will be used to select HTTPRouteRules that include at least one identical HTTPRouteMatch\n# the HTTPRouteMatch part does not have to be fully identical, but what's stated in the selector must be identically stated in the HTTPRouteRule\n# do not use it on RateLimitPolicies that target a Gateway\nrouteSelectors: [\u2026]\n# (optional) additional dynamic conditions to trigger the limit.\n# use it for filtering attributes not supported by HTTPRouteRule or with RateLimitPolicies that target a Gateway\n# check out Kuadrant RFC 0002 (https://github.com/Kuadrant/architecture/blob/main/rfcs/0002-well-known-attributes.md) to learn more about the Well-known Attributes that can be used in this field\nwhen: [\u2026]\n
"},{"location":"kuadrant-operator/doc/rate-limiting/#using-the-ratelimitpolicy","title":"Using the RateLimitPolicy","text":""},{"location":"kuadrant-operator/doc/rate-limiting/#targeting-a-httproute-networking-resource","title":"Targeting a HTTPRoute networking resource","text":"

When a RLP targets a HTTPRoute, the policy is enforced on all traffic routed according to the rules and hostnames specified in the HTTPRoute, across all Gateways referenced in the spec.parentRefs field of the HTTPRoute.

The targeted HTTPRoute's rules and/or hostnames to which the policy must be enforced can be filtered to specific subsets, by specifying the routeSelectors field of the limit definition.

Target a HTTPRoute by setting the spec.targetRef field of the RLP as follows:

apiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\nname: <RLP name>\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: <HTTPRoute Name>\nlimits: {\u2026}\n

"},{"location":"kuadrant-operator/doc/rate-limiting/#multiple-httproutes-with-the-same-hostname","title":"Multiple HTTPRoutes with the same hostname","text":"

When multiple HTTPRoutes state the same hostname, these HTTPRoutes are usually all admitted and merged together by the gateway implementation into the same virtual host configuration of the gateway. Similarly, the Kuadrant control plane will also register all rate limit policies referencing the HTTPRoutes, activating the correct limits across policies according to the route matching rules of the targeted HTTPRoutes.

"},{"location":"kuadrant-operator/doc/rate-limiting/#hostnames-and-wildcards","title":"Hostnames and wildcards","text":"

If a RLP targets a route defined for *.com and another RLP targets another route for api.com, the Kuadrant control plane will not merge these two RLPs. Rather, it will mimic the behavior of the gateway implementation, by which the \"most specific hostname wins\", thus enforcing only the corresponding applicable policies and limit definitions.

E.g., a request coming for api.com will be rate limited according to the rules from the RLP that targets the route for api.com; while a request for other.com will be rate limited with the rules from the RLP targeting the route for *.com.

Example with 3 RLPs and 3 HTTPRoutes:

  • RLP A \u2192 HTTPRoute A (a.toystore.com)
  • RLP B \u2192 HTTPRoute B (b.toystore.com)
  • RLP W \u2192 HTTPRoute W (*.toystore.com)

Expected behavior:

  • Request to a.toystore.com \u2192 RLP A will be enforced
  • Request to b.toystore.com \u2192 RLP B will be enforced
  • Request to other.toystore.com \u2192 RLP W will be enforced

"},{"location":"kuadrant-operator/doc/rate-limiting/#targeting-a-gateway-networking-resource","title":"Targeting a Gateway networking resource","text":"

When a RLP targets a Gateway, the policy will be enforced on all HTTP traffic hitting the gateway, unless a more specific RLP targeting a matching HTTPRoute exists.

Any new HTTPRoute referencing the gateway as parent will be automatically covered by the RLP that targets the Gateway, as will changes to the existing HTTPRoutes.

This effectively provides cluster operators with the ability to set defaults that protect the infrastructure against unplanned and malicious network traffic attempts, such as by setting preemptive limits for hostnames and hostname wildcards.

Target a Gateway by setting the spec.targetRef field of the RLP as follows:

apiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\nname: <RLP name>\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: Gateway\nname: <Gateway Name>\nlimits: {\u2026}\n

"},{"location":"kuadrant-operator/doc/rate-limiting/#overlapping-gateway-and-httproute-rlps","title":"Overlapping Gateway and HTTPRoute RLPs","text":"

Gateway-targeted RLPs will serve as a default to protect all traffic routed through the gateway until a more specific HTTPRoute-targeted RLP exists, in which case the HTTPRoute RLP prevails.

Example with 4 RLPs, 3 HTTPRoutes and 1 Gateway (plus 2 HTTPRoutes and 2 Gateways without RLPs attached):

  • RLP A \u2192 HTTPRoute A (a.toystore.com) \u2192 Gateway G (*.com)
  • RLP B \u2192 HTTPRoute B (b.toystore.com) \u2192 Gateway G (*.com)
  • RLP W \u2192 HTTPRoute W (*.toystore.com) \u2192 Gateway G (*.com)
  • RLP G \u2192 Gateway G (*.com)

Expected behavior:

  • Request to a.toystore.com \u2192 RLP A will be enforced
  • Request to b.toystore.com \u2192 RLP B will be enforced
  • Request to other.toystore.com \u2192 RLP W will be enforced
  • Request to other.com (supposing a route exists) \u2192 RLP G will be enforced
  • Request to yet-another.net (supposing a route and gateway exist) \u2192 No RLP will be enforced

"},{"location":"kuadrant-operator/doc/rate-limiting/#limit-definition","title":"Limit definition","text":"

A limit will be activated whenever a request comes in and the request matches:

  • any of the route rules selected by the limit (via routeSelectors or the implicit \"catch-all\" selector), and
  • all of the when conditions specified in the limit.

A limit can define:

  • counters that are qualified based on dynamic values fetched from the request, or
  • global counters (implicitly, when no qualified counter is specified)

A limit is composed of one or more rate limits.

E.g.

spec:\nlimits:\n\"toystore-all\":\nrates:\n- limit: 5000\nduration: 1\nunit: second\n\"toystore-api-per-username\":\nrates:\n- limit: 100\nduration: 1\nunit: second\n- limit: 1000\nduration: 1\nunit: minute\ncounters:\n- auth.identity.username\nrouteSelectors:\nhostnames:\n- api.toystore.com\n\"toystore-admin-unverified-users\":\nrates:\n- limit: 250\nduration: 1\nunit: second\nrouteSelectors:\nhostnames:\n- admin.toystore.com\nwhen:\n- selector: auth.identity.email_verified\noperator: eq\nvalue: \"false\"\n
| Request to | Rate limits enforced |
|---|---|
| api.toystore.com | 100rps/username or 1000rpm/username (whatever happens first) |
| admin.toystore.com | 250rps |
| other.toystore.com | 5000rps |
"},{"location":"kuadrant-operator/doc/rate-limiting/#route-selectors","title":"Route selectors","text":"

The routeSelectors field of the limit definition lets you specify selectors of routes (or parts of a route) that transitively induce a set of conditions for a limit to be enforced. It is defined as a set of route matching rules, where these rules must exist, partially or identically stated, within the HTTPRouteRules of the HTTPRoute that is targeted by the RLP.

The field is typed as a list of objects based on a special type defined from Gateway API's HTTPRouteMatch type (matches subfield of the route selector object), and an additional field hostnames.

Route selector matches and the HTTPRoute's HTTPRouteMatches are pairwise compared to decide which HTTPRouteRules activate a limit. For each pair of route selector HTTPRouteMatch and HTTPRoute HTTPRouteMatch:

  1. The route selector selects the HTTPRoute's HTTPRouteRule if the HTTPRouteRule contains at least one HTTPRouteMatch that specifies fields that are literally identical to all the fields specified by at least one HTTPRouteMatch of the route selector.
  2. A HTTPRouteMatch within a HTTPRouteRule may include other fields that are not specified in a route selector match, and yet the route selector match selects the HTTPRouteRule if all fields of the route selector match are identically included in the HTTPRouteRule's HTTPRouteMatch; the opposite is NOT true.
  3. Each field path of a HTTPRouteMatch, as well as each field method of a HTTPRouteMatch, as well as each element of the fields headers and queryParams of a HTTPRouteMatch, is atomic \u2013 this is true for the HTTPRouteMatches within a HTTPRouteRule, as well as for the HTTPRouteMatches of a route selector.

Additionally, at least one hostname specified in a route selector must identically match one of the hostnames specified (or inherited, when omitted) by the targeted HTTPRoute.

The semantics of the route selectors make it possible to assertively relate limit definitions to routing rules, with benefits for identifying the subsets of the network that are covered by a limit, while preventing unreachable definitions, as well as the overhead associated with maintaining such rules across multiple resources over time, as the network topology beneath changes. Moreover, the requirement of not having to be a full copy of the targeted HTTPRouteRule matches, but only partially identical, helps prevent repetition to some degree, and makes it easier to define limits that scope across multiple HTTPRouteRules (by specifying fewer rules in the selector).

A few rules and corner cases to keep in mind while using the RLP's routeSelectors:

  1. The golden rule \u2013 The route selectors in a RLP are not to be read strictly as the route matching rules that activate a limit, but as selectors of the route rules that activate the limit.
  2. Due to (1) above, this can lead to cases, e.g., where a route selector that states matches: [{ method: POST }] selects a HTTPRouteRule that defines matches: [{ method: POST }, { method: GET }], effectively causing the limit to be activated on requests to the HTTP method POST, but also to the HTTP method GET.
  3. The requirement for the route selector match to state patterns that are identical to the patterns stated by the HTTPRouteRule (partially or entirely) means that, e.g., a route selector such as matches: { path: { type: PathPrefix, value: /foo } } selects a HTTPRouteRule that defines matches: { path: { type: PathPrefix, value: /foo }, method: GET }, but does not select a HTTPRouteRule that only defines matches: { method: GET }, even though the latter technically includes all HTTP paths; nor does it select a HTTPRouteRule that only defines matches: { path: { type: Exact, value: /foo } }, even though all requests to the exact path /foo are also technically requests to /foo*.
  4. The atomicity property of the fields of the route selectors means that, e.g., a route selector such as matches: { path: { value: /foo } } selects a HTTPRouteRule that defines matches: { path: { value: /foo } }, but does not select a HTTPRouteRule that only defines matches: { path: { type: PathPrefix, value: /foo } }. (This case may actually never happen because PathPrefix is the default value for path.type and will be set automatically by the Kubernetes API server.)

Because route selectors define pointers to HTTPRouteRules, the routeSelectors field is not supported in a RLP that targets a Gateway resource.
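
To make the selection semantics concrete, here is a minimal sketch (route, hostnames and limit values are hypothetical). The route selector below selects the stated HTTPRouteRule because all fields of the selector's match are identically stated in one of the rule's HTTPRouteMatches, even though the rule additionally matches on method:

# hypothetical HTTPRouteRule within the targeted HTTPRoute\nrules:\n- matches:\n  - path:\n      type: PathPrefix\n      value: /toys\n    method: GET\n

# limit definition whose route selector selects the rule above\nlimits:\n  \"toys-reads\":\n    rates:\n    - limit: 50\n      duration: 1\n      unit: second\n    routeSelectors:\n    - matches:\n      - path:\n          type: PathPrefix\n          value: /toys\n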

"},{"location":"kuadrant-operator/doc/rate-limiting/#when-conditions","title":"when conditions","text":"

when conditions can be used to scope a limit (i.e. to filter the traffic to which a limit definition applies) without any coupling to the underlying network topology, i.e. without making direct references to HTTPRouteRules via routeSelectors.

The syntax of the when condition selectors complies with Kuadrant's Well-known Attributes (RFC 0002).

Use the when conditions to conditionally activate limits based on attributes that cannot be expressed in the HTTPRoutes' spec.hostnames and spec.rules.matches fields, or in general in RLPs that target a Gateway.
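
For instance, mirroring the limit definition example earlier in this page, a when condition that restricts a limit to unverified users, based on an attribute resolved by the auth layer, could look as follows:

when:\n- selector: auth.identity.email_verified\n  operator: eq\n  value: \"false\"\n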

"},{"location":"kuadrant-operator/doc/rate-limiting/#examples","title":"Examples","text":"

Check out the following user guides for examples of rate limiting services with Kuadrant:

  • Simple Rate Limiting for Application Developers
  • Authenticated Rate Limiting for Application Developers
  • Gateway Rate Limiting for Cluster Operators
  • Authenticated Rate Limiting with JWTs and Kubernetes RBAC

"},{"location":"kuadrant-operator/doc/rate-limiting/#known-limitations","title":"Known limitations","text":"
  • One HTTPRoute can only be targeted by one RLP.
  • One Gateway can only be targeted by one RLP.
  • RLPs can only target HTTPRoutes/Gateways defined within the same namespace of the RLP.
"},{"location":"kuadrant-operator/doc/rate-limiting/#implementation-details","title":"Implementation details","text":"

Driven by limitations related to how Istio injects configuration in the filter chains of the ingress gateways, Kuadrant relies on Envoy's Wasm Network filter in the data plane to manage the integration with the rate limiting service (\"Limitador\"), instead of the Rate Limit filter.

Motivation: Multiple rate limit domains The first limitation comes from having only one filter chain per listener. This often leads to a single global rate limiting filter configuration per gateway, and therefore to a shared rate limit domain across applications and policies. Even though, in a rate limit filter, the triggering of rate limit calls, via actions to build so-called \"descriptors\", can be defined at the level of the virtual host and/or specific route rule, there is only one overall rate limit configuration, i.e., always the same rate limit domain for all calls to Limitador.

On the other hand, the possibility of configuring and invoking the rate limit service for multiple domains, depending on the context, makes it possible to isolate groups of policy rules, as well as to optimize performance in the rate limit service, which can rely on the domain for indexation.

Motivation: Fine-grained matching rules A second limitation of configuring the rate limit filter via Istio, particularly from Gateway API resources, is that rate limit descriptors at the level of a specific HTTP route rule require \"named routes\" \u2013 defined only in an Istio VirtualService resource and referred to in an EnvoyFilter one. Because Gateway API HTTPRoute rules lack a \"name\" property [1], and because the Istio VirtualService resources are only ephemeral data structures, handled by Istio in-memory in its implementation of gateway configuration for Gateway API, where the names of individual route rules are auto-generated and not referable by users in a policy [2][3], rate limiting by attributes of the HTTP request (e.g., path, method, headers, etc.) would be very limited while depending only on Envoy's Rate Limit filter.

Motivated by the desire to support multiple rate limit domains per ingress gateway, as well as fine-grained HTTP route matching rules for rate limiting, Kuadrant implements a wasm-shim that handles the rules to invoke the rate limiting service, complying with Envoy's Rate Limit Service (RLS) protocol.

The wasm module integrates with the gateway in the data plane via the Wasm Network filter, and parses a configuration composed out of user-defined RateLimitPolicy resources by the Kuadrant control plane, whereas the rate limiting service (\"Limitador\") remains an implementation of Envoy's RLS protocol, capable of being integrated directly, via the Rate Limit extension, or by Kuadrant, via the wasm module for the Istio Gateway API implementation.

As a consequence of this design:

  • Users can define fine-grained rate limit rules that match their Gateway and HTTPRoute definitions, including for subsections of these.
  • Rate limit definitions are insulated, not leaking across unrelated policies or applications.
  • Conditions to activate limits are evaluated in the context of the gateway process, reducing the gRPC calls to the external rate limiting service to only the cases where rate limit counters are known in advance to have to be checked/incremented.
  • The rate limiting service can rely on the indexation to look up groups of limit definitions and counters.
  • Components remain compliant with industry protocols and flexible for different integration options.

A Kuadrant wasm-shim configuration for a composition of RateLimitPolicy custom resources looks like the following, and is generated automatically by the Kuadrant control plane:

apiVersion: extensions.istio.io/v1alpha1\nkind: WasmPlugin\nmetadata:\nname: kuadrant-istio-ingressgateway\nnamespace: istio-system\n\u2026\nspec:\nphase: STATS\npluginConfig:\nfailureMode: deny\nrateLimitPolicies:\n- domain: istio-system/gw-rlp # allows isolating policy rules and improve performance of the rate limit service\nhostnames:\n- '*.website'\n- '*.io'\nname: istio-system/gw-rlp\nrules: # match rules from the gateway and according to conditions specified in the rlp\n- conditions:\n- allOf:\n- operator: startswith\nselector: request.url_path\nvalue: /\ndata:\n- static: # tells which rate limit definitions and counters to activate\nkey: limit.internet_traffic_all__593de456\nvalue: \"1\"\n- conditions:\n- allOf:\n- operator: startswith\nselector: request.url_path\nvalue: /\n- operator: endswith\nselector: request.host\nvalue: .io\ndata:\n- static:\nkey: limit.internet_traffic_apis_per_host__a2b149d2\nvalue: \"1\"\n- selector:\nselector: request.host\nservice: kuadrant-rate-limiting-service\n- domain: default/app-rlp\nhostnames:\n- '*.toystore.website'\n- '*.toystore.io'\nname: default/app-rlp\nrules: # matches rules from a httproute and additional specified in the rlp\n- conditions:\n- allOf:\n- operator: startswith\nselector: request.url_path\nvalue: /assets/\ndata:\n- static:\nkey: limit.toystore_assets_all_domains__8cfb7371\nvalue: \"1\"\n- conditions:\n- allOf:\n- operator: startswith\nselector: request.url_path\nvalue: /v1/\n- operator: eq\nselector: request.method\nvalue: GET\n- operator: endswith\nselector: request.host\nvalue: .toystore.website\n- operator: eq\nselector: auth.identity.username\nvalue: \"\"\n- allOf:\n- operator: startswith\nselector: request.url_path\nvalue: /v1/\n- operator: eq\nselector: request.method\nvalue: POST\n- operator: endswith\nselector: request.host\nvalue: .toystore.website\n- operator: eq\nselector: auth.identity.username\nvalue: \"\"\ndata:\n- static:\nkey: limit.toystore_v1_website_unauthenticated__3f9c40c6\nvalue: \"1\"\nservice: kuadrant-rate-limiting-service\nselector:\nmatchLabels:\nistio.io/gateway-name: istio-ingressgateway\nurl: oci://quay.io/kuadrant/wasm-shim:v0.3.0\n
  1. https://github.com/kubernetes-sigs/gateway-api/pull/996

  2. https://github.com/istio/istio/issues/36790

  3. https://github.com/istio/istio/issues/37346

"},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/","title":"The RateLimitPolicy Custom Resource Definition (CRD)","text":"
  • RateLimitPolicy
  • RateLimitPolicySpec
  • Limit
    • RateLimit
    • RouteSelector
    • WhenCondition
  • RateLimitPolicyStatus
  • ConditionSpec
"},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#ratelimitpolicy","title":"RateLimitPolicy","text":"Field Type Required Description spec RateLimitPolicySpec Yes The specfication for RateLimitPolicy custom resource status RateLimitPolicyStatus No The status for the custom resource"},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#ratelimitpolicyspec","title":"RateLimitPolicySpec","text":"Field Type Required Description targetRef PolicyTargetReference Yes Reference to a Kuberentes resource that the policy attaches to limits MapLimit> No Limit definitions"},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#limit","title":"Limit","text":"Field Type Required Description rates []RateLimit No List of rate limits associated with the limit definition counters []String No List of rate limit counter qualifiers. Items must be a valid Well-known attribute. Each distinct value resolved in the data plane starts a separate counter for each rate limit. routeSelectors []RouteSelector No List of selectors of HTTPRouteRules whose matching rules activate the limit. At least one HTTPRouteRule must be selected to activate the limit. If omitted, all HTTPRouteRules of the targeted HTTPRoute activate the limit. Do not use it in policies targeting a Gateway. when []WhenCondition No List of additional dynamic conditions (expressions) to activate the limit. All expression must evaluate to true for the limit to be applied. Use it for filterring attributes that cannot be expressed in the targeted HTTPRoute's spec.hostnames and spec.rules.matches fields, or when targeting a Gateway."},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#ratelimit","title":"RateLimit","text":"Field Type Required Description limit Number Yes Maximum value allowed within the given period of time (duration) duration Number Yes The period of time in the specified unit that the limit applies unit String Yes Unit of time for the duration of the limit. One-of: \"second\", \"minute\", \"hour\", \"day\"."},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#routeselector","title":"RouteSelector","text":"Field Type Required Description hostnames []Hostname No List of hostnames of the HTTPRoute that activate the limit matches []HTTPRouteMatch No List of selectors of HTTPRouteRules whose matching rules activate the limit

Check out Kuadrant Rate Limiting > Route selectors for the semantics of how route selectors work.
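
Putting these types together, a hedged example of a single Limit object combining rates, counters and when conditions (names and values are hypothetical):

limits:\n  \"per-user\":\n    rates:\n    - limit: 100\n      duration: 1\n      unit: minute\n    counters:\n    - auth.identity.username\n    when:\n    - selector: request.method\n      operator: eq\n      value: GET\n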

"},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#whencondition","title":"WhenCondition","text":"Field Type Required Description selector String Yes A valid Well-known attribute whose resolved value in the data plane will be compared to value, using the operator. operator String Yes The binary operator to be applied to the resolved value specified by the selector. One-of: \"eq\" (equal to), \"neq\" (not equal to) value String Yes The static value to be compared to the one resolved from the selector."},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#ratelimitpolicystatus","title":"RateLimitPolicyStatus","text":"Field Type Description observedGeneration String Number of the last observed generation of the resource. Use it to check if the status info is up to date with latest resource spec. conditions []ConditionSpec List of conditions that define that status of the resource."},{"location":"kuadrant-operator/doc/ratelimitpolicy-reference/#conditionspec","title":"ConditionSpec","text":"
  • The lastTransitionTime field provides a timestamp for when the entity last transitioned from one status to another.
  • The message field is a human-readable message indicating details about the transition.
  • The reason field is a unique, one-word, CamelCase reason for the condition\u2019s last transition.
  • The status field is a string, with possible values True, False, and Unknown.
  • The type field is a string with the following possible values:
  • Available: the resource has been successfully configured;
| Field | Type | Description |
|---|---|---|
| type | String | Condition Type |
| status | String | Status: True, False, Unknown |
| reason | String | Condition state reason |
| message | String | Condition state description |
| lastTransitionTime | Timestamp | Last transition timestamp |
"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/","title":"AuthPolicy Proposal","text":"

Authors: Rahul Anand (rahanand@redhat.com), Craig Brookes (cbrookes@redhat.com)

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#introduction","title":"Introduction","text":"

Istio offers an AuthorizationPolicy resource, which requires it to be applied in the namespace of the workload. This means that all the configuration is completely decoupled from routing logic like hostnames and paths. For the managed gateway scenario, users need to either ask the cluster operator to apply their policies in the gateway's namespace (which is not scalable) or use sidecars/personal gateways for their workloads in their own namespace, which is not optimal.

The new Gateway API defines a standard policy attachment mechanism for the hierarchical effect of vendor-specific policies. We believe a new CRD built with concepts from the Gateway API can solve the use cases of Istio's AuthorizationPolicy while overcoming the limitations described above.

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#goals","title":"Goals","text":"

With targetRef from the policy attachment concept, the goals are the following:

  • Application developers should be able to target an HTTPRoute object in their own namespace. This will define the authorization policy at the hostname/domain/vHost level.
  • Cluster operators should be able to target a Gateway object, along with HTTPRoutes in the gateway's namespace. This will define the policy at the listener level.
  • To reduce context sharing at the gateway and the external authorization provider, the action type and the auth provider are defaulted to CUSTOM and authorino, respectively.

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#proposed-solution","title":"Proposed Solution","text":"

Following is the proposed new CRD that combines policy attachment concepts with Istio's AuthorizationPolicy:

apiVersion: kuadrant.io/v1beta1\nkind: AuthPolicy\nmetadata:\nname: toystore\nspec:\ntargetRef:\ngroup: # Only takes gateway.networking.k8s.io\nkind: HTTPRoute | Gateway\nname: toystore\nrules:\n- hosts: [\"*.toystore.com\"]\nmethods: [\"GET\", \"POST\"]\npaths: [\"/admin\"]\nauthScheme: # Embedded AuthConfigs\nhosts: [\"admin.toystore.com\"]\nidentity:\n- name: idp-users\noidc:\nendpoint: https://my-idp.com/auth/realm\nauthorization:\n- name: check-claim\njson:\nrules:\n- selector: auth.identity.group\noperator: eq\nvalue: allowed-users\nstatus:\nconditions:\n- lastTransitionTime: \"2022-06-06T11:03:04Z\"\nmessage: HTTPRoute/Gateway is protected/Error\nreason: HTTPRouteProtected/GatewayProtected/Error\nstatus: \"True\" | \"False\"\ntype: Available\nobservedGeneration: 1\n
"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#target-reference","title":"Target Reference","text":"

The targetRef field is taken from the policy attachment's target reference API. It can only target one resource at a time. Fields included inside:

  • Group is the group of the target resource. The only valid option is gateway.networking.k8s.io.
  • Kind is the kind of the target resource. The only valid options are HTTPRoute and Gateway.
  • Name is the name of the target resource.
  • Namespace is the namespace of the referent. Currently only local objects can be referred to, so the value is ignored.

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#rule-objects","title":"Rule objects","text":"

The rules field describes the requests that will be routed to the external authorization provider (like authorino). It includes:

  • hosts: a host is matched over the Host request header, or SNI if TLS is used.

Note: Each rule's host in a route-level policy must match at least one hostname regex described in the HTTPRoute's hostnames, but Gateway-level policies have no such restriction.

                            targetRef\n       HTTPRoute  \u25c4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500  AuthPolicy\n  hostnames: [\"*.toystore.com\"]             rules:\n                                           \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                            Rejected Rule: \u2502- hosts: [\"*.carstore.com\"] \u2502\n                            Regex mismatch \u2502  methods: [\"GET\", \"DELETE\"]\u2502\n                                           \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n                                           \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                            Accepted Rule: \u2502- hosts: [\"admin.toystore.com\"]\u2502\n                            Regex match    \u2502  methods: [\"POST\", \"DELETE\"]  \u2502\n                                           \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

  • paths: a path matches over request path like /admin/.
  • methods: a method matches over request method like DELETE.

Fields in a rule object are ANDed together but inner fields follow OR semantics. For example,

hosts: [\"*.toystore.com\"]\nmethods: [\"GET\", \"POST\"]\npaths: [\"/admin\"]\n
The above rule matches if the host matches *.toystore.com AND the method is GET or POST AND the path is /admin.
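
An illustrative trace of that evaluation against a few hypothetical requests:

# evaluation of the rule above (requests are hypothetical)\n# GET  api.toystore.com/admin  -> match (host OK, method OK, path OK)\n# POST api.toystore.com/admin  -> match (host OK, method OK, path OK)\n# GET  api.toystore.com/toys   -> no match (path differs)\n# PUT  api.toystore.com/admin  -> no match (method not in [GET, POST])\n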

Internally, all the rules in an AuthPolicy are translated into a list of Operations under a single Istio AuthorizationPolicy, with the CUSTOM action type and authorino as the external authorization provider.

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#authscheme-object","title":"AuthScheme object","text":"

AuthScheme is an embedded form of Authorino's AuthConfig. Applying an AuthPolicy resource with an AuthScheme defined creates an AuthConfig in the Gateway's namespace.

Note: Following the hierarchical constraints, spec.AuthScheme.Hosts must match at least one of spec.Hosts for the AuthPolicy to be validated.

The example AuthPolicy showed above will create the following AuthConfig:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: default-toystore-1\nspec:\nhosts:\n- \"admin.toystore.com\"\nidentity:\n- name: idp-users\noidc:\nendpoint: https://my-idp.com/auth/realm\nauthorization:\n- name: check-claim\njson:\nrules:\n- selector: auth.identity.group\noperator: eq\nvalue: allowed-users\n

The overall control structure between the developer and the kuadrant operator looks like the following:

"},{"location":"kuadrant-operator/doc/proposals/authpolicy-crd/#checklist","title":"Checklist","text":"
  • Issue tracking this proposal: https://github.com/Kuadrant/kuadrant-operator/issues/130
"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/","title":"RLP can target a Gateway resource","text":"

Previous version: https://hackmd.io/IKEYD6NrSzuGQG1nVhwbcw

Based on: https://hackmd.io/_1k6eLCNR2eb9RoSzOZetg

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#introduction","title":"Introduction","text":"

The current RateLimitPolicy CRD already implements a targetRef with a reference to Gateway API's HTTPRoute. This doc captures the design and some implementation details of allowing the targetRef to reference a Gateway API's Gateway.

With this HTTPRoute - Gateway hierarchy in place, we are also considering applying Policy Attachment's defaults/overrides approach to the RateLimitPolicy CRD. But for now, it will only be about targeting the Gateway resource.

On designing Kuadrant's rate limiting and considering Istio/Envoy's rate limiting offering, we hit two limitations (described here). Therefore, without giving up entirely on the existing Envoy RateLimit filter, we decided to move on and leverage Envoy's Wasm Network filter and implement a rate limiting wasm-shim module compliant with Envoy's Rate Limit Service (RLS) protocol. This wasm-shim module accepts a PluginConfig struct object as its input configuration object.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#use-cases-targeting-a-gateway","title":"Use Cases targeting a gateway","text":"

A key use case is being able to provide governance over what service providers can and cannot do when exposing a service via a shared ingress gateway, as well as providing certainty that no service is exposed without the cluster administrator being able to protect the infrastructure from unplanned load from badly behaving clients, etc.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#goals","title":"Goals","text":"

The goal of this document is to define:

  • The schema of this PluginConfig struct.
  • The kuadrant-operator behavior filling the PluginConfig struct, having as input the RateLimitPolicy k8s objects.
  • The behavior of the wasm-shim having the PluginConfig struct as input.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#envoys-rate-limit-service-protocol","title":"Envoy's Rate Limit Service Protocol","text":"

Kuadrant's rate limiting relies on the Rate Limit Service (RLS) protocol; hence the gateway generates, based on a set of actions, a set of descriptors (one descriptor is a set of descriptor entries). Those descriptors are sent to the external rate limit service provider. When multiple descriptors are provided, the external service provider will limit on ALL of them and return an OVER_LIMIT response if any of them is over limit.
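
For illustration, a hedged sketch of a set of two descriptors generated for a single request (keys and values are hypothetical), both of which the external service would check:

# two descriptors for a single request (illustrative)\n- entries:\n  - key: admin\n    value: \"yes\"\n- entries:\n  - key: custom-header\n    value: \"some-value\"\n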

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#schema-crd-of-the-ratelimitpolicy","title":"Schema (CRD) of the RateLimitPolicy","text":"
---\napiVersion: kuadrant.io/v1beta1\nkind: RateLimitPolicy\nmetadata:\nname: my-rate-limit-policy\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute / Gateway\nname: myroute / mygateway\nrateLimits:\n- rules:\n- paths: [\"/admin/*\"]\nmethods: [\"GET\"]\nhosts: [\"example.com\"]\nconfigurations:\n- actions:\n- generic_key:\ndescriptor_key: admin\ndescriptor_value: \"yes\"\nlimits:\n- conditions: [\"admin == yes\"]\nmax_value: 500\nseconds: 30\nvariables: []\n

.spec.rateLimits holds a list of rate limit configurations represented by the object RateLimit. Each RateLimit object represents a complete rate limit configuration. It contains three fields:

  • rules (optional): Rules allow matching hosts and/or methods and/or paths. Matching occurs when at least one rule applies against the incoming request. If rules are not set, it is equivalent to matching all the requests.

  • configurations (required): Specifies a set of rate limit configurations that could be applied. The rate limit configuration object is the equivalent of the config.route.v3.RateLimit envoy object. One configuration is, in turn, a list of rate limit actions. Each action populates a descriptor entry. A vector of descriptor entries compose a descriptor. Each configuration produces, at most, one descriptor. Depending on the incoming request, one configuration may or may not produce a rate limit descriptor. These rate limiting configuration rules provide flexibility to produce multiple descriptors. For example, you may want to define one generic rate limit descriptor and another descriptor depending on some header. If the header does not exist, the second descriptor is not generated, but traffic keeps being rate limited based on the generic descriptor.

configurations:\n- actions:\n- request_headers:\nheader_name: \"X-MY-CUSTOM-HEADER\"\ndescriptor_key: \"custom-header\"\nskip_if_absent: true\n- actions:\n- generic_key:\ndescriptor_key: admin\ndescriptor_value: \"1\"\n
  • limits (optional): configuration of the rate limiting service (Limitador). Check out limitador documentation for more information about the fields of each Limit object.

Note: No namespace/domain is defined. The Kuadrant operator will figure it out.

Note: There is no PREAUTH or POSTAUTH stage defined. The rate limiting filter should be placed after the authorization filter to enable authenticated rate limiting. In the future, stages can be implemented.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#kuadrant-operators-behavior","title":"Kuadrant-operator's behavior","text":"

One HTTPRoute can only be targeted by one rate limit policy.

Similarly, one Gateway can only be targeted by one rate limit policy.

However, indirectly, one gateway will be affected by multiple rate limit policies: by design of the Gateway API, one gateway can be referenced by multiple HTTPRoute objects, and, furthermore, one HTTPRoute can reference multiple gateways.

The kuadrant operator will aggregate all the rate limit policies that apply to each gateway, including RLPs targeting HTTPRoutes and Gateways.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#virtualhosting-ratelimitpolicies","title":"\"VirtualHosting\" RateLimitPolicies","text":"

Rate limit policies are scoped by the domains defined at the referenced HTTPRoute's hostnames and Gateway's Listener's Hostname.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#multiple-httproutes-with-the-same-hostname","title":"Multiple HTTPRoutes with the same hostname","text":"

When there are multiple HTTPRoutes with the same hostname, the HTTPRoutes are all admitted and Envoy merges the routing configuration into the same virtual host. In these cases, the control plane has to \"merge\" the rate limit configuration into a single entry for the wasm filter.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#overlapping-httproutes","title":"Overlapping HTTPRoutes","text":"

If one RLP targets a route for *.com and another RLP targets another route for api.com, the control plane does not do any merging. A request coming for api.com will be rate limited with the rules from the RLP targeting the route api.com, while a request coming for other.com will be rate limited with the rules from the RLP targeting the route *.com.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#examples","title":"examples","text":"

RLP A -> HTTPRoute A (api.toystore.com) -> Gateway G (*.com)

RLP B -> HTTPRoute B (other.toystore.com) -> Gateway G (*.com)

RLP H -> HTTPRoute H (*.toystore.com) -> Gateway G (*.com)

RLP G -> Gateway G (*.com)

Request 1 (api.toystore.com) -> apply RLP A and RLP G

Request 2 (other.toystore.com) -> apply RLP B and RLP G

Request 3 (unknown.toystore.com) -> apply RLP H and RLP G

Request 4 (other.com) -> apply RLP G

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#rate-limit-domain-limitador-namespace","title":"rate limit domain / limitador namespace","text":"

The kuadrant operator will add the domain attribute of Envoy's Rate Limit Service (RLS). It will also add the namespace attribute of Limitador's rate limit config. The operator will ensure that the associated actions and rate limits have a common domain/namespace.

The value of this domain/namespace seems to be related to the virtual host for which the rate limit applies.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#schema-of-the-wasm-filter-configuration-object-the-pluginconfig","title":"Schema of the WASM filter configuration object: the PluginConfig","text":"

Currently the PluginConfig looks like this:

#  The filter\u2019s behaviour in case the rate limiting service does not respond back. When it is set to true, Envoy will not allow traffic in case of communication failure between rate limiting service and the proxy.\nfailure_mode_deny: true\nratelimitpolicies:\ndefault/toystore: # rate limit policy {NAMESPACE/NAME}\nhosts: # HTTPRoute hostnames\n- '*.toystore.com'\nrules: # route level actions\n- operations:\n- paths:\n- /admin/toy\nmethods:\n- POST\n- DELETE\nactions:\n- generic_key:\ndescriptor_value: yes\ndescriptor_key: admin\nglobal_actions: # virtualHost level actions\n- generic_key:\ndescriptor_value: yes\ndescriptor_key: vhaction\nupstream_cluster: rate-limit-cluster # Limitador address reference\ndomain: toystore-app # RLS protocol domain value\n

Proposed new design for the WASM filter configuration object (PluginConfig struct):

#  The filter\u2019s behaviour in case the rate limiting service does not respond back. When it is set to true, Envoy will not allow traffic in case of communication failure between rate limiting service and the proxy.\nfailure_mode_deny: true\nrate_limit_policies:\n- name: toystore\nrate_limit_domain: toystore-app\nupstream_cluster: rate-limit-cluster\nhostnames: [\"*.toystore.com\"]\ngateway_actions:\n- rules:\n- paths: [\"/admin/toy\"]\nmethods: [\"GET\"]\nhosts: [\"pets.toystore.com\"]\nconfigurations:\n- actions:\n- generic_key:\ndescriptor_key: admin\ndescriptor_value: \"1\"\n

Update highlights:

  • [minor] rate_limit_policies is a list instead of a map indexed by the name/namespace.
  • [major] no distinction between \"rules\" and global actions.
  • [major] more aligned with the RLS: multiple descriptors structured by \"rate limit configurations\" with matching rules.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#wasm-shim","title":"WASM-SHIM","text":"

The WASM filter rate limit policies are not exactly the same as the user-managed RateLimitPolicy custom resources. The WASM filter rate limit policies are part of the internal configuration and therefore not exposed to the end user.

At the WASM filter level, there are no route-level or gateway-level rate limit policies. The rate limit policies in the wasm plugin configuration may not map 1:1 to the user-managed RateLimitPolicy custom resources. WASM rate limit policies have an internal logical name and a set of hostnames to activate them based on the incoming request's host header.

The WASM filter builds a tree-based data structure holding the rate limit policies. The longest (sub)domain match is used to select the policy to be applied, and only one policy is applied per invocation.
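
A small sketch of the longest (sub)domain match behavior, with hypothetical policy hostnames:

# policies and their hostnames (hypothetical)\n# policy A: [\"*.toystore.com\"]\n# policy B: [\"api.toystore.com\"]\n#\n# request host        -> policy selected\n# api.toystore.com    -> B (longest match)\n# other.toystore.com  -> A\n# example.com         -> none\n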

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#rate-limit-configurations","title":"rate limit configurations","text":"

The WASM filter configuration object contains a list of rate limit configurations to build a list of Envoy's RLS descriptors. These configurations are defined at

rate_limit_policies[*].gateway_actions[*].configurations\n

For example:

configurations:\n- actions:\n- generic_key:\ndescriptor_key: admin\ndescriptor_value: \"1\"\n

How to read the policy:

  • Each configuration produces, at most, one descriptor. Depending on the incoming request, one configuration may or may not produce a rate limit descriptor.

  • Each policy configuration has associated, optionally, a set of rules to match. Rules allow matching hosts and/or methods and/or paths. Matching occurs when at least one rule applies against the incoming request. If rules are not set, it is equivalent to matching all the requests.

  • Each configuration object defines a list of actions. Each action may (or may not) produce a descriptor entry (descriptor list item). If an action cannot append a descriptor entry, no descriptor is generated for the configuration.

Note: The external rate limit service will be called when the gateway_actions object produces at least one non-empty descriptor.

"},{"location":"kuadrant-operator/doc/proposals/rlp-target-gateway-resource/#example","title":"example","text":"

Consider a WASM filter rate limit policy for *.toystore.com, where we want some rate limit descriptor configurations only for api.toystore.com and another set of descriptors for admin.toystore.com. The wasm filter config would look like this:

failure_mode_deny: true\nrate_limit_policies:\n- name: toystore\nrate_limit_domain: toystore-app\nupstream_cluster: rate-limit-cluster\nhostnames: [\"*.toystore.com\"]\ngateway_actions:\n- configurations:  # no rules. Applies to all *.toystore.com traffic\n- actions:\n- generic_key:\ndescriptor_key: toystore-app\ndescriptor_value: \"1\"\n- rules:\n- hosts: [\"api.toystore.com\"]\nconfigurations:\n- actions:\n- generic_key:\ndescriptor_key: api\ndescriptor_value: \"1\"\n- rules:\n- hosts: [\"admin.toystore.com\"]\nconfigurations:\n- actions:\n- generic_key:\ndescriptor_key: admin\ndescriptor_value: \"1\"\n
  • When a request for api.toystore.com hits the filter, the descriptors generated would be:

descriptor 1

(\"toystore-app\", \"1\")\n
descriptor 2
(\"api\", \"1\")\n

  • When a request for admin.toystore.com hits the filter, the descriptors generated would be:

descriptor 1

(\"toystore-app\", \"1\")\n
descriptor 2
(\"admin\", \"1\")\n

  • When a request for other.toystore.com hits the filter, the descriptors generated would be:

descriptor 1
    (\"toystore-app\", \"1\")\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/","title":"Authenticated Rate Limiting for Application Developers","text":"

This user guide walks you through an example of how to configure authenticated rate limiting for an application using Kuadrant.

Authenticated rate limiting rate-limits the traffic directed to an application based on attributes of the client user, who is authenticated by some authentication method. A few examples of authenticated rate limiting use cases are:

  • User A can send up to 50rps (\"requests per second\"), while User B can send up to 100rps.
  • Each user can send up to 20rpm (\"requests per minute\").
  • Admin users (members of the 'admin' group) can send up to 100rps, while regular users (non-admins) can send up to 20rpm and no more than 5rps.

In this guide, we will rate limit a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request. The API exposes an endpoint at GET http://api.toystore.com/toy, to mimic an operation of reading toy records.

We will define 2 users of the API, who can send requests to the API at different rates based on their user IDs. The authentication method used is API key.

User ID  Rate limit
alice    5rp10s (\"5 requests every 10 seconds\")
bob      2rp10s (\"2 requests every 10 seconds\")

"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#run-the-steps-1-4","title":"Run the steps \u2460 \u2192 \u2463","text":""},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#1-setup","title":"\u2460 Setup","text":"

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, where it installs Istio, the Kubernetes Gateway API, and Kuadrant itself.

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

Clone the project:

git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator\n

Set up the environment:

make local-setup\n

Request an instance of Kuadrant:

kubectl -n kuadrant-system apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: Kuadrant\nmetadata:\n  name: kuadrant\nspec: {}\nEOF\n
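
Optionally, wait for the Kuadrant instance to report readiness before moving on. This is a sketch, assuming the Kuadrant custom resource exposes a Ready status condition (condition names may vary across versions):

kubectl -n kuadrant-system wait --for=condition=Ready kuadrant/kuadrant --timeout=300s\n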
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#2-deploy-the-toy-store-api","title":"\u2461 Deploy the Toy Store API","text":"

Create the deployment:

kubectl apply -f examples/toystore/toystore.yaml\n

Create an HTTPRoute to route traffic to the service via the Istio Ingress Gateway:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: toystore\nspec:\n  parentRefs:\n  - name: istio-ingressgateway\n    namespace: istio-system\n  hostnames:\n  - api.toystore.com\n  rules:\n  - matches:\n    - path:\n        type: Exact\n        value: \"/toy\"\n      method: GET\n    backendRefs:\n    - name: toystore\n      port: 80\nEOF\n

Verify the route works:

curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 200 OK\n

Note: If the command above fails to hit the Toy Store API in your environment, try forwarding requests to the service:

kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &\n
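
Since the HTTPRoute above matches only GET /toy exactly, you can also confirm that requests to other paths are not routed. This is an illustrative check, assuming the gateway answers 404 for unmatched requests:

curl -H 'Host: api.toystore.com' http://localhost:9080/other -i\n# HTTP/1.1 404 Not Found\n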
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#3-enforce-authentication-on-requests-to-the-toy-store-api","title":"\u2462 Enforce authentication on requests to the Toy Store API","text":"

Create a Kuadrant AuthPolicy to configure the authentication:

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: AuthPolicy\nmetadata:\n  name: toystore\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: toystore\n  rules:\n  - paths: [\"/toy\"]\n  authScheme:\n    identity:\n    - name: api-key-users\n      apiKey:\n        selector:\n          matchLabels:\n            app: toystore\n        allNamespaces: true\n      credentials:\n        in: authorization_header\n        keySelector: APIKEY\n    response:\n    - name: identity\n      json:\n        properties:\n        - name: userid\n          valueFrom:\n            authJSON: auth.identity.metadata.annotations.secret\\.kuadrant\\.io/user-id\n      wrapper: envoyDynamicMetadata\nEOF\n

Verify the authentication works by sending a request to the Toy Store API without an API key:

curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: APIKEY realm=\"api-key-users\"\n# x-ext-auth-reason: \"credential not found\"\n

Create API keys for users alice and bob to authenticate:

Note: Kuadrant stores API keys as Kubernetes Secret resources. User metadata can be stored in the annotations of the resource.

kubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: bob-key\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    app: toystore\n  annotations:\n    secret.kuadrant.io/user-id: bob\nstringData:\n  api_key: IAMBOB\ntype: Opaque\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: alice-key\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    app: toystore\n  annotations:\n    secret.kuadrant.io/user-id: alice\nstringData:\n  api_key: IAMALICE\ntype: Opaque\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#4-enforce-authenticated-rate-limiting-on-requests-to-the-toy-store-api","title":"\u2463 Enforce authenticated rate limiting on requests to the Toy Store API","text":"

Create a Kuadrant RateLimitPolicy to configure rate limiting:

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\n  name: toystore\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: toystore\n  limits:\n    \"alice-limit\":\n      rates:\n      - limit: 5\n        duration: 10\n        unit: second\n      when:\n      - selector: metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.identity.userid\n        operator: eq\n        value: alice\n    \"bob-limit\":\n      rates:\n      - limit: 2\n        duration: 10\n        unit: second\n      when:\n      - selector: metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.identity.userid\n        operator: eq\n        value: bob\nEOF\n

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.
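
One way to check whether the policy has been reconciled is to inspect its status conditions (a sketch; the exact condition names may differ between Kuadrant versions):

kubectl get ratelimitpolicy/toystore -o jsonpath='{.status.conditions}'\n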

Verify the rate limiting works by sending requests as Alice and Bob.

Up to 5 successful (200 OK) requests every 10 seconds allowed for Alice, then 429 Too Many Requests:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n

Up to 2 successful (200 OK) requests every 10 seconds allowed for Bob, then 429 Too Many Requests:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/#cleanup","title":"Cleanup","text":"
make local-cleanup\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/","title":"Authenticated Rate Limiting with JWTs and Kubernetes RBAC","text":"

This user guide walks you through an example of how to use Kuadrant to protect an application with policies to enforce:

  • authentication based on OpenID Connect (OIDC) ID tokens (signed JWTs), issued by a Keycloak server;

  • an alternative authentication method based on Kubernetes Service Account tokens;

  • authorization delegated to the Kubernetes RBAC system;

  • rate limiting by user ID.

In this example, we will protect a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request.

The API listens to requests at the hostnames *.toystore.com, where it exposes the endpoints GET /toy*, POST /admin/toy and DELETE /admin/toy, respectively, to mimic operations of reading, creating, and deleting toy records.

Any authenticated user/service account can send requests to the Toy Store API, by providing either a valid Keycloak-issued access token or a Kubernetes token.

Privileges to execute the requested operation (read, create or delete) will be granted according to the following RBAC rules, stored in the Kubernetes authorization system:

Operation  Endpoint           Required role
Read       GET /toy*          toystore-reader
Create     POST /admin/toy    toystore-writer
Delete     DELETE /admin/toy  toystore-writer

Each user will be entitled to a maximum of 5rp10s (5 requests every 10 seconds).

"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#requirements","title":"Requirements","text":"
  • Docker
  • kubectl command-line tool
  • jq
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#run-the-guide-1-6","title":"Run the guide \u2460 \u2192 \u2465","text":""},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#1-setup-a-cluster-with-kuadrant","title":"\u2460 Setup a cluster with Kuadrant","text":"

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, where it installs Istio, the Kubernetes Gateway API, and Kuadrant itself.

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

Clone the project:

git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator\n

Set up the environment:

make local-setup\n

Request an instance of Kuadrant:

kubectl -n kuadrant-system apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: Kuadrant\nmetadata:\n  name: kuadrant\nspec: {}\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#2-deploy-the-toy-store-api","title":"\u2461 Deploy the Toy Store API","text":"

Deploy the application in the default namespace:

kubectl apply -f examples/toystore/toystore.yaml\n

Route traffic to the application:

kubectl apply -f examples/toystore/httproute.yaml\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#api-lifecycle","title":"API lifecycle","text":""},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#try-the-api-unprotected","title":"Try the API unprotected","text":"
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 200 OK\n

It should return 200 OK.

Note: If the command above fails to hit the Toy Store API in your environment, try forwarding requests to the service:

kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#3-deploy-keycloak","title":"\u2462 Deploy Keycloak","text":"

Create the namespace:

kubectl create namespace keycloak\n

Deploy Keycloak with a bootstrap realm, users, and clients:

kubectl apply -n keycloak -f https://raw.githubusercontent.com/Kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Note: The Keycloak server may take a couple of minutes to be ready.
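
One way to block until the server is up, assuming the manifest above creates a Deployment named keycloak (an assumption about that example manifest):

kubectl -n keycloak wait --for=condition=Available deployment/keycloak --timeout=300s\n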

"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#4-enforce-authentication-and-authorization-for-the-toy-store-api","title":"\u2463 Enforce authentication and authorization for the Toy Store API","text":"

Create a Kuadrant AuthPolicy to configure authentication and authorization:

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: AuthPolicy\nmetadata:\n  name: toystore-protection\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: toystore\n  authScheme:\n    identity:\n    - name: keycloak-users\n      oidc:\n        endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n    - name: k8s-service-accounts\n      kubernetes:\n        audiences:\n        - https://kubernetes.default.svc.cluster.local\n      extendedProperties:\n      - name: sub\n        valueFrom:\n          authJSON: auth.identity.user.username\n    authorization:\n    - name: k8s-rbac\n      kubernetes:\n        user:\n          valueFrom:\n            authJSON: auth.identity.sub\n    response:\n    - name: identity\n      json:\n        properties:\n        - name: userid\n          valueFrom:\n            authJSON: auth.identity.sub\n      wrapper: envoyDynamicMetadata\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#try-the-api-missing-authentication","title":"Try the API missing authentication","text":"
curl -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"keycloak-users\"\n# www-authenticate: Bearer realm=\"k8s-service-accounts\"\n# x-ext-auth-reason: {\"k8s-service-accounts\":\"credential not found\",\"keycloak-users\":\"credential not found\"}\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#try-the-api-without-permission","title":"Try the API without permission","text":"

Obtain an access token with the Keycloak server:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n
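
If you want to inspect the claims of the token (e.g. the sub claim, used later in this guide to bind RBAC roles), you can decode its payload with the same jq expression used further below:

jq -R 'split(\".\") | .[1] | @base64d | fromjson' <<< \"$ACCESS_TOKEN\"\n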

Send a request to the API as the Keycloak-authenticated user while still missing permissions:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 403 Forbidden\n

Create a Kubernetes Service Account to represent a consumer of the API associated with the alternative source of identities k8s-service-accounts:

kubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: client-app-1\nEOF\n

Obtain an access token for the client-app-1 service account:

SA_TOKEN=$(kubectl create token client-app-1)\n

Send a request to the API as the service account while still missing permissions:

curl -H \"Authorization: Bearer $SA_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#5-grant-access-to-the-toy-store-api-for-user-and-service-account","title":"\u2464 Grant access to the Toy Store API for user and service account","text":"

Create the toystore-reader and toystore-writer roles:

kubectl apply -f - <<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: toystore-reader\nrules:\n- nonResourceURLs: [\"/toy*\"]\n  verbs: [\"get\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: toystore-writer\nrules:\n- nonResourceURLs: [\"/admin/toy\"]\n  verbs: [\"post\", \"delete\"]\nEOF\n

Add permissions to the user and service account:

User          Kind                         Roles
john          User registered in Keycloak  toystore-reader, toystore-writer
client-app-1  Kubernetes Service Account   toystore-reader
kubectl apply -f - <<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: toystore-readers\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: toystore-reader\nsubjects:\n- kind: User\n  name: $(jq -R -r 'split(\".\") | .[1] | @base64d | fromjson | .sub' <<< \"$ACCESS_TOKEN\")\n- kind: ServiceAccount\n  name: client-app-1\n  namespace: default\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: toystore-writers\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: toystore-writer\nsubjects:\n- kind: User\n  name: $(jq -R -r 'split(\".\") | .[1] | @base64d | fromjson | .sub' <<< \"$ACCESS_TOKEN\")\nEOF\n
Q: Can I use Roles and RoleBindings instead of ClusterRoles and ClusterRoleBindings? Yes, you can. The example above is for non-resource URL Kubernetes roles. To use `Roles` and `RoleBindings` instead of `ClusterRoles` and `ClusterRoleBindings`, i.e. more flexible, resource-based permissions to protect the API, see the spec for [Kubernetes SubjectAccessReview authorization](https://github.com/Kuadrant/authorino/blob/v0.5.0/docs/features.md#kubernetes-subjectaccessreview-authorizationkubernetes) in the Authorino docs."},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#try-the-api-with-permission","title":"Try the API with permission","text":"
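
Before sending requests, you can optionally sanity-check the bindings with kubectl auth can-i, which also works for non-resource URLs. An illustrative check for the service account:

kubectl auth can-i get /toy --as=system:serviceaccount:default:client-app-1\n# yes\nkubectl auth can-i post /admin/toy --as=system:serviceaccount:default:client-app-1\n# no\n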

Send requests to the API as the Keycloak-authenticated user:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 200 OK\n
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -H 'Host: api.toystore.com' -X POST http://localhost:9080/admin/toy -i\n# HTTP/1.1 200 OK\n

Send requests to the API as the Kubernetes service account:

curl -H \"Authorization: Bearer $SA_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy -i\n# HTTP/1.1 200 OK\n
curl -H \"Authorization: Bearer $SA_TOKEN\" -H 'Host: api.toystore.com' -X POST http://localhost:9080/admin/toy -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#6-enforce-rate-limiting-on-requests-to-the-toy-store-api","title":"\u2465 Enforce rate limiting on requests to the Toy Store API","text":"

Create a Kuadrant RateLimitPolicy to configure rate limiting:

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\n  name: toystore\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: toystore\n  limits:\n    \"per-user\":\n      rates:\n      - limit: 5\n        duration: 10\n        unit: second\n      counters:\n      - metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.identity.userid\nEOF\n

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#try-the-api-rate-limited","title":"Try the API rate limited","text":"

Each user should be entitled to a maximum of 5 requests every 10 seconds.

Note: If the tokens have expired, you may need to refresh them first.

Send requests as the Keycloak-authenticated user:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H \"Authorization: Bearer $ACCESS_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n

Send requests as the Kubernetes service account:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H \"Authorization: Bearer $SA_TOKEN\" -H 'Host: api.toystore.com' http://localhost:9080/toy | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n
"},{"location":"kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/#cleanup","title":"Cleanup","text":"
make local-cleanup\n
"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/","title":"Gateway Rate Limiting for Cluster Operators","text":"

This user guide walks you through an example of how to configure rate limiting for all routes attached to an ingress gateway.

"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#run-the-steps-1-5","title":"Run the steps \u2460 \u2192 \u2464","text":""},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#1-setup","title":"\u2460 Setup","text":"

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, where it installs Istio, the Kubernetes Gateway API, and Kuadrant itself.

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

Clone the project:

git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator\n

Set up the environment:

make local-setup\n

Request an instance of Kuadrant:

kubectl -n kuadrant-system apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: Kuadrant\nmetadata:\n  name: kuadrant\nspec: {}\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#2-create-the-ingress-gateways","title":"\u2461 Create the ingress gateways","text":"
kubectl -n istio-system apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: external\n  annotations:\n    kuadrant.io/namespace: kuadrant-system\n    networking.istio.io/service-type: ClusterIP\nspec:\n  gatewayClassName: istio\n  listeners:\n  - name: external\n    port: 80\n    protocol: HTTP\n    hostname: '*.io'\n    allowedRoutes:\n      namespaces:\n        from: All\n---\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: internal\n  annotations:\n    kuadrant.io/namespace: kuadrant-system\n    networking.istio.io/service-type: ClusterIP\nspec:\n  gatewayClassName: istio\n  listeners:\n  - name: local\n    port: 80\n    protocol: HTTP\n    hostname: '*.local'\n    allowedRoutes:\n      namespaces:\n        from: All\nEOF\n
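
You can confirm both gateways have been provisioned before attaching routes. Illustrative output only; the columns shown (and placeholder addresses) depend on your Gateway API version:

kubectl get gateways -n istio-system\n# NAME       CLASS   ADDRESS   PROGRAMMED   AGE\n# external   istio   <ip>      True         1m\n# internal   istio   <ip>      True         1m\n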
"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#3-enforce-rate-limiting-on-requests-incoming-through-the-external-gateway","title":"\u2462 Enforce rate limiting on requests incoming through the external gateway","text":"
    \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510      \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n    \u2502 (Gateway) \u2502      \u2502 (Gateway) \u2502\n    \u2502  external \u2502      \u2502  internal \u2502\n    \u2502           \u2502      \u2502           \u2502\n    \u2502   *.io    \u2502      \u2502  *.local  \u2502\n    \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518      \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n          \u25b2\n          \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 (RateLimitPolicy) \u2502\n\u2502       gw-rlp      \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

Create a Kuadrant RateLimitPolicy to configure rate limiting:

kubectl apply -n istio-system -f - <<EOF\napiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\n  name: gw-rlp\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: Gateway\n    name: external\n  limits:\n    \"global\":\n      rates:\n      - limit: 5\n        duration: 10\n        unit: second\nEOF\n

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#4-deploy-a-sample-api-to-test-rate-limiting-enforced-at-the-level-of-the-gateway","title":"\u2463 Deploy a sample API to test rate limiting enforced at the level of the gateway","text":"
                           \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510      \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510      \u2502 (Gateway) \u2502      \u2502 (Gateway) \u2502\n\u2502 (RateLimitPolicy) \u2502      \u2502  external \u2502      \u2502  internal \u2502\n\u2502       gw-rlp      \u251c\u2500\u2500\u2500\u2500\u2500\u25ba\u2502           \u2502      \u2502           \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518      \u2502   *.io    \u2502      \u2502  *.local  \u2502\n                           \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2518      \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2518\n                                 \u2502                  \u2502\n                                 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                           \u2502\n                                 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                                 \u2502   (HTTPRoute)    \u2502\n                                 \u2502     toystore     \u2502\n                                 \u2502                  \u2502\n                                 \u2502 *.toystore.io    \u2502\n                                 \u2502 *.toystore.local \u2502\n                                 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                          \u2502\n                                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                                   \u2502   (Service)  \u2502\n                                   \u2502   toystore   \u2502\n                                   \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

Deploy the sample API:

kubectl apply -f examples/toystore/toystore.yaml\n

Route traffic to the API from both gateways:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: toystore\nspec:\n  parentRefs:\n  - name: external\n    namespace: istio-system\n  - name: internal\n    namespace: istio-system\n  hostnames:\n  - \"*.toystore.io\"\n  - \"*.toystore.local\"\n  rules:\n  - backendRefs:\n    - name: toystore\n      port: 80\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#5-verify-the-rate-limiting-works-by-sending-requests-in-a-loop","title":"\u2464 Verify the rate limiting works by sending requests in a loop","text":"

Expose the gateways at ports 9081 and 9082 of the local host, respectively:

kubectl port-forward -n istio-system service/external-istio 9081:80 2>&1 >/dev/null &\nkubectl port-forward -n istio-system service/internal-istio 9082:80 2>&1 >/dev/null &\n

Up to 5 successful (200 OK) requests every 10 seconds through the external ingress gateway (*.io), then 429 Too Many Requests:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.io' http://localhost:9081 | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n

Unlimited successful (200 OK) requests through the internal ingress gateway (*.local):

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.local' http://localhost:9082 | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n
"},{"location":"kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/#cleanup","title":"Cleanup","text":"
make local-cleanup\n
"},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/","title":"Simple Rate Limiting for Application Developers","text":"

This user guide walks you through an example of how to configure rate limiting for an endpoint of an application using Kuadrant.

In this guide, we will rate limit a sample REST API called Toy Store. In reality, this API is just an echo service that echoes back to the user whatever attributes it gets in the request. The API listens to requests at the hostname api.toystore.com, where it exposes the endpoints GET /toys* and POST /toys, respectively, to mimic operations of reading and writing toy records.

We will rate limit the POST /toys endpoint to a maximum of 5rp10s (\"5 requests every 10 seconds\").

"},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/#run-the-steps-1-3","title":"Run the steps \u2460 \u2192 \u2462","text":""},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/#1-setup","title":"\u2460 Setup","text":"

This step uses tooling from the Kuadrant Operator component to create a containerized Kubernetes server locally using Kind, where it installs Istio, the Kubernetes Gateway API, and Kuadrant itself.

Note: In a production environment, these steps are usually performed by a cluster operator with administrator privileges over the Kubernetes cluster.

Clone the project:

git clone https://github.com/Kuadrant/kuadrant-operator && cd kuadrant-operator\n

Set up the environment:

make local-setup\n

Request an instance of Kuadrant:

kubectl -n kuadrant-system apply -f - <<EOF\napiVersion: kuadrant.io/v1beta1\nkind: Kuadrant\nmetadata:\n  name: kuadrant\nspec: {}\nEOF\n
"},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/#2-deploy-the-toy-store-api","title":"\u2461 Deploy the Toy Store API","text":"

Create the deployment:

kubectl apply -f examples/toystore/toystore.yaml\n

Create an HTTPRoute to route traffic to the service via the Istio Ingress Gateway:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: toystore\nspec:\n  parentRefs:\n  - name: istio-ingressgateway\n    namespace: istio-system\n  hostnames:\n  - api.toystore.com\n  rules:\n  - matches:\n    - method: GET\n      path:\n        type: PathPrefix\n        value: \"/toys\"\n    backendRefs:\n    - name: toystore\n      port: 80\n  - matches: # it has to be a separate HTTPRouteRule so we do not rate limit other endpoints\n    - method: POST\n      path:\n        type: Exact\n        value: \"/toys\"\n    backendRefs:\n    - name: toystore\n      port: 80\nEOF\n

Verify the route works:

curl -H 'Host: api.toystore.com' http://localhost:9080/toys -i\n# HTTP/1.1 200 OK\n

Note: If the command above fails to hit the Toy Store API in your environment, try forwarding requests to the service:

kubectl port-forward -n istio-system service/istio-ingressgateway 9080:80 2>&1 >/dev/null &\n
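
You can likewise verify the POST /toys endpoint, which we will rate limit in the next step:

curl -H 'Host: api.toystore.com' http://localhost:9080/toys -X POST -i\n# HTTP/1.1 200 OK\n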
"},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/#3-enforce-rate-limiting-on-requests-to-the-toy-store-api","title":"\u2462 Enforce rate limiting on requests to the Toy Store API","text":"

Create a Kuadrant RateLimitPolicy to configure rate limiting:

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1beta2\nkind: RateLimitPolicy\nmetadata:\n  name: toystore\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: toystore\n  limits:\n    \"create-toy\":\n      rates:\n      - limit: 5\n        duration: 10\n        unit: second\n      routeSelectors:\n      - matches: # selects the 2nd HTTPRouteRule of the targeted route\n        - method: POST\n          path:\n            type: Exact\n            value: \"/toys\"\nEOF\n

Note: It may take a couple of minutes for the RateLimitPolicy to be applied depending on your cluster.

Verify the rate limiting works by sending requests in a loop.

Up to 5 successful (200 OK) requests every 10 seconds to POST /toys, then 429 Too Many Requests:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.com' http://localhost:9080/toys -X POST | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n

Unlimited successful (200 OK) requests to GET /toys:

while :; do curl --write-out '%{http_code}' --silent --output /dev/null -H 'Host: api.toystore.com' http://localhost:9080/toys | egrep --color \"\\b(429)\\b|$\"; sleep 1; done\n
"},{"location":"kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/#cleanup","title":"Cleanup","text":"
make local-cleanup\n
"},{"location":"authorino/","title":"Authorino","text":"

Kubernetes-native authorization service for tailor-made Zero Trust API security.

A lightweight Envoy external authorization server fully manageable via Kubernetes Custom Resources. JWT authentication, API key, mTLS, pattern-matching authz, OPA, K8s SA tokens, K8s RBAC, external metadata fetching, and more, with minimal to no coding at all and no rebuilding of your applications.

Authorino is not about inventing anything new. It's about making the best things about auth out there easy and simple to use. Authorino is multi-tenant, it's cloud-native and it's open source.

"},{"location":"authorino/#table-of-contents","title":"Table of contents","text":"
  • Getting started
  • Use-cases
  • How it works
  • List of features
  • Documentation
  • FAQ
  • Benchmarks
  • Contributing
"},{"location":"authorino/#getting-started","title":"Getting started","text":"
  1. Deploy with the Authorino Operator
  2. Setup Envoy proxy and the external authorization filter
  3. Apply an Authorino AuthConfig custom resource
  4. Obtain an authentication token and start sending requests

The full Getting started page of the docs provides details for the steps above, as well as information about requirements and next steps.

Or try out our Hello World example.

For general information about protecting your service using Authorino, check out the docs.

"},{"location":"authorino/#use-cases","title":"Use-cases","text":"

The User guides section of the docs gathers several AuthN/AuthZ use-cases as well as the instructions to implement them using Authorino. A few examples are:

  • Authentication with JWTs and OpenID Connect Discovery
  • Authentication with API keys
  • Authentication with Kubernetes SA tokens (TokenReview API)
  • Authentication with X.509 certificates and mTLS
  • Authorization with JSON pattern-matching rules (e.g. JWT claims, request attributes, etc)
  • Authorization with Open Policy Agent (OPA) Rego policies
  • Authorization using the Kubernetes RBAC (rules stated in K8s Role and RoleBinding resources)
  • Authorization using auth metadata fetched from external sources
  • OIDC authentication and RBAC with Keycloak JWTs
  • Injecting auth data into the request (HTTP headers, Wristband tokens, rate-limit metadata, etc)
  • Authorino for the Kubernetes control plane (aka Authorino as ValidatingWebhook service)
"},{"location":"authorino/#how-it-works","title":"How it works","text":"

Authorino enables hybrid API security, with usually no code changes required to your application, tailor-made for your own combination of authentication standards and protocols and authorization policies of choice.

Authorino implements Envoy Proxy's external authorization gRPC protocol, and is part of the Red Hat Kuadrant architecture.

Under the hood, Authorino is based on Kubernetes Custom Resource Definitions and the Operator pattern.

Bootstrap and configuration:

  1. Deploy the service/API to be protected (\"Upstream\"), Authorino and Envoy
  2. Write and apply an Authorino AuthConfig Custom Resource associated to the public host of the service

Request-time:

  1. A user or service account (\"Consumer\") obtains an access token to consume resources of the Upstream service, and sends a request to the Envoy ingress endpoint
  2. The Envoy proxy establishes a fast gRPC connection with Authorino carrying data of the HTTP request (context info), which causes Authorino to look up an AuthConfig Custom Resource to enforce (pre-cached)
  3. Identity verification (authentication) phase - Authorino verifies the identity of the consumer, where at least one authentication method/identity provider must succeed
  4. External metadata phase - Authorino fetches additional metadata for the authorization from external sources (optional)
  5. Policy enforcement (authorization) phase - Authorino takes as input a JSON composed out of context data, resolved identity object and fetched additional metadata from previous phases, and triggers the evaluation of user-defined authorization policies
  6. Response (metadata-out) phase \u2013 Authorino builds user-defined custom responses (dynamic JSON objects and/or Festival Wristband OIDC tokens), to be supplied back to the client and/or upstream service within added HTTP headers or as Envoy Dynamic Metadata (optional)
  7. Callbacks phase \u2013 Authorino sends callbacks to specified HTTP endpoints (optional)
  8. Authorino and Envoy settle the authorization protocol with either OK/NOK response
  9. If authorized, Envoy triggers other HTTP filters in the chain (if any), pre-injecting any dynamic metadata returned by Authorino, and ultimately redirects the request to the Upstream
  10. The Upstream serves the requested resource to the consumer
More The [Architecture](./docs/architecture.md) section of the docs covers details of protecting your APIs with Envoy and Authorino, including information about topology (centralized gateway, centralized authorization service or sidecars), deployment modes (cluster-wide reconciliation vs. namespaced instances), a specification of Authorino's [`AuthConfig`](./docs/architecture.md#the-authorino-authconfig-custom-resource-definition-crd) Custom Resource Definition (CRD) and more. You will also find in that section information about what happens in request-time (aka Authorino's [Auth Pipeline](./docs/architecture.md#the-auth-pipeline-aka-enforcing-protection-in-request-time)) and how to leverage the [Authorization JSON](./docs/architecture.md#the-authorization-json) for writing policies, dynamic responses and other features of Authorino."},{"location":"authorino/#list-of-features","title":"List of features","text":"

Identity verification & authentication
  • JOSE/JWT validation (OpenID Connect): Ready
  • OAuth 2.0 Token Introspection (opaque tokens): Ready
  • Kubernetes TokenReview (SA tokens): Ready
  • OpenShift User-echo endpoint: In analysis
  • API key authentication: Ready
  • mTLS authentication: Ready
  • HMAC authentication: Planned (#9)
  • Plain (resolved beforehand and injected in the payload): Ready
  • Anonymous access: Ready

Ad hoc external metadata fetching
  • OpenID Connect User Info: Ready
  • UMA-protected resource attributes: Ready
  • HTTP GET/GET-by-POST: Ready

Policy enforcement/authorization
  • JSON pattern matching (e.g. JWT claims, request attributes checking): Ready
  • OPA/Rego policies (inline and pull from registry): Ready
  • Kubernetes SubjectAccessReview (resource and non-resource attributes): Ready
  • Authzed/SpiceDB: Ready
  • Keycloak Authorization Services (UMA-compliant Authorization API): In analysis

Custom responses
  • Festival Wristbands tokens (token normalization, Edge Authentication Architecture): Ready
  • JSON injection (header injection, Envoy Dynamic Metadata): Ready
  • Plain text value (header injection): Ready
  • Custom response status code/messages (e.g. redirect): Ready

Callbacks
  • HTTP endpoints: Ready

Caching
  • OpenID Connect and User-Managed Access configs: Ready
  • JSON Web Keys (JWKs) and JSON Web Key Sets (JWKS): Ready
  • Access tokens: Ready
  • External metadata: Ready
  • Precompiled Rego policies: Ready
  • Policy evaluation: Ready

Sharding
  • Sharding (lookup performance, multitenancy): Ready

For a detailed description of the features above, refer to the Features page.

"},{"location":"authorino/#faq","title":"FAQ","text":"Do I need to deploy Envoy? Authorino is built from the ground up to work well with Envoy. It is strongly recommended that you leverage Envoy along side Authorino. That said, it is possible to use Authorino without Envoy. Authorino implements Envoy's [external authorization](https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/ext_authz) gRPC protocol and therefore will accept any client request that complies. Authorino also provides a second interface for [raw HTTP authorization](./docs/architecture.md#raw-http-authorization-interface), suitable for using with Kubernetes ValidatingWebhook and other integrations (e.g. other proxies). The only attribute of the authorization request that is strictly required is the host name. (See [Host lookup](./docs/architecture.md#host-lookup) for more information.) The other attributes, such as method, path, headers, etc, might as well be required, depending on each `AuthConfig`. In the case of the gRPC [`CheckRequest`](https://pkg.go.dev/github.com/envoyproxy/go-control-plane/envoy/service/auth/v3?utm_source=gopls#CheckRequest) method, the host is supplied in `Attributes.Request.Http.Host` and alternatively in `Attributes.ContextExtensions[\"host\"]`. For raw HTTP authorization requests, the host must be supplied in `Host` HTTP header. Check out [Kuadrant](https://github.com/kuadrant/kuadrant-controller) for easy-to-use Envoy and Authorino deployment & configuration for API management use-cases, using Kubernetes Custom Resources. Is Authorino an Identity Provider (IdP)? No, Authorino is not an Identity Provider (IdP). Neither it is an auth server of any kind, such as an OAuth2 server, an OpenID Connect (OIDC) server, a Single Sign On (SSO) server. Authorino is not an identity broker either. It can verify access tokens from multiple trusted sources of identity and protocols, but it will not negotiate authentication flows for non-authenticated access requests. Some tricks nonetheless can be done, for example, to [redirect unauthenticated users to a login page](./docs/user-guides/deny-with-redirect-to-login.md). For an excellent auth server that checks all the boxes above, check out [Keycloak](https://www.keycloak.org). How does Authorino compare to Keycloak? Keycloak is a proper auth server and identity provider (IdP). It offers a huge set of features for managing identities, identity sources with multiple user federation options, and a platform for authentication and authorization services. Keycloak exposes authenticators that implement protocols such as OpenID Connect. The is a one-time flow that establishes the delegation of power to a client, for a short period of time. To be consistent with Zero Trust security, you want a validator to verify the short-lived tokens in every request that tries to reach your protected service/resource. This step that will repeat everytime could save heavy looking up into big tables of tokens and leverage cached authorization policies for fast in-memory evaluation. This is where Authorino comes in. Authorino verifies and validates Keycloak-issued ID tokens. OpenID Connect Discovery is used to request and cache JSON Web Key Sets (JWKS), used to verify the signature of the tokens without having to contact again with the Keycloak server, or looking in a table of credentials. Moreover, user long-lived credentials are safe, rather than spread in hops across the network. You can also use Keycloak for storing auth-relevant resource metadata. 
These can be fetched by Authorino in request-time, to be combined into your authorization policies. See Keycloak Authorization Services and User-Managed Access (UMA) support, as well as Authorino's [UMA external metadata](./docs/features.md#user-managed-access-uma-resource-registry-metadatauma) counterpart. Why doesn't Authorino handle OAuth flows? It has to do with trust. OAuth grants are supposed to be negotiated directly between whoever owns the long-lived credentials on one hand (user, service accounts), and the trustworthy auth server that receives those credentials \u2013 ideally with a minimum number of hops in the middle \u2013 and exchanges them for short-lived access tokens, on the other end. There are use-cases for Authorino running at the edge (e.g. Edge Authentication Architecture and token normalization), but in most cases Authorino should be seen as a last-mile component that provides decoupled identity verification and authorization policy enforcement to protected services in request-time. In this sense, the OAuth grant is a pre-flight exchange that happens once and as direct and safe as possible, whereas auth enforcement is kept lightweight and efficient. Where does Authorino store users and roles? Authorino does not store users, roles, role bindings, access control lists, or any raw authorization data. Authorino handles policies, and even these policies can be stored elsewhere (as opposed to stated inline inside of an Authorino `AuthConfig` CR). Authorino evaluates policies for stateless authorization requests. Any additional context is either resolved from the provided payload or from static definitions inside the policies. That includes extracting user information from a JWT or client TLS certificate, requesting user metadata from opaque authentication tokens (e.g. API keys) to the trusted sources actually storing that content, obtaining synchronous HTTP metadata from services, etc. In the case of authentication with API keys, as well as its derivative to model HTTP Basic Auth, user data are stored in Kubernetes `Secret`s. The secret's keys, annotations and labels are usually the structures used to organize the data that a policy evaluated in Authorino may later require. Strictly, those are not Authorino data structures. Can't I just use Envoy JWT Authentication and RBAC filters? Envoy's [JWT Authentication](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/jwt_authn/v3/config.proto.html) works much like Authorino's [JOSE/JWT verification and validation for OpenID Connect](./docs/features.md#openid-connect-oidc-jwtjose-verification-and-validation-identityoidc). In both cases, the JSON Web Key Sets (JWKS) to verify the JWTs are auto-loaded and cached to be used in request-time. Moreover, you can configure details such as where to extract the JWT from the HTTP request (header, param or cookie) and do some cool tricks regarding how dynamic metadata based on JWT claims can be injected into consecutive filters in the chain. However, in terms of authorization, while Envoy's implementation essentially allows checking the list of audiences (`aud` JWT claim), Authorino opens up a lot more options, such as pattern-matching rules with operators and conditionals, built-in OPA and other methods of evaluating authorization policies. Authorino also allows combining JWT authentication with other types of authentication to support different sources of identity and groups of users such as API keys, Kubernetes tokens, OAuth opaque tokens, etc. 
In summary, Envoy's JWT Authentication and Envoy RBAC filter are excellent solutions for simple use-cases where JWTs from a single issuer are the only authentication method you are planning to support and limited or no authorization rules suffice. On the other hand, if you need to integrate more identity sources, different types of authentication, authorization policies, etc, you might want to consider Authorino. Should I use Authorino if I already have Istio configured? Istio is a great solution for managing service meshes. It delivers an excellent platform with an interesting layer of abstraction on top of Envoy proxy's virtual omnipresence within the mesh. There are lots of similarities, but also complementarity, between Authorino and Istio, and [Istio Authorization](https://istio.io/latest/docs/concepts/security/#authorization) in particular. Istio provides a simple way to enable features that are, in many cases, features of Envoy, such as authorization based on JWTs, authorization based on attributes of the request, and activation of external authorization services, without having to deal with complex Envoy config files. See [Kuadrant](https://github.com/kuadrant/kuadrant-controller) for a similar approach, nonetheless leveraging features of Istio as well. Authorino is an Envoy-compatible external authorization service. One can use Authorino with or without Istio. In particular, [Istio Authorization Policies](https://istio.io/latest/docs/reference/config/security/authorization-policy/) can be seen, in terms of functionality and expressiveness, as a subset of one type of authorization policies supported by Authorino, the [JSON pattern-matching authorization](./docs/features.md#json-pattern-matching-authorization-rules-authorizationjson) policies. While Istio is heavily focused on specific use cases of API Management, offering a relatively limited list of [supported attribute conditions](https://istio.io/latest/docs/reference/config/security/conditions/), Authorino is more generic, allowing one to express authorization rules for a wider spectrum of use cases \u2013 ACLs, RBAC, ABAC, etc, pretty much counting on any attribute of the Envoy payload, identity object and external metadata available. Authorino also provides built-in OPA authorization, several other methods of authentication and identity verification (e.g. Kubernetes token validation, API key-based authentication, OAuth token introspection, OIDC-discoverable JWT verification, etc), and features like fetching of external metadata (HTTP services, OIDC userinfo, UMA resource data), token normalization, wristband tokens and dynamic responses. These all can be used independently or combined, in a simple and straightforward Kubernetes-native fashion. In summary, one might value Authorino when looking for a policy enforcer that offers: 1. multiple supported methods and protocols for rather hybrid authentication, encompassing future and legacy auth needs; 2. broader expressiveness and more functionalities for the authorization rules; 3. authentication and authorization in one single declarative manifest; 4. capability to fetch auth metadata from external sources on-the-fly; 5. built-in OPA module; 6. easy token normalization and/or aiming for Edge Authentication Architecture (EAA). The good news is that, if you have Istio configured, then you have Envoy and the whole platform for wiring Authorino up if you want to. \ud83d\ude09 Do I have to learn OPA/Rego language to use Authorino? No, you do not. 
However, if you are comfortable with [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/) from Open Policy Agent (OPA), there are some quite interesting things you can do in Authorino, just as you would in any OPA server or OPA plugin, but leveraging Authorino's [built-in OPA module](./docs/features.md#open-policy-agent-opa-rego-policies-authorizationopa) instead. Authorino's OPA module is compiled as part of Authorino's code directly from the Golang packages, and imposes no extra latency to the evaluation of your authorization policies. Even the policies themselves are pre-compiled in reconciliation-time, for fast evaluation afterwards, in request-time. On the other hand, if you do not want to learn Rego or in any case would like to combine it with declarative and Kubernetes-native authN/authZ spec for your services, Authorino does complement OPA with at least two other methods for expressing authorization policies \u2013 i.e. [JSON pattern-matching authorization rules](./docs/features.md#json-pattern-matching-authorization-rules-authorizationjson) and [Kubernetes SubjectAccessReview](./docs/features.md#kubernetes-subjectaccessreview-authorizationkubernetes), the latter allowing you to rely completely on the Kubernetes RBAC. You can break down, mix, and combine these methods and technologies in as many authorization policies as you want, potentially applying them according to specific conditions. Authorino will trigger the evaluation of concurrent policies in parallel, aborting the context if any of the processes denies access. Authorino also packages well-established industry standards and protocols for identity verification (JOSE/JWT validation, OAuth token introspection, Kubernetes TokenReview) and ad-hoc request-time metadata fetching (OIDC userinfo, User-Managed Access (UMA)), and corresponding layers of caching, without which such functionalities would have to be implemented by code. Can I use Authorino to protect non-REST APIs? Yes, you can. In principle, the API format (REST, gRPC, GraphQL, etc) should not matter for the authN/authZ enforcer. There are a couple of points to consider though. While REST APIs are designed in a way that, in most cases, the information needed for the evaluation of authorization policies is available in the metadata of the HTTP request (method, path, headers), other API formats quite often will require processing of the HTTP body. By default, Envoy's external authorization HTTP filter will not forward the body of the request to Authorino; to change that, enable the `with_request_body` option in the Envoy configuration for the external authorization filter. E.g.:
with_request_body:\nmax_request_bytes: 1024\nallow_partial_message: true\npack_as_bytes: true\n
Additionally, when enabling the request body passed in the payload to Authorino, parsing of the content should be of concern as well. Authorino provides easy access to attributes of the HTTP request, parsed as part of the [Authorization JSON](./docs/architecture.md#the-authorization-json); however, the body of the request is passed as a string and should be parsed by the user according to each case. Check out Authorino [OPA authorization](./docs/features.md#open-policy-agent-opa-rego-policies-authorizationopa) and the Rego [Encoding](https://www.openpolicyagent.org/docs/latest/policy-reference/#encoding) functions for options to parse serialized JSON, YAML and URL-encoded params. For XML transformation, an external parsing service connected via Authorino's [HTTP GET/GET-by-POST external metadata](./docs/features.md#http-getget-by-post-metadatahttp) might be required. Can I run Authorino other than on Kubernetes? As of today, no, you cannot, or at least it wouldn't suit production requirements. Do I have to be admin of the cluster to install Authorino? To install the Authorino Custom Resource Definition (CRD) and to define cluster roles required by the Authorino service, admin privilege to the Kubernetes cluster is required. This step happens only once per cluster and is usually equivalent to installing the [Authorino Operator](https://github.com/kuadrant/authorino-operator). Thereafter, deploying instances of the Authorino service and applying `AuthConfig` custom resources to a namespace depend on the permissions set by the cluster administrator \u2013 either directly by editing the bindings in the cluster's RBAC, or via options of the operator. In most cases, developers will be granted permissions to create and manage `AuthConfig`s, and sometimes to deploy their own instances of Authorino. Is it OK to store AuthN/AuthZ configs as Kubernetes objects? Authorino's API checks all the bullets to be [aggregated to the Kubernetes cluster APIs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#should-i-add-a-custom-resource-to-my-kubernetes-cluster), and therefore using Custom Resource Definitions (CRD) and the [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator) has always been an easy design decision. By merging the definitions of service authN/authZ into the control plane, Authorino `AuthConfig` resources can be thought of as extensions of the specs of the desired state of services regarding data flow security. The Authorino custom controllers, built into the authorization service, are the agents that read from that desired state and reconcile the processes operating in the data plane. Authorino is declarative and seamless for developers and cluster administrators managing the state of security of the applications running in the server, who are used to tools such as `kubectl`, the Kubernetes UI and its dashboards. Instead of learning about yet another configuration API format, Authorino users can jump straight to applying and editing YAML or JSON structures they already know, in a way that things such as `spec`, `status`, `namespace` and `labels` have the meaning they are expected to have, and docs are as close as `kubectl explain`. Moreover, Authorino does not pile up any other redundant layers of APIs, event-processing, RBAC, transformation and validation webhooks, etc. It is Kubernetes at its best. 
In terms of scale, Authorino `AuthConfig`s should grow proportionally to the number of protected services, virtually limited by nothing but the Kubernetes API data storage, while [namespace division](./docs/architecture.md#cluster-wide-vs-namespaced-instances) and [label selectors](./docs/architecture.md#sharding) help scale horizontally and keep the load distributed. In other words, there are lots of benefits of using Kubernetes custom resources and custom controllers, and unless you are planning on bursting your server with more services than it can keep record of, it is totally \ud83d\udc4d to store your AuthN/AuthZ configs as cluster API objects. Can I use Authorino for rate limiting? You can, but you shouldn't. Check out instead [Limitador](https://github.com/kuadrant/limitador), for simple and efficient global rate limiting. Combine it with Authorino and Authorino's support for [Envoy Dynamic Metadata](./docs/features.md#envoy-dynamic-metadata) for authenticated rate limiting."},{"location":"authorino/#benchmarks","title":"Benchmarks","text":"

Configuration of the tests (Authorino features):

| Performance test         | Identity | Metadata      | Authorization                            | Response |
|--------------------------|:--------:|:-------------:|:----------------------------------------:|:--------:|
| ReconcileAuthConfig      | OIDC/JWT | UserInfo, UMA | OPA (inline Rego)                        | -        |
| AuthPipeline             | OIDC/JWT | -             | JSON pattern-matching (JWT claim check)  | -        |
| APIKeyAuthn              | API key  | N/A           | N/A                                      | N/A      |
| JSONPatternMatchingAuthz | N/A      | N/A           | JSON pattern-matching                    | N/A      |
| OPAAuthz                 | N/A      | N/A           | OPA (inline Rego)                        | N/A      |

Platform: linux/amd64
CPU: Intel\u00ae Xeon\u00ae Platinum 8370C 2.80GHz
Cores: 1, 4, 10

Results:

ReconcileAuthConfig:\n\n        \u2502   sec/op    \u2502     B/op     \u2502  allocs/op  \u2502\n*         1.533m \u00b1 2%   264.4Ki \u00b1 0%   6.470k \u00b1 0%\n*-4       1.381m \u00b1 6%   264.5Ki \u00b1 0%   6.471k \u00b1 0%\n*-10      1.563m \u00b1 5%   270.2Ki \u00b1 0%   6.426k \u00b1 0%\ngeomean   1.491m        266.4Ki        6.456k\n\nAuthPipeline:\n\n        \u2502   sec/op    \u2502     B/op     \u2502 allocs/op  \u2502\n*         388.0\u00b5 \u00b1 2%   80.70Ki \u00b1 0%   894.0 \u00b1 0%\n*-4       348.4\u00b5 \u00b1 5%   80.67Ki \u00b1 2%   894.0 \u00b1 3%\n*-10      356.4\u00b5 \u00b1 2%   78.97Ki \u00b1 0%   860.0 \u00b1 0%\ngeomean   363.9\u00b5        80.11Ki        882.5\n\nAPIKeyAuthn:\n\n        \u2502   sec/op    \u2502    B/op      \u2502 allocs/op  \u2502\n*         3.246\u00b5 \u00b1 1%   480.0 \u00b1 0%     6.000 \u00b1 0%\n*-4       3.111\u00b5 \u00b1 0%   480.0 \u00b1 0%     6.000 \u00b1 0%\n*-10      3.091\u00b5 \u00b1 1%   480.0 \u00b1 0%     6.000 \u00b1 0%\ngeomean   3.148\u00b5        480.0          6.000\n\nOPAAuthz vs JSONPatternMatchingAuthz:\n\n        \u2502   OPAAuthz   \u2502      JSONPatternMatchingAuthz       \u2502\n        \u2502    sec/op    \u2502   sec/op     vs base                \u2502\n*         87.469\u00b5 \u00b1 1%   1.797\u00b5 \u00b1 1%  -97.95% (p=0.000 n=10)\n*-4       95.954\u00b5 \u00b1 3%   1.766\u00b5 \u00b1 0%  -98.16% (p=0.000 n=10)\n*-10      96.789\u00b5 \u00b1 4%   1.763\u00b5 \u00b1 0%  -98.18% (p=0.000 n=10)\ngeomean    93.31\u00b5        1.775\u00b5       -98.10%\n\n        \u2502   OPAAuthz    \u2502      JSONPatternMatchingAuthz      \u2502\n        \u2502     B/op      \u2502    B/op     vs base                \u2502\n*         28826.00 \u00b1 0%   64.00 \u00b1 0%  -99.78% (p=0.000 n=10)\n*-4       28844.00 \u00b1 0%   64.00 \u00b1 0%  -99.78% (p=0.000 n=10)\n*-10      28862.00 \u00b1 0%   64.00 \u00b1 0%  -99.78% (p=0.000 n=10)\ngeomean    28.17Ki        64.00       -99.78%\n\n        \u2502   OPAAuthz   \u2502      JSONPatternMatchingAuthz      \u2502\n        \u2502  allocs/op   \u2502 allocs/op   vs base                \u2502\n*         569.000 \u00b1 0%   2.000 \u00b1 0%  -99.65% (p=0.000 n=10)\n*-4       569.000 \u00b1 0%   2.000 \u00b1 0%  -99.65% (p=0.000 n=10)\n*-10      569.000 \u00b1 0%   2.000 \u00b1 0%  -99.65% (p=0.000 n=10)\ngeomean     569.0        2.000       -99.65%\n

"},{"location":"authorino/#contributing","title":"Contributing","text":"

If you are interested in contributing to Authorino, please refer to the Developer's guide for info about the stack and requirements, workflow, policies and Code of Conduct.

Join us on kuadrant.slack.com for live discussions about the roadmap and more.

"},{"location":"authorino/docs/","title":"Documentation","text":""},{"location":"authorino/docs/#getting-started","title":"Getting started","text":""},{"location":"authorino/docs/#terminology","title":"Terminology","text":""},{"location":"authorino/docs/#architecture","title":"Architecture","text":""},{"location":"authorino/docs/#feature-description","title":"Feature description","text":""},{"location":"authorino/docs/#user-guides","title":"User guides","text":""},{"location":"authorino/docs/#developers-guide","title":"Developer\u2019s guide","text":""},{"location":"authorino/docs/architecture/","title":"Architecture","text":"
  • Overview
  • Topologies
  • Centralized gateway
  • Centralized authorization service
  • Sidecars
  • Cluster-wide vs. Namespaced instances
  • The Authorino AuthConfig Custom Resource Definition (CRD)
  • Resource reconciliation and status update
  • The \"Auth Pipeline\" (aka: enforcing protection in request-time)
  • Host lookup
  • Avoiding host name collision
  • The Authorization JSON
  • Raw HTTP Authorization interface
  • Caching
  • OpenID Connect and User-Managed Access configs
  • JSON Web Keys (JWKs) and JSON Web Key Sets (JWKS)
  • Revoked access tokens
  • External metadata
  • Compiled Rego policies
  • Repeated requests
  • Sharding
  • RBAC
  • Observability
"},{"location":"authorino/docs/architecture/#overview","title":"Overview","text":"

There are a few concepts to grasp in Authorino's architecture. The main components are: Authorino, Envoy and the upstream service to be protected. Envoy proxies requests to the configured virtual host upstream service, first contacting Authorino to decide on authN/authZ.

The topology can vary from centralized proxy and centralized authorization service, to dedicated sidecars, with the nuances in between. Read more about the topologies in the Topologies section below.

Authorino is deployed using the Authorino Operator, from an Authorino Kubernetes custom resource. Then, from another kind of custom resource, the AuthConfig CRs, each Authorino instance reads the exact rules of authN/authZ to enforce for each protected host and adds them to the index (\"index reconciliation\").

Everything that the AuthConfig reconciler can fetch at reconciliation-time is stored in the index. This is the case for static parameters such as signing keys, authentication secrets and authorization policies from external policy registries.

AuthConfigs can refer to identity providers (IdP) and trusted auth servers whose access tokens will be accepted to authenticate to the protected host. Consumers obtain an authentication token (short-lived access token or long-lived API key) and send it in their requests to the protected service.

When Authorino is triggered by Envoy via the gRPC interface, it starts evaluating the Auth Pipeline, i.e. it applies to the request the parameters to verify the identity and to enforce authorization, as found in the index for the requested host (See host lookup for details).

Apart from static rules, these parameters can include instructions to contact external identity verifiers, external sources of metadata and policy decision points (PDPs) online.

On every request, Authorino's \"working memory\" is called Authorization JSON, a data structure that holds information about the context (the HTTP request) and objects from each phase of the auth pipeline: i.e., identity verification (phase i), ad-hoc metadata fetching (phase ii), authorization policy enforcement (phase iii), dynamic response (phase iv), and callbacks (phase v). The evaluators in each of these phases can both read and write from the Authorization JSON for dynamic steps and decisions of authN/authZ.

"},{"location":"authorino/docs/architecture/#topologies","title":"Topologies","text":"

Typically, upstream APIs are deployed to the same Kubernetes cluster and namespace where the Envoy proxy and Authorino are running (although not necessarily). Whatever the case, Envoy must be proxying to the upstream API (see Envoy's HTTP route components and virtual hosts) and pointing to Authorino in the external authorization filter.

This can be achieved with different topologies:
  • Envoy can be a centralized gateway with one dedicated instance of Authorino, proxying to one or more upstream services
  • Envoy can be deployed as a sidecar of each protected service, but still contacting a centralized Authorino authorization service
  • Both Envoy and Authorino deployed as sidecars of the protected service, restricting all communication between them to localhost

Each topology above calls for different security measures.

"},{"location":"authorino/docs/architecture/#centralized-gateway","title":"Centralized gateway","text":"

Recommended: the protected services should validate the origin of the traffic, which must have been proxied by Envoy. See Authorino JSON injection for an extra validation option using a shared secret passed in an HTTP header.

"},{"location":"authorino/docs/architecture/#centralized-authorization-service","title":"Centralized authorization service","text":"

The protected service should only listen on localhost, and all traffic can be considered safe.

"},{"location":"authorino/docs/architecture/#sidecars","title":"Sidecars","text":"

Namespaced instances of Authorino with fine-grained label selectors are recommended, to avoid unnecessary caching of AuthConfigs.

Apart from that, the protected service should only listen on localhost, and all traffic can be considered safe.

"},{"location":"authorino/docs/architecture/#cluster-wide-vs-namespaced-instances","title":"Cluster-wide vs. Namespaced instances","text":"

Authorino instances can run in either cluster-wide or namespaced mode.

Namespace-scoped instances only watch resources (AuthConfigs and Secrets) created in a given namespace. This deployment mode does not require admin privileges over the Kubernetes cluster to deploy the instance of the service (given Authorino's CRDs have been installed beforehand, such as when Authorino is installed using the Authorino Operator).

Cluster-wide deployment mode, in contrast, deploys instances of Authorino that watch resources across the entire cluster, consolidating all resources into a multi-namespace index of auth configs. Admin privileges over the Kubernetes cluster are required to deploy Authorino in cluster-wide mode.

Be careful to avoid superposition when combining multiple Authorino instances and instance modes in the same Kubernetes cluster. Apart from caching unnecessary auth config data in the instances depending on your routing settings, the leaders of each instance (set of replicas) may compete for updating the status of the custom resources that are reconciled. See Resource reconciliation and status update for more information.

If necessary, use label selectors to narrow down the space of resources watched and reconciled by each Authorino instance. Check out the Sharding section below for details.
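For illustration, a minimal sketch of an Authorino custom resource for a namespaced instance narrowed down by a label selector \u2013 field names as per the Authorino Operator API, the label being a hypothetical example:

apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  clusterWide: false                               # namespaced mode: watch only resources in this namespace
  authConfigLabelSelectors: environment=production # hypothetical selector to shard the space of reconciled AuthConfigs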

"},{"location":"authorino/docs/architecture/#the-authorino-authconfig-custom-resource-definition-crd","title":"The Authorino AuthConfig Custom Resource Definition (CRD)","text":"

The desired protection for a service is declaratively stated by applying an AuthConfig Custom Resource to the Kubernetes cluster running Authorino.

An AuthConfig resource typically looks like the following:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: my-api-protection\nspec:\n# List of one or more hostname[:port] entries, lookup keys to find this config in request-time\n# Authorino will try to prevent hostname collision by rejecting a hostname already taken.\nhosts:\n- my-api.io # north-south traffic\n- my-api.ns.svc.cluster.local # east-west traffic\n# List of one or more trusted sources of identity:\n# - Endpoints of issuers of OpenId Connect ID tokens (JWTs)\n# - Endpoints for OAuth 2.0 token introspection\n# - Attributes for the Kubernetes `TokenReview` API\n# - Label selectors for API keys (stored in Kubernetes `Secret`s)\n# - mTLS trusted certificate issuers\n# - HMAC secrets\nidentity: [\u2026]\n# List of sources of external metadata for the authorization (optional):\n# - Endpoints for HTTP GET or GET-by-POST requests\n# - OIDC UserInfo endpoints (associated with an OIDC token issuer)\n# - User-Managed Access (UMA) resource registries\nmetadata: [\u2026]\n# List of authorization policies to be enforced (optional):\n# - JSON pattern-matching rules (e.g. `context.request.http.path eq '/pets'`)\n# - Open Policy Agent (OPA) inline or external Rego policies\n# - Attributes for the Kubernetes `SubjectAccessReview` API\nauthorization: [\u2026]\n# List of dynamic response elements, to inject post-external authorization data into the request (optional):\n# - JSON objects\n# - Festival Wristbands (signed JWTs issued by Authorino)\n# - Envoy Dynamic Metadata\nresponse: [\u2026]\n# List of callback targets:\n# - Endpoints for HTTP requests\ncallbacks: [\u2026]\n# Custom HTTP status code, message and headers to replace the default `401 Unauthorized` and `403 Forbidden` (optional)\ndenyWith:\nunauthenticated:\ncode: 302\nmessage: Redirecting to login\nheaders:\n- name: Location\nvalue: https://my-app.io/login\nunauthorized: {\u2026}\n

Check out the OAS of the AuthConfig CRD for a formal specification of the options for identity verification, external metadata fetching, authorization policies, and dynamic response, as well as any other host protection capability implemented by Authorino.

You can also read the specification from the CLI, using the kubectl explain command. The Authorino CRD must have been installed in the Kubernetes cluster. E.g. kubectl explain authconfigs.spec.identity.extendedProperties.

A complete description of supported features and corresponding configuration options within an AuthConfig CR can be found in the Features page.

More concrete examples of AuthConfigs for specific use-cases can be found in the User guides.

"},{"location":"authorino/docs/architecture/#resource-reconciliation-and-status-update","title":"Resource reconciliation and status update","text":"

The instances of the Authorino authorization service workload, following the Operator pattern, watch events related to the AuthConfig custom resources, to build and reconcile an in-memory index of configs. Whenever a replica receives traffic for an authorization request, it looks up the requested host in the index of AuthConfigs and then triggers the \"Auth Pipeline\", i.e. it enforces the associated auth spec onto the request.

An instance can be a single authorization service workload or a set of replicas. All replicas watch and reconcile the same set of resources that match the --auth-config-label-selector and --secret-label-selector configuration options. (See both Cluster-wide vs. Namespaced instances and Sharding, for details about defining the reconciliation space of Authorino instances.)

The above means that all replicas of an Authorino instance should be able to receive traffic for authorization requests.

Among the multiple replicas of an instance, Authorino elects one replica to be the leader. The leader is responsible for updating the status of reconciled AuthConfigs. If the leader eventually becomes unavailable, the instance will automatically elect another replica to take its place as the new leader.

The status of an AuthConfig tells whether the resource is \"ready\" (i.e. indexed). It also includes summary information regarding the numbers of identity configs, metadata configs, authorization configs and response configs within the spec, as well as whether Festival Wristband tokens are being issued by the Authorino instance, as per the spec.
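For illustration, the status stanza of an indexed AuthConfig may look like the following \u2013 a hedged sketch, with summary field names assumed from the v1beta1 CRD and illustrative values:

status:
  conditions:
  - type: Ready
    status: "True"
  summary:
    ready: true
    hostsReady:
    - my-api.io
    numIdentitySources: 1
    numMetadataSources: 0
    numAuthorizationPolicies: 1
    numResponseItems: 0
    festivalWristbandEnabled: false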

Apart from watching events related to AuthConfig custom resources, Authorino also watches events related to Kubernetes Secrets, as part of Authorino's API key authentication feature. Secret resources that store API keys are linked to their corresponding AuthConfigs in the index. Whenever the Authorino instance detects a change in the set of API key Secrets linked to an AuthConfig, the instance reconciles the index.

Authorino only watches events related to Secrets whose metadata.labels match the label selector --secret-label-selector of the Authorino instance. The default value of the label selector for Kubernetes Secrets representing Authorino API keys is authorino.kuadrant.io/managed-by=authorino.

"},{"location":"authorino/docs/architecture/#the-auth-pipeline-aka-enforcing-protection-in-request-time","title":"The \"Auth Pipeline\" (aka: enforcing protection in request-time)","text":"

In each request to the protected API, Authorino triggers the so-called \"Auth Pipeline\", a set of configured evaluators that are organized in a 5-phase pipeline:

  • (i) Identity phase: at least one source of identity (i.e., one identity evaluator) must resolve the supplied credential in the request into a valid identity or Authorino will otherwise reject the request as unauthenticated (401 HTTP response status).
  • (ii) Metadata phase: optional fetching of additional data from external sources, to add up to context and identity information, and used in authorization policies, dynamic responses and callback requests (phases iii to v).
  • (iii) Authorization phase: all unskipped policies must evaluate to a positive result (\"authorized\"), or Authorino will otherwise reject the request as unauthorized (403 HTTP response code).
  • (iv) Response phase \u2013 Authorino builds all user-defined response items (dynamic JSON objects and/or Festival Wristband OIDC tokens), which are supplied back to the external authorization client within added HTTP headers or as Envoy Dynamic Metadata.
  • (v) Callbacks phase \u2013 Authorino sends callbacks to specified HTTP endpoints.

The phases are sequential, from (i) to (v), while the evaluators within each phase are triggered concurrently or as prioritized. The Identity phase (i) is the only one required to list at least one evaluator (i.e. one or more identity sources); the Metadata, Authorization and Response phases can have any number of evaluators (including zero, in which case they can be omitted).

"},{"location":"authorino/docs/architecture/#host-lookup","title":"Host lookup","text":"

Authorino reads the request host from Attributes.Http.Host of Envoy's CheckRequest type, and uses it as the key to look up in the index of AuthConfigs, matched against spec.hosts.

Alternatively to Attributes.Http.Host, a host entry can be supplied in the Attributes.ContextExtensions map of the external authorization request. This will take precedence over the host attribute of the HTTP request.

The host context extension is useful to support use cases such as path prefix-based lookup and wildcard subdomain lookup, where the lookup is strongly dictated by the external authorization client (e.g. Envoy), which often knows about routing and the expected AuthConfig to enforce beyond what Authorino can infer strictly from the host name.

Wildcards can also be used in the host names specified in the AuthConfig, resolved by Authorino. E.g. if *.pets.com is in spec.hosts, Authorino will match the concrete host names dogs.pets.com, cats.pets.com, etc. In case of multiple possible matches, Authorino will try the longest match first (in terms of host name labels) and fall back to the closest wildcard upwards in the domain tree (if any).

When more than one host name is specified in the AuthConfig, all of them can be used as key, i.e. all of them can be requested in the authorization request and will be mapped to the same config.

Example. Host lookup with wildcards.

The domain tree in this example induces the following relation:
  • foo.nip.io \u2192 authconfig-1 (matches *.io)
  • talker-api.nip.io \u2192 authconfig-2 (matches talker-api.nip.io)
  • dogs.pets.com \u2192 authconfig-2 (matches *.pets.com)
  • api.acme.com \u2192 authconfig-3 (matches api.acme.com)
  • www.acme.com \u2192 authconfig-4 (matches *.acme.com)
  • foo.org \u2192 404 Not found
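A sketch of the spec.hosts entries that could produce the relation above (host names taken from the example; everything else assumed):

# authconfig-1
spec:
  hosts:
  - "*.io"              # catches foo.nip.io
---
# authconfig-2
spec:
  hosts:
  - talker-api.nip.io   # exact match wins over the *.io wildcard
  - "*.pets.com"        # catches dogs.pets.com, cats.pets.com, etc.
---
# authconfig-3
spec:
  hosts:
  - api.acme.com        # exact match wins over *.acme.com
---
# authconfig-4
spec:
  hosts:
  - "*.acme.com"        # catches www.acme.com; foo.org matches nothing \u2192 404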

The host can include the port number (i.e. hostname:port) or be just the host name. Authorino will first try finding in the index a config associated to hostname:port, as supplied in the authorization request; if the index misses an entry for hostname:port, Authorino will then remove the :port suffix and repeat the lookup using just hostname as key. This provides implicit support for multiple port numbers for the same host without having to list all combinations in the AuthConfig.

"},{"location":"authorino/docs/architecture/#avoiding-host-name-collision","title":"Avoiding host name collision","text":"

Authorino tries to prevent host name collision between AuthConfigs by refusing to link in the index any AuthConfig to a host name that is already linked to a different AuthConfig. This was intentionally designed to prevent users from superseding each other's AuthConfigs, partially or fully, by just picking the same or overlapping host names as others.

When wildcards are involved, a host name that matches a host wildcard already linked in the index to another AuthConfig will be considered taken, and therefore the newer AuthConfig will be rejected from being linked to that host.

"},{"location":"authorino/docs/architecture/#the-authorization-json","title":"The Authorization JSON","text":"

On every Auth Pipeline, Authorino builds the Authorization JSON, a \"working-memory\" data structure composed of context (information about the request, as supplied by the Envoy proxy to Authorino) and auth (objects resolved in phases (i) to (v) of the pipeline). The evaluators of each phase can read from the Authorization JSON and implement dynamic properties and decisions based on its values.

At phase (iii), the authorization evaluators count on an Authorization JSON payload that looks like the following:

// The authorization JSON combined along Authorino's auth pipeline for each request\n{\n  \"context\": { // the input from the proxy\n    \"origin\": {\u2026},\n    \"request\": {\n      \"http\": {\n        \"method\": \"\u2026\",\n        \"headers\": {\u2026},\n        \"path\": \"/\u2026\",\n        \"host\": \"\u2026\",\n        \u2026\n      }\n    }\n  },\n  \"auth\": {\n    \"identity\": {\n      // the identity resolved, from the supplied credentials, by one of the evaluators of phase (i)\n    },\n    \"metadata\": {\n      // each metadata object/collection resolved by the evaluators of phase (ii), by name of the evaluator\n    }\n  }\n}\n

The policies evaluated can use any data from the authorization JSON to define authorization rules.

After phase (iii), Authorino appends to the authorization JSON the results of this phase as well, and the payload available for phase (iv) becomes:

// The authorization JSON combined along Authorino's auth pipeline for each request\n{\n  \"context\": { // the input from the proxy\n    \"origin\": {\u2026},\n    \"request\": {\n      \"http\": {\n        \"method\": \"\u2026\",\n        \"headers\": {\u2026},\n        \"path\": \"/\u2026\",\n        \"host\": \"\u2026\",\n        \u2026\n      }\n    }\n  },\n  \"auth\": {\n    \"identity\": {\n      // the identity resolved, from the supplied credentials, by one of the evaluators of phase (i)\n    },\n    \"metadata\": {\n      // each metadata object/collection resolved by the evaluators of phase (ii), by name of the evaluator\n    },\n    \"authorization\": {\n      // each authorization policy result resolved by the evaluators of phase (iii), by name of the evaluator\n    }\n  }\n}\n

Festival Wristbands and Dynamic JSON responses can include dynamic values (custom claims/properties) fetched from the authorization JSON. These can be returned to the external authorization client in added HTTP headers or as Envoy Well Known Dynamic Metadata. Check out Dynamic response features for details.

For information about reading and fetching data from the Authorization JSON (syntax, functions, etc), check out JSON paths.

"},{"location":"authorino/docs/architecture/#raw-http-authorization-interface","title":"Raw HTTP Authorization interface","text":"

Besides providing the gRPC authorization interface \u2013 which implements the Envoy gRPC authorization server \u2013 Authorino also provides another interface for raw HTTP authorization. This second interface responds to GET and POST HTTP requests sent to :5001/check, and is suitable for other forms of integration, such as:
  • using Authorino as a Kubernetes ValidatingWebhook service (example);
  • other HTTP proxies and API gateways;
  • old versions of Envoy incompatible with the latest version of the gRPC external authorization protocol (Authorino is based on v3.19.1 of the Envoy external authorization API)

In the raw HTTP interface, the host used to look up an AuthConfig must be supplied in the Host HTTP header of the request. Other attributes of the HTTP request are also passed in the context to evaluate the AuthConfig, including the body of the request.

"},{"location":"authorino/docs/architecture/#caching","title":"Caching","text":""},{"location":"authorino/docs/architecture/#openid-connect-and-user-managed-access-configs","title":"OpenID Connect and User-Managed Access configs","text":"

OpenID Connect and User-Managed Access configurations, discovered usually at reconciliation-time from well-known discovery endpoints.

Cached individual OpenID Connect configurations discovered by Authorino can be configured to be auto-refreshed, by setting the corresponding spec.identity.oidc.ttl field in the AuthConfig (given in seconds, default: 0 \u2013 i.e. no cache update).

"},{"location":"authorino/docs/architecture/#json-web-keys-jwks-and-json-web-key-sets-jwks","title":"JSON Web Keys (JWKs) and JSON Web Key Sets (JWKS)","text":"

JSON signature verification certificates linked by discovered OpenID Connect configurations, fetched usually at reconciliation-time.

"},{"location":"authorino/docs/architecture/#revoked-access-tokens","title":"Revoked access tokens","text":"Not implemented - In analysis (#19)

Caching of access tokens identified and/or notified as revoked prior to expiration.

"},{"location":"authorino/docs/architecture/#external-metadata","title":"External metadata","text":"Not implemented - Planned (#21)

Caching of resource data obtained in previous requests.

"},{"location":"authorino/docs/architecture/#compiled-rego-policies","title":"Compiled Rego policies","text":"

Performed automatically by Authorino at reconciliation-time for the authorization policies based on the built-in OPA module.

Precompiled and cached individual Rego policies originally pulled by Authorino from external registries can be configured to be auto-refreshed, by setting the corresponding spec.authorization.opa.externalRegistry.ttl field in the AuthConfig (given in seconds, default: 0 \u2013 i.e. no cache update).
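E.g., a sketch of an OPA authorization config with auto-refresh of a policy pulled from an external registry (registry URL and policy name hypothetical):

spec:
  authorization:
  - name: my-rego-policy
    opa:
      externalRegistry:
        endpoint: https://raw.githubusercontent.com/my-org/policies/main/my-policy.rego  # hypothetical registry URL
        ttl: 300  # re-pull and precompile the policy every 5 minutes (default: 0 \u2013 no refresh)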

"},{"location":"authorino/docs/architecture/#repeated-requests","title":"Repeated requests","text":"Not implemented - In analysis (#20)

For consecutive requests performed, within a given period of time, by the same user for the same resource, such that the result of the auth pipeline can be proven not to change.

"},{"location":"authorino/docs/architecture/#sharding","title":"Sharding","text":"

By default, Authorino instances will watch AuthConfig CRs in the entire space (namespace or entire cluster; see Cluster-wide vs. Namespaced instances for details). To support combining multiple Authorino instances and instance modes in the same Kubernetes cluster, and yet avoiding superposition between the instances (i.e. multiple instances reconciling the same AuthConfigs), Authorino offers support for data sharding, i.e. to horizontally narrow down the space of reconciliation of an Authorino instance to a subset of that space.

The benefits of limiting the space of reconciliation of an Authorino instance include avoiding unnecessary caching and workload in instances that do not receive corresponding traffic (according to your routing settings) and preventing the leaders of multiple instances (sets of replicas) from competing over resource status updates (see Resource reconciliation and status update for details).

Use-cases for sharding of AuthConfigs:
  • Horizontal load balancing of traffic of authorization requests
  • Support for managed centralized instances of Authorino for API owners who create and maintain their own AuthConfigs within their own user namespaces

Authorino's custom controllers filter the AuthConfig-related events to be reconciled using Kubernetes label selectors, defined for the Authorino instance via the --auth-config-label-selector command-line flag. By default, --auth-config-label-selector is empty, meaning all AuthConfigs in the space are watched; this option can be set to any value parseable as a valid label selector, causing Authorino to then watch only events of AuthConfigs whose metadata.labels match the selector.

The following are all valid examples of AuthConfig label selector filters:

--auth-config-label-selector=\"authorino.kuadrant.io/managed-by=authorino\"\n--auth-config-label-selector=\"authorino.kuadrant.io/managed-by=authorino,other-label=other-value\"\n--auth-config-label-selector=\"authorino.kuadrant.io/managed-by in (authorino,kuadrant)\"\n--auth-config-label-selector=\"authorino.kuadrant.io/managed-by!=authorino-v0.4\"\n--auth-config-label-selector=\"!disabled\"\n
"},{"location":"authorino/docs/architecture/#rbac","title":"RBAC","text":"

The table below describes the roles and role bindings defined by the Authorino service:

| Role | Kind | Scope(*) | Description | Permissions |
|------|------|:--------:|-------------|-------------|
| authorino-manager-role | ClusterRole | C/N | Role of the Authorino manager service | Watch and reconcile AuthConfigs and Secrets |
| authorino-manager-k8s-auth-role | ClusterRole | C/N | Role for the Kubernetes auth features | Create TokenReviews and SubjectAccessReviews (Kubernetes auth) |
| authorino-leader-election-role | Role | N | Leader election role | Create/update the ConfigMap used to coordinate which replica of Authorino is the leader |
| authorino-authconfig-editor-role | ClusterRole | - | AuthConfig editor | R/W AuthConfigs; Read AuthConfig/status |
| authorino-authconfig-viewer-role | ClusterRole | - | AuthConfig viewer | Read AuthConfigs and AuthConfig/status |
| authorino-proxy-role | ClusterRole | C/N | Role of the kube-rbac-proxy sidecar | Create TokenReviews and SubjectAccessReviews to check permissions to the /metrics endpoint |
| authorino-metrics-reader | ClusterRole | - | Metrics reader | GET /metrics |

(*) C - Cluster-wide | N - Authorino namespace | C/N - Cluster-wide or Authorino namespace (depending on the deployment mode).

"},{"location":"authorino/docs/architecture/#observability","title":"Observability","text":"

Please refer to the Observability user guide for info on Prometheus metrics exported by Authorino, readiness probe, logging, tracing, etc.

"},{"location":"authorino/docs/code_of_conduct/","title":"Code of conduct","text":""},{"location":"authorino/docs/code_of_conduct/#authorino-code-of-conduct-v10","title":"Authorino Code of Conduct v1.0","text":"

This document provides community guidelines for a safe, respectful, productive, and collaborative place for any person who is willing to contribute to Authorino.

  • Participants will be tolerant of opposing views.
  • Participants must ensure that their language and actions are free of personal attacks and disparaging personal remarks.
  • When interpreting the words and actions of others, participants should always assume good intentions.
  • Behaviour which can be reasonably considered harassment will not be tolerated.

This Code of Conduct is adapted from The Ruby Community Conduct Guideline.

"},{"location":"authorino/docs/contributing/","title":"Developer's Guide","text":"
  • Technology stack for developers
  • Workflow
  • Check the issues
  • Clone the repo and setup the local environment
  • Make your changes
  • Run the tests
  • Try locally
    • Build, deploy and try Authorino in a local cluster
    • Additional tools (for specific use-cases)
    • Re-build and rollout latest
    • Clean-up
  • Sign your commits
  • Logging policy
  • Additional resources
  • Reach out
"},{"location":"authorino/docs/contributing/#technology-stack-for-developers","title":"Technology stack for developers","text":"

Minimum requirements to contribute to Authorino are:
  • Golang v1.19+
  • Docker

Authorino's code was originally bundled using the Operator SDK (v1.9.0).

The following tools can be installed as part of the development workflow:

  • Installed with go install to the $PROJECT_DIR/bin directory:
  • controller-gen: for building custom types and manifests
  • Kustomize: for assembling flavoured manifests and installing/deploying
  • setup-envtest: for running the tests \u2013 extra tools installed to ./testbin
  • [benchstat](https://cs.opensource.google/go/x/perf): for human-friendly test benchmark reports
  • mockgen: to generate mocks for tests \u2013 e.g. ./bin/mockgen -source=pkg/auth/auth.go -destination=pkg/auth/mocks/mock_auth.go
  • Kind: for deploying a containerized Kubernetes cluster for integration testing purposes

  • Other recommended tools to have installed:

  • jq
  • yq
  • gnu-sed
"},{"location":"authorino/docs/contributing/#workflow","title":"Workflow","text":""},{"location":"authorino/docs/contributing/#check-the-issues","title":"Check the issues","text":"

Start by checking the list of issues in GitHub.

In case you want to contribute an idea for an enhancement, a bug fix, or a question, please make sure to describe the issue, so we can start a conversation together and help you find the best way to get your contribution merged.

"},{"location":"authorino/docs/contributing/#clone-the-repo-and-setup-the-local-environment","title":"Clone the repo and setup the local environment","text":"

Fork/clone the repo:

git clone git@github.com:kuadrant/authorino.git && cd authorino\n

Download the Golang dependencies:

make vendor\n

For additional automation provided, check:

make help\n
"},{"location":"authorino/docs/contributing/#make-your-changes","title":"Make your changes","text":"

Good changes...
  • follow the Golang conventions
  • have proper test coverage
  • address corresponding updates to the docs
  • help us fix wherever we failed to do the above \ud83d\ude1c

"},{"location":"authorino/docs/contributing/#run-the-tests","title":"Run the tests","text":"

To run the tests:

make test\n
"},{"location":"authorino/docs/contributing/#try-locally","title":"Try locally","text":""},{"location":"authorino/docs/contributing/#build-deploy-and-try-authorino-in-a-local-cluster","title":"Build, deploy and try Authorino in a local cluster","text":"

The following command will:
  • Start a local Kubernetes cluster (using Kind)
  • Install the Authorino Operator and Authorino CRDs
  • Build an image of Authorino based on the current branch
  • Push the freshly built image to the cluster's registry
  • Install cert-manager in the cluster
  • Generate TLS certificates for the Authorino service
  • Deploy an instance of Authorino
  • Deploy the example application Talker API, a simple HTTP API that echoes back whatever it gets in the request
  • Set up Envoy for proxying to the Talker API and using Authorino for external authorization

make local-setup\n

You will be prompted to edit the Authorino custom resource.

The main workload composed of Authorino instance and user apps (Envoy, Talker API) will be deployed to the default Kubernetes namespace.

Once the deployment is ready, you can forward requests on port 8000 to the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
Pro tips:
1. Change the default workload namespace by supplying the `NAMESPACE` argument to your `make local-setup` and other deployment, apps and local cluster related targets. If the namespace does not exist, it will be created.
2. Switch to TLS disabled by default when deploying locally by supplying `TLS_ENABLED=0` to your `make local-setup` and `make deploy` commands. E.g. `make local-setup TLS_ENABLED=0`.
3. Skip being prompted to edit the `Authorino` CR and default to an Authorino deployment with TLS enabled, debug/development log level/mode, and standard name 'authorino', by supplying `FF=1` to your `make local-setup` and `make deploy` commands. E.g. `make local-setup FF=1`.
4. Supply `DEPLOY_IDPS=1` to `make local-setup` and `make user-apps` to deploy Keycloak and Dex to the cluster. `DEPLOY_KEYCLOAK` and `DEPLOY_DEX` are also available. Read more about additional tools for specific use cases in the section below.
5. Saving the ID of the process (PID) of the port-forward command spawned in the background can be useful to later kill and restart the process. E.g. `kubectl port-forward deployment/envoy 8000:8000 &;PID=$!`; then `kill $PID`.
"},{"location":"authorino/docs/contributing/#additional-tools-for-specific-use-cases","title":"Additional tools (for specific use-cases)","text":"

Limitador

To deploy [Limitador](https://github.com/kuadrant/limitador) \u2013 pre-configured in Envoy for rate-limiting the Talker API to 5 hits per minute per `user_id` when available in the cluster workload \u2013 run:
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml\n
Keycloak

Authorino examples include a bundle of [Keycloak](https://www.keycloak.org) preloaded with the following realm setup:
  • Admin console: http://localhost:8080/auth/admin (admin/p)
  • Preloaded realm: **kuadrant**
  • Preloaded clients:
    • **demo**: to which API consumers delegate access and therefore the one which access tokens are issued to
    • **authorino**: used by Authorino to fetch additional user info with `client_credentials` grant type
    • **talker-api**: used by Authorino to fetch UMA-protected resource data associated with the Talker API
  • Preloaded resources:
    • `/hello`
    • `/greetings/1` (owned by user john)
    • `/greetings/2` (owned by user jane)
    • `/goodbye`
  • Realm roles:
    • member (default to all users)
    • admin
  • Preloaded users:
    • john/p (member)
    • jane/p (admin)
    • peter/p (member, email not verified)

To deploy, run:
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n
Forward local requests to the instance of Keycloak running in the cluster:
kubectl port-forward deployment/keycloak 8080:8080 &\n
Dex

Authorino examples include a bundle of [Dex](https://dexidp.io) preloaded with the following setup:
  • Preloaded clients:
    • **demo**: to which API consumers delegate access and therefore the one which access tokens are issued to (Client secret: aaf88e0e-d41d-4325-a068-57c4b0d61d8e)
  • Preloaded users:
    • marta@localhost/password

To deploy, run:
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/dex/dex-deploy.yaml\n
Forward local requests to the instance of Dex running in the cluster:
kubectl port-forward deployment/dex 5556:5556 &\n
a12n-server

Authorino examples include a bundle of [**a12n-server**](https://github.com/curveball/a12n-server) and corresponding MySQL database, preloaded with the following setup:
  • Admin console: http://a12n-server:8531 (admin/123456)
  • Preloaded clients:
    • **service-account-1**: to obtain access tokens via `client_credentials` OAuth2 grant type, to consume the Talker API (Client secret: DbgXROi3uhWYCxNUq_U1ZXjGfLHOIM8X3C2bJLpeEdE); includes metadata privilege: `{ \"talker-api\": [\"read\"] }` that can be used to write authorization policies
    • **talker-api**: to authenticate to the token introspect endpoint (Client secret: V6g-2Eq2ALB1_WHAswzoeZofJ_e86RI4tdjClDDDb4g)

To deploy, run:
kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/a12n-server/a12n-server-deploy.yaml\n
Forward local requests to the instance of a12n-server running in the cluster:
kubectl port-forward deployment/a12n-server 8531:8531 &\n
"},{"location":"authorino/docs/contributing/#re-build-and-rollout-latest","title":"Re-build and rollout latest","text":"

Re-build and rollout latest Authorino image:

make local-rollout\n

If you made changes to the CRD between iterations, re-install by running:

make install\n
"},{"location":"authorino/docs/contributing/#clean-up","title":"Clean-up","text":"

The following command deletes the entire Kubernetes cluster started with Kind:

make local-cleanup\n
"},{"location":"authorino/docs/contributing/#sign-your-commits","title":"Sign your commits","text":"

All commits to be accepted to Authorino's code are required to be signed. Refer to this page about signing your commits.

"},{"location":"authorino/docs/contributing/#logging-policy","title":"Logging policy","text":"

A few guidelines for adding logging messages in your code:
1. Make sure you understand Authorino's Logging architecture and policy regarding log levels, log modes, tracing IDs, etc.
2. Respect controller-runtime's Logging Guidelines.
3. Do not add sensitive data to your info log messages; instead, redact all sensitive data in your log messages or use debug log level by mutating the logger with V(1) before outputting the message.

"},{"location":"authorino/docs/contributing/#additional-resources","title":"Additional resources","text":"

Here in the repo:

  • Getting started
  • Terminology
  • Architecture
  • Feature description

Other repos:

  • Authorino Operator
  • Authorino examples
"},{"location":"authorino/docs/contributing/#reach-out","title":"Reach out","text":"

kuadrant.slack.com

"},{"location":"authorino/docs/features/","title":"Features","text":"
  • Overview
  • Common feature: JSON paths (valueFrom.authJSON)
  • Syntax
  • String modifiers
  • Interpolation
  • Identity verification & authentication features (identity)
  • API key (identity.apiKey)
  • Kubernetes TokenReview (identity.kubernetes)
  • OpenID Connect (OIDC) JWT/JOSE verification and validation (identity.oidc)
  • OAuth 2.0 introspection (identity.oauth2)
  • OpenShift OAuth (user-echo endpoint) (identity.openshift)
  • Mutual Transport Layer Security (mTLS) authentication (identity.mtls)
  • Hash Message Authentication Code (HMAC) authentication (identity.hmac)
  • Plain (identity.plain)
  • Anonymous access (identity.anonymous)
  • Festival Wristband authentication
  • Extra: Auth credentials (credentials)
  • Extra: Identity extension (extendedProperties)
  • External auth metadata features (metadata)
  • HTTP GET/GET-by-POST (metadata.http)
  • OIDC UserInfo (metadata.userInfo)
  • User-Managed Access (UMA) resource registry (metadata.uma)
  • Authorization features (authorization)
  • JSON pattern-matching authorization rules (authorization.json)
  • Open Policy Agent (OPA) Rego policies (authorization.opa)
  • Kubernetes SubjectAccessReview (authorization.kubernetes)
  • Authzed/SpiceDB (authorization.authzed)
  • Keycloak Authorization Services (UMA-compliant Authorization API)
  • Dynamic response features (response)
  • JSON injection (response.json)
  • Plain (response.plain)
  • Festival Wristband tokens (response.wristband)
  • Extra: Response wrappers (wrapper and wrapperKey)
    • Added HTTP headers
    • Envoy Dynamic Metadata
  • Extra: Custom denial status (denyWith)
  • Callbacks (callbacks)
  • HTTP endpoints (callbacks.http)
  • Common feature: Priorities
  • Common feature: Conditions (when)
  • Common feature: Caching (cache)
  • Common feature: Metrics (metrics)
"},{"location":"authorino/docs/features/#overview","title":"Overview","text":"

We call features of Authorino the different things one can do to enforce identity verification & authentication and authorization on requests against protected services. These can be a specific identity verification method based on a supported authentication protocol, or a method to fetch additional auth metadata in request-time, etc.

Most features of Authorino relate to the different phases of the Auth Pipeline and therefore are configured in the Authorino AuthConfig. An identity verification feature usually refers to a functionality of Authorino such as the API key-based authentication implemented by Authorino, the validation of JWTs/OIDC ID tokens, and authentication based on Kubernetes TokenReviews. Analogously, OPA, JSON pattern-matching and Kubernetes SubjectAccessReview are examples of authorization features of Authorino.

At a deeper level, a feature can also be an additional functionality within a bigger feature, usually applicable to the whole class the bigger feature belongs to. For instance, the configuration of the location and key selector of auth credentials, available for all identity verification-related features. Other examples would be Identity extension and Response wrappers.

A full specification of all features of Authorino that can be configured in an AuthConfig can be found in the official spec of the custom resource definition.

You can also learn about Authorino features by using the kubectl explain command in a Kubernetes cluster where the Authorino CRD has been installed. E.g. kubectl explain authconfigs.spec.identity.extendedProperties.

"},{"location":"authorino/docs/features/#common-feature-json-paths-valuefromauthjson","title":"Common feature: JSON paths (valueFrom.authJSON)","text":"

The first feature of Authorino to learn about is a common functionality, used in the specification of many other features. JSON paths have to do with reading data from the Authorization JSON, to refer to them in configuration of dynamic steps of API protection enforcing.

Usage examples of JSON paths are: dynamic URL and request parameters when fetching metadata from external sources, dynamic authorization policy rules, and dynamic authorization responses (injected JSON and Festival Wristband token claims).

"},{"location":"authorino/docs/features/#syntax","title":"Syntax","text":"

The syntax to fetch data from the Authorization JSON with JSON paths is based on GJSON. Refer to GJSON Path Syntax page for more information.

"},{"location":"authorino/docs/features/#string-modifiers","title":"String modifiers","text":"

On top of GJSON, Authorino defines a few string modifiers.

Examples below provided for the following Authorization JSON:

{\n  \"context\": {\n    \"request\": {\n      \"http\": {\n        \"path\": \"/pets/123\",\n        \"headers\": {\n          \"authorization\": \"Basic amFuZTpzZWNyZXQK\" // jane:secret\n          \"baggage\": \"eyJrZXkxIjoidmFsdWUxIn0=\" // {\"key1\":\"value1\"}\n        }\n      }\n    }\n  },\n  \"auth\": {\n    \"identity\": {\n      \"username\": \"jane\",\n      \"fullname\": \"Jane Smith\",\n      \"email\": \"\\u0006jane\\u0012@petcorp.com\\n\"\n    },\n  },\n}\n

@strip Strips out any non-printable characters such as carriage return. E.g. auth.identity.email.@strip \u2192 \"jane@petcorp.com\".

@case:upper|lower Changes the case of a string. E.g. auth.identity.username.@case:upper \u2192 \"JANE\".

@replace:{\"old\":string,\"new\":string} Replaces a substring within a string. E.g. auth.identity.username.@replace:{\"old\":\"Smith\",\"new\":\"Doe\"} \u2192 \"Jane Doe\".

@extract:{\"sep\":string,\"pos\":int} Splits a string at occurrences of a separator (default: \" \") and selects the substring at the pos-th position (default: 0). E.g. context.request.path.@extract:{\"sep\":\"/\",\"pos\":2} \u2192 123.

@base64:encode|decode base64-encodes or decodes a string value. E.g. auth.identity.username.decoded.@base64:encode \u2192 \"amFuZQo=\".

In combination with @extract, @base64 can be used to extract the username in an HTTP Basic Authentication request. E.g. context.request.headers.authorization.@extract:{\"pos\":1}|@base64:decode|@extract:{\"sep\":\":\",\"pos\":1} \u2192 \"jane\".

"},{"location":"authorino/docs/features/#interpolation","title":"Interpolation","text":"

JSON paths can be interpolated into strings to build template-like dynamic values. E.g. \"Hello, {auth.identity.name}!\".

"},{"location":"authorino/docs/features/#identity-verification-authentication-features-identity","title":"Identity verification & authentication features (identity)","text":""},{"location":"authorino/docs/features/#api-key-identityapikey","title":"API key (identity.apiKey)","text":"

Authorino relies on Kubernetes Secret resources to represent API keys.

To define an API key, create a Secret in the cluster containing an api_key entry that holds the value of the API key.

API key secrets must be created in the same namespace of the AuthConfig (default) or spec.identity.apiKey.allNamespaces must be set to true (only works with cluster-wide Authorino instances).

API key secrets must be labeled with the labels that match the selectors specified in spec.identity.apiKey.selector in the AuthConfig.

Whenever an AuthConfig is indexed, Authorino will also index all matching API key secrets. In order for Authorino to also watch events related to API key secrets individually (e.g. new Secret created, updates, deletion/revocation), Secrets must also include a label that matches Authorino's bootstrap configuration --secret-label-selector (default: authorino.kuadrant.io/managed-by=authorino). This label may or may not be included in spec.identity.apiKey.selector in the AuthConfig, without implications for the caching of the API keys when triggered by the reconciliation of the AuthConfig; however, if not present, individual changes related to the API key secret (i.e. without touching the AuthConfig) will be ignored by the reconciler.

Example. For the following AuthConfig:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: my-api-protection\nnamespace: authorino-system\nspec:\nhosts:\n- my-api.io\nidentity:\n- name: api-key-users\napiKey:\nselector:\nmatchLabels: # the key-value set used to select the matching `Secret`s; resources including these labels will be accepted as valid API keys to authenticate to this service\ngroup: friends # some custom label\nallNamespaces: true # only works with cluster-wide Authorino instances; otherwise, create the API key secrets in the same namespace of the AuthConfig\n

The following Kubernetes Secret represents a valid API key:

apiVersion: v1\nkind: Secret\nmetadata:\nname: user-1-api-key-1\nnamespace: default\nlabels:\nauthorino.kuadrant.io/managed-by: authorino # so the Authorino controller reconciles events related to this secret\ngroup: friends\nstringData:\napi_key: <some-randomly-generated-api-key-value>\ntype: Opaque\n

The resolved identity object, added to the authorization JSON following an API key identity source evaluation, is the Kubernetes Secret resource (as JSON).

"},{"location":"authorino/docs/features/#kubernetes-tokenreview-identitykubernetes","title":"Kubernetes TokenReview (identity.kubernetes)","text":"

Authorino can verify Kubernetes-valid access tokens (using Kubernetes TokenReview API).

These tokens can be either ServiceAccount tokens such as the ones issued by kubelet as part of Kubernetes Service Account Token Volume Projection, or any valid user access tokens issued to users of the Kubernetes server API.

The list of audiences of the token must include the requested host and port of the protected API (default), or all audiences specified in the Authorino AuthConfig custom resource. For example:

For the following AuthConfig CR, the Kubernetes token must include the audience my-api.io:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: my-api-protection\nspec:\nhosts:\n- my-api.io\nidentity:\n- name: cluster-users\nkubernetes: {}\n

Whereas for the following AuthConfig CR, the Kubernetes token audiences must include foo and bar:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: my-api-protection\nspec:\nhosts:\n- my-api.io\nidentity:\n- name: cluster-users\nkubernetes:\naudiences:\n- foo\n- bar\n

The resolved identity object added to the authorization JSON following a successful Kubernetes authentication identity evaluation is the status field of TokenReview response (see TokenReviewStatus for reference).

"},{"location":"authorino/docs/features/#openid-connect-oidc-jwtjose-verification-and-validation-identityoidc","title":"OpenID Connect (OIDC) JWT/JOSE verification and validation (identity.oidc)","text":"

At reconciliation-time, using the OpenID Connect Discovery well-known endpoint, Authorino automatically discovers and caches OpenID Connect configurations and associated JSON Web Key Sets (JWKS) for all OpenID Connect issuers declared in an AuthConfig. Then, at request-time, Authorino verifies the JSON Web Signature (JWS) and checks the time validity of the signed JSON Web Tokens (JWTs) supplied on each request.

Important! Authorino does not implement OAuth2 grants nor OIDC authentication flows. As a common recommendation of good practice, obtaining and refreshing access tokens is for clients to negotiate directly with the auth servers and token issuers. Authorino will only validate those tokens using the parameters provided by the trusted issuer authorities.

The kid claim stated in the JWT header must match one of the keys cached by Authorino during OpenID Connect Discovery, therefore supporting JWK rotation.

The decoded payload of the validated JWT is appended to the authorization JSON as the resolved identity.

OpenID Connect configurations and linked JSON Web Key Sets can be configured to be automatically refreshed (pull again from the OpenID Connect Discovery well-known endpoints), by setting the identity.oidc.ttl field (given in seconds, default: 0 \u2013 i.e. auto-refresh disabled).
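E.g., a minimal identity.oidc sketch with auto-refresh enabled (issuer URL hypothetical):

spec:
  identity:
  - name: oidc-users
    oidc:
      endpoint: https://sso.example.com/auth/realms/kuadrant  # hypothetical OIDC issuer; Discovery and JWKS endpoints are resolved from it
      ttl: 3600  # re-run OpenID Connect Discovery (and JWKS fetch) every hour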

For an excellent summary of the underlying concepts and standards that relate OpenID Connect and JSON Object Signing and Encryption (JOSE), see this article by Jan Rusnacko. For official specification and RFCs, see OpenID Connect Core, OpenID Connect Discovery, JSON Web Token (JWT) (RFC7519), and JSON Object Signing and Encryption (JOSE).

"},{"location":"authorino/docs/features/#oauth-20-introspection-identityoauth2","title":"OAuth 2.0 introspection (identity.oauth2)","text":"

For bare OAuth 2.0 implementations, Authorino can perform token introspection on the access tokens supplied in the requests to protected APIs.

Authorino does not implement any of OAuth 2.0 grants for the applications to obtain the token. However, it can verify supplied tokens with the OAuth server, including opaque tokens, as long as the server exposes the token_introspect endpoint (RFC 7662).

Developers must set the token introspection endpoint in the AuthConfig, as well as a reference to the Kubernetes secret storing the credentials of the OAuth client to be used by Authorino when requesting the introspection.

The response returned by the OAuth2 server to the token introspection request is the resolved identity appended to the authorization JSON.
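A sketch of such a config \u2013 endpoint URL and Secret name hypothetical, field names assumed from the v1beta1 API:

spec:
  identity:
  - name: oauth2-users
    oauth2:
      tokenIntrospectionUrl: https://oauth-server.example.com/token/introspect  # hypothetical RFC 7662 endpoint
      credentialsRef:
        name: oauth2-introspection-credentials  # Kubernetes Secret storing the OAuth client credentials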

"},{"location":"authorino/docs/features/#openshift-oauth-user-echo-endpoint-identityopenshift","title":"OpenShift OAuth (user-echo endpoint) (identity.openshift)","text":"Not implemented - In analysis

Online token introspection of OpenShift-valid access tokens based on OpenShift's user-echo endpoint.

"},{"location":"authorino/docs/features/#mutual-transport-layer-security-mtls-authentication-identitymtls","title":"Mutual Transport Layer Security (mTLS) authentication (identity.mtls)","text":"

Authorino can verify x509 certificates presented by clients for authentication on the request to the protected APIs, at application level.

Trusted root Certificate Authorities (CA) are stored in Kubernetes Secrets labeled according to selectors specified in the AuthConfig, watched and indexed by Authorino. Make sure to create proper kubernetes.io/tls-typed Kubernetes Secrets, containing the public certificates of the CA stored in either a tls.crt or ca.crt entry inside the secret.

Trusted root CA secrets must be created in the same namespace of the AuthConfig (default) or spec.identity.mtls.allNamespaces must be set to true (only works with cluster-wide Authorino instances).
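A sketch of an mTLS identity config \u2013 the label is a hypothetical example, with the selector assumed to work analogously to the API key one:

spec:
  identity:
  - name: mtls-clients
    mtls:
      selector:
        matchLabels:
          app: pet-store  # hypothetical label carried by the kubernetes.io/tls Secrets storing the trusted CA certs
      allNamespaces: false  # default: CA Secrets live in the same namespace of the AuthConfig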

The identity object resolved out of a client x509 certificate is equal to the subject field of the certificate, and it serializes as JSON within the Authorization JSON usually as follows:

{\n    \"auth\": {\n        \"identity\": {\n            \"CommonName\": \"aisha\",\n            \"Country\": [\"PK\"],\n            \"ExtraNames\": null,\n            \"Locality\": [\"Islamabad\"],\n            \"Names\": [\n                { \"Type\": [2, 5, 4, 3], \"Value\": \"aisha\" },\n                { \"Type\": [2, 5, 4, 6], \"Value\": \"PK\" },\n                { \"Type\": [2, 5, 4, 7], \"Value\": \"Islamabad\" },\n                { \"Type\": [2, 5, 4,10], \"Value\": \"ACME Inc.\" },\n                { \"Type\": [2, 5, 4,11], \"Value\": \"Engineering\" }\n            ],\n            \"Organization\": [\"ACME Inc.\"],\n            \"OrganizationalUnit\": [\"Engineering\"],\n            \"PostalCode\": null,\n            \"Province\": null,\n            \"SerialNumber\": \"\",\n            \"StreetAddress\": null\n        }\n  }\n}\n
"},{"location":"authorino/docs/features/#hash-message-authentication-code-hmac-authentication-identityhmac","title":"Hash Message Authentication Code (HMAC) authentication (identity.hmac)","text":"Not implemented - Planned (#9)

Authentication based on the validation of a hash code generated from the contextual information of the request to the protected API, concatenated with a secret known by the API consumer.

"},{"location":"authorino/docs/features/#plain-identityplain","title":"Plain (identity.plain)","text":"

Authorino can read plain identity objects, based on authentication tokens provided and verified beforehand using other means (e.g. Envoy JWT Authentication filter, Kubernetes API server authentication), and injected into the payload to the external authorization service.

The plain identity object is retrieved from the Authorization JSON based on a JSON path specified in the AuthConfig.

This feature is particularly useful in cases where authentication/identity verification is handled before invoking the authorization service, and the resolved value injected in the payload can be trusted. Examples of applications for this feature include:
  • Authentication handled in Envoy leveraging the Envoy JWT Authentication filter (decoded JWT injected as 'metadata_context')
  • Use of Authorino as a Kubernetes ValidatingWebhook service (Kubernetes 'userInfo' injected in the body of the AdmissionReview request)

Example of an AuthConfig that retrieves a plain identity object from the Authorization JSON:

spec:\nidentity:\n- name: plain\nplain:\nauthJSON: context.metadata_context.filter_metadata.envoy\\.filters\\.http\\.jwt_authn|verified_jwt\n

If the specified JSON path does not exist in the Authorization JSON or the value is null, the identity verification will fail and, unless another identity config succeeds, Authorino will halt the Auth Pipeline with the usual 401 Unauthorized.

"},{"location":"authorino/docs/features/#anonymous-access-identityanonymous","title":"Anonymous access (identity.anonymous)","text":"

Literally a no-op evaluator for the identity verification phase that returns a static identity object {\"anonymous\":true}.

It allows implementing AuthConfigs that bypass the identity verification phase of Authorino, e.g. to:
  • enable anonymous access to protected services (always or combined with Priorities);
  • postpone authentication in the Auth Pipeline, to be resolved as part of an OPA policy.

Example of AuthConfig spec that falls back to anonymous access when OIDC authentication fails, enforcing read-only access to the protected service in such cases:

spec:\nidentity:\n- name: jwt\noidc: { endpoint: ... }\n- name: anonymous\npriority: 1 # expired oidc token, missing creds, etc. default to anonymous access\nanonymous: {}\nauthorization:\n- name: read-only-access-if-authn-fails\nwhen:\n- selector: auth.identity.anonymous\noperator: eq\nvalue: \"true\"\njson:\nrules:\n- selector: context.request.http.method\noperator: eq\nvalue: GET\n
"},{"location":"authorino/docs/features/#festival-wristband-authentication","title":"Festival Wristband authentication","text":"

Authorino-issued Festival Wristband tokens can be validated as any other signed JWT using Authorino's OpenID Connect (OIDC) JWT/JOSE verification and validation.

The value of the issuer must be the same issuer specified in the custom resource for the protected API that originally issued the wristband. This can even be the same custom resource where the wristband is configured as a valid source of identity, but it does not have to be.
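
As a hedged sketch, an AuthConfig can trust Authorino-issued wristbands as a source of identity simply by pointing an OIDC identity config at the wristband issuer, assuming an issuer URL following the pattern shown further below in this section:

spec:\n  identity:\n  - name: wristband-users\n    oidc:\n      endpoint: https://authorino-oidc.default.svc:8083/my-namespace/my-api-protection/my-wristband\n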

"},{"location":"authorino/docs/features/#extra-auth-credentials-credentials","title":"Extra: Auth credentials (credentials)","text":"

All the identity verification methods supported by Authorino can be configured regarding the location where access tokens and credentials (i.e. authentication secrets) travel within the request.

By default, authentication secrets are expected to be supplied in the Authorization HTTP header, with the Bearer prefix and the plain authentication secret, separated by a space. The full list of supported options for the location of authentication secrets and the selector is specified in the table below:

Location (credentials.in) – Description – Selector (credentials.keySelector):
  • authorization_header – Authorization HTTP header – Prefix (default: Bearer)
  • custom_header – Custom HTTP header – Name of the header. Value should have no prefix.
  • query – Query string parameter – Name of the parameter
  • cookie – Cookie header – ID of the cookie entry
"},{"location":"authorino/docs/features/#extra-identity-extension-extendedproperties","title":"Extra: Identity extension (extendedProperties)","text":"

Resolved identity objects can be extended with user-defined JSON properties. Values can be static or fetched from the Authorization JSON.

A typical use-case for this feature is token normalization. Say you have more than one identity source listed in your AuthConfig, but each source issues an access token with a different JSON structure – e.g. two OIDC issuers that use different names for custom JWT claims of similar meaning, or two different identity verification/authentication methods combined, such as API keys (whose identity objects are the corresponding Kubernetes Secrets) and Kubernetes tokens (whose identity objects are Kubernetes UserInfo data).

In such cases, identity extension can be used to normalize the token to always include the same set of JSON properties of interest, regardless of the source of identity that issued the original token verified by Authorino. This simplifies the writing of authorization policies and configuration of dynamic responses.

In case of extending an existing property of the identity object (replacing), the API allows you to control whether or not to overwrite the value. This is particularly useful for normalizing tokens of the same identity source that may nonetheless occasionally differ in structure, such as JWT claims that sometimes may not be present but can be safely replaced with another (e.g. username or sub).
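
A minimal sketch of identity extension used for token normalization, assuming a hypothetical preferred_username claim and that the overwrite behavior described above is exposed as an overwrite flag:

spec:\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: https://my-idp.com/auth/realm\n    extendedProperties:\n    - name: username\n      valueFrom:\n        authJSON: auth.identity.preferred_username\n      overwrite: true # replace an existing 'username' property, if present\n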

"},{"location":"authorino/docs/features/#external-auth-metadata-features-metadata","title":"External auth metadata features (metadata)","text":""},{"location":"authorino/docs/features/#http-getget-by-post-metadatahttp","title":"HTTP GET/GET-by-POST (metadata.http)","text":"

Generic HTTP adapter that sends a request to an external service. It can be used to fetch external metadata for the authorization policies (phase ii of the Authorino Auth Pipeline), or as a webhook.

The adapter allows issuing requests either by GET or POST methods; in both cases with URL and parameters defined by the user in the spec. Dynamic values fetched from the Authorization JSON can be used.

POST request parameters as well as the encoding of the content can be controlled using the bodyParameters and contentType fields of the config, respectively. The Content-Type of POST requests can be either application/x-www-form-urlencoded (default) or application/json.

Authentication of Authorino with the external metadata server can be set either via long-lived shared secret stored in a Kubernetes Secret or via OAuth2 client credentials grant. For long-lived shared secret, set the sharedSecretRef field. For OAuth2 client credentials grant, use the oauth2 option.

In both cases, the location where the secret (long-lived or OAuth2 access token) travels in the request performed to the external HTTP service can be specified in the credentials field. By default, the authentication secret is supplied in the Authorization header with the Bearer prefix.

Custom headers can be set with the headers field. Note, however, that headers such as Content-Type and Authorization (or the eventual custom header used for carrying the authentication secret, set instead via the credentials option) will be superseded by the respective values defined for the fields contentType and sharedSecretRef.
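
Putting the options above together, a hedged sketch of a metadata.http config, with hypothetical endpoint and Secret names:

spec:\n  metadata:\n  - name: user-profile\n    http:\n      endpoint: http://my-metadata-service.svc.cluster.local/profiles?user={auth.identity.sub}\n      method: GET\n      sharedSecretRef:\n        name: metadata-service-creds # hypothetical Secret\n        key: shared-secret\n      credentials:\n        in: custom_header\n        keySelector: X-Shared-Secret\n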

"},{"location":"authorino/docs/features/#oidc-userinfo-metadatauserinfo","title":"OIDC UserInfo (metadata.userInfo)","text":"

Online fetching of OpenID Connect (OIDC) UserInfo data (phase ii of the Authorino Auth Pipeline), associated with an OIDC identity source configured and resolved in phase (i).

Apart from possibly complementing information of the JWT, fetching OpenID Connect UserInfo at request time can be particularly useful for remotely checking the state of the session, as opposed to only verifying the JWT/JWS offline.

Implementation requires an OpenID Connect issuer (spec.identity.oidc) configured in the same AuthConfig.

The response returned by the OIDC server to the UserInfo request is appended (as JSON) to auth.metadata in the authorization JSON.
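
A minimal sketch, assuming an OIDC identity config named keycloak defined in the same AuthConfig (hypothetical issuer URL):

spec:\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: https://my-idp.com/auth/realm\n  metadata:\n  - name: userinfo\n    userInfo:\n      identitySource: keycloak\n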

"},{"location":"authorino/docs/features/#user-managed-access-uma-resource-registry-metadatauma","title":"User-Managed Access (UMA) resource registry (metadata.uma)","text":"

User-Managed Access (UMA) is an OAuth-based protocol for resource owners to allow other users to access their resources. Since the UMA-compliant server is expected to know about the resources, Authorino includes a client that fetches resource data from the server and adds that as metadata of the authorization payload.

This enables the implementation of resource-level Attribute-Based Access Control (ABAC) policies. Attributes of the resource fetched in a UMA flow can be, e.g., the owner of the resource, or any business-level attributes stored in the UMA-compliant server.

A UMA-compliant server is an external authorization server (e.g., Keycloak) where the protected resources are registered. It can also be the upstream API itself, as long as it implements the UMA protocol, with initial authentication via client_credentials grant to exchange for a Protected API Token (PAT).

It's important to note that Authorino does NOT manage resources in the UMA-compliant server. Authorino's UMA client is used only to fetch data about the requested resources. Authorino exchanges client credentials for a Protected API Token (PAT), then queries for resources whose URI matches the path of the HTTP request (as passed to Authorino by the Envoy proxy) and fetches the data of each matching resource.

The resource data is added as metadata of the authorization payload and passed as input to the configured authorization policies. All resources returned by the UMA-compliant server in the query by URI are passed along. They are available in the PDPs (authorization payload) as input.auth.metadata.custom-name => Array. (See The \"Auth Pipeline\" for details.)
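
A hedged sketch of a metadata.uma config, assuming a Keycloak realm as the UMA-compliant server and a hypothetical Secret holding the clientID and clientSecret used in the client_credentials exchange for the PAT:

spec:\n  metadata:\n  - name: resource-data\n    uma:\n      endpoint: https://my-idp.com/auth/realm\n      credentialsRef:\n        name: uma-client-credentials # hypothetical Secret with clientID and clientSecret entries\n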

"},{"location":"authorino/docs/features/#authorization-features-authorization","title":"Authorization features (authorization)","text":""},{"location":"authorino/docs/features/#json-pattern-matching-authorization-rules-authorizationjson","title":"JSON pattern-matching authorization rules (authorization.json)","text":"

Grant/deny access based on simple pattern-matching expressions (\"rules\") compared against values selected from the Authorization JSON.

Each expression is a tuple composed of:
  • a selector, to fetch from the Authorization JSON – see Common feature: JSON paths for details about syntax;
  • an operator – eq (equals), neq (not equal); incl (includes) and excl (excludes), for arrays; and matches, for regular expressions;
  • a fixed comparable value.

Rules can mix and combine literal expressions and references to expression sets (\"named patterns\") defined at the upper level of the AuthConfig spec. (See Common feature: Conditions)

spec:\nauthorization:\n- name: my-simple-json-pattern-matching-policy\njson:\nrules: # All rules must match for access to be granted\n- selector: auth.identity.email_verified\noperator: eq\nvalue: \"true\"\n- patternRef: admin\npatterns:\nadmin: # a named pattern that can be reused in other sets of rules or conditions\n- selector: auth.identity.roles\noperator: incl\nvalue: admin\n
"},{"location":"authorino/docs/features/#open-policy-agent-opa-rego-policies-authorizationopa","title":"Open Policy Agent (OPA) Rego policies (authorization.opa)","text":"

You can model authorization policies in Rego language and add them as part of the protection of your APIs.

Policies can be either declared in-line in Rego language (inlineRego) or as an HTTP endpoint where Authorino will fetch the source code of the policy at reconciliation time (externalRegistry).

Policies pulled from external registries can be configured to be automatically refreshed (pulled again from the external registry), by setting the authorization.opa.externalRegistry.ttl field (given in seconds, default: 0 – i.e. auto-refresh disabled).

Authorino's built-in OPA module precompiles the policies during reconciliation of the AuthConfig and caches the precompiled policies for fast evaluation at runtime, where they receive the Authorization JSON as input.

An optional field allValues: boolean causes the values of all rules declared in the Rego document to be returned in the OPA output after policy evaluation. When disabled (default), only the boolean value allow is returned. Values of internal rules of the Rego document can be referenced in subsequent policies/phases of the Auth Pipeline.
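
A minimal sketch of an inline Rego policy (hypothetical rules), leaving allValues disabled so only allow is returned:

spec:\n  authorization:\n  - name: my-rego-policy\n    opa:\n      inlineRego: |\n        allow {\n          input.context.request.http.method == \"GET\"\n          input.auth.identity.email_verified\n        }\n      allValues: false\n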

"},{"location":"authorino/docs/features/#kubernetes-subjectaccessreview-authorizationkubernetes","title":"Kubernetes SubjectAccessReview (authorization.kubernetes)","text":"

Access control enforcement based on rules defined in the Kubernetes authorization system, i.e. Role, ClusterRole, RoleBinding and ClusterRoleBinding resources of Kubernetes RBAC.

Authorino issues a SubjectAccessReview (SAR) inquiry that checks with the underlying Kubernetes server whether the user can access a particular resource, resource kind or generic URL.

It supports resource attributes authorization checks (parameters defined in the AuthConfig) and non-resource attributes authorization checks (HTTP endpoint inferred from the original request).
  • Resource attributes: adequate for permissions set at namespace level, defined in terms of common attributes of operations on Kubernetes resources (namespace, API group, kind, name, subresource, verb)
  • Non-resource attributes: adequate for permissions set at cluster scope, defined for protected endpoints of a generic HTTP API (URL path + verb)

Example of Kubernetes role for resource attributes authorization:

apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\nname: pet-reader\nrules:\n- apiGroups: [\"pets.io\"]\nresources: [\"pets\"]\nverbs: [\"get\"]\n

Example of Kubernetes cluster role for non-resource attributes authorization:

apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\nname: pet-editor\nrules:\n- nonResourceURLs: [\"/pets/*\"]\nverbs: [\"put\", \"delete\"]\n

Kubernetes' authorization policy configs look like the following in an Authorino AuthConfig:

authorization:\n- name: kubernetes-rbac\nkubernetes:\nuser:\nvalueFrom: # values of the parameter can be fixed (`value`) or fetched from the Authorization JSON (`valueFrom.authJSON`)\nauthJSON: auth.identity.metadata.annotations.userid\ngroups: [] # user groups to test for.\n# for resource attributes permission checks; omit it to perform a non-resource attributes SubjectAccessReview with path and method/verb assumed from the original request\n# if included, use the resource attributes, where the values for each parameter can be fixed (`value`) or fetched from the Authorization JSON (`valueFrom.authJSON`)\nresourceAttributes:\nnamespace:\nvalue: default\ngroup:\nvalue: pets.io # the api group of the protected resource to be checked for permissions for the user\nresource:\nvalue: pets # the resource kind\nname:\nvalueFrom: { authJSON: context.request.http.path.@extract:{\"sep\":\"/\",\"pos\":2} } # resource name \u2013 e.g., the {id} in `/pets/{id}`\nverb:\nvalueFrom: { authJSON: context.request.http.method.@case:lower } # api operation \u2013 e.g., copying from the context to use the same http method of the request\n

user and the properties of resourceAttributes can be defined from fixed values or fetched from the Authorization JSON.

An optional array of groups can also be set; when defined, it will be used in the SubjectAccessReview request.

"},{"location":"authorino/docs/features/#authzedspicedb-authorizationauthzed","title":"Authzed/SpiceDB (authorization.authzed)","text":"

Check permission requests sent to a Google Zanzibar-based Authzed/SpiceDB instance, via gRPC.

Subject, resource and permission parameters can be set to static values or read from the Authorization JSON.

spec:\nauthorization:\n- name: authzed\nauthzed:\nendpoint: spicedb:50051\ninsecure: true # disables TLS\nsharedSecretRef:\nname: spicedb\nkey: token\nsubject:\nkind:\nvalue: blog/user\nname:\nvalueFrom:\nauthJSON: auth.identity.sub\nresource:\nkind:\nvalue: blog/post\nname:\nvalueFrom:\nauthJSON: context.request.http.path.@extract:{\"sep\":\"/\",\"pos\":2} # /posts/{id}\npermission:\nvalueFrom:\nauthJSON: context.request.http.method\n
"},{"location":"authorino/docs/features/#keycloak-authorization-services-uma-compliant-authorization-api","title":"Keycloak Authorization Services (UMA-compliant Authorization API)","text":"Not implemented - In analysis

Online delegation of authorization to a Keycloak server.

"},{"location":"authorino/docs/features/#dynamic-response-features-response","title":"Dynamic response features (response)","text":""},{"location":"authorino/docs/features/#json-injection-responsejson","title":"JSON injection (response.json)","text":"

User-defined dynamic JSON objects generated by Authorino in the response phase, from static or dynamic data of the auth pipeline, and passed back to the external authorization client within added HTTP headers or as Envoy Well Known Dynamic Metadata.

The following Authorino AuthConfig custom resource is an example that defines 3 dynamic JSON response items, where two items are returned to the client, stringified, in added HTTP headers, and the third is wrapped as Envoy Dynamic Metadata (\"emitted\", in Envoy terminology). Envoy proxy can be configured to \"pipe\" dynamic metadata emitted by one filter into another filter – for example, from external authorization to rate limit.

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nnamespace: my-namespace\nname: my-api-protection\nspec:\nhosts:\n- my-api.io\nidentity:\n- name: edge\napiKey:\nselector:\nmatchLabels:\nauthorino.kuadrant.io/managed-by: authorino\ncredentials:\nin: authorization_header\nkeySelector: APIKEY\nresponse:\n- name: a-json-returned-in-a-header\nwrapper: httpHeader # can be omitted\nwrapperKey: x-my-custom-header # if omitted, name of the header defaults to the name of the config (\"a-json-returned-in-a-header\")\njson:\nproperties:\n- name: prop1\nvalue: value1\n- name: prop2\nvalueFrom:\nauthJSON: some.path.within.auth.json\n- name: another-json-returned-in-a-header\nwrapperKey: x-ext-auth-other-json\njson:\nproperties:\n- name: propX\nvalue: valueX\n- name: a-json-returned-as-envoy-metadata\nwrapper: envoyDynamicMetadata\nwrapperKey: auth-data\njson:\nproperties:\n- name: api-key-ns\nvalueFrom:\nauthJSON: auth.identity.metadata.namespace\n- name: api-key-name\nvalueFrom:\nauthJSON: auth.identity.metadata.name\n
"},{"location":"authorino/docs/features/#plain-responseplain","title":"Plain (response.plain)","text":"

A simpler, yet more generalized, form for extending the authorization response for header mutation and Envoy Dynamic Metadata, based on plain text values.

The value can be static:

response:\n- name: x-auth-service\nplain:\nvalue: Authorino\n

or fetched dynamically from the Authorization JSON (which includes support for interpolation):

- name: x-username\nplain:\nvalueFrom:\nauthJSON: auth.identity.username\n
"},{"location":"authorino/docs/features/#festival-wristband-tokens-responsewristband","title":"Festival Wristband tokens (response.wristband)","text":"

Festival Wristbands are signed OpenID Connect JSON Web Tokens (JWTs) issued by Authorino at the end of the auth pipeline and passed back to the client, typically in an added HTTP response header. It is an opt-in feature that can be used to implement Edge Authentication Architecture (EAA) and enable token normalization. Authorino wristbands include minimal standard JWT claims such as iss, iat, and exp, and optional user-defined custom claims, whose values can be static or dynamically fetched from the authorization JSON.

The Authorino AuthConfig custom resource below sets an API protection that issues a wristband after a successful authentication via API key. Apart from standard JWT claims, the wristband contains 2 custom claims: a static value aud=internal and a dynamic value born that fetches from the authorization JSON the date/time of creation of the secret that represents the API key used to authenticate.

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nnamespace: my-namespace\nname: my-api-protection\nspec:\nhosts:\n- my-api.io\nidentity:\n- name: edge\napiKey:\nselector:\nmatchLabels:\nauthorino.kuadrant.io/managed-by: authorino\ncredentials:\nin: authorization_header\nkeySelector: APIKEY\nresponse:\n- name: my-wristband\nwristband:\nissuer: https://authorino-oidc.default.svc:8083/my-namespace/my-api-protection/my-wristband\ncustomClaims:\n- name: aud\nvalue: internal\n- name: born\nvalueFrom:\nauthJSON: auth.identity.metadata.creationTimestamp\ntokenDuration: 300\nsigningKeyRefs:\n- name: my-signing-key\nalgorithm: ES256\n- name: my-old-signing-key\nalgorithm: RS256\nwrapper: httpHeader # can be omitted\nwrapperKey: x-ext-auth-wristband # whatever http header name desired - defaults to the name of  the response config (\"my-wristband\")\n

The signing key names listed in signingKeyRefs must match the names of Kubernetes Secret resources created in the same namespace, where each secret contains a key.pem entry that holds the value of the private key that will be used to sign the wristbands issued, formatted as PEM. The first key in this list will be used to sign the wristbands, while the others are kept to support key rotation.

For each protected API configured for Festival Wristband issuing, Authorino exposes the following OpenID Connect Discovery well-known endpoints (available for requests within the cluster):
  • OpenID Connect configuration: https://authorino-oidc.default.svc:8083/{namespace}/{api-protection-name}/{response-config-name}/.well-known/openid-configuration
  • JSON Web Key Set (JWKS) well-known endpoint: https://authorino-oidc.default.svc:8083/{namespace}/{api-protection-name}/{response-config-name}/.well-known/openid-connect/certs

"},{"location":"authorino/docs/features/#extra-response-wrappers-wrapper-and-wrapperkey","title":"Extra: Response wrappers (wrapper and wrapperKey)","text":""},{"location":"authorino/docs/features/#added-http-headers","title":"Added HTTP headers","text":"

By default, Authorino dynamic responses (injected JSON and Festival Wristband tokens) are passed back to Envoy, stringified, as injected HTTP headers. This can be made explicit by setting the wrapper property of the response config to httpHeader.

The property wrapperKey controls the name of the HTTP header, defaulting to the name of the dynamic response config when omitted.

"},{"location":"authorino/docs/features/#envoy-dynamic-metadata","title":"Envoy Dynamic Metadata","text":"

Authorino dynamic responses (injected JSON and Festival Wristband tokens) can be passed back to Envoy in the form of Envoy Dynamic Metadata. To do so, set the wrapper property of the response config to envoyDynamicMetadata.

A response config with wrapper=envoyDynamicMetadata and wrapperKey=auth-data in the AuthConfig can be configured in the Envoy route or virtual host setting to be passed to the rate limiting filter as below. The metadata content is expected to be a dynamic JSON injected by Authorino containing { \"auth-data\": { \"api-key-ns\": string, \"api-key-name\": string } }. (See the response config a-json-returned-as-envoy-metadata in the example for the JSON injection feature above.)

# Envoy config snippet to inject `user_namespace` and `username` rate limit descriptors from metadata returned by Authorino\nrate_limits:\n- actions:\n- metadata:\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- key: auth-data\n- key: api-key-ns\ndescriptor_key: user_namespace\n- metadata:\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- key: auth-data\n- key: api-key-name\ndescriptor_key: username\n
"},{"location":"authorino/docs/features/#extra-custom-denial-status-denywith","title":"Extra: Custom denial status (denyWith)","text":"

By default, Authorino will inform Envoy to respond with 401 Unauthorized or 403 Forbidden respectively when the identity verification (phase i of the Auth Pipeline) or authorization (phase ii) fail. These can be customized by specifying spec.denyWith in the AuthConfig.
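
A minimal sketch, assuming a hypothetical login page to redirect unauthenticated users to:

spec:\n  denyWith:\n    unauthenticated:\n      code: 302\n      headers:\n      - name: Location\n        value: https://my-app.io/login\n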

"},{"location":"authorino/docs/features/#callbacks-callbacks","title":"Callbacks (callbacks)","text":""},{"location":"authorino/docs/features/#http-endpoints-callbackshttp","title":"HTTP endpoints (callbacks.http)","text":"

Sends requests to specified HTTP endpoints at the end of the auth pipeline.

The schema of the http field is the same as that of metadata.http.

Example:

spec:\nidentity: [\u2026]\nauthorization: [\u2026]\ncallbacks:\n- name: log\nhttp:\nendpoint: http://logsys\nmethod: POST\nbody:\nvalueFrom:\nauthJSON: |\n\\{\"requestId\":context.request.http.id,\"username\":\"{auth.identity.username}\",\"authorizationResult\":{auth.authorization}\\}\n- name: important-forbidden\nwhen:\n- selector: auth.authorization.important-policy\noperator: eq\nvalue: \"false\"\nhttp:\nendpoint: \"http://monitoring/important?forbidden-user={auth.identity.username}\"\n
"},{"location":"authorino/docs/features/#common-feature-priorities","title":"Common feature: Priorities","text":"

Priorities allow setting the sequence of execution for blocks of concurrent evaluators within phases of the Auth Pipeline.

Evaluators of the same priority execute concurrently with each other \"in a block\". After syncing that block (i.e. after all evaluators of the block have returned), the next block of evaluator configs of consecutive priority is triggered.

Use cases for priorities are:
  1. Saving expensive tasks to be triggered when there's a high chance of returning immediately after finishing executing a less expensive one – e.g.:
    • an identity config that calls an external IdP to verify a token that is rarely used, compared to verifying JWTs preferred by most users of the service;
    • an authorization policy that performs some quick checks first, such as verifying allowed paths, and only if it passes, moves to the evaluation of a more expensive policy.
  2. Establishing dependencies between evaluators – e.g.:
    • an external metadata request that needs to wait until a previous metadata request responds first (in order to use data from the response).

Priorities can be set using the priority property available in all evaluator configs of all phases of the Auth Pipeline (identity, metadata, authorization and response). The lower the number, the higher the priority. By default, all evaluators have priority 0 (i.e. highest priority).

Consider the following example to understand how priorities work:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: talker-api-protection\nspec:\nhosts:\n- talker-api\nidentity:\n- name: tier-1\npriority: 0\napiKey:\nselector:\nmatchLabels:\ntier: \"1\"\n- name: tier-2\npriority: 1\napiKey:\nselector:\nmatchLabels:\ntier: \"2\"\n- name: tier-3\npriority: 1\napiKey:\nselector:\nmatchLabels:\ntier: \"3\"\nmetadata:\n- name: first\nhttp:\nendpoint: http://talker-api:3000\nmethod: GET\n- name: second\npriority: 1\nhttp:\nendpoint: http://talker-api:3000/first_uuid={auth.metadata.first.uuid}\nmethod: GET\nauthorization:\n- name: allowed-endpoints\nwhen:\n- selector: context.request.http.path\noperator: neq\nvalue: /hi\n- selector: context.request.http.path\noperator: neq\nvalue: /hello\n- selector: context.request.http.path\noperator: neq\nvalue: /aloha\n- selector: context.request.http.path\noperator: neq\nvalue: /ciao\njson:\nrules:\n- selector: deny\noperator: eq\nvalue: \"true\"\n- name: more-expensive-policy # no point in evaluating this one if it's not an allowed endpoint\npriority: 1\nopa:\ninlineRego: |\nallow { true }\nresponse:\n- name: x-auth-data\njson:\nproperties:\n- name: tier\nvalueFrom:\nauthJSON: auth.identity.metadata.labels.tier\n- name: first-uuid\nvalueFrom:\nauthJSON: auth.metadata.first.uuid\n- name: second-uuid\nvalueFrom:\nauthJSON: auth.metadata.second.uuid\n- name: second-path\nvalueFrom:\nauthJSON: auth.metadata.second.path\n

For the AuthConfig above,

  • Identity configs tier-2 and tier-3 (priority 1) will only trigger (concurrently) in case tier-1 (priority 0) fails to validate the authentication token first. (This behavior happens without prejudice to context cancellation between concurrent evaluators – i.e. evaluators that are triggered concurrently with one another, such as tier-2 and tier-3, still cancel each other's context if any of them succeeds in validating the token first.)

  • Metadata source second (priority 1) uses the response of the request issued by metadata source first (priority 0), so it will wait for first to finish by triggering only in the second block.

  • Authorization policy allowed-endpoints (priority 0) is considered to be a lot less expensive than more-expensive-policy (priority 1) and has a high chance of denying access to the protected service (if the path is not one of the allowed endpoints). By setting different priorities for these policies, we ensure the more expensive policy is triggered in sequence after the less expensive one, instead of concurrently.

"},{"location":"authorino/docs/features/#common-feature-conditions-when","title":"Common feature: Conditions (when)","text":"

Conditions, named when in the AuthConfig API, are sets of expressions (JSON patterns) that, whenever included, must evaluate to true against the Authorization JSON for the scope where the expressions are defined to be enforced. If any of the expressions in the set of conditions for a given scope does not match, Authorino will skip that scope in the Auth Pipeline.

The scope for a set of when conditions can be the entire AuthConfig (\"top-level conditions\") or a particular evaluator of any phase of the auth pipeline.

Each expression is a tuple composed of:
  • a selector, to fetch from the Authorization JSON – see Common feature: JSON paths for details about syntax;
  • an operator – eq (equals), neq (not equal); incl (includes) and excl (excludes), for arrays; and matches, for regular expressions;
  • a fixed comparable value.

Literal expressions and references to expression sets (patterns, defined at the upper level of the AuthConfig spec) can be listed, mixed and combined in when condition sets.

Conditions can be used, e.g.:

i) to skip an entire AuthConfig based on the context:

spec:\nwhen: # no authn/authz required on requests to /status\n- selector: context.request.http.path\noperator: neq\nvalue: /status\n

ii) to skip parts of an AuthConfig (i.e. a specific evaluator):

spec:\nmetadata:\n- name: metadata-source\nhttp:\nendpoint: https://my-metadata-source.io\nwhen: # only fetch the external metadata if the context is HTTP method other than OPTIONS\n- selector: context.request.http.method\noperator: neq\nvalue: OPTIONS\n

iii) to enforce a particular evaluator only in certain contexts (effectively the same as the above, applied to a different use case):

spec:\nidentity:\n- name: authn-meth-1\napiKey: {...} # this authn method only valid for POST requests to /foo[/*]\nwhen:\n- selector: context.request.http.path\noperator: matches\nvalue: ^/foo(/.*)?$\n- selector: context.request.http.method\noperator: eq\nvalue: POST\n- name: authn-meth-2\noidc: {...}\n

iv) to avoid repetition while defining patterns for conditions:

spec:\npatterns:\na-pet: # a named pattern that can be reused in sets of conditions\n- selector: context.request.http.path\noperator: matches\nvalue: ^/pets/\\d+(/.*)$\nmetadata:\n- name: pets-info\nwhen:\n- patternRef: a-pet\nhttp:\nendpoint: https://pets-info.io?petId={context.request.http.path.@extract:{\"sep\":\"/\",\"pos\":2}}\nauthorization:\n- name: pets-owners-only\nwhen:\n- patternRef: a-pet\nopa:\ninlineRego: |\nallow { input.metadata[\"pets-info\"].ownerid == input.auth.identity.userid }\n

v) mixing and combining literal expressions and refs:

spec:\npatterns:\nfoo:\n- selector: context.request.http.path\noperator: eq\nvalue: /foo\nwhen: # unauthenticated access to /foo always granted\n- patternRef: foo\n- selector: context.request.http.headers.authorization\noperator: eq\nvalue: \"\"\nauthorization:\n- name: my-policy-1\nwhen: # authenticated access to /foo controlled by policy\n- patternRef: foo\njson: {...}\n

vi) to avoid evaluating unnecessary identity checks when the user can indicate the preferred authentication method (again, the pattern of skipping based upon the context):

spec:\nidentity:\n- name: jwt\nwhen:\n- selector: context.request.http.headers.authorization\noperator: matches\nvalue: JWT .+\noidc: {...}\n- name: api-key\nwhen:\n- selector: context.request.http.headers.authorization\noperator: matches\nvalue: APIKEY .+\napiKey: {...}\n
"},{"location":"authorino/docs/features/#common-feature-caching-cache","title":"Common feature: Caching (cache)","text":"

Objects resolved at runtime in an Auth Pipeline can be cached \"in-memory\", avoiding re-evaluation at subsequent requests until the cache entry expires. A lookup cache key and a TTL can be set individually for any evaluator config in an AuthConfig.

Each cache config induces a completely independent cache table (or \"cache namespace\"). Consequently, different evaluator configs can use the same cache key and there will be no collision between entries from different evaluators.

E.g.:

spec:\nhosts:\n- my-api.io\nidentity: [...]\nmetadata:\n- name: external-metadata\nhttp:\nendpoint: http://my-external-source?search={context.request.http.path}\ncache:\nkey:\nvalueFrom: { authJSON: context.request.http.path }\nttl: 300\nauthorization:\n- name: complex-policy\nopa:\nexternalRegistry:\nendpoint: http://my-policy-registry\ncache:\nkey:\nvalueFrom:\nauthJSON: \"{auth.identity.group}-{context.request.http.method}-{context.request.http.path}\"\nttl: 60\n

The example above sets caching for the 'external-metadata' metadata config and for the 'complex-policy' authorization policy. In the case of 'external-metadata', the cache key is the path of the original HTTP request being authorized by Authorino (fetched dynamically from the Authorization JSON); i.e., after obtaining a metadata object from the external source for a given contextual HTTP path for the first time, whenever that same HTTP path repeats in a subsequent request, Authorino will use the cached object instead of sending a new request to the external source of metadata. After 5 minutes (300 seconds), the cache entry will expire and Authorino will fetch from the source again if requested.

As for the 'complex-policy' authorization policy, the cache key is a string composed of the 'group' the identity belongs to, the method of the HTTP request and the path of the HTTP request. Whenever these repeat, Authorino will use the result of the policy that was evaluated and cached previously. Cache entries in this namespace expire after 60 seconds.

Notes on evaluator caching

Capacity - By default, each cache namespace is limited to 1 MB. Entries will be evicted following a First-In-First-Out (FIFO) policy to release space. The individual capacity of cache namespaces is set at the level of the Authorino instance (via the --evaluator-cache-size command-line flag or the spec.evaluatorCacheSize field of the Authorino CR).
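
As a hedged sketch, the capacity could be tuned in the Authorino CR (the value is assumed here to be in megabytes, matching the 1 MB default above):

apiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  evaluatorCacheSize: 2 # assumed per-namespace cache capacity, in MB\n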

Usage - Avoid caching objects whose evaluation is considered to be relatively cheap. Examples of operations associated with Authorino auth features that are usually NOT worth caching: validation of JSON Web Tokens (JWT), Kubernetes TokenReviews and SubjectAccessReviews, API key validation, simple JSON pattern-matching authorization rules, simple OPA policies. Examples of operations where caching may be desired: OAuth2 token introspection, fetching of metadata from external sources (via HTTP request), complex OPA policies.

"},{"location":"authorino/docs/features/#common-feature-metrics-metrics","title":"Common feature: Metrics (metrics)","text":"

By default, Authorino will only export metrics down to the level of the AuthConfig. Deeper metrics, at the level of each evaluator within an AuthConfig, can be activated by setting the common field metrics: true in the evaluator config.

E.g.:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\nname: my-authconfig\nnamespace: my-ns\nspec:\nmetadata:\n- name: my-external-metadata\nhttp:\nendpoint: http://my-external-source?search={context.request.http.path}\nmetrics: true\n

The above will enable the metrics auth_server_evaluator_duration_seconds (histogram) and auth_server_evaluator_total (counter) with labels namespace=\"my-ns\", authconfig=\"my-authconfig\", evaluator_type=\"METADATA_GENERIC_HTTP\" and evaluator_name=\"my-external-metadata\".

The same pattern works for other types of evaluators. Find below the list of all types and the corresponding label constants used in the metrics:

Evaluator type – Metric's evaluator_type label:
  • identity.apiKey – IDENTITY_APIKEY
  • identity.kubernetes – IDENTITY_KUBERNETES
  • identity.oidc – IDENTITY_OIDC
  • identity.oauth2 – IDENTITY_OAUTH2
  • identity.mtls – IDENTITY_MTLS
  • identity.hmac – IDENTITY_HMAC
  • identity.plain – IDENTITY_PLAIN
  • identity.anonymous – IDENTITY_NOOP
  • metadata.http – METADATA_GENERIC_HTTP
  • metadata.userInfo – METADATA_USERINFO
  • metadata.uma – METADATA_UMA
  • authorization.json – AUTHORIZATION_JSON
  • authorization.opa – AUTHORIZATION_OPA
  • authorization.kubernetes – AUTHORIZATION_KUBERNETES
  • response.json – RESPONSE_JSON
  • response.wristband – RESPONSE_WRISTBAND

Metrics at the level of the evaluators can also be enforced for an entire Authorino instance, by setting the --deep-metrics-enabled command-line flag. In this case, regardless of the value of the field spec.(identity|metadata|authorization|response).metrics in the AuthConfigs, individual metrics for all evaluators of all AuthConfigs will be exported.

For more information about metrics exported by Authorino, see Observability.

"},{"location":"authorino/docs/getting-started/","title":"Getting started","text":"

This page covers requirements and instructions to deploy Authorino on a Kubernetes cluster, as well as the steps to declare, apply and try out a protection layer of authentication and authorization over your service, plus clean-up and complete uninstallation.

If you prefer learning with an example, check out our Hello World.

  • Requirements
  • Installation
  • Protect a service
  • Clean-up
  • Next steps
"},{"location":"authorino/docs/getting-started/#requirements","title":"Requirements","text":""},{"location":"authorino/docs/getting-started/#platform-requirements","title":"Platform requirements","text":"

These are the platform requirements to use Authorino:

  • Kubernetes server (recommended v1.20 or later), with permission to create Kubernetes Custom Resource Definitions (CRDs) (for bootstrapping Authorino and Authorino Operator)
Alternative: K8s distros and platforms As an alternative to upstream Kubernetes, you should be able to use any other Kubernetes distribution or Kubernetes Management Platform (KMP) with support for Kubernetes Custom Resource Definitions (CRDs) and custom controllers, such as Red Hat OpenShift, IBM Cloud Kubernetes Service (IKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS).
  • Envoy proxy (recommended v1.19 or later), to wire up upstream services (i.e. the services to be protected with Authorino) and the external authorization filter (Authorino) for integrations based on the reverse-proxy architecture - example
Alternative: Non-reverse-proxy integration Technically, any client that implements Envoy's external authorization gRPC protocol should be compatible with Authorino. For integrations based on the reverse-proxy architecture, nevertheless, we strongly recommend that you leverage Envoy alongside Authorino."},{"location":"authorino/docs/getting-started/#feature-specific-requirements","title":"Feature-specific requirements","text":"

A few examples are:

  • For OpenID Connect, make sure you have access to an identity provider (IdP) and an authority that can issue ID tokens (JWTs). Check out Keycloak which can solve both and connect to external identity sources and user federation like LDAP.

  • For Kubernetes authentication tokens, platform support for the TokenReview and SubjectAccessReview APIs of Kubernetes is required. In case you want to be able to request access tokens for clients running outside the cluster, you may also want to check out the requisites for using the Kubernetes TokenRequest API (GA in v1.20).

  • For User-Managed Access (UMA) resource data, you will need a UMA-compliant server running as well. This can be an implementation of the UMA protocol by each upstream API itself or (more typically) an external server that knows about the resources. Again, Keycloak can be a good fit here as well. Just keep in mind that, whatever resource server you choose, state-changing actions performed in the upstream APIs or by other parties will have to be reflected in the resource server. Authorino will not do that for you.

Check out the Feature specification page for more feature-specific requirements.

"},{"location":"authorino/docs/getting-started/#installation","title":"Installation","text":""},{"location":"authorino/docs/getting-started/#step-install-the-authorino-operator","title":"Step: Install the Authorino Operator","text":"

The simplest way to install the Authorino Operator is by applying the manifest bundle:

kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n

The above will install the latest build of the Authorino Operator and the latest version of the manifests (CRDs and RBAC), which by default points as well to the latest build of Authorino, both based on the main branches of each component. To install a stable released version of the Operator, which in turn defaults to its latest compatible stable release of Authorino, replace main with the tag of a proper release of the Operator, e.g. 'v0.2.0'.
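
For example, substituting the 'v0.2.0' tag mentioned above into the same command:

kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/v0.2.0/config/deploy/manifests.yaml\n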

Alternatively, you can deploy the Authorino Operator using the Operator Lifecycle Manager bundles. For instructions, check out Installing via OLM.

"},{"location":"authorino/docs/getting-started/#step-request-an-authorino-instance","title":"Step: Request an Authorino instance","text":"

Choose either cluster-wide or namespaced deployment mode and whether you want TLS termination enabled for the Authorino endpoints (gRPC authorization, raw HTTP authorization, and OIDC Festival Wristband Discovery listeners), and follow the corresponding instructions below.

The instructions here are for the centralized gateway or centralized authorization service architecture. Check out the Topologies section of the docs for alternatives, such as running Authorino in a sidecar container.

Cluster-wide (with TLS) Create the namespace:
kubectl create namespace authorino\n
Deploy [cert-manager](https://github.com/jetstack/cert-manager) (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace or cert-manager is installed and running in the cluster):
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml\n
Create the TLS certificates (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace):
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed \"s/\\$(AUTHORINO_INSTANCE)/authorino/g;s/\\$(NAMESPACE)/authorino/g\" | kubectl -n authorino apply -f -\n
Deploy Authorino:
kubectl -n authorino apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  replicas: 1\n  clusterWide: true\n  listener:\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-server-cert\n  oidcServer:\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-oidc-server-cert\nEOF\n
Cluster-wide (without TLS)
kubectl create namespace authorino\nkubectl -n authorino apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  image: quay.io/kuadrant/authorino:latest\n  replicas: 1\n  clusterWide: true\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n
Namespaced (with TLS) Create the namespace:
kubectl create namespace myapp\n
Deploy [cert-manager](https://github.com/jetstack/cert-manager) (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace or cert-manager is installed and running in the cluster):
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml\n
Create the TLS certificates (skip if you already have certificates and certificate keys created and stored in Kubernetes `Secret`s in the namespace):
curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed \"s/\\$(AUTHORINO_INSTANCE)/authorino/g;s/\\$(NAMESPACE)/myapp/g\" | kubectl -n myapp apply -f -\n
Deploy Authorino:
kubectl -n myapp apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  image: quay.io/kuadrant/authorino:latest\n  replicas: 1\n  clusterWide: false\n  listener:\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-server-cert\n  oidcServer:\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-oidc-server-cert\nEOF\n
Namespaced (without TLS)
kubectl create namespace myapp\nkubectl -n myapp apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  image: quay.io/kuadrant/authorino:latest\n  replicas: 1\n  clusterWide: false\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n
"},{"location":"authorino/docs/getting-started/#protect-a-service","title":"Protect a service","text":"

The most typical integration to protect services with Authorino is by putting the service (upstream) behind a reverse-proxy or API gateway, enabled with an authorization filter that ensures all requests to the service are first checked with the authorization server (Authorino).

To do that, make sure you have your upstream service deployed and running, usually in the same Kubernetes server where you installed Authorino. Then, setup an Envoy proxy and create an Authorino AuthConfig for your service.

Authorino exposes 2 interfaces to serve the authorization requests:
  • a gRPC interface that implements Envoy's External Authorization protocol;
  • a raw HTTP authorization interface, suitable for using Authorino with Kubernetes ValidatingWebhook, for Envoy external authorization via HTTP, and other integrations (e.g. other proxies).

To use Authorino as a simple satellite (sidecar) Policy Decision Point (PDP), applications can integrate directly via any of these interfaces. By integrating via a proxy or API gateway, the combination makes Authorino perform as an external Policy Enforcement Point (PEP), completely decoupled from the application.

"},{"location":"authorino/docs/getting-started/#life-cycle","title":"Life cycle","text":""},{"location":"authorino/docs/getting-started/#step-setup-envoy","title":"Step: Setup Envoy","text":"

To configure Envoy for proxying requests targeting the upstream service and authorizing with Authorino, setup an Envoy configuration that enables Envoy's external authorization HTTP filter. Store the configuration in a ConfigMap.

These are the important bits in the Envoy configuration to activate Authorino:

static_resources:\nlisteners:\n- address: {\u2026} # TCP socket address and port of the proxy\nfilter_chains:\n- filters:\n- name: envoy.http_connection_manager\ntyped_config:\n\"@type\": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\nroute_config: {\u2026} # routing configs - virtual host domain and endpoint matching patterns and corresponding upstream services to redirect the traffic\nhttp_filters:\n- name: envoy.filters.http.ext_authz # the external authorization filter\ntyped_config:\n\"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz\ntransport_api_version: V3\nfailure_mode_allow: false # ensures only authenticated and authorized traffic goes through\ngrpc_service:\nenvoy_grpc:\ncluster_name: authorino\ntimeout: 1s\nclusters:\n- name: authorino\nconnect_timeout: 0.25s\ntype: strict_dns\nlb_policy: round_robin\nhttp2_protocol_options: {}\nload_assignment:\ncluster_name: authorino\nendpoints:\n- lb_endpoints:\n- endpoint:\naddress:\nsocket_address:\naddress: authorino-authorino-authorization # name of the Authorino service deployed \u2013 it can be the fully qualified name with `.<namespace>.svc.cluster.local` suffix (e.g. `authorino-authorino-authorization.myapp.svc.cluster.local`)\nport_value: 50051\ntransport_socket: # in case TLS termination is enabled in Authorino; omit it otherwise\nname: envoy.transport_sockets.tls\ntyped_config:\n\"@type\": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext\ncommon_tls_context:\nvalidation_context:\ntrusted_ca:\nfilename: /etc/ssl/certs/authorino-ca-cert.crt\n

For a complete Envoy ConfigMap containing an upstream API protected with Authorino, with TLS enabled and an option for rate limiting with Limitador, plus a webapp served under the same domain as the protected API, check out this example.

After creating the ConfigMap with the Envoy configuration, create an Envoy Deployment and Service. E.g.:

kubectl -n myapp apply -f -<<EOF\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: envoy\n  labels:\n    app: envoy\nspec:\n  selector:\n    matchLabels:\n      app: envoy\n  template:\n    metadata:\n      labels:\n        app: envoy\n    spec:\n      containers:\n        - name: envoy\n          image: envoyproxy/envoy:v1.19-latest\n          command: [\"/usr/local/bin/envoy\"]\n          args:\n            - --config-path /usr/local/etc/envoy/envoy.yaml\n            - --service-cluster front-proxy\n            - --log-level info\n            - --component-log-level filter:trace,http:debug,router:debug\n          ports:\n            - name: web\n              containerPort: 8000 # matches the address of the listener in the envoy config\n          volumeMounts:\n            - name: config\n              mountPath: /usr/local/etc/envoy\n              readOnly: true\n            - name: authorino-ca-cert # in case TLS termination is enabled in Authorino; omit it otherwise\n              subPath: ca.crt\n              mountPath: /etc/ssl/certs/authorino-ca-cert.crt\n              readOnly: true\n      volumes:\n        - name: config\n          configMap:\n            name: envoy\n            items:\n              - key: envoy.yaml\n                path: envoy.yaml\n        - name: authorino-ca-cert # in case TLS termination is enabled in Authorino; omit it otherwise\n          secret:\n            defaultMode: 420\n            secretName: authorino-ca-cert\n  replicas: 1\nEOF\n
kubectl -n myapp apply -f -<<EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: envoy\nspec:\n  selector:\n    app: envoy\n  ports:\n    - name: web\n      port: 8000\n      protocol: TCP\nEOF\n
"},{"location":"authorino/docs/getting-started/#step-apply-an-authconfig","title":"Step: Apply an AuthConfig","text":"

Check out the docs for a full description of Authorino's AuthConfig Custom Resource Definition (CRD) and its features.

For examples based on specific use-cases, check out the User guides.

For authentication based on OpenID Connect (OIDC) JSON Web Tokens (JWT), plus one simple JWT claim authorization check, a typical AuthConfig custom resource looks like the following:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: my-api-protection\nspec:\n  hosts: # any hosts that resolve to the envoy service and envoy routing config where the external authorization filter is enabled\n    - my-api.io # north-south traffic through a Kubernetes `Ingress` or OpenShift `Route`\n    - my-api.myapp.svc.cluster.local # east-west traffic (between applications within the cluster)\n  identity:\n    - name: idp-users\n      oidc:\n        endpoint: https://my-idp.com/auth/realm\n  authorization:\n    - name: check-claim\n      json:\n        rules:\n          - selector: auth.identity.group\n            operator: eq\n            value: allowed-users\nEOF\n

After applying the AuthConfig, consumers of the protected service should be able to start sending requests.

"},{"location":"authorino/docs/getting-started/#clean-up","title":"Clean-up","text":""},{"location":"authorino/docs/getting-started/#remove-protection","title":"Remove protection","text":"

Delete the AuthConfig:

kubectl -n myapp delete authconfig/my-api-protection\n

Decommission the Authorino instance:

kubectl -n myapp delete authorino/authorino\n
"},{"location":"authorino/docs/getting-started/#uninstall","title":"Uninstall","text":"

To completely remove Authorino CRDs, run from the Authorino Operator directory:

make uninstall\n
"},{"location":"authorino/docs/getting-started/#next-steps","title":"Next steps","text":"
  1. Read the docs. The Architecture page and the Features page are good starting points to learn more about how Authorino works and its functionalities.
  2. Check out the User guides for several examples of AuthConfigs based on specific use-cases
"},{"location":"authorino/docs/terminology/","title":"Terminology","text":"

Here we define some terms that are used in the project, with the goal of avoiding confusion and facilitating more accurate conversations related to Authorino.

If you see terms used that are not here (or are used in place of terms here) please consider contributing a definition to this doc with a PR, or modifying the use elsewhere to align with these terms.

"},{"location":"authorino/docs/terminology/#terms","title":"Terms","text":"

Access token Type of temporary password (security token), tied to an authenticated identity, issued by an auth server upon request from either the identity subject itself or a registered auth client known by the auth server, and that delegates to a party powers to operate on behalf of that identity before a resource server; it can be formatted as an opaque data string or as an encoded JSON Web Token (JWT).

Application Programming Interface (API) Interface that defines interactions between multiple software applications; (in HTTP communication) set of endpoints and specification to expose resources hosted by a resource server, to be consumed by client applications; the access facade of a resource server.

Attribute-based Access Control (ABAC) Authorization model that grants/denies access to resources based on evaluation of authorization policies which combine attributes together (from claims, from the request, from the resource, etc).

Auth Usually employed as a short for authentication and authorization together (AuthN/AuthZ).

Auth client Application client (software) that uses an auth server, either in the process of authenticating and/or authorizing identity subjects (including self) who want to consume resources from a resource server or auth server.

Auth server Server where auth clients, users, roles, scopes, resources, policies and permissions can be stored and managed.

Authentication (AuthN) Process of verifying that a given credential belongs to a claimed-to-be identity; usually resulting in the issuing of an access token.

Authorization (AuthZ) Process of granting (or denying) access over a resource to a party based on the set of authorization rules, policies and/or permissions enforced.

Authorization header HTTP request header frequently used to carry credentials to authenticate a user in an HTTP communication, like in requests sent to an API; alternatives usually include credentials carried in another (custom) HTTP header, query string parameter or HTTP cookie.

Capability Usually employed to refer to a management feature of a Kubernetes-native system, based on the definition and use of Kubernetes Custom Resources (CRDs and CRs), that enables that system to one of the following \"capability levels\": Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, Auto Pilot.

Claim Attribute packed in a security token which represents a claim that one who bears the token is making about an entity, usually an identity subject.

Client ID Unique identifier of an auth client within an auth server domain (or auth server realm).

Client secret Password presented by auth clients together with their Client IDs while authenticating with an auth server, either when requesting access tokens to be issued or when consuming services from the auth servers in general.

Delegation Process of granting a party (usually an auth client) with powers to act, often with limited scope, on behalf of an identity, to access resources from a resource server. See also OAuth2.

Hash-based Message Authentication Code (HMAC) Specific type of message authentication code (MAC) that involves a cryptographic hash function and a shared secret cryptographic key; it can be used to verify the authenticity of a message and therefore as an authentication method.

Identity Set of properties that qualifies a subject as a strong identifiable entity (usually a user), who can be authenticated by an auth server. See also Claims.

Identity and Access Management (IAM) system Auth system that implements and/or connects with sources of identity (IdP) and offers interfaces for managing access (authorization policies and permissions). See also Auth server.

Identity Provider (IdP) Source of identity; it can be a feature of an auth server or external source connected to an auth server.

ID token Special type of access token; an encoded JSON Web Token (JWT) that packs claims about an identity.

JSON Web Token (JWT) JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.

JSON Web Signature (JWS) Standard for signing arbitrary data, especially JSON Web Tokens (JWT).

JSON Web Key Set (JWKS) Set of keys containing the public keys used to verify any JSON Web Token (JWT).

Keycloak Open source auth server to allow single sign-on with identity and access management.

Lightweight Directory Access Protocol (LDAP) Open standard for distributed directory information services for sharing of information about users, systems, networks, services and applications.

Mutual Transport Layer Security (mTLS) Protocol for the mutual authentication of client-server communication, i.e., the client authenticates the server and the server authenticates the client, based on the acceptance of the X.509 certificates of each party.

OAuth 2.0 (OAuth2) Industry-standard protocol for delegation.

OpenID Connect (OIDC) Simple identity verification (authentication) layer built on top of the OAuth2 protocol.

Open Policy Agent (OPA) Authorization policy agent that enables the usage of declarative authorization policies written in Rego language.

Opaque token Security token devoid of explicit meaning (e.g. random string); it requires the use of a lookup mechanism to be translated into a meaningful set of claims representing an identity.

Permission Association between a protected resource and the authorization policies that must be evaluated to determine whether access should be granted; e.g. <user|group|role> CAN DO <action> ON RESOURCE <X>.

Policy Rule or condition (authorization policy) that must be satisfied to grant access to a resource; strongly related to the different access control mechanisms (ACMs) and strategies one can use to protect resources, e.g. attribute-based access control (ABAC), role-based access control (RBAC), context-based access control, user-based access control (UBAC).

Policy Administration Point (PAP) Set of UIs and APIs to manage resource servers, resources, scopes, policies and permissions; it is where the auth system is configured.

Policy Decision Point (PDP) Where the authorization requests are sent, with permissions being requested, and authorization policies are evaluated accordingly.

Policy Enforcement Point (PEP) Where the authorization is effectively enforced, usually at the resource server or at a proxy, based on a response provided by the Policy Decision Point (PDP).

Policy storage Where policies are stored and from where they can be fetched, perhaps to be cached.

Red Hat SSO Auth server; downstream product created from the Keycloak Open Source project.

Refresh token Special type of security token, often provided together with an access token in an OAuth2 flow, used to renew the duration of an access token before it expires; it requires client authentication.

Request Party Token (RPT) JSON Web Token (JWT) digitally signed using JSON Web Signature (JWS), issued by the Keycloak auth server.

Resource One or more endpoints of a system, API or server, that can be protected.

Resource-level Access Control (RLAC) Authorization model that takes into consideration attributes of each specific request resource to grant/deny access to those resources (e.g. the resource's owner).

Resource server Server that hosts protected resources.

Role Aspect of a user's identity assigned to the user to indicate the level of access they should have to the system; essentially, roles represent collections of permissions.

Role-based Access Control (RBAC) Authorization model that grants/denies access to resources based on the roles of authenticated users (rather than on complex attributes/policy rules).

Scope Mechanism that defines the specific operations that applications can be allowed to do or information that they can request on an identity's behalf; often presented as a parameter when access is requested as a way to communicate what access is needed, and used by the auth server to respond with what access is actually granted.

Single Page Application (SPA) Web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server.

Single Sign-on (SSO) Authentication scheme that allows a user to log in with a single ID and password to any of several related, yet independent, software systems.

Upstream (In the context of authentication/authorization) API whose endpoints must be protected by the auth system; the unprotected service in front of which a protection layer is added (by connecting with a Policy Decision Point).

User-based Access Control (UBAC) Authorization model that grants/denies access to resources based on claims of the identity (attributes of the user).

User-Managed Access (UMA) OAuth2-based access management protocol, used for users of an auth server to control the authorization process, i.e. directly granting/denying access to user-owned resources to other requesting parties.

"},{"location":"authorino/docs/user-guides/","title":"User guides","text":"
  • Hello World The basics of protecting an API with Authorino.

  • Authentication with Kubernetes tokens (TokenReview API) Validate Kubernetes Service Account tokens to authenticate requests to your protected hosts.

  • Authentication with API keys Issue API keys stored in Kubernetes Secrets for clients to authenticate with your protected hosts.

  • Authentication with X.509 certificates and mTLS Verify client X.509 certificates against trusted root CAs.

  • OpenID Connect Discovery and authentication with JWTs Validate JSON Web Tokens (JWT) issued and signed by an OpenID Connect server; leverage OpenID Connect Discovery to automatically fetch JSON Web Key Sets (JWKS).

  • OAuth 2.0 token introspection (RFC 7662) Introspect OAuth 2.0 access tokens (e.g. opaque tokens) for online user data and token validation at request time.

  • Passing credentials (Authorization header, cookie headers and others) Customize where credentials are supplied in the request by each trusted source of identity.

  • HTTP \"Basic\" Authentication (RFC 7235) Turn Authorino API key Secrets settings into HTTP basic auth.

  • Anonymous access Bypass identity verification or fall back to anonymous access when credentials fail to validate.

  • Token normalization Normalize identity claims from trusted sources and reduce complexity in your policies.

  • Edge Authentication Architecture (EAA) Exchange satellite (outer-layer) authentication tokens for "Festival Wristbands" accepted ubiquitously inside your network. Normalize from multiple and varied sources of identity and authentication methods at the edge of your architecture; filter privacy data, limit the scope of permissions, and simplify authorization rules to your internal microservices.

  • Fetching auth metadata from external sources Get online data from remote HTTP services to enhance authorization rules.

  • OpenID Connect UserInfo Fetch user info for OpenID Connect ID tokens at request time, for extra metadata for your policies and online verification of token validity.

  • Resource-level authorization with User-Managed Access (UMA) resource registry Fetch resource attributes relevant for authorization from a User-Managed Access (UMA) resource registry such as Keycloak resource server clients.

  • Simple pattern-matching authorization policies Write simple authorization rules based on JSON patterns matched against Authorino's Authorization JSON; check contextual information of the request, validate JWT claims, cross metadata fetched from external sources, etc.

  • OpenID Connect (OIDC) and Role-Based Access Control (RBAC) with Authorino and Keycloak Combine OpenID Connect (OIDC) authentication and Role-Based Access Control (RBAC) authorization rules leveraging Keycloak and Authorino working together.

  • Open Policy Agent (OPA) Rego policies Leverage the power of Open Policy Agent (OPA) policies, evaluated against Authorino's Authorization JSON in a built-in runtime compiled together with Authorino; pre-cache policies defined in Rego language inline or fetched from an external policy registry.

  • Kubernetes RBAC for service authorization (SubjectAccessReview API) Manage permissions in the Kubernetes RBAC and let Authorino check them at request time with the authorization system of the cluster.

  • Authorization with Keycloak Authorization Services Use Authorino as an adapter for Keycloak Authorization Services without importing any library or rebuilding your application code.

  • Integration with Authzed/SpiceDB Permission requests sent to a Google Zanzibar-based Authzed/SpiceDB instance, via gRPC.

  • Injecting data in the request Inject HTTP headers with serialized JSON content.

  • Authenticated rate limiting (with Envoy Dynamic Metadata) Provide Envoy with dynamic metadata from the external authorization process to be injected and used by consecutive filters, such as by a rate limiting service.

  • Redirecting to a login page Customize response status code and headers on failed requests. E.g. redirect users of a web application protected with Authorino to a login page instead of a 401 Unauthorized; mask resources on access denied behind a 404 Not Found response instead of 403 Forbidden.

  • Mixing Envoy built-in filter for auth and Authorino Have JWT validation handled by Envoy beforehand and the JWT payload injected into the request to Authorino, to be used in custom authorization policies defined in a AuthConfig.

  • Host override via context extension Induce the lookup of an AuthConfig by supplying extended host context, for use cases such as path prefix-based lookup and wildcard subdomain lookup.

  • Using Authorino as ValidatingWebhook service Use Authorino as a generic Kubernetes ValidatingWebhook service where the rules to validate a request to the Kubernetes API are written in an AuthConfig.

  • Reducing the operational space: sharding, noise and multi-tenancy Have multiple instances of Authorino running in the same space (Kubernetes namespace or cluster-scoped), yet watching particular sets of resources.

  • Caching Cache auth objects resolved at runtime for any configuration bit of an AuthConfig, for easy access in subsequent requests whenever an arbitrary cache key repeats, until the cache entry expires.

  • Observability Prometheus metrics exported by Authorino, readiness probe, logging, tracing, etc.

"},{"location":"authorino/docs/user-guides/anonymous-access/","title":"User guide: Anonymous access","text":"

Bypass identity verification or fall back to anonymous access when credentials fail to validate

Authorino features in this guide:
  • Identity verification & authentication → Anonymous access
For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/anonymous-access/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/anonymous-access/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/anonymous-access/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/anonymous-access/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/anonymous-access/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/anonymous-access/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: public\n    anonymous: {}\nEOF\n

The example above enables anonymous access (i.e. removes authentication), without adding any extra layer of protection to the API. This is virtually equivalent to setting a top-level condition on the AuthConfig that always skips the configuration, or to switching authentication/authorization off completely in the route to the API.

For more sophisticated use cases of anonymous access with Authorino, consider combining this feature with other identity sources in the AuthConfig while playing with the priorities of each source, as well as combining it with when conditions, and/or adding authorization policies that either cover authentication or address anonymous access with proper rules (e.g. enforcing read-only access).

Check out the docs for the Anonymous access feature for an example of an AuthConfig that falls back to anonymous access when a higher-priority OIDC/JWT-based authentication fails, and enforces a read-only policy in such cases.
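
As an illustration, a minimal sketch of such a fallback could look like the AuthConfig below. This is not a verbatim copy of the referred example: the OIDC endpoint is a placeholder, and the extendedProperties marker used to flag anonymous identities in the when condition is an assumption of this sketch.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: jwt-users\n    priority: 0\n    oidc:\n      endpoint: https://my-oidc-server.example.com/realm # placeholder: replace with a real OIDC issuer\n  - name: anonymous\n    priority: 1\n    anonymous: {}\n    extendedProperties:\n    - name: anonymous # assumed marker so policies can tell anonymous identities apart\n      value: true\n  authorization:\n  - name: read-only-for-anonymous\n    when:\n    - selector: auth.identity.anonymous\n      operator: eq\n      value: \"true\"\n    json:\n      rules:\n      - selector: context.request.http.method\n        operator: eq\n        value: GET\nEOF\n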

"},{"location":"authorino/docs/user-guides/anonymous-access/#6-consume-the-api","title":"6. Consume the API","text":"
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/anonymous-access/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/","title":"User guide: Authentication with API keys","text":"

Issue API keys stored in Kubernetes Secrets for clients to authenticate with your protected hosts.

Authorino features in this guide:
  • Identity verification & authentication → API key
In Authorino, API keys are stored as Kubernetes `Secret`s. Each resource must contain an `api_key` entry with the value of the API key, and labeled to match the selectors specified in `spec.identity.apiKey.selector` of the `AuthConfig`. API key `Secret`s must also include labels that match the `secretLabelSelector` field of the Authorino instance. See [Resource reconciliation and status update](../architecture.md#resource-reconciliation-and-status-update) for details. For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/api-key-authentication/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/api-key-authentication/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\nEOF\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#7-consume-the-api","title":"7. Consume the API","text":"

With a valid API key:

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

With missing or invalid API key:

curl -H 'Authorization: APIKEY invalid' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: APIKEY realm=\"friends\"\n# x-ext-auth-reason: the API Key provided is invalid\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#8-delete-an-api-key-revoke-access-to-the-api","title":"8. Delete an API key (revoke access to the API)","text":"
kubectl delete secret/api-key-1\n
"},{"location":"authorino/docs/user-guides/api-key-authentication/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/","title":"User guide: Authenticated rate limiting (with Envoy Dynamic Metadata)","text":"

Provide Envoy with dynamic metadata about the external authorization process to be injected into the rate limiting filter.

Authorino features in this guide:
  • Dynamic response → Response wrappers → Envoy Dynamic Metadata
  • Dynamic response → JSON injection
  • Identity verification & authentication → API key
Dynamic JSON objects built out of static values and values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json) can be wrapped to be returned to the reverse-proxy as Envoy Well Known Dynamic Metadata content. Envoy can use those to inject data returned by the external authorization service into the other filters, such as the rate limiting filter. Check out as well the user guides about [Injecting data in the request](./injecting-data.md) and [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#4-deploy-limitador","title":"4. Deploy Limitador","text":"

Limitador is a lightweight rate limiting service that can be used with Envoy.

In this bundle, we deploy Limitador pre-configured to limit requests to the talker-api domain to up to 5 requests per 60-second interval, per user_id. Envoy will be configured to recognize the presence of Limitador and activate it on requests to the Talker API.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml\n
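
For reference, the limit pre-configured in the bundle above corresponds roughly to the following entry in Limitador's limits file (a sketch, not the exact manifest content):

- namespace: talker-api\n  max_value: 5\n  seconds: 60\n  conditions: []\n  variables:\n  - user_id\n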
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#5-setup-envoy","title":"5. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#6-create-the-authconfig","title":"6. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n  response:\n  - name: rate-limit\n    wrapper: envoyDynamicMetadata\n    wrapperKey: ext_auth_data # how this bit of dynamic metadata from the ext authz service is named in the Envoy config\n    json:\n      properties:\n      - name: username\n        valueFrom:\n          authJSON: auth.identity.metadata.annotations.auth-data\\/username\nEOF\n

An annotation auth-data/username will be read from the Kubernetes Secrets storing valid API keys and passed as dynamic metadata { "ext_auth_data": { "username": «annotations.auth-data/username» } }.
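
On the Envoy side, a rate limit action along the lines of the sketch below could turn that dynamic metadata into a user_id descriptor for the rate limiting service (assuming the ext_authz filter emits metadata under the default envoy.filters.http.ext_authz namespace; the actual bundle configuration may differ):

rate_limits:\n- actions:\n  - metadata:\n      descriptor_key: user_id # descriptor entry sent to the rate limit service\n      metadata_key:\n        key: envoy.filters.http.ext_authz\n        path:\n        - key: ext_auth_data # the wrapperKey set in the AuthConfig\n        - key: username\n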

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.

"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#7-create-a-couple-of-api-keys","title":"7. Create a couple of API keys","text":"

For user John:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\n  annotations:\n    auth-data/username: john\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n

For user Jane:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-2\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\n  annotations:\n    auth-data/username: jane\nstringData:\n  api_key: 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#8-consume-the-api","title":"8. Consume the API","text":"

As John:

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Repeat the request a few more times within the 60-second time window, until the response status is 429 Too Many Requests.

While the API is still limited to John, send requests as Jane:

curl -H 'Authorization: APIKEY 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete secret/api-key-2\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/limitador/limitador-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/authzed/","title":"User guide: Integration with Authzed/SpiceDB","text":"

Permission requests sent to a Google Zanzibar-based Authzed/SpiceDB instance, via gRPC.

Authorino features in this guide:
  • Authorization → Authzed/SpiceDB
  • Identity verification & authentication → API key

"},{"location":"authorino/docs/user-guides/authzed/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/authzed/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/authzed/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/authzed/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/authzed/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/authzed/#5-create-the-permission-database","title":"5. Create the permission database","text":"

Create the namespace:

kubectl create namespace spicedb\n

Create the SpiceDB instance:

kubectl -n spicedb apply -f -<<EOF\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: spicedb\n  labels:\n    app: spicedb\nspec:\n  selector:\n    matchLabels:\n      app: spicedb\n  template:\n    metadata:\n      labels:\n        app: spicedb\n    spec:\n      containers:\n      - name: spicedb\n        image: authzed/spicedb\n        args:\n        - serve\n        - \"--grpc-preshared-key\"\n        - secret\n        - \"--http-enabled\"\n        ports:\n        - containerPort: 50051\n        - containerPort: 8443\n  replicas: 1\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: spicedb\nspec:\n  selector:\n    app: spicedb\n  ports:\n    - name: grpc\n      port: 50051\n      protocol: TCP\n    - name: http\n      port: 8443\n      protocol: TCP\nEOF\n

Forward local requests to the SpiceDB service:

kubectl -n spicedb port-forward service/spicedb 8443:8443 2>&1 >/dev/null &\n

Create the permission schema:

curl -X POST http://localhost:8443/v1/schema/write \\\n-H 'Authorization: Bearer secret' \\\n-H 'Content-Type: application/json' \\\n-d @- << EOF\n{\n  \"schema\": \"definition blog/user {}\\ndefinition blog/post {\\n\\trelation reader: blog/user\\n\\trelation writer: blog/user\\n\\n\\tpermission read = reader + writer\\n\\tpermission write = writer\\n}\"\n}\nEOF\n
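
For readability, the escaped schema string in the request above corresponds to the following SpiceDB schema:

definition blog/user {}\n\ndefinition blog/post {\n  relation reader: blog/user\n  relation writer: blog/user\n\n  permission read = reader + writer\n  permission write = writer\n}\n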

Create the relationships:

  • blog/user:emilia \u2192 writer of blog/post:1
  • blog/user:beatrice \u2192 reader of blog/post:1
curl -X POST http://localhost:8443/v1/relationships/write \\\n-H 'Authorization: Bearer secret' \\\n-H 'Content-Type: application/json' \\\n-d @- << EOF\n{\n  \"updates\": [\n    {\n      \"operation\": \"OPERATION_CREATE\",\n      \"relationship\": {\n        \"resource\": {\n          \"objectType\": \"blog/post\",\n          \"objectId\": \"1\"\n        },\n        \"relation\": \"writer\",\n        \"subject\": {\n          \"object\": {\n            \"objectType\": \"blog/user\",\n            \"objectId\": \"emilia\"\n          }\n        }\n      }\n    },\n    {\n      \"operation\": \"OPERATION_CREATE\",\n      \"relationship\": {\n        \"resource\": {\n          \"objectType\": \"blog/post\",\n          \"objectId\": \"1\"\n        },\n        \"relation\": \"reader\",\n        \"subject\": {\n          \"object\": {\n            \"objectType\": \"blog/user\",\n            \"objectId\": \"beatrice\"\n          }\n        }\n      }\n    }\n  ]\n}\nEOF\n
"},{"location":"authorino/docs/user-guides/authzed/#6-create-the-authconfig","title":"6. Create the AuthConfig","text":"

Store the shared token for Authorino to authenticate with the SpiceDB instance in a Kubernetes Secret:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: spicedb\n  labels:\n    app: spicedb\nstringData:\n  grpc-preshared-key: secret\nEOF\n

Create the AuthConfig:

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: blog-users\n    apiKey:\n      selector:\n        matchLabels:\n          app: talker-api\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n  authorization:\n  - name: authzed\n    authzed:\n      endpoint: spicedb.spicedb.svc.cluster.local:50051\n      insecure: true\n      sharedSecretRef:\n        name: spicedb\n        key: grpc-preshared-key\n      subject:\n        kind:\n          value: blog/user\n        name:\n          valueFrom:\n            authJSON: auth.identity.metadata.annotations.username\n      resource:\n        kind:\n          value: blog/post\n        name:\n          valueFrom:\n            authJSON: context.request.http.path.@extract:{\"sep\":\"/\",\"pos\":2}\n      permission:\n        valueFrom:\n          authJSON: context.request.http.method.@replace:{\"old\":\"GET\",\"new\":\"read\"}.@replace:{\"old\":\"POST\",\"new\":\"write\"}\nEOF\n
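
To make the value extraction above concrete, here is how requests would resolve into SpiceDB permission checks, assuming the API keys created in the next step:

# GET  /posts/1 with Emilia's API key   -> does blog/user:emilia have permission 'read' on blog/post:1?\n# POST /posts/1 with Beatrice's API key -> does blog/user:beatrice have permission 'write' on blog/post:1?\n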
"},{"location":"authorino/docs/user-guides/authzed/#7-create-the-api-keys","title":"7. Create the API keys","text":"

For Emilia (writer):

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-writer\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    app: talker-api\n  annotations:\n    username: emilia\nstringData:\n  api_key: IAMEMILIA\nEOF\n

For Beatrice (reader):

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-reader\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    app: talker-api\n  annotations:\n    username: beatrice\nstringData:\n  api_key: IAMBEATRICE\nEOF\n
"},{"location":"authorino/docs/user-guides/authzed/#8-consume-the-api","title":"8. Consume the API","text":"

As Emilia, send a GET request:

curl -H 'Authorization: APIKEY IAMEMILIA' \\\n-X GET \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i\n# HTTP/1.1 200 OK\n

As Emilia, send a POST request:

curl -H 'Authorization: APIKEY IAMEMILIA' \\\n-X POST \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i\n# HTTP/1.1 200 OK\n

As Beatrice, send a GET request:

curl -H 'Authorization: APIKEY IAMBEATRICE' \\\n-X GET \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i\n# HTTP/1.1 200 OK\n

As Beatrice, send a POST request:

curl -H 'Authorization: APIKEY IAMBEATRICE' \\\n-X POST \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/posts/1 -i\n# HTTP/1.1 403 Forbidden\n# x-ext-auth-reason: PERMISSIONSHIP_NO_PERMISSION;token=GhUKEzE2NzU3MDE3MjAwMDAwMDAwMDA=\n
"},{"location":"authorino/docs/user-guides/authzed/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-writer\nkubectl delete secret/api-key-reader\nkubectl delete secret/spicedb\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace spicedb\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/caching/","title":"User guide: Caching","text":"

Cache auth objects resolved at runtime for any configuration bit of an AuthConfig (i.e. any evaluator), of any phase (identity, metadata, authorization and dynamic response), for easy access in subsequent requests, whenever an arbitrary (user-defined) cache key repeats, until the cache entry expires.

This is particularly useful for configuration bits whose evaluation is significantly more expensive than accessing the cache. E.g.:

  • Caching of metadata fetched from external sources in general
  • Caching of previously validated identity access tokens (e.g. for OAuth2 opaque tokens that involve consuming the token introspection endpoint of an external auth server)
  • Caching of complex Rego policies that involve sending requests to external services

Cases where one will NOT want to enable caching, because the evaluation is relatively cheap compared to accessing and managing the cache:
  • Validation of OIDC/JWT access tokens
  • OPA/Rego policies that do not involve external requests
  • JSON pattern-matching authorization
  • Dynamic JSON responses
  • Anonymous access

Authorino features in this guide:
  • Common feature → Caching
  • Identity verification & authentication → Anonymous access
  • External auth metadata → HTTP GET/GET-by-POST
  • Authorization → Open Policy Agent (OPA) Rego policies
  • Dynamic response → JSON injection
For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/caching/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/caching/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/caching/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/caching/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/caching/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/caching/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: anonymous\n    anonymous: {}\n  metadata:\n  - name: cached-metadata\n    http:\n      endpoint: http://talker-api.default.svc.cluster.local:3000/metadata/{context.request.http.path}\n      method: GET\n    cache:\n      key:\n        valueFrom: { authJSON: context.request.http.path }\n      ttl: 60\n  authorization:\n  - name: cached-authz\n    opa:\n      inlineRego: |\n        now = time.now_ns()\n        allow = true\n      allValues: true\n    cache:\n      key:\n        valueFrom: { authJSON: context.request.http.path }\n      ttl: 60\n  response:\n  - name: x-authz-data\n    json:\n      properties:\n      - name: cached-metadata\n        valueFrom: { authJSON: auth.metadata.cached-metadata.uuid }\n      - name: cached-authz\n        valueFrom: { authJSON: auth.authorization.cached-authz.now }\nEOF\n

The example above enables caching for the external source of metadata, which in this case, for convenience, is the same upstream API protected by Authorino (i.e. the Talker API), though consumed directly by Authorino, without passing through the proxy. This API generates a random uuid and injects it in the JSON response. This value is different in every request processed by the API.

The example also enables caching of returned OPA virtual documents. cached-authz is a trivial Rego policy that always grants access, but generates a timestamp, which Authorino will cache.

In both cases, the path of the HTTP request is used as the cache key. I.e., whenever the path repeats, Authorino reuses the values previously stored in each cache table (cached-metadata and cached-authz), respectively saving a request to the external source of metadata and the evaluation of the OPA policy. In both cases, cache entries expire 60 seconds after they were stored.

The cached values will be visible in the response returned by the Talker API, in the x-authz-data header injected by Authorino. This way, we can tell when an existing value in the cache was used and when a new one was generated and stored.
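
The cache key is arbitrary. As a hypothetical variation (not part of this guide), the cached metadata could instead be keyed on a client-supplied header, e.g. an assumed x-client-id header, so each client gets its own cache entry:

metadata:\n- name: cached-metadata\n  http:\n    endpoint: http://talker-api.default.svc.cluster.local:3000/metadata/{context.request.http.path}\n    method: GET\n  cache:\n    key:\n      valueFrom: { authJSON: context.request.http.headers.x-client-id } # assumed header, one cache entry per client\n    ttl: 300\n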

"},{"location":"authorino/docs/user-guides/caching/#6-consume-the-api","title":"6. Consume the API","text":"
  1. To /hello
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# […]\n#  \"X-Authz-Data\": \"{\\\"cached-authz\\\":\\\"1649343067462380300\\\",\\\"cached-metadata\\\":\\\"92c111cd-a10f-4e86-8bf0-e0cd646c6f79\\\"}\",\n# […]\n
  2. To a different path
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/goodbye\n# […]\n#  \"X-Authz-Data\": \"{\\\"cached-authz\\\":\\\"1649343097860450300\\\",\\\"cached-metadata\\\":\\\"37fce386-1ee8-40a7-aed1-bf8a208f283c\\\"}\",\n# […]\n
  3. To /hello again before the cache entry expires (60 seconds from the first request sent to this path)
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# […]\n#  \"X-Authz-Data\": \"{\\\"cached-authz\\\":\\\"1649343067462380300\\\",\\\"cached-metadata\\\":\\\"92c111cd-a10f-4e86-8bf0-e0cd646c6f79\\\"}\",  <=== same cache-id as before\n# […]\n
  4. To /hello again after the cache entry expires (60 seconds from the first request sent to this path)
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# […]\n#  \"X-Authz-Data\": \"{\\\"cached-authz\\\":\\\"1649343135702743800\\\",\\\"cached-metadata\\\":\\\"e708a3a6-5caf-4028-ab5c-573ad9be7188\\\"}\",  <=== different cache-id\n# […]\n
"},{"location":"authorino/docs/user-guides/caching/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/","title":"User guide: Redirecting to a login page","text":"

Customize response status code and headers on failed requests to redirect users of a web application protected with Authorino to a login page instead of a 401 Unauthorized.

Authorino features in this guide:
  • Dynamic response → Custom denial status
  • Identity verification & authentication → API key
  • Identity verification & authentication → OpenID Connect (OIDC) JWT/JOSE verification and validation
Authorino's default response status codes, messages and headers for unauthenticated (`401`) and unauthorized (`403`) requests can be customized with static values and values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json). Check out as well the user guides about [HTTP "Basic" Authentication (RFC 7235)](./http-basic-authentication.md) and [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#2-deploy-the-matrix-quotes-web-application","title":"2. Deploy the Matrix Quotes web application","text":"

Matrix Quotes is a static web application containing quotes from the film The Matrix.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/matrix-quotes-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Matrix Quotes webapp behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/envoy-deploy.yaml\n

The bundle also creates an Ingress with host name matrix-quotes-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: matrix-quotes-protection\nspec:\n  hosts:\n  - matrix-quotes-authorino.127.0.0.1.nip.io\n  identity:\n  - name: browser-users\n    apiKey:\n      selector:\n        matchLabels:\n          group: users\n    credentials:\n      in: cookie\n      keySelector: TOKEN\n  - name: http-basic-auth\n    apiKey:\n      selector:\n        matchLabels:\n          group: users\n    credentials:\n      in: authorization_header\n      keySelector: Basic\n  denyWith:\n    unauthenticated:\n      code: 302\n      headers:\n      - name: Location\n        valueFrom:\n          authJSON: http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/login.html?redirect_to={context.request.http.path}\nEOF\n

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.
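
Before any API key exists, the redirect behavior can be verified from the command line (expected output shown as comments; the exact status line and header casing may vary):

curl -i http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/neo.html\n# HTTP/1.1 302 Found\n# location: http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/login.html?redirect_to=/neo.html\n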

"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: user-credential-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: users\nstringData:\n  api_key: am9objpw # john:p\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#7-consume-the-application","title":"7. Consume the application","text":"

On a web browser, navigate to http://matrix-quotes-authorino.127.0.0.1.nip.io:8000.

Click on the cards to read quotes from characters of the movie. You should be redirected to the login page.

Log in using John's credentials:
  • Username: john
  • Password: p

Click again on the cards and check that now you are able to access the inner pages.

You can also consume a protected endpoint of the application using HTTP Basic Authentication:

curl -u john:p http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/neo.html\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#8-optional-modify-the-authconfig-to-authenticate-with-oidc","title":"8. (Optional) Modify the AuthConfig to authenticate with OIDC","text":""},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#setup-a-keycloak-server","title":"Setup a Keycloak server","text":"

Deploy a Keycloak server preloaded with a realm named kuadrant:

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Resolve the local Keycloak domain so it can be accessed from the local host and from inside the cluster by the same name (this will be needed to redirect to Keycloak's login page and, at the same time, to validate issued tokens):

echo '127.0.0.1 keycloak' >> /etc/hosts\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl port-forward deployment/keycloak 8080:8080 &\n

Create a client:

curl -H \"Authorization: Bearer $(curl http://keycloak:8080/auth/realms/master/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=admin-cli' -d 'username=admin' -d 'password=p' | jq -r .access_token)\" \\\n-H 'Content-type: application/json' \\\n-d '{ \"name\": \"matrix-quotes\", \"clientId\": \"matrix-quotes\", \"publicClient\": true, \"redirectUris\": [\"http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/auth*\"], \"enabled\": true }' \\\nhttp://keycloak:8080/auth/admin/realms/kuadrant/clients\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#reconfigure-the-matrix-quotes-app-to-use-keycloaks-login-page","title":"Reconfigure the Matrix Quotes app to use Keycloak's login page","text":"
kubectl set env deployment/matrix-quotes KEYCLOAK_REALM=http://keycloak:8080/auth/realms/kuadrant CLIENT_ID=matrix-quotes\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#apply-the-changes-to-the-authconfig","title":"Apply the changes to the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: matrix-quotes-protection\nspec:\n  hosts:\n  - matrix-quotes-authorino.127.0.0.1.nip.io\n  identity:\n  - name: idp-users\n    oidc:\n      endpoint: http://keycloak:8080/auth/realms/kuadrant\n    credentials:\n      in: cookie\n      keySelector: TOKEN\n  denyWith:\n    unauthenticated:\n      code: 302\n      headers:\n      - name: Location\n        valueFrom:\n          authJSON: http://keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/auth?client_id=matrix-quotes&redirect_uri=http://matrix-quotes-authorino.127.0.0.1.nip.io:8000/auth?redirect_to={context.request.http.path}&scope=openid&response_type=code\nEOF\n
"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#consume-the-application-again","title":"Consume the application again","text":"

Refresh the browser window or navigate again to http://matrix-quotes-authorino.127.0.0.1.nip.io:8000.

Click on the cards to read quotes from characters of the movie. You should be redirected to the login page, this time served by the Keycloak server.

Log in as Jane (a user of the Keycloak realm):
  • Username: jane
  • Password: p

Click again on the cards and check that you are now able to access the inner pages.

"},{"location":"authorino/docs/user-guides/deny-with-redirect-to-login/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/user-credential-1\nkubectl delete authconfig/matrix-quotes-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/matrix-quotes/matrix-quotes-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/","title":"User guide: Edge Authentication Architecture (EAA)","text":"

Edge Authentication Architecture (EAA) is a pattern where authentication logic and specifics are not merely extracted from the application codebase into a proper authN/authZ layer, but furthermore pushed to the edge of your cloud network, yet without violating the Zero Trust principle.

The very definition of \"edge\" is subject to discussion, but the underlying idea is that clients (e.g. API clients, IoT devices, etc.) authenticate with a layer that, before moving traffic to inside the network:
  • understands the complexity of all the different methods of authentication supported;
  • sometimes performs some token normalization;
  • eventually enforces some preliminary authorization policies; and
  • possibly filters data bits that are sensitive to privacy concerns (e.g. to comply with local legislation such as GDPR, CCPA, etc.)

At a minimum, EAA makes it possible to simplify authentication between applications and microservices inside the network, as well as to reduce authorization to domain-specific rules and policies, rather than having to deal with all the complexity of supporting all types of clients in every node.

Authorino features in this guide:
  • Dynamic response \u2192 Festival Wristband tokens
  • Identity verification & authentication \u2192 Identity extension
  • Identity verification & authentication \u2192 API key
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
Festival Wristbands are OpenID Connect ID tokens (signed JWTs) issued by Authorino at the end of the Auth Pipeline, for authorized requests. They can be configured to include claims based on static values and on values fetched from the [Authorization JSON](./../architecture.md#the-authorization-json). Check out as well the user guides about [Token normalization](./token-normalization.md), [Authentication with API keys](./api-key-authentication.md) and [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses
  • jwt, to inspect JWTs (optional)

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#2-create-the-namespaces","title":"2. Create the namespaces","text":"

For simplicity, this example will set up the edge and internal nodes in different namespaces of the same Kubernetes cluster, all sharing a single cluster-wide Authorino instance. In real-life scenarios, it does not have to be this way.

kubectl create namespace authorino\nkubectl create namespace edge\nkubectl create namespace internal\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl -n authorino apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  clusterWide: true\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in cluster-wide reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#5-setup-the-edge","title":"5. Setup the Edge","text":""},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#setup-envoy","title":"Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl -n edge apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/eaa/envoy-edge-deploy.yaml\n

The bundle also creates an Ingress with host name edge-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 9000 into the cluster in order to actually reach the Envoy service:

kubectl -n edge port-forward deployment/envoy 9000:9000 &\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#create-the-authconfig","title":"Create the AuthConfig","text":"

Create a required secret, used by Authorino to sign the Festival Wristband tokens:

kubectl -n edge apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: wristband-signing-key\nstringData:\n  key.pem: |\n    -----BEGIN EC PRIVATE KEY-----\n    MHcCAQEEIDHvuf81gVlWGo0hmXGTAnA/HVxGuH8vOc7/8jewcVvqoAoGCCqGSM49\n    AwEHoUQDQgAETJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZxJKDysoGwn\n    cnUvHIu23SgW+Ee9lxSmZGhO4eTdQeKxMA==\n    -----END EC PRIVATE KEY-----\ntype: Opaque\nEOF\n
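
The sample key above is enough for this tutorial, but if you prefer to generate your own signing key, here is a sketch using openssl (the prime256v1/P-256 curve matches the ES256 algorithm referenced in the AuthConfig below):

openssl ecparam -name prime256v1 -genkey -noout -out /tmp/wristband-signing-key.pem\nkubectl -n edge create secret generic wristband-signing-key --from-file=key.pem=/tmp/wristband-signing-key.pem\n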

Create the config:

kubectl -n edge apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: edge-auth\nspec:\n  hosts:\n  - edge-authorino.127.0.0.1.nip.io\n  identity:\n  - name: api-clients\n    apiKey:\n      selector:\n        matchLabels:\n          authorino.kuadrant.io/managed-by: authorino\n      allNamespaces: true\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n    extendedProperties:\n    - name: username\n      valueFrom:\n        authJSON: auth.identity.metadata.annotations.authorino\\.kuadrant\\.io/username\n  - name: idp-users\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n    extendedProperties:\n    - name: username\n      valueFrom:\n        authJSON: auth.identity.preferred_username\n  response:\n  - name: wristband\n    wrapper: envoyDynamicMetadata\n    wristband:\n      issuer: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband\n      customClaims:\n      - name: username\n        valueFrom:\n          authJSON: auth.identity.username\n      tokenDuration: 300\n      signingKeyRefs:\n        - name: wristband-signing-key\n          algorithm: ES256\nEOF\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#6-setup-the-internal-workload","title":"6. Setup the internal workload","text":""},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#deploy-the-talker-api","title":"Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl -n internal apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#setup-envoy_1","title":"Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse-proxy and external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl -n internal apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/eaa/envoy-node-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl -n internal port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#create-the-authconfig_1","title":"Create the AuthConfig","text":"
kubectl -n internal apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: edge-authenticated\n    oidc:\n      endpoint: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband\nEOF\n
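
(Optional) Verify that Authorino serves OpenID Connect Discovery metadata for the wristband issuer, which is what the AuthConfig above relies on to validate the wristband tokens. A sketch, run from inside the cluster:

kubectl run oidc-check --attach --rm --restart=Never -q --image=curlimages/curl -- http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband/.well-known/openid-configuration -s\n# expect a JSON document with the issuer and the jwks_uri of the wristband issuer\n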
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#7-create-an-api-key","title":"7. Create an API key","text":"
kubectl -n edge apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n  annotations:\n    authorino.kuadrant.io/username: alice\n    authorino.kuadrant.io/email: alice@host\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#8-consume-the-api","title":"8. Consume the API","text":""},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#using-the-api-key-to-authenticate","title":"Using the API key to authenticate","text":"

Authenticate at the edge:

WRISTBAND_TOKEN=$(curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://edge-authorino.127.0.0.1.nip.io:9000/auth -is | tr -d '\\r' | sed -En 's/^x-wristband-token: (.*)/\\1/p')\n

Consume the API:

curl -H \"Authorization: Bearer $WRISTBAND_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n

Try to consume the API with an authentication token that is only accepted at the edge:

curl -H \"Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"edge-authenticated\"\n# x-ext-auth-reason: credential not found\n

(Optional) Inspect the wristband token and verify that it contains only the restricted info needed to authenticate and authorize with internal apps.

jwt decode $WRISTBAND_TOKEN\n# [...]\n#\n# Token claims\n# ------------\n# {\n#   \"exp\": 1638452051,\n#   \"iat\": 1638451751,\n#   \"iss\": \"http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/edge/edge-auth/wristband\",\n#   \"sub\": \"02cb51ea0e1c9f3c0960197a2518c8eb4f47e1b9222a968ffc8d4c8e783e4d19\",\n#   \"username\": \"alice\"\n# }\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#authenticating-with-the-keycloak-server","title":"Authenticating with the Keycloak server","text":"

Obtain an access token with the Keycloak server for Jane:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

ACCESS_TOKEN=$(kubectl -n edge run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

(Optional) Inspect the access token issued by Keycloak and verify how it contains more details about the identity than required to authenticate and authorize with internal apps.

jwt decode $ACCESS_TOKEN\n# [...]\n#\n# Token claims\n# ------------\n# { [...]\n#   \"email\": \"jane@kuadrant.io\",\n#   \"email_verified\": true,\n#   \"exp\": 1638452220,\n#   \"family_name\": \"Smith\",\n#   \"given_name\": \"Jane\",\n#   \"iat\": 1638451920,\n#   \"iss\": \"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\",\n#   \"jti\": \"699f6e49-dea4-4f29-ae2a-929a3a18c94b\",\n#   \"name\": \"Jane Smith\",\n#   \"preferred_username\": \"jane\",\n#   \"realm_access\": {\n#     \"roles\": [\n#       \"offline_access\",\n#       \"member\",\n#       \"admin\",\n#       \"uma_authorization\"\n#     ]\n#   },\n# [...]\n

As Jane, obtain a limited wristband token at the edge:

WRISTBAND_TOKEN=$(curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://edge-authorino.127.0.0.1.nip.io:9000/auth -is | tr -d '\\r' | sed -En 's/^x-wristband-token: (.*)/\\1/p')\n

Consume the API:

curl -H \"Authorization: Bearer $WRISTBAND_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete namespace edge\nkubectl delete namespace internal\nkubectl delete namespace authorino\nkubectl delete namespace keycloak\n

To uninstall the Authorino and Authorino Operator manifests, run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/","title":"User guide: Mixing Envoy built-in filter for auth and Authorino","text":"

Have JWT validation handled by Envoy beforehand and the JWT payload injected into the request to Authorino, to be used in custom authorization policies defined in an AuthConfig.

In this user guide, we will set up Envoy and Authorino to protect a service called the Talker API, with JWT authentication handled in Envoy and a more complex authorization policy enforced in Authorino.

The policy defines a geofence by which only requests originating in Great Britain (country code: GB) will be accepted, unless the user is bound to a role called 'admin' in the auth server, in which case no geofence is enforced.

All requests to the Talker API will be authenticated in Envoy. However, requests to /global will not trigger the external authorization.

Authorino features in this guide:
  • Identity verification & authentication \u2192 Plain
  • External auth metadata \u2192 HTTP GET/GET-by-POST
  • Authorization \u2192 JSON pattern-matching authorization rules
  • Dynamic response \u2192 Custom denial status
For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#4-setup-envoy","title":"4. Setup Envoy","text":"

The command below creates the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API and external authorization with Authorino.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f -<<EOF\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  labels:\n    app: authorino\n  name: envoy\ndata:\n  envoy.yaml: |\n    static_resources:\n      clusters:\n      - name: talker-api\n        connect_timeout: 0.25s\n        type: strict_dns\n        lb_policy: round_robin\n        load_assignment:\n          cluster_name: talker-api\n          endpoints:\n          - lb_endpoints:\n            - endpoint:\n                address:\n                  socket_address:\n                    address: talker-api\n                    port_value: 3000\n      - name: keycloak\n        connect_timeout: 0.25s\n        type: logical_dns\n        lb_policy: round_robin\n        load_assignment:\n          cluster_name: keycloak\n          endpoints:\n          - lb_endpoints:\n            - endpoint:\n                address:\n                  socket_address:\n                    address: keycloak.keycloak.svc.cluster.local\n                    port_value: 8080\n      - name: authorino\n        connect_timeout: 0.25s\n        type: strict_dns\n        lb_policy: round_robin\n        http2_protocol_options: {}\n        load_assignment:\n          cluster_name: authorino\n          endpoints:\n          - lb_endpoints:\n            - endpoint:\n                address:\n                  socket_address:\n                    address: authorino-authorino-authorization\n                    port_value: 50051\n      listeners:\n      - address:\n          socket_address:\n            address: 0.0.0.0\n            port_value: 8000\n        filter_chains:\n        - filters:\n          - name: envoy.http_connection_manager\n            typed_config:\n              \"@type\": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n              stat_prefix: local\n              route_config:\n                name: local_route\n                virtual_hosts:\n                - name: local_service\n                  domains: ['*']\n                  routes:\n                  - match: { path_separated_prefix: /global }\n                    route: { cluster: talker-api }\n                    typed_per_filter_config:\n                      envoy.filters.http.ext_authz:\n                        \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute\n                        disabled: true\n                  - match: { prefix: / }\n                    route: { cluster: talker-api }\n              http_filters:\n              - name: envoy.filters.http.jwt_authn\n                typed_config:\n                  \"@type\": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication\n                  providers:\n                    keycloak:\n                      issuer: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n                      remote_jwks:\n                        http_uri:\n                          uri: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/certs\n                          cluster: keycloak\n                          timeout: 5s\n                        cache_duration:\n                          seconds: 300\n                      payload_in_metadata: verified_jwt\n                  rules:\n                  - match: { prefix: / }\n                    requires: { provider_name: keycloak }\n              - name: envoy.filters.http.ext_authz\n                typed_config:\n                  \"@type\": 
type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz\n                  transport_api_version: V3\n                  failure_mode_allow: false\n                  metadata_context_namespaces:\n                  - envoy.filters.http.jwt_authn\n                  grpc_service:\n                    envoy_grpc:\n                      cluster_name: authorino\n                    timeout: 1s\n              - name: envoy.filters.http.router\n                typed_config:\n                  \"@type\": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n              use_remote_address: true\n    admin:\n      access_log_path: \"/tmp/admin_access.log\"\n      address:\n        socket_address:\n          address: 0.0.0.0\n          port_value: 8001\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: authorino\n    svc: envoy\n  name: envoy\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: authorino\n      svc: envoy\n  template:\n    metadata:\n      labels:\n        app: authorino\n        svc: envoy\n    spec:\n      containers:\n      - args:\n        - --config-path /usr/local/etc/envoy/envoy.yaml\n        - --service-cluster front-proxy\n        - --log-level info\n        - --component-log-level filter:trace,http:debug,router:debug\n        command:\n        - /usr/local/bin/envoy\n        image: envoyproxy/envoy:v1.22-latest\n        name: envoy\n        ports:\n        - containerPort: 8000\n          name: web\n        - containerPort: 8001\n          name: admin\n        volumeMounts:\n        - mountPath: /usr/local/etc/envoy\n          name: config\n          readOnly: true\n      volumes:\n      - configMap:\n          items:\n          - key: envoy.yaml\n            path: envoy.yaml\n          name: envoy\n        name: config\n---\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: authorino\n  name: envoy\nspec:\n  ports:\n  - name: web\n    port: 8000\n    protocol: TCP\n  selector:\n    app: authorino\n    svc: envoy\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-wildcard-host\nspec:\n  rules:\n  - host: talker-api-authorino.127.0.0.1.nip.io\n    http:\n      paths:\n      - backend:\n          service:\n            name: envoy\n            port:\n              number: 8000\n        path: /\n        pathType: Prefix\nEOF\n

For convenience, an Ingress resource is defined with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
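
Before obtaining an access token, you can check that the Envoy JWT filter alone already rejects unauthenticated requests, without the request ever reaching Authorino (the response body below is the one typically returned by Envoy's jwt_authn filter):

curl http://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 401 Unauthorized\n# Jwt is missing\n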
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#5-deploy-the-ip-location-service","title":"5. Deploy the IP Location service","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-examples/main/ip-location/ip-location-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#6-create-the-authconfig","title":"6. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: jwt\n    plain:\n      authJSON: context.metadata_context.filter_metadata.envoy\\.filters\\.http\\.jwt_authn|verified_jwt\n  metadata:\n  - name: geoinfo\n    http:\n      endpoint: http://ip-location.default.svc.cluster.local:3000/{context.request.http.headers.x-forwarded-for.@extract:{\"sep\":\",\"}}\n      method: GET\n      headers:\n      - name: Accept\n        value: application/json\n    cache:\n      key:\n        valueFrom: { authJSON: \"context.request.http.headers.x-forwarded-for.@extract:{\\\"sep\\\":\\\",\\\"}\" }\n  authorization:\n  - name: geofence\n    when:\n    - selector: auth.identity.realm_access.roles\n      operator: excl\n      value: admin\n    json:\n      rules:\n      - selector: auth.metadata.geoinfo.country_iso_code\n        operator: eq\n        value: \"GB\"\n  denyWith:\n    unauthorized:\n      message:\n        valueFrom: { authJSON: \"The requested resource is not available in {auth.metadata.geoinfo.country_name}\" }\nEOF\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#7-obtain-a-token-and-consume-the-api","title":"7. Obtain a token and consume the API","text":""},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#obtain-an-access-token-and-consume-the-api-as-john-member","title":"Obtain an access token and consume the API as John (member)","text":"

Obtain an access token with the Keycloak server for John:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user John, a non-admin (member) user:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

As John, consume the API inside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 79.123.45.67' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 200 OK\n

As John, consume the API outside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 109.69.200.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 403 Forbidden\n# x-ext-auth-reason: The requested resource is not available in Italy\n

As John, consume a path of the API that will cause Envoy to skip external authorization:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 109.69.200.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/global -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#obtain-an-access-token-and-consume-the-api-as-jane-admin","title":"Obtain an access token and consume the API as Jane (admin)","text":"

Obtain an access token with the Keycloak server for Jane, an admin user:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

As Jane, consume the API inside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 79.123.45.67' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 200 OK\n

As Jane, consume the API outside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 109.69.200.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 200 OK\n

As Jane, consume a path of the API that will cause Envoy to skip external authorization:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 109.69.200.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/global -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/envoy-jwt-authn-and-authorino/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete ingress/ingress-wildcard-host\nkubectl delete service/envoy\nkubectl delete deployment/envoy\nkubectl delete configmap/envoy\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/external-metadata/","title":"User guide: Fetching auth metadata from external sources","text":"

Get online data from remote HTTP services to enhance authorization rules.

Authorino features in this guide:
  • External auth metadata \u2192 HTTP GET/GET-by-POST
  • Identity verification & authentication \u2192 API key
  • Authorization \u2192 Open Policy Agent (OPA) Rego policies
You can configure Authorino to fetch additional metadata from external sources at request-time, by sending either a GET or a POST request to an HTTP service. The service is expected to return JSON content, which is appended to the [Authorization JSON](./../architecture.md#the-authorization-json), thus becoming available for usage in other configs of the Auth Pipeline, such as in authorization policies or custom responses. The URL, parameters and headers of the request to the external source of metadata can be configured, including with dynamic values. Authentication between Authorino and the service can be set as part of these configuration options, or based on a shared authentication token stored in a Kubernetes `Secret`. Check out as well the user guides about [Authentication with API keys](./api-key-authentication.md) and [Open Policy Agent (OPA) Rego policies](./opa-authorization.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/external-metadata/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/external-metadata/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/external-metadata/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/external-metadata/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/external-metadata/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse-proxy and external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/external-metadata/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

In this example, we will implement a geofence policy for the API, using OPA and metadata fetching from an external service that returns geolocation JSON data for a given IP address. The policy establishes that only GET requests are allowed and that the path of the request must be in the form /{country-code}/*, where {country-code} is the 2-character code of the country in which the client is identified as located.

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n  metadata:\n    - name: geo\n      http:\n        endpoint: http://ip-api.com/json/{context.request.http.headers.x-forwarded-for.@extract:{\"sep\":\",\"}}?fields=countryCode\n        method: GET\n        headers:\n        - name: Accept\n          value: application/json\n  authorization:\n  - name: geofence\n    opa:\n      inlineRego: |\n        import input.context.request.http\n        allow {\n          http.method = \"GET\"\n          split(http.path, \"/\") = [_, requested_country, _]\n          lower(requested_country) == lower(object.get(input.auth.metadata.geo, \"countryCode\", \"\"))\n        }\nEOF\n

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON, including the description of the @extract string modifier.
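
To see the kind of metadata payload the policy consumes, you can query the geolocation service directly for one of the sample IP addresses used later in this guide (the actual country resolution is up to the external service and may change over time):

curl 'http://ip-api.com/json/79.123.45.67?fields=countryCode'\n# {\"countryCode\":\"GB\"}\n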

"},{"location":"authorino/docs/user-guides/external-metadata/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/external-metadata/#7-consume-the-api","title":"7. Consume the API","text":"

From an IP address assigned to the United Kingdom of Great Britain and Northern Ireland (country code GB):

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 79.123.45.67' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/gb/hello -i\n# HTTP/1.1 200 OK\n
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 79.123.45.67' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/it/hello -i\n# HTTP/1.1 403 Forbidden\n

From an IP address assigned to Italy (country code IT):

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 109.112.34.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/gb/hello -i\n# HTTP/1.1 403 Forbidden\n
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 109.112.34.56' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/it/hello -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/external-metadata/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/hello-world/","title":"User guide: Hello World","text":""},{"location":"authorino/docs/user-guides/hello-world/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/hello-world/#1-create-the-namespace","title":"1. Create the namespace","text":"
kubectl create namespace hello-world\n# namespace/hello-world created\n
"},{"location":"authorino/docs/user-guides/hello-world/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n# deployment.apps/talker-api created\n# service/talker-api created\n
"},{"location":"authorino/docs/user-guides/hello-world/#3-setup-envoy","title":"3. Setup Envoy","text":"
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/envoy-deploy.yaml\n# configmap/envoy created\n# deployment.apps/envoy created\n# service/envoy created\n

Forward requests on port 8000 to the Envoy pod running inside the cluster:

kubectl -n hello-world port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/hello-world/#4-consume-the-api-unprotected","title":"4. Consume the API (unprotected)","text":"
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/hello-world/#5-protect-the-api","title":"5. Protect the API","text":""},{"location":"authorino/docs/user-guides/hello-world/#install-the-authorino-operator","title":"Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/hello-world/#deploy-authorino","title":"Deploy Authorino","text":"
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/authorino.yaml\n# authorino.operator.authorino.kuadrant.io/authorino created\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the Talker API, among other possible architectures). For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/hello-world/#6-consume-the-api-behind-envoy-and-authorino","title":"6. Consume the API behind Envoy and Authorino","text":"
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 404 Not Found\n# x-ext-auth-reason: Service not found\n

Authorino does not know about the talker-api-authorino.127.0.0.1.nip.io host, hence the 404 Not Found. Teach it by applying an AuthConfig.

"},{"location":"authorino/docs/user-guides/hello-world/#7-apply-an-authconfig","title":"7. Apply an AuthConfig","text":"
kubectl -n hello-world apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/hello-world/authconfig.yaml\n# authconfig.authorino.kuadrant.io/talker-api-protection created\n
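
The manifest linked above is roughly equivalent to the following minimal AuthConfig (a sketch for illustration only – the matchLabels selector is an assumption; check the linked file for the actual content):

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: api-clients\n    apiKey:\n      selector:\n        matchLabels:\n          group: api-clients  # assumption – not necessarily the label used in the linked manifest\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n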
"},{"location":"authorino/docs/user-guides/hello-world/#8-consume-the-api-without-credentials","title":"8. Consume the API without credentials","text":"
curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: APIKEY realm=\"api-clients\"\n# x-ext-auth-reason: credential not found\n
"},{"location":"authorino/docs/user-guides/hello-world/#grant-access-to-the-api-with-a-tailor-made-security-scheme","title":"Grant access to the API with a tailor-made security scheme","text":"

Check out other user guides for several AuthN/AuthZ use-cases and instructions to implement them using Authorino. A few examples are:

  • Authentication with API keys
  • Authentication with JWTs and OpenID Connect Discovery
  • Authentication with Kubernetes tokens (TokenReview API)
  • Authorization with Open Policy Agent (OPA) Rego policies
  • Authorization with simple JSON pattern-matching rules (e.g. JWT claims)
  • Authorization with Kubernetes RBAC (SubjectAccessReview API)
  • Fetching auth metadata from external sources
  • Token normalization
"},{"location":"authorino/docs/user-guides/hello-world/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the namespaces created in steps 1 and 5:

kubectl delete namespace hello-world\nkubectl delete namespace authorino-operator\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/host-override/","title":"Host override via context extension","text":"

By default, Authorino uses the host information of the HTTP request (Attributes.Http.Host) to look up an indexed AuthConfig to be enforced. The host info can be overridden by supplying a host entry as a (per-route) context extension (Attributes.ContextExtensions), which takes precedence whenever present.

Overriding the host attribute of the HTTP request can be useful to support use cases such as path prefix-based lookup and wildcard subdomain lookup.

  • Example of host override for path prefix-based lookup
  • Example of host override for wildcard subdomain lookup

For further details about how Authorino looks up AuthConfigs, check out Host lookup.

"},{"location":"authorino/docs/user-guides/host-override/#example-of-host-override-for-path-prefix-based-lookup","title":"Example of host override for path prefix-based lookup","text":"

In this use case, 2 different APIs (i.e. Dogs API and Cats API) are served under the same base domain and differentiated by the path prefix:
  • pets.com/dogs → Dogs API
  • pets.com/cats → Cats API

Edit the Envoy config to extend the external authorization settings at the level of the routes, with the host value that will be preferred by Authorino over the actual host attribute of the HTTP request:

virtual_hosts:\n- name: pets-api\n  domains: ['pets.com']\n  routes:\n  - match:\n      prefix: /dogs\n    route:\n      cluster: dogs-api\n    typed_per_filter_config:\n      envoy.filters.http.ext_authz:\n        \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute\n        check_settings:\n          context_extensions:\n            host: dogs.pets.com\n  - match:\n      prefix: /cats\n    route:\n      cluster: cats-api\n    typed_per_filter_config:\n      envoy.filters.http.ext_authz:\n        \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute\n        check_settings:\n          context_extensions:\n            host: cats.pets.com\n

Create the AuthConfig for the Pets API:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: dogs-api-protection\nspec:\n  hosts:\n  - dogs.pets.com\n  identity: [...]\n

Create the AuthConfig for the Cats API:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: cats-api-protection\nspec:\n  hosts:\n  - cats.pets.com\n  identity: [...]\n

Notice that the host subdomains dogs.pets.com and cats.pets.com are not really requested by the API consumers. Rather, users send requests to pets.com/dogs and pets.com/cats. When routing those requests, Envoy makes sure to inject the corresponding context extensions that will induce the right lookup in Authorino.
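
For instance, a hypothetical request (assuming the Envoy proxy above answers for pets.com):

curl http://pets.com/dogs/toys -i\n# Envoy injects the context extension host=dogs.pets.com,\n# so Authorino enforces the AuthConfig dogs-api-protection\n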

"},{"location":"authorino/docs/user-guides/host-override/#example-of-host-override-for-wildcard-subdomain-lookup","title":"Example of host override for wildcard subdomain lookup","text":"

In this use case, a single Pets API serves requests for any subdomain that matches *.pets.com, e.g.:
  • dogs.pets.com → Pets API
  • cats.pets.com → Pets API

Edit the Envoy config to extend the external authorization settings at the level of the virtual host, with the host value that will be preferred by Authorino over the actual host attribute of the HTTP request:

virtual_hosts:\n- name: pets-api\n  domains: ['*.pets.com']\n  typed_per_filter_config:\n    envoy.filters.http.ext_authz:\n      \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute\n      check_settings:\n        context_extensions:\n          host: pets.com\n  routes:\n  - match:\n      prefix: /\n    route:\n      cluster: pets-api\n

The host context extension used above can be any value that matches one of the hosts listed in the targeted AuthConfig.

Create the AuthConfig for the Pets API:

apiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: pets-api-protection\nspec:\n  hosts:\n  - pets.com\n  identity: [...]\n

Notice that requests to dogs.pets.com and to cats.pets.com are all routed by Envoy to the same API, with the same external authorization configuration. In all cases, Authorino will look up the indexed AuthConfig associated with pets.com. The same is valid for a request sent, e.g., to birds.pets.com.
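
For instance, another hypothetical request (assuming the Envoy proxy above answers for *.pets.com):

curl http://birds.pets.com/feathers -i\n# Envoy injects the context extension host=pets.com,\n# so Authorino enforces the AuthConfig pets-api-protection\n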

"},{"location":"authorino/docs/user-guides/http-basic-authentication/","title":"User guide: HTTP \"Basic\" Authentication (RFC 7235)","text":"

Turn Authorino API key Secrets into HTTP \"Basic\" Authentication settings.

Authorino features in this guide:
  • Identity verification & authentication \u2192 API key
  • Authorization \u2192 JSON pattern-matching authorization rules
HTTP \"Basic\" Authentication ([RFC 7235](https://datatracker.ietf.org/doc/html/rfc7235)) is not recommended if you can afford other more secure methods such as OpenID Connect. To support legacy nonetheless it is sometimes necessary to implement it. In Authorino, HTTP \"Basic\" Authentication can be modeled leveraging the API key authentication feature (stored as Kubernetes `Secret`s with an `api_key` entry and labeled to match selectors specified in `spec.identity.apiKey.selector` of the `AuthConfig`). Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/http-basic-authentication/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/http-basic-authentication/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse-proxy and external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: http-basic-auth\n    apiKey:\n      selector:\n        matchLabels:\n          group: users\n    credentials:\n      in: authorization_header\n      keySelector: Basic\n  authorization:\n  - name: acl\n    when:\n    - selector: context.request.http.path\n      operator: eq\n      value: /bye\n    json:\n      rules:\n      - selector: context.request.http.headers.authorization.@extract:{\"pos\":1}|@base64:decode|@extract:{\"sep\":\":\"}\n        operator: eq\n        value: john\nEOF\n

The config specifies an Access Control List (ACL), by which only the user john is authorized to consume the /bye endpoint of the API.

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON, including the description of the string modifiers @extract and @base64 used above. Check out as well the common feature Conditions about skipping parts of an AuthConfig in the auth pipeline based on context.
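
As an illustration of what that selector chain does, here is a rough shell equivalent of @extract:{\"pos\":1} | @base64:decode | @extract:{\"sep\":\":\"}, replayed over John's credential from the next step:

echo 'Basic am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA==' | awk '{print $2}' | base64 -d | cut -d: -f1\n# john\n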

"},{"location":"authorino/docs/user-guides/http-basic-authentication/#6-create-user-credentials","title":"6. Create user credentials","text":"

To create credentials for HTTP \"Basic\" Authentication, store each username:password, base64-encoded, in the api_key value of the Kubernetes Secret resources. E.g.:

printf \"john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\" | base64\n# am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA==\n

Create credentials for user John:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: basic-auth-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: users\nstringData:\n  api_key: am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA== # john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n

Create credentials for user Jane:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: basic-auth-2\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: users\nstringData:\n  api_key: amFuZTpkTnNScnNhcHkwbk5Dd210NTM3ZkhGcHl4MGNCc0xFcA== # jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#7-consume-the-api","title":"7. Consume the API","text":"

As John (authorized in the ACL):

curl -u john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
curl -u john:ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx http://talker-api-authorino.127.0.0.1.nip.io:8000/bye\n# HTTP/1.1 200 OK\n

As Jane (NOT authorized in the ACL):

curl -u jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
curl -u jane:dNsRrsapy0nNCwmt537fHFpyx0cBsLEp http://talker-api-authorino.127.0.0.1.nip.io:8000/bye -i\n# HTTP/1.1 403 Forbidden\n

With an invalid user/password:

curl -u unknown:invalid http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Basic realm=\"http-basic-auth\"\n
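
Note that curl's -u option is just a convenience for sending the credentials base64-encoded in the Authorization header; the first request above is equivalent to:

curl -H \"Authorization: Basic am9objpuZHlCenJlVXpGNHpxRFFzcVNQTUhrUmhyaUVPdGNSeA==\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n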
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#8-revoke-access-to-the-api","title":"8. Revoke access to the API","text":"
kubectl delete secret/basic-auth-1\n
"},{"location":"authorino/docs/user-guides/http-basic-authentication/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/basic-auth-1\nkubectl delete secret/basic-auth-2\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/injecting-data/","title":"User guide: Injecting data in the request","text":"

Inject HTTP headers with serialized JSON content.

Authorino features in this guide:
  • Dynamic response \u2192 JSON injection
  • Identity verification & authentication \u2192 API key
Inject serialized custom JSON objects as HTTP request headers. Values can be static or fetched from the [Authorization JSON](./../architecture.md#the-authorization-json). Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/injecting-data/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/injecting-data/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/injecting-data/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/injecting-data/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/injecting-data/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/injecting-data/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

The following defines a JSON object to be injected as an added HTTP header into the request, named after the response config x-ext-auth-data. The object includes 3 properties: 1. a static value authorized: true; 2. a dynamic value request-time, from Envoy-supplied contextual data present in the Authorization JSON; and 3. a greeting message greeting-message that interpolates into a static string a dynamic value read from an annotation of the Kubernetes Secret resource that represents the API key used to authenticate.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n  response:\n  - name: x-ext-auth-data\n    json:\n      properties:\n      - name: authorized\n        value: true\n      - name: request-time\n        valueFrom:\n          authJSON: context.request.time.seconds\n      - name: greeting-message\n        valueFrom:\n          authJSON: Hello, {auth.identity.metadata.annotations.auth-data\\/name}!\nEOF\n

Check out the docs for information about the common feature JSON paths for reading from the Authorization JSON.

"},{"location":"authorino/docs/user-guides/injecting-data/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\n  annotations:\n    auth-data/name: Rita\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/injecting-data/#7-consume-the-api","title":"7. Consume the API","text":"
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# {\n#   \"method\": \"GET\",\n#   \"path\": \"/hello\",\n#   \"query_string\": null,\n#   \"body\": \"\",\n#   \"headers\": {\n#     \u2026\n#     \"X-Ext-Auth-Data\": \"{\\\"authorized\\\":true,\\\"greeting-message\\\":\\\"Hello, Rita!\\\",\\\"request-time\\\":1637954644}\",\n#   },\n#   \u2026\n# }\n
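
Since the injected header carries serialized JSON, it can be parsed out of the echoed response, e.g. with jq (a minimal sketch, assuming jq is installed):

curl -s -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello | jq -r '.headers[\"X-Ext-Auth-Data\"]' | jq .\n# {\n#   \"authorized\": true,\n#   \"greeting-message\": \"Hello, Rita!\",\n#   \"request-time\": 1637954644\n# }\n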
"},{"location":"authorino/docs/user-guides/injecting-data/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/","title":"User guide: Simple pattern-matching authorization policies","text":"

Write simple authorization rules based on JSON patterns matched against Authorino's Authorization JSON; check contextual information of the request, validate JWT claims, cross-check metadata fetched from external sources, etc.

Authorino features in this guide:
  • Authorization \u2192 JSON pattern-matching authorization rules
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
Authorino provides a built-in authorization module to check simple pattern-matching rules against the [Authorization JSON](./../architecture.md#the-authorization-json). This is an alternative to [OPA](./../features.md#open-policy-agent-opa-rego-policies-authorizationopa) when all you want is to check for some simple rules, without complex logic, such as matching the value of a JWT claim. Check out as well the user guide about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

The email-verified-only authorization policy ensures that users consuming the API from a given network (IP range 192.168.1.0/24) must have their emails verified.

The email_verified claim is a property of the identity added to the JWT by the OpenID Connect issuer.

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  authorization:\n  - name: email-verified-only\n    when:\n    - selector: \"context.request.http.headers.x-forwarded-for.@extract:{\\\"sep\\\": \\\",\\\"}\"\n      operator: matches\n      value: 192\\\\.168\\\\.1\\\\.\\\\d+\n    json:\n      rules:\n      - selector: auth.identity.email_verified\n        operator: eq\n        value: \"true\"\nEOF\n

Check out the docs for information about semantics and operators supported by the JSON pattern-matching authorization feature, as well as the common feature JSON paths for reading from the Authorization JSON, including the description of the string modifier @extract used above. Check out as well the common feature Conditions about skipping parts of an AuthConfig in the auth pipeline based on context.
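
To get a concrete sense of the when condition above: the @extract modifier takes the first comma-separated hop of X-Forwarded-For, and the matches operator checks it against the regular expression. Approximated with standard tools:

echo '192.168.1.10, 10.0.0.1' | cut -d, -f1\n# 192.168.1.10\necho '192.168.1.10' | grep -qE '192\\.168\\.1\\.[0-9]+' && echo 'policy applies'\n# policy applies\n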

"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#6-obtain-an-access-token-and-consume-the-api","title":"6. Obtain an access token and consume the API","text":""},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#obtain-an-access-token-and-consume-the-api-as-jane-email-verified","title":"Obtain an access token and consume the API as Jane (email verified)","text":"

Obtain an access token with the Keycloak server for Jane:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because Keycloak's iss claim added to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is as well reachable from within the cluster.
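
To double-check the iss claim (and the email_verified claim used by the policy) of a token you obtained, you can decode the JWT payload locally. A hedged sketch, assuming jq; the base64url-encoded payload is padded before decoding:

PAYLOAD=$(echo \"$ACCESS_TOKEN\" | cut -d. -f2 | tr '_-' '/+')\nwhile [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD=\"${PAYLOAD}=\"; done\necho \"$PAYLOAD\" | base64 -d | jq '.iss, .email_verified'\n# \"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"\n# true\n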

As Jane, consume the API outside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 123.45.6.78' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

As Jane, consume the API inside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 192.168.1.10' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#obtain-an-access-token-and-consume-the-api-as-peter-email-not-verified","title":"Obtain an access token and consume the API as Peter (email NOT verified)","text":"

Obtain an access token with the Keycloak server for Peter:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=peter' -d 'password=p' | jq -r .access_token)\n

As Peter, consume the API outside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 123.45.6.78' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

As Peter, consume the API inside the area where the policy applies:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" \\\n-H 'X-Forwarded-For: 192.168.1.10' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 403 Forbidden\n# x-ext-auth-reason: Unauthorized\n
"},{"location":"authorino/docs/user-guides/json-pattern-matching-authorization/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/","title":"User guide: Authorization with Keycloak Authorization Services","text":"

Keycloak provides a powerful set of tools (REST endpoints and administrative UIs), also known as Keycloak Authorization Services, to manage and enforce authorization workflows for multiple access control mechanisms, including discretionary user access control and user-managed permissions.

This user guide is an example of how to use Authorino as an adapter to Keycloak Authorization Services while still relying on the reverse-proxy integration pattern, thus requiring neither importing an authorization library nor rebuilding the application's code.

Authorino features in this guide:
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
  • Authorization \u2192 Open Policy Agent (OPA) Rego policies
For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Keycloak server
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

In this example, Authorino will accept access tokens (JWTs) issued by the Keycloak server. These JWTs can be either normal Keycloak ID tokens or Requesting Party Tokens (RPT).

RPTs include claims about the permissions of the user regarding protected resources and scopes associated with a Keycloak authorization client that the user can access.

When the supplied access token is an RPT, Authorino will just validate whether the user's granted permissions present in the token include the requested resource ID (translated from the path) and scope (inferred from the HTTP method). If the token does not contain a permissions claim (i.e. it is not an RPT), Authorino will negotiate a User-Managed Access (UMA) ticket on behalf of the user and try to obtain an RPT with that UMA ticket.

In cases of asynchronous user-managed permission control, the first request to the API using a normal Keycloak ID token is denied by Authorino. The user that owns the resource acknowledges the access request in the Keycloak UI. If access is granted, the new permissions will be reflected in subsequent RPTs obtained by Authorino on behalf of the requesting party.

Whenever an RPT with proper permissions is obtained by Authorino, the RPT is supplied back to the API consumer, so it can be used in subsequent requests thus skipping new negotiations of UMA tickets.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  authorization:\n  - name: uma\n    opa:\n      inlineRego: |\n        pat := http.send({\"url\":\"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token\",\"method\": \"post\",\"headers\":{\"Content-Type\":\"application/x-www-form-urlencoded\"},\"raw_body\":\"grant_type=client_credentials\"}).body.access_token\n        resource_id := http.send({\"url\":concat(\"\",[\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=\",input.context.request.http.path]),\"method\":\"get\",\"headers\":{\"Authorization\":concat(\" \",[\"Bearer \",pat])}}).body[0]\n        scope := lower(input.context.request.http.method)\n        access_token := trim_prefix(input.context.request.http.headers.authorization, \"Bearer \")\n        default rpt = \"\"\n        rpt = access_token { object.get(input.auth.identity, \"authorization\", {}).permissions }\n        else = rpt_str {\n          ticket := http.send({\"url\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/permission\",\"method\":\"post\",\"headers\":{\"Authorization\":concat(\" \",[\"Bearer \",pat]),\"Content-Type\":\"application/json\"},\"raw_body\":concat(\"\",[\"[{\\\"resource_id\\\":\\\"\",resource_id,\"\\\",\\\"resource_scopes\\\":[\\\"\",scope,\"\\\"]}]\"])}).body.ticket\n          rpt_str := object.get(http.send({\"url\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token\",\"method\":\"post\",\"headers\":{\"Authorization\":concat(\" \",[\"Bearer \",access_token]),\"Content-Type\":\"application/x-www-form-urlencoded\"},\"raw_body\":concat(\"\",[\"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket&ticket=\",ticket,\"&submit_request=true\"])}).body, \"access_token\", \"\")\n        }\n        allow {\n          permissions := object.get(io.jwt.decode(rpt)[1], \"authorization\", { \"permissions\": [] }).permissions\n          permissions[i]\n          permissions[i].rsid = resource_id\n          permissions[i].scopes[_] = scope\n        }\n      allValues: true\n  response:\n  - name: x-keycloak\n    when:\n    - selector: auth.identity.authorization.permissions\n      operator: eq\n      value: \"\"\n    json:\n      properties:\n      - name: rpt\n        valueFrom: { authJSON: auth.authorization.uma.rpt }\nEOF\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#6-obtain-an-access-token-with-the-keycloak-server","title":"6. Obtain an access token with the Keycloak server","text":"

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because Keycloak's iss claim added to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for user Jane:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is as well reachable from within the cluster.

"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#7-consume-the-api","title":"7. Consume the API","text":"

As Jane, try to send a GET request to the protected resource /greetings/1, owned by user John.

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 403 Forbidden\n

As John, log in to http://localhost:8080/auth/realms/kuadrant/account in the web browser (username: john / password: p), and grant access to the resource greeting-1 for Jane. A pending permission request by Jane will be listed among John's Resources.

As Jane, try to consume the protected resource /greetings/1 again:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 200 OK\n#\n# {\u2026\n#   \"headers\": {\u2026\n#     \"X-Keycloak\": \"{\\\"rpt\\\":\\\"<RPT>\", \u2026\n

Copy the RPT from the response and repeat the request now using the RPT to authenticate:

curl -H \"Authorization: Bearer <RPT>\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/keycloak-authorization-services/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/","title":"User guide: Kubernetes RBAC for service authorization (SubjectAccessReview API)","text":"

Manage permissions in the Kubernetes RBAC and let Authorino check them at request time against the authorization system of the cluster.

Authorino features in this guide:
  • Authorization \u2192 Kubernetes SubjectAccessReview
  • Identity verification & authentication \u2192 Kubernetes TokenReview
Authorino can delegate the authorization decision to the Kubernetes authorization system, allowing permissions to be stored and managed using the Kubernetes Role-Based Access Control (RBAC), for example. The feature is based on the `SubjectAccessReview` API and can be used for `resourceAttributes` (parameters defined in the `AuthConfig`) or `nonResourceAttributes` (inferring HTTP path and verb from the original request). Check out as well the user guide about [Authentication with Kubernetes tokens (TokenReview API)](./kubernetes-tokenreview.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Kubernetes user with permission to create TokenRequests (to consume the API from outside the cluster)
  • yq (to parse your ~/.kube/config file to extract user authentication data)

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

The AuthConfig below sets all Kubernetes service accounts as trusted users of the API, and relies on the Kubernetes RBAC to enforce authorization, using the Kubernetes SubjectAccessReview API for non-resource endpoints:

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  - envoy.default.svc.cluster.local\n  identity:\n  - name: service-accounts\n    kubernetes:\n      audiences: [\"https://kubernetes.default.svc.cluster.local\"]\n  authorization:\n  - name: k8s-rbac\n    kubernetes:\n      user:\n        valueFrom: { authJSON: auth.identity.user.username }\nEOF\n

Check out the spec for the Authorino Kubernetes SubjectAccessReview authorization feature, for resource attributes permission checks where SubjectAccessReviews issued by Authorino are modeled in terms of common attributes of operations on Kubernetes resources (namespace, API group, kind, name, subresource, verb).

"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#6-create-roles-associated-with-endpoints-of-the-api","title":"6. Create roles associated with endpoints of the API","text":"

Because the k8s-rbac policy defined in the AuthConfig in the previous step is for non-resource access review requests, the corresponding roles and role bindings have to be defined at cluster scope.

Create a talker-api-greeter role, so that users and service accounts bound to it can consume the non-resource endpoints POST /hello and POST /hi of the API:

kubectl apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: talker-api-greeter\nrules:\n- nonResourceURLs: [\"/hello\"]\n  verbs: [\"post\"]\n- nonResourceURLs: [\"/hi\"]\n  verbs: [\"post\"]\nEOF\n

Create a talker-api-speaker role, so that users and service accounts bound to it can consume the non-resource endpoints POST /say/* of the API:

kubectl apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: talker-api-speaker\nrules:\n- nonResourceURLs: [\"/say/*\"]\n  verbs: [\"post\"]\nEOF\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#7-create-the-serviceaccounts-and-permissions-to-consume-the-api","title":"7. Create the ServiceAccounts and permissions to consume the API","text":"

Create service accounts api-consumer-1 and api-consumer-2:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: api-consumer-1\nEOF\n
kubectl apply -f -<<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: api-consumer-2\nEOF\n

Bind both service accounts to the talker-api-greeter role:

kubectl apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: talker-api-greeter-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: talker-api-greeter\nsubjects:\n- kind: ServiceAccount\n  name: api-consumer-1\n  namespace: default\n- kind: ServiceAccount\n  name: api-consumer-2\n  namespace: default\nEOF\n

Bind service account api-consumer-1 to the talker-api-speaker role:

kubectl apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: talker-api-speaker-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: talker-api-speaker\nsubjects:\n- kind: ServiceAccount\n  name: api-consumer-1\n  namespace: default\nEOF\n
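
You can reproduce the kind of non-resource access review that Authorino will issue, directly against the Kubernetes API, to verify the bindings above. A minimal sketch, assuming jq; the user name follows the system:serviceaccount:<namespace>:<name> convention:

echo '{ \"apiVersion\": \"authorization.k8s.io/v1\", \"kind\": \"SubjectAccessReview\", \"spec\": { \"user\": \"system:serviceaccount:default:api-consumer-2\", \"nonResourceAttributes\": { \"path\": \"/say/blah\", \"verb\": \"post\" } } }' | kubectl create --raw /apis/authorization.k8s.io/v1/subjectaccessreviews -f - | jq .status.allowed\n# false\n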
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#8-consume-the-api","title":"8. Consume the API","text":"

Run a pod that consumes one of the greeting endpoints of the API from inside the cluster, as service account api-consumer-1, bound to the talker-api-greeter and talker-api-speaker cluster roles in the Kubernetes RBAC:

kubectl run greeter --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{\n  \"apiVersion\": \"v1\",\n  \"spec\": {\n    \"containers\": [{\n      \"name\": \"api-consumer\", \"image\": \"quay.io/kuadrant/authorino-examples:api-consumer\", \"command\": [\"./run\"],\n      \"args\":[\"--endpoint=http://envoy.default.svc.cluster.local:8000/hi\",\"--method=POST\",\"--interval=0\",\"--token-path=/var/run/secrets/tokens/api-token\"],\n      \"volumeMounts\": [{\"mountPath\": \"/var/run/secrets/tokens\",\"name\": \"access-token\"}]\n    }],\n    \"serviceAccountName\": \"api-consumer-1\",\n    \"volumes\": [{\"name\": \"access-token\",\"projected\": {\"sources\": [{\"serviceAccountToken\": {\"path\": \"api-token\",\"expirationSeconds\": 7200}}]}}]\n  }\n}' -- sh\n# Sending...\n# 200\n

Run a pod that sends a POST request to /say/blah from within the cluster, as service account api-consumer-1:

kubectl run speaker --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{\n  \"apiVersion\": \"v1\",\n  \"spec\": {\n    \"containers\": [{\n      \"name\": \"api-consumer\", \"image\": \"quay.io/kuadrant/authorino-examples:api-consumer\", \"command\": [\"./run\"],\n      \"args\":[\"--endpoint=http://envoy.default.svc.cluster.local:8000/say/blah\",\"--method=POST\",\"--interval=0\",\"--token-path=/var/run/secrets/tokens/api-token\"],\n      \"volumeMounts\": [{\"mountPath\": \"/var/run/secrets/tokens\",\"name\": \"access-token\"}]\n    }],\n    \"serviceAccountName\": \"api-consumer-1\",\n    \"volumes\": [{\"name\": \"access-token\",\"projected\": {\"sources\": [{\"serviceAccountToken\": {\"path\": \"api-token\",\"expirationSeconds\": 7200}}]}}]\n  }\n}' -- sh\n# Sending...\n# 200\n

Run a pod that sends a POST request to /say/blah from within the cluster, as service account api-consumer-2, bound only to the talker-api-greeter cluster role in the Kubernetes RBAC:

kubectl run speaker --attach --rm --restart=Never -q --image=quay.io/kuadrant/authorino-examples:api-consumer --overrides='{\n  \"apiVersion\": \"v1\",\n  \"spec\": {\n    \"containers\": [{\n      \"name\": \"api-consumer\", \"image\": \"quay.io/kuadrant/authorino-examples:api-consumer\", \"command\": [\"./run\"],\n      \"args\":[\"--endpoint=http://envoy.default.svc.cluster.local:8000/say/blah\",\"--method=POST\",\"--interval=0\",\"--token-path=/var/run/secrets/tokens/api-token\"],\n      \"volumeMounts\": [{\"mountPath\": \"/var/run/secrets/tokens\",\"name\": \"access-token\"}]\n    }],\n    \"serviceAccountName\": \"api-consumer-2\",\n    \"volumes\": [{\"name\": \"access-token\",\"projected\": {\"sources\": [{\"serviceAccountToken\": {\"path\": \"api-token\",\"expirationSeconds\": 7200}}]}}]\n  }\n}' -- sh\n# Sending...\n# 403\n
Extra: consume the API as service account api-consumer-2 from outside the cluster

Obtain a short-lived access token for service account `api-consumer-2`, bound to the `talker-api-greeter` cluster role in the Kubernetes RBAC, using the Kubernetes TokenRequest API:
export ACCESS_TOKEN=$(echo '{ \"apiVersion\": \"authentication.k8s.io/v1\", \"kind\": \"TokenRequest\", \"spec\": { \"expirationSeconds\": 600 } }' | kubectl create --raw /api/v1/namespaces/default/serviceaccounts/api-consumer-2/token -f - | jq -r .status.token)\n
Consume the API as `api-consumer-2` from outside the cluster:
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X POST http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X POST http://talker-api-authorino.127.0.0.1.nip.io:8000/say/something -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/kubernetes-subjectaccessreview/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete serviceaccount/api-consumer-1\nkubectl delete serviceaccount/api-consumer-2\nkubectl delete clusterrolebinding/talker-api-greeter-rolebinding\nkubectl delete clusterrolebinding/talker-api-speaker-rolebinding\nkubectl delete clusterrole/talker-api-greeter\nkubectl delete clusterrole/talker-api-speaker\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/","title":"User guide: Authentication with Kubernetes tokens (TokenReview API)","text":"

Validate Kubernetes Service Account tokens to authenticate requests to your protected hosts.

Authorino features in this guide:
  • Identity verification & authentication \u2192 Kubernetes TokenReview
Authorino can verify Kubernetes-valid access tokens (using Kubernetes [TokenReview](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1) API). These tokens can be either `ServiceAccount` tokens or any valid user access tokens issued to users of the Kubernetes server API. The `audiences` claim of the token must include the requested host and port of the protected API (default), or all audiences specified in `spec.identity.kubernetes.audiences` of the `AuthConfig`. For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Kubernetes user with permission to create TokenRequests (to consume the API from outside the cluster)
  • yq (to parse your ~/.kube/config file to extract user authentication data)

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referenced in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  - envoy.default.svc.cluster.local\n  identity:\n  - name: authorized-service-accounts\n    kubernetes:\n      audiences:\n      - talker-api\nEOF\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#6-create-a-serviceaccount","title":"6. Create a ServiceAccount","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: api-consumer-1\nEOF\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#7-consume-the-api-from-outside-the-cluster","title":"7. Consume the API from outside the cluster","text":"

Obtain a short-lived access token for the api-consumer-1 ServiceAccount:

export ACCESS_TOKEN=$(echo '{ \"apiVersion\": \"authentication.k8s.io/v1\", \"kind\": \"TokenRequest\", \"spec\": { \"audiences\": [\"talker-api\"], \"expirationSeconds\": 600 } }' | kubectl create --raw /api/v1/namespaces/default/serviceaccounts/api-consumer-1/token -f - | jq -r .status.token)\n

Consume the API with a valid Kubernetes token:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Consume the API again after the Kubernetes token has expired (10 minutes):

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"authorized-service-accounts\"\n# x-ext-auth-reason: Not authenticated\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#8-consume-the-api-from-inside-the-cluster","title":"8. Consume the API from inside the cluster","text":"

Deploy an application that consumes an endpoint of the Talker API, in a loop, every 10 seconds. The application uses a short-lived service account token mounted inside the container using Kubernetes Service Account Token Volume Projection to authenticate.

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Pod\nmetadata:\n  name: api-consumer\nspec:\n  containers:\n  - name: api-consumer\n    image: quay.io/kuadrant/authorino-examples:api-consumer\n    command: [\"./run\"]\n    args:\n      - --endpoint=http://envoy.default.svc.cluster.local:8000/hello\n      - --token-path=/var/run/secrets/tokens/api-token\n      - --interval=10\n    volumeMounts:\n    - mountPath: /var/run/secrets/tokens\n      name: talker-api-access-token\n  serviceAccountName: api-consumer-1\n  volumes:\n  - name: talker-api-access-token\n    projected:\n      sources:\n      - serviceAccountToken:\n          path: api-token\n          expirationSeconds: 7200\n          audience: talker-api\nEOF\n

Check the logs of api-consumer:

kubectl logs -f api-consumer\n# Sending...\n# 200\n# 200\n# 200\n# 200\n# ...\n
"},{"location":"authorino/docs/user-guides/kubernetes-tokenreview/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete pod/api-consumer\nkubectl delete serviceaccount/api-consumer-1\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/","title":"User guide: Authentication with X.509 certificates and Mutual Transport Layer Security (mTLS)","text":"

Verify client X.509 certificates against trusted root CAs stored in Kubernetes Secrets to authenticate access to APIs protected with Authorino.

Authorino features in this guide:
  • Identity verification & authentication \u2192 mTLS
  • Authorization \u2192 JSON pattern-matching authorization rules
Authorino can verify X.509 certificates presented by clients for authentication, on requests to the protected APIs, at the application level. Trusted root Certificate Authorities (CAs) are stored as Kubernetes `kubernetes.io/tls` Secrets, labeled according to selectors specified in the AuthConfig, and watched and cached by Authorino. For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/mtls-authentication/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • cert-manager

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Install cert-manager in the cluster:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#2-deploy-authorino","title":"2. Deploy Authorino","text":"

Create the TLS certificates for the Authorino service:

curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed \"s/\\$(AUTHORINO_INSTANCE)/authorino/g;s/\\$(NAMESPACE)/default/g\" | kubectl apply -f -\n

Deploy an Authorino service:

kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      certSecretRef:\n        name: authorino-server-cert\n  oidcServer:\n    tls:\n      certSecretRef:\n        name: authorino-oidc-server-cert\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other supported architectures), in namespaced reconciliation mode, and with TLS termination enabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/mtls-authentication/#3-deploy-the-talker-api","title":"3. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#4-create-a-ca","title":"4. Create a CA","text":"

Create a CA certificate to issue the client certificates that will be used to authenticate requests to the Talker API:

openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 -subj \"/CN=talker-api-ca\" -keyout /tmp/ca.key -out /tmp/ca.crt\n

Store the CA cert in a Kubernetes Secret, labeled to be discovered by Authorino:

kubectl create secret tls talker-api-ca --cert=/tmp/ca.crt --key=/tmp/ca.key\nkubectl label secret talker-api-ca authorino.kuadrant.io/managed-by=authorino app=talker-api\n
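
Authorino watches and caches Secrets matching the label selector specified in the AuthConfig (created in step 6 below); you can list what it will discover (output illustrative):

kubectl get secrets -l 'authorino.kuadrant.io/managed-by=authorino,app=talker-api'\n# NAME            TYPE                DATA   AGE\n# talker-api-ca   kubernetes.io/tls   2      1m\n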
"},{"location":"authorino/docs/user-guides/mtls-authentication/#5-setup-envoy","title":"5. Setup Envoy","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  labels:\n    app: envoy\n  name: envoy\ndata:\n  envoy.yaml: |\n    static_resources:\n      listeners:\n      - address:\n          socket_address:\n            address: 0.0.0.0\n            port_value: 8000\n        filter_chains:\n        - transport_socket:\n            name: envoy.transport_sockets.tls\n            typed_config:\n              \"@type\": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n              common_tls_context:\n                tls_certificates:\n                - certificate_chain: {filename: \"/etc/ssl/certs/talker-api/tls.crt\"}\n                  private_key: {filename: \"/etc/ssl/certs/talker-api/tls.key\"}\n                validation_context:\n                  trusted_ca:\n                    filename: /etc/ssl/certs/talker-api/tls.crt\n          filters:\n          - name: envoy.http_connection_manager\n            typed_config:\n              \"@type\": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n              stat_prefix: local\n              route_config:\n                name: local_route\n                virtual_hosts:\n                - name: local_service\n                  domains: ['*']\n                  routes:\n                  - match: { prefix: / }\n                    route: { cluster: talker-api }\n              http_filters:\n              - name: envoy.filters.http.ext_authz\n                typed_config:\n                  \"@type\": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz\n                  transport_api_version: V3\n                  failure_mode_allow: false\n                  include_peer_certificate: true\n                  grpc_service:\n                    envoy_grpc: { cluster_name: authorino }\n                    timeout: 1s\n              - name: envoy.filters.http.router\n                typed_config: {}\n              use_remote_address: true\n      clusters:\n      - name: authorino\n        connect_timeout: 0.25s\n        type: strict_dns\n        lb_policy: round_robin\n        http2_protocol_options: {}\n        load_assignment:\n          cluster_name: authorino\n          endpoints:\n          - lb_endpoints:\n            - endpoint:\n                address:\n                  socket_address:\n                    address: authorino-authorino-authorization\n                    port_value: 50051\n        transport_socket:\n          name: envoy.transport_sockets.tls\n          typed_config:\n            \"@type\": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext\n            common_tls_context:\n              validation_context:\n                trusted_ca:\n                  filename: /etc/ssl/certs/authorino-ca-cert.crt\n      - name: talker-api\n        connect_timeout: 0.25s\n        type: strict_dns\n        lb_policy: round_robin\n        load_assignment:\n          cluster_name: talker-api\n          endpoints:\n          - lb_endpoints:\n            - endpoint:\n                address:\n                  socket_address:\n                    address: talker-api\n                    port_value: 3000\n    admin:\n      access_log_path: \"/tmp/admin_access.log\"\n      address:\n        socket_address:\n          address: 0.0.0.0\n          port_value: 8001\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: envoy\n  name: envoy\nspec:\n  selector:\n    matchLabels:\n      app: envoy\n  template:\n    metadata:\n      labels:\n        app: envoy\n    spec:\n      containers:\n      - args:\n        - --config-path /usr/local/etc/envoy/envoy.yaml\n        - --service-cluster front-proxy\n        - --log-level info\n        - --component-log-level filter:trace,http:debug,router:debug\n        command:\n        - /usr/local/bin/envoy\n        image: envoyproxy/envoy:v1.19-latest\n        name: envoy\n        ports:\n        - containerPort: 8000\n          name: web\n        - containerPort: 8001\n          name: admin\n        volumeMounts:\n        - mountPath: /usr/local/etc/envoy\n          name: config\n          readOnly: true\n        - mountPath: /etc/ssl/certs/authorino-ca-cert.crt\n          name: authorino-ca-cert\n          readOnly: true\n          subPath: ca.crt\n        - mountPath: /etc/ssl/certs/talker-api\n          name: talker-api-ca\n          readOnly: true\n      volumes:\n      - configMap:\n          items:\n          - key: envoy.yaml\n            path: envoy.yaml\n          name: envoy\n        name: config\n      - name: authorino-ca-cert\n        secret:\n          defaultMode: 420\n          secretName: authorino-ca-cert\n      - name: talker-api-ca\n        secret:\n          defaultMode: 420\n          secretName: talker-api-ca\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: envoy\nspec:\n  selector:\n    app: envoy\n  ports:\n  - name: web\n    port: 8000\n    protocol: TCP\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-wildcard-host\nspec:\n  rules:\n  - host: talker-api-authorino.127.0.0.1.nip.io\n    http:\n      paths:\n      - backend:\n          service:\n            name: envoy\n            port: { number: 8000 }\n        path: /\n        pathType: Prefix\nEOF\n

The bundle includes an Ingress with host name talker-api-authorino.127.0.0.1.nip.io. If you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#6-create-the-authconfig","title":"6. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: mtls\n    mtls:\n      selector:\n        matchLabels:\n          app: talker-api\n  authorization:\n  - name: acme\n    json:\n      rules:\n      - selector: auth.identity.Organization\n        operator: incl\n        value: ACME Inc.\nEOF\n
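
Optionally, verify that the AuthConfig has been created and reconciled before moving on (the exact output columns may vary across Authorino versions):

kubectl get authconfig/talker-api-protection\n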
"},{"location":"authorino/docs/user-guides/mtls-authentication/#7-consume-the-api","title":"7. Consume the API","text":"

With a TLS certificate signed by the trusted CA:

openssl genrsa -out /tmp/aisha.key 2048\nopenssl req -new -key /tmp/aisha.key -out /tmp/aisha.csr -subj \"/CN=aisha/C=PK/L=Islamabad/O=ACME Inc./OU=Engineering\"\nopenssl x509 -req -in /tmp/aisha.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -CAcreateserial -out /tmp/aisha.crt -days 1 -sha256\n\ncurl -k --cert /tmp/aisha.crt --key /tmp/aisha.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 200 OK\n

With a TLS certificate signed by the trusted CA, though missing an authorized Organization:

openssl genrsa -out /tmp/john.key 2048\nopenssl req -new -key /tmp/john.key -out /tmp/john.csr -subj \"/CN=john/C=UK/L=London\"\nopenssl x509 -req -in /tmp/john.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -CAcreateserial -out /tmp/john.crt -days 1 -sha256\n\ncurl -k --cert /tmp/john.crt --key /tmp/john.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 403 Forbidden\n# x-ext-auth-reason: Unauthorized\n
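
To see why John's certificate is rejected while Aisha's is accepted, compare the certificate subjects: the authorization rule requires the O (Organization) field to equal "ACME Inc.", which John's certificate lacks (output formatting varies with the OpenSSL version):

openssl x509 -in /tmp/aisha.crt -noout -subject\n# subject=CN = aisha, C = PK, L = Islamabad, O = ACME Inc., OU = Engineering\nopenssl x509 -in /tmp/john.crt -noout -subject\n# subject=CN = john, C = UK, L = London\n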
"},{"location":"authorino/docs/user-guides/mtls-authentication/#8-try-the-authconfig-via-raw-http-authorization-interface","title":"8. Try the AuthConfig via raw HTTP authorization interface","text":"

Expose Authorino's raw HTTP authorization interface to the local host:

kubectl port-forward service/authorino-authorino-authorization 5001:5001 &\n

With a TLS certificate signed by the trusted CA:

curl -k --cert /tmp/aisha.crt --key /tmp/aisha.key -H 'Content-Type: application/json' -d '{}' https://talker-api-authorino.127.0.0.1.nip.io:5001/check -i\n# HTTP/2 200\n

With a TLS certificate signed by an unknown authority:

openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 -subj \"/CN=untrusted\" -keyout /tmp/untrusted-ca.key -out /tmp/untrusted-ca.crt\nopenssl genrsa -out /tmp/niko.key 2048\nopenssl req -new -key /tmp/niko.key -out /tmp/niko.csr -subj \"/CN=niko/C=JP/L=Osaka\"\nopenssl x509 -req -in /tmp/niko.csr -CA /tmp/untrusted-ca.crt -CAkey /tmp/untrusted-ca.key -CAcreateserial -out /tmp/niko.crt -days 1 -sha256\n\ncurl -k --cert /tmp/niko.crt --key /tmp/niko.key -H 'Content-Type: application/json' -d '{}' https://talker-api-authorino.127.0.0.1.nip.io:5001/check -i\n# HTTP/2 401\n# www-authenticate: Basic realm=\"mtls\"\n# x-ext-auth-reason: x509: certificate signed by unknown authority\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#9-revoke-an-entire-chain-of-certificates","title":"9. Revoke an entire chain of certificates","text":"
kubectl delete secret/talker-api-ca\n

Even if the deleted root certificate is still cached and accepted at the gateway, Authorino will revoke access at the application level immediately.

Try with a previously accepted certificate:

curl -k --cert /tmp/aisha.crt --key /tmp/aisha.key https://talker-api-authorino.127.0.0.1.nip.io:8000 -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Basic realm=\"mtls\"\n# x-ext-auth-reason: x509: certificate signed by unknown authority\n
"},{"location":"authorino/docs/user-guides/mtls-authentication/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete ingress/ingress-wildcard-host\nkubectl delete service/envoy\nkubectl delete deployment/envoy\nkubectl delete configmap/envoy\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n

To uninstall the cert-manager, run:

kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/","title":"User guide: OAuth 2.0 token introspection (RFC 7662)","text":"

Introspect OAuth 2.0 access tokens (e.g. opaque tokens) for online user data and token validation at request time.

Authorino features in this guide:
  • Identity verification & authentication \u2192 OAuth 2.0 introspection
  • Authorization \u2192 JSON pattern-matching authorization rules
Authorino can perform OAuth 2.0 token introspection (RFC 7662) on the access tokens supplied in the requests to protected APIs. This is particularly useful when using opaque tokens, for checking the token validity remotely and resolving the identity object. Important! Authorino does not implement OAuth2 grants nor OIDC authentication flows. As a common recommendation of good practice, obtaining and refreshing access tokens is for clients to negotiate directly with the auth servers and token issuers. Authorino will only validate those tokens, using the parameters provided by the trusted issuer authorities. Check out as well the user guides OpenID Connect Discovery and authentication with JWTs and Simple pattern-matching authorization policies. For further details about Authorino features in general, check the docs.
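
Under the hood, this is roughly the exchange Authorino performs against the introspection endpoint on every request. A minimal sketch, assuming the in-cluster Keycloak endpoint and the talker-api client credentials used later in this guide, with $ACCESS_TOKEN holding the token under test:

curl -u talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88 -d "token=$ACCESS_TOKEN" http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect\n# {"active":true,"username":"jane",...}   <- active token: identity object resolved\n# {"active":false}                        <- revoked or invalid token: request rejected\n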

"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • OAuth 2.0 server that implements the token introspection endpoint (RFC 7662) (e.g. Keycloak or a12n-server)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n

Deploy an a12n-server server preloaded with all the realm settings required for this guide:

kubectl create namespace a12n-server\nkubectl -n a12n-server apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/a12n-server/a12n-server-deploy.yaml\n

Forward local requests to the instance of a12n-server running in the cluster:

kubectl -n a12n-server port-forward deployment/a12n-server 8531:8531 &\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 to inside the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

Create a couple of required secrets, used by Authorino to authenticate with Keycloak and a12n-server during the introspection requests:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: oauth2-token-introspection-credentials-keycloak\nstringData:\n  clientID: talker-api\n  clientSecret: 523b92b6-625d-4e1e-a313-77e7a8ae4e88\ntype: Opaque\nEOF\n
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: oauth2-token-introspection-credentials-a12n-server\nstringData:\n  clientID: talker-api\n  clientSecret: V6g-2Eq2ALB1_WHAswzoeZofJ_e86RI4tdjClDDDb4g\ntype: Opaque\nEOF\n

Create the config:

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak\n    oauth2:\n      tokenIntrospectionUrl: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect\n      tokenTypeHint: requesting_party_token\n      credentialsRef:\n        name: oauth2-token-introspection-credentials-keycloak\n  - name: a12n-server\n    oauth2:\n      tokenIntrospectionUrl: http://a12n-server.a12n-server.svc.cluster.local:8531/introspect\n      credentialsRef:\n        name: oauth2-token-introspection-credentials-a12n-server\n  authorization:\n  - name: can-read\n    when:\n    - selector: auth.identity.privileges\n      operator: neq\n      value: \"\"\n    json:\n      rules:\n      - selector: auth.identity.privileges.talker-api\n        operator: incl\n        value: read\nEOF\n

On every request, Authorino will try to verify the token remotely with the Keycloak server and the a12n-server server.

For authorization, whenever the introspected token data includes a privileges property (returned by a12n-server), Authorino will enforce that only consumers whose privileges.talker-api includes the "read" permission are granted access.
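
For illustration, an introspected token payload that satisfies both the when condition and the authorization rule above could look like the following (a hypothetical response; the exact shape of the privileges claim depends on the a12n-server configuration):

{\n  "active": true,\n  "sub": "service-account-1",\n  "privileges": {\n    "talker-api": ["read"]\n  }\n}\n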

Check out the docs for information about the common feature Conditions, used for skipping parts of an AuthConfig in the auth pipeline based on context.

"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#6-obtain-an-access-token-and-consume-the-api","title":"6. Obtain an access token and consume the API","text":""},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#obtain-an-access-token-with-keycloak-and-consume-the-api","title":"Obtain an access token with Keycloak and consume the API","text":"

Obtain an access token with the Keycloak server for user Jane:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user Jane, whose e-mail has been verified:

export $(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r '\"ACCESS_TOKEN=\"+.access_token,\"REFRESH_TOKEN=\"+.refresh_token')\n
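
To confirm that the iss claim of the token indeed carries the in-cluster host, you can decode the JWT payload locally. A quick sketch, assuming a bash shell with base64 and jq available:

payload=$(echo "$ACCESS_TOKEN" | cut -d. -f2 | tr '_-' '/+')\n# restore the base64 padding stripped from the JWT segment before decoding\ncase $((${#payload} % 4)) in 2) payload="$payload==";; 3) payload="$payload=";; esac\necho "$payload" | base64 -d | jq -r .iss\n# http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n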

If your Keycloak server is otherwise reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is reachable from within the cluster as well.

As user Jane, consume the API:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Revoke the access token and try to consume the API again:

kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/logout -H \"Content-Type: application/x-www-form-urlencoded\" -d \"refresh_token=$REFRESH_TOKEN\" -d 'token_type_hint=requesting_party_token' -u demo:\n
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"keycloak\"\n# www-authenticate: Bearer realm=\"a12n-server\"\n# x-ext-auth-reason: {\"a12n-server\":\"token is not active\",\"keycloak\":\"token is not active\"}\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#obtain-an-access-token-with-a12n-server-and-consume-the-api","title":"Obtain an access token with a12n-server and consume the API","text":"

Obtain an access token with the a12n-server server for service account service-account-1:

ACCESS_TOKEN=$(curl -d 'grant_type=client_credentials' -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s \"http://localhost:8531/token\" | jq -r .access_token)\n

You can as well obtain an access token from within the cluster, in case your a12n-server is not reachable from the outside:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://a12n-server.a12n-server.svc.cluster.local:8531/token -s -d 'grant_type=client_credentials' -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s | jq -r .access_token)\n

Verify the issued token is an opaque access token in this case:

echo $ACCESS_TOKEN\n
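
Unlike a JWT, which consists of three base64url-encoded segments separated by dots, an opaque token is just a random string carrying no decodable claims. A quick, illustrative way to tell the two apart:

echo "$ACCESS_TOKEN" | awk -F. '{ print (NF == 3 ? "looks like a JWT" : "opaque token") }'\n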

As service-account-1, consume the API with a valid access token:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Revoke the access token and try to consume the API again:

curl -d \"token=$ACCESS_TOKEN\" -u service-account-1:FO6LgoMKA8TBDDHgSXZ5-iq1wKNwqdDkyeEGIl6gp0s \"http://localhost:8531/revoke\" -i\n
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"keycloak\"\n# www-authenticate: Bearer realm=\"a12n-server\"\n# x-ext-auth-reason: {\"a12n-server\":\"token is not active\",\"keycloak\":\"token is not active\"}\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#consume-the-api-with-a-missing-or-invalid-access-token","title":"Consume the API with a missing or invalid access token","text":"
curl -H \"Authorization: Bearer invalid\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"keycloak\"\n# www-authenticate: Bearer realm=\"a12n-server\"\n# x-ext-auth-reason: {\"a12n-server\":\"token is not active\",\"keycloak\":\"token is not active\"}\n
"},{"location":"authorino/docs/user-guides/oauth2-token-introspection/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete secret/oauth2-token-introspection-credentials-keycloak\nkubectl delete secret/oauth2-token-introspection-credentials-a12n-server\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\nkubectl delete namespace a12n-server\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/observability/","title":"Observability","text":""},{"location":"authorino/docs/user-guides/observability/#metrics","title":"Metrics","text":"

Authorino exports metrics at 2 endpoints:

  • /metrics: Metrics of the controller-runtime about reconciliation (caching) of AuthConfigs and API key Secrets
  • /server-metrics: Metrics of the external authorization gRPC and OIDC/Festival Wristband validation built-in HTTP servers

The Authorino Operator creates a Kubernetes Service named <authorino-cr-name>-controller-metrics that exposes the endpoints on port 8080. The Authorino instance allows modifying the port number of the metrics endpoints, by setting the --metrics-addr command-line flag (default: :8080).
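
For example, with the Authorino CR named authorino as in the user guides, you can inspect both endpoints from the local host (the Service name below follows the <authorino-cr-name>-controller-metrics pattern described above):

kubectl port-forward service/authorino-controller-metrics 8080:8080 &\ncurl http://localhost:8080/metrics\ncurl http://localhost:8080/server-metrics\n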

Main metrics exported by each endpoint [1]:

Endpoint: /metrics

  • controller_runtime_reconcile_total (counter): Total number of reconciliations per controller. Labels: controller=authconfig|secret, result=success|error|requeue
  • controller_runtime_reconcile_errors_total (counter): Total number of reconciliation errors per controller. Labels: controller=authconfig|secret
  • controller_runtime_reconcile_time_seconds (histogram): Length of time per reconciliation per controller. Labels: controller=authconfig|secret
  • controller_runtime_max_concurrent_reconciles (gauge): Maximum number of concurrent reconciles per controller. Labels: controller=authconfig|secret
  • workqueue_adds_total (counter): Total number of adds handled by workqueue. Labels: name=authconfig|secret
  • workqueue_depth (gauge): Current depth of workqueue. Labels: name=authconfig|secret
  • workqueue_queue_duration_seconds (histogram): How long in seconds an item stays in workqueue before being requested. Labels: name=authconfig|secret
  • workqueue_longest_running_processor_seconds (gauge): How many seconds has the longest running processor for workqueue been running. Labels: name=authconfig|secret
  • workqueue_retries_total (counter): Total number of retries handled by workqueue. Labels: name=authconfig|secret
  • workqueue_unfinished_work_seconds (gauge): How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Labels: name=authconfig|secret
  • workqueue_work_duration_seconds (histogram): How long in seconds processing an item from workqueue takes. Labels: name=authconfig|secret
  • rest_client_requests_total (counter): Number of HTTP requests, partitioned by status code, method, and host. Labels: code=200|404, method=GET|PUT|POST

Endpoint: /server-metrics

  • auth_server_evaluator_total [2] (counter): Total number of evaluations of individual authconfig rule performed by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name
  • auth_server_evaluator_cancelled [2] (counter): Number of evaluations of individual authconfig rule cancelled by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name
  • auth_server_evaluator_ignored [2] (counter): Number of evaluations of individual authconfig rule ignored by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name
  • auth_server_evaluator_denied [2] (counter): Number of denials from individual authconfig rule evaluated by the auth server. Labels: namespace, authconfig, evaluator_type, evaluator_name
  • auth_server_evaluator_duration_seconds [2] (histogram): Response latency of individual authconfig rule evaluated by the auth server (in seconds). Labels: namespace, authconfig, evaluator_type, evaluator_name
  • auth_server_authconfig_total (counter): Total number of authconfigs enforced by the auth server, partitioned by authconfig. Labels: namespace, authconfig
  • auth_server_authconfig_response_status (counter): Response status of authconfigs sent by the auth server, partitioned by authconfig. Labels: namespace, authconfig, status=OK|UNAUTHENTICATED|PERMISSION_DENIED
  • auth_server_authconfig_duration_seconds (histogram): Response latency of authconfig enforced by the auth server (in seconds). Labels: namespace, authconfig
  • auth_server_response_status (counter): Response status of authconfigs sent by the auth server. Labels: status=OK|UNAUTHENTICATED|PERMISSION_DENIED|NOT_FOUND
  • grpc_server_handled_total (counter): Total number of RPCs completed on the server, regardless of success or failure. Labels: grpc_code=OK|Aborted|Canceled|DeadlineExceeded|Internal|ResourceExhausted|Unknown, grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization
  • grpc_server_handling_seconds (histogram): Response latency (seconds) of gRPC that had been application-level handled by the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization
  • grpc_server_msg_received_total (counter): Total number of RPC stream messages received on the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization
  • grpc_server_msg_sent_total (counter): Total number of gRPC stream messages sent by the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization
  • grpc_server_started_total (counter): Total number of RPCs started on the server. Labels: grpc_method=Check, grpc_service=envoy.service.auth.v3.Authorization
  • http_server_handled_total (counter): Total number of calls completed on the raw HTTP authorization server, regardless of success or failure. Labels: http_code
  • http_server_handling_seconds (histogram): Response latency (seconds) of raw HTTP authorization request that had been application-level handled by the server.
  • oidc_server_requests_total (counter): Number of get requests received on the OIDC (Festival Wristband) server. Labels: namespace, authconfig, wristband, path=oidc-config|jwks
  • oidc_server_response_status (counter): Status of HTTP response sent by the OIDC (Festival Wristband) server. Labels: status=200|404

[1] Both endpoints also export metrics about the Go runtime, such as the number of goroutines (go_goroutines) and threads (go_threads), CPU and memory usage, and GC stats.

[2] Opt-in metrics: auth_server_evaluator_* metrics require authconfig.spec.(identity|metadata|authorization|response).metrics: true (default: false). This can be enforced for the entire instance (all AuthConfigs and evaluators) by setting the --deep-metrics-enabled command-line flag in the Authorino deployment.
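
For instance, to opt in at the level of a single evaluator, set the metrics field on that evaluator in the AuthConfig. A minimal, hypothetical fragment (the evaluator name and the <keycloak-host> endpoint are placeholders):

identity:\n- name: keycloak\n  metrics: true  # emits the opt-in auth_server_evaluator_* metrics for this evaluator\n  oidc:\n    endpoint: https://<keycloak-host>/auth/realms/kuadrant\n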

Example of metrics exported at the /metrics endpoint
# HELP controller_runtime_active_workers Number of currently used workers per controller\n# TYPE controller_runtime_active_workers gauge\ncontroller_runtime_active_workers{controller=\"authconfig\"} 0\ncontroller_runtime_active_workers{controller=\"secret\"} 0\n# HELP controller_runtime_max_concurrent_reconciles Maximum number of concurrent reconciles per controller\n# TYPE controller_runtime_max_concurrent_reconciles gauge\ncontroller_runtime_max_concurrent_reconciles{controller=\"authconfig\"} 1\ncontroller_runtime_max_concurrent_reconciles{controller=\"secret\"} 1\n# HELP controller_runtime_reconcile_errors_total Total number of reconciliation errors per controller\n# TYPE controller_runtime_reconcile_errors_total counter\ncontroller_runtime_reconcile_errors_total{controller=\"authconfig\"} 12\ncontroller_runtime_reconcile_errors_total{controller=\"secret\"} 0\n# HELP controller_runtime_reconcile_time_seconds Length of time per reconciliation per controller\n# TYPE controller_runtime_reconcile_time_seconds histogram\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.005\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.01\"} 11\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.025\"} 17\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.05\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.1\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.15\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.2\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.25\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.3\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.35\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.4\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.45\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.5\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.6\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.7\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.8\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"0.9\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"1\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"1.25\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"1.5\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"1.75\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"2\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"2.5\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"3\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"3.5\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"4\"} 18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"4.5\"} 
18\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"5\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"6\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"7\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"8\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"9\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"10\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"15\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"20\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"25\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"30\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"40\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"50\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"60\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"authconfig\",le=\"+Inf\"} 19\ncontroller_runtime_reconcile_time_seconds_sum{controller=\"authconfig\"} 5.171108321999999\ncontroller_runtime_reconcile_time_seconds_count{controller=\"authconfig\"} 19\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.005\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.01\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.025\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.05\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.1\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.15\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.2\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.25\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.3\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.35\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.4\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.45\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.6\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.7\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.8\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"0.9\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"1\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"1.25\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"1.5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"1.75\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"2\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"2.5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"3\"} 
1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"3.5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"4\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"4.5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"5\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"6\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"7\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"8\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"9\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"10\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"15\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"20\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"25\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"30\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"40\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"50\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"60\"} 1\ncontroller_runtime_reconcile_time_seconds_bucket{controller=\"secret\",le=\"+Inf\"} 1\ncontroller_runtime_reconcile_time_seconds_sum{controller=\"secret\"} 0.000138025\ncontroller_runtime_reconcile_time_seconds_count{controller=\"secret\"} 1\n# HELP controller_runtime_reconcile_total Total number of reconciliations per controller\n# TYPE controller_runtime_reconcile_total counter\ncontroller_runtime_reconcile_total{controller=\"authconfig\",result=\"error\"} 12\ncontroller_runtime_reconcile_total{controller=\"authconfig\",result=\"requeue\"} 0\ncontroller_runtime_reconcile_total{controller=\"authconfig\",result=\"requeue_after\"} 0\ncontroller_runtime_reconcile_total{controller=\"authconfig\",result=\"success\"} 7\ncontroller_runtime_reconcile_total{controller=\"secret\",result=\"error\"} 0\ncontroller_runtime_reconcile_total{controller=\"secret\",result=\"requeue\"} 0\ncontroller_runtime_reconcile_total{controller=\"secret\",result=\"requeue_after\"} 0\ncontroller_runtime_reconcile_total{controller=\"secret\",result=\"success\"} 1\n# HELP go_gc_cycles_automatic_gc_cycles_total Count of completed GC cycles generated by the Go runtime.\n# TYPE go_gc_cycles_automatic_gc_cycles_total counter\ngo_gc_cycles_automatic_gc_cycles_total 13\n# HELP go_gc_cycles_forced_gc_cycles_total Count of completed GC cycles forced by the application.\n# TYPE go_gc_cycles_forced_gc_cycles_total counter\ngo_gc_cycles_forced_gc_cycles_total 0\n# HELP go_gc_cycles_total_gc_cycles_total Count of all completed GC cycles.\n# TYPE go_gc_cycles_total_gc_cycles_total counter\ngo_gc_cycles_total_gc_cycles_total 13\n# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 4.5971e-05\ngo_gc_duration_seconds{quantile=\"0.25\"} 5.69e-05\ngo_gc_duration_seconds{quantile=\"0.5\"} 0.000140699\ngo_gc_duration_seconds{quantile=\"0.75\"} 0.000313162\ngo_gc_duration_seconds{quantile=\"1\"} 0.001692423\ngo_gc_duration_seconds_sum 0.003671076\ngo_gc_duration_seconds_count 13\n# HELP go_gc_heap_allocs_by_size_bytes_total Distribution of heap allocations by approximate size. 
Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_allocs_by_size_bytes_total histogram\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"8.999999999999998\"} 6357\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"16.999999999999996\"} 45065\n[...]\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"32768.99999999999\"} 128306\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"+Inf\"} 128327\ngo_gc_heap_allocs_by_size_bytes_total_sum 1.5021512e+07\ngo_gc_heap_allocs_by_size_bytes_total_count 128327\n# HELP go_gc_heap_allocs_bytes_total Cumulative sum of memory allocated to the heap by the application.\n# TYPE go_gc_heap_allocs_bytes_total counter\ngo_gc_heap_allocs_bytes_total 1.5021512e+07\n# HELP go_gc_heap_allocs_objects_total Cumulative count of heap allocations triggered by the application. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_allocs_objects_total counter\ngo_gc_heap_allocs_objects_total 128327\n# HELP go_gc_heap_frees_by_size_bytes_total Distribution of freed heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_frees_by_size_bytes_total histogram\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"8.999999999999998\"} 3885\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"16.999999999999996\"} 33418\n[...]\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"32768.99999999999\"} 96417\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"+Inf\"} 96425\ngo_gc_heap_frees_by_size_bytes_total_sum 9.880944e+06\ngo_gc_heap_frees_by_size_bytes_total_count 96425\n# HELP go_gc_heap_frees_bytes_total Cumulative sum of heap memory freed by the garbage collector.\n# TYPE go_gc_heap_frees_bytes_total counter\ngo_gc_heap_frees_bytes_total 9.880944e+06\n# HELP go_gc_heap_frees_objects_total Cumulative count of heap allocations whose storage was freed by the garbage collector. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_frees_objects_total counter\ngo_gc_heap_frees_objects_total 96425\n# HELP go_gc_heap_goal_bytes Heap size target for the end of the GC cycle.\n# TYPE go_gc_heap_goal_bytes gauge\ngo_gc_heap_goal_bytes 9.356624e+06\n# HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory.\n# TYPE go_gc_heap_objects_objects gauge\ngo_gc_heap_objects_objects 31902\n# HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. 
Each block is already accounted for in allocs-by-size and frees-by-size.\n# TYPE go_gc_heap_tiny_allocs_objects_total counter\ngo_gc_heap_tiny_allocs_objects_total 11750\n# HELP go_gc_pauses_seconds_total Distribution individual GC-related stop-the-world pause latencies.\n# TYPE go_gc_pauses_seconds_total histogram\ngo_gc_pauses_seconds_total_bucket{le=\"9.999999999999999e-10\"} 0\ngo_gc_pauses_seconds_total_bucket{le=\"1.9999999999999997e-09\"} 0\n[...]\ngo_gc_pauses_seconds_total_bucket{le=\"206708.18602188796\"} 26\ngo_gc_pauses_seconds_total_bucket{le=\"+Inf\"} 26\ngo_gc_pauses_seconds_total_sum 0.003151488\ngo_gc_pauses_seconds_total_count 26\n# HELP go_goroutines Number of goroutines that currently exist.\n# TYPE go_goroutines gauge\ngo_goroutines 80\n# HELP go_info Information about the Go environment.\n# TYPE go_info gauge\ngo_info{version=\"go1.18.7\"} 1\n# HELP go_memory_classes_heap_free_bytes Memory that is completely free and eligible to be returned to the underlying system, but has not been. This metric is the runtime's estimate of free address space that is backed by physical memory.\n# TYPE go_memory_classes_heap_free_bytes gauge\ngo_memory_classes_heap_free_bytes 589824\n# HELP go_memory_classes_heap_objects_bytes Memory occupied by live objects and dead objects that have not yet been marked free by the garbage collector.\n# TYPE go_memory_classes_heap_objects_bytes gauge\ngo_memory_classes_heap_objects_bytes 5.140568e+06\n# HELP go_memory_classes_heap_released_bytes Memory that is completely free and has been returned to the underlying system. This metric is the runtime's estimate of free address space that is still mapped into the process, but is not backed by physical memory.\n# TYPE go_memory_classes_heap_released_bytes gauge\ngo_memory_classes_heap_released_bytes 4.005888e+06\n# HELP go_memory_classes_heap_stacks_bytes Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use.\n# TYPE go_memory_classes_heap_stacks_bytes gauge\ngo_memory_classes_heap_stacks_bytes 786432\n# HELP go_memory_classes_heap_unused_bytes Memory that is reserved for heap objects but is not currently used to hold heap objects.\n# TYPE go_memory_classes_heap_unused_bytes gauge\ngo_memory_classes_heap_unused_bytes 2.0602e+06\n# HELP go_memory_classes_metadata_mcache_free_bytes Memory that is reserved for runtime mcache structures, but not in-use.\n# TYPE go_memory_classes_metadata_mcache_free_bytes gauge\ngo_memory_classes_metadata_mcache_free_bytes 13984\n# HELP go_memory_classes_metadata_mcache_inuse_bytes Memory that is occupied by runtime mcache structures that are currently being used.\n# TYPE go_memory_classes_metadata_mcache_inuse_bytes gauge\ngo_memory_classes_metadata_mcache_inuse_bytes 2400\n# HELP go_memory_classes_metadata_mspan_free_bytes Memory that is reserved for runtime mspan structures, but not in-use.\n# TYPE go_memory_classes_metadata_mspan_free_bytes gauge\ngo_memory_classes_metadata_mspan_free_bytes 17104\n# HELP go_memory_classes_metadata_mspan_inuse_bytes Memory that is occupied by runtime mspan structures that are currently being used.\n# TYPE go_memory_classes_metadata_mspan_inuse_bytes gauge\ngo_memory_classes_metadata_mspan_inuse_bytes 113968\n# HELP go_memory_classes_metadata_other_bytes Memory that is reserved for or used to hold runtime metadata.\n# TYPE go_memory_classes_metadata_other_bytes gauge\ngo_memory_classes_metadata_other_bytes 5.544408e+06\n# HELP go_memory_classes_os_stacks_bytes Stack memory allocated by the 
underlying operating system.\n# TYPE go_memory_classes_os_stacks_bytes gauge\ngo_memory_classes_os_stacks_bytes 0\n# HELP go_memory_classes_other_bytes Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more.\n# TYPE go_memory_classes_other_bytes gauge\ngo_memory_classes_other_bytes 537777\n# HELP go_memory_classes_profiling_buckets_bytes Memory that is used by the stack trace hash map used for profiling.\n# TYPE go_memory_classes_profiling_buckets_bytes gauge\ngo_memory_classes_profiling_buckets_bytes 1.455487e+06\n# HELP go_memory_classes_total_bytes All memory mapped by the Go runtime into the current process as read-write. Note that this does not include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes.\n# TYPE go_memory_classes_total_bytes gauge\ngo_memory_classes_total_bytes 2.026804e+07\n# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.\n# TYPE go_memstats_alloc_bytes gauge\ngo_memstats_alloc_bytes 5.140568e+06\n# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.\n# TYPE go_memstats_alloc_bytes_total counter\ngo_memstats_alloc_bytes_total 1.5021512e+07\n# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.\n# TYPE go_memstats_buck_hash_sys_bytes gauge\ngo_memstats_buck_hash_sys_bytes 1.455487e+06\n# HELP go_memstats_frees_total Total number of frees.\n# TYPE go_memstats_frees_total counter\ngo_memstats_frees_total 108175\n# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.\n# TYPE go_memstats_gc_cpu_fraction gauge\ngo_memstats_gc_cpu_fraction 0\n# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.\n# TYPE go_memstats_gc_sys_bytes gauge\ngo_memstats_gc_sys_bytes 5.544408e+06\n# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.\n# TYPE go_memstats_heap_alloc_bytes gauge\ngo_memstats_heap_alloc_bytes 5.140568e+06\n# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.\n# TYPE go_memstats_heap_idle_bytes gauge\ngo_memstats_heap_idle_bytes 4.595712e+06\n# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.\n# TYPE go_memstats_heap_inuse_bytes gauge\ngo_memstats_heap_inuse_bytes 7.200768e+06\n# HELP go_memstats_heap_objects Number of allocated objects.\n# TYPE go_memstats_heap_objects gauge\ngo_memstats_heap_objects 31902\n# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.\n# TYPE go_memstats_heap_released_bytes gauge\ngo_memstats_heap_released_bytes 4.005888e+06\n# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.\n# TYPE go_memstats_heap_sys_bytes gauge\ngo_memstats_heap_sys_bytes 1.179648e+07\n# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.\n# TYPE go_memstats_last_gc_time_seconds gauge\ngo_memstats_last_gc_time_seconds 1.6461572121033354e+09\n# HELP go_memstats_lookups_total Total number of pointer lookups.\n# TYPE go_memstats_lookups_total counter\ngo_memstats_lookups_total 0\n# HELP go_memstats_mallocs_total Total number of mallocs.\n# TYPE go_memstats_mallocs_total counter\ngo_memstats_mallocs_total 140077\n# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.\n# TYPE go_memstats_mcache_inuse_bytes gauge\ngo_memstats_mcache_inuse_bytes 2400\n# HELP 
go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.\n# TYPE go_memstats_mcache_sys_bytes gauge\ngo_memstats_mcache_sys_bytes 16384\n# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.\n# TYPE go_memstats_mspan_inuse_bytes gauge\ngo_memstats_mspan_inuse_bytes 113968\n# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.\n# TYPE go_memstats_mspan_sys_bytes gauge\ngo_memstats_mspan_sys_bytes 131072\n# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.\n# TYPE go_memstats_next_gc_bytes gauge\ngo_memstats_next_gc_bytes 9.356624e+06\n# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.\n# TYPE go_memstats_other_sys_bytes gauge\ngo_memstats_other_sys_bytes 537777\n# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.\n# TYPE go_memstats_stack_inuse_bytes gauge\ngo_memstats_stack_inuse_bytes 786432\n# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.\n# TYPE go_memstats_stack_sys_bytes gauge\ngo_memstats_stack_sys_bytes 786432\n# HELP go_memstats_sys_bytes Number of bytes obtained from system.\n# TYPE go_memstats_sys_bytes gauge\ngo_memstats_sys_bytes 2.026804e+07\n# HELP go_sched_goroutines_goroutines Count of live goroutines.\n# TYPE go_sched_goroutines_goroutines gauge\ngo_sched_goroutines_goroutines 80\n# HELP go_sched_latencies_seconds Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running.\n# TYPE go_sched_latencies_seconds histogram\ngo_sched_latencies_seconds_bucket{le=\"9.999999999999999e-10\"} 244\ngo_sched_latencies_seconds_bucket{le=\"1.9999999999999997e-09\"} 244\n[...]\ngo_sched_latencies_seconds_bucket{le=\"206708.18602188796\"} 2336\ngo_sched_latencies_seconds_bucket{le=\"+Inf\"} 2336\ngo_sched_latencies_seconds_sum 0.18509832400000004\ngo_sched_latencies_seconds_count 2336\n# HELP go_threads Number of OS threads created.\n# TYPE go_threads gauge\ngo_threads 8\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 1.84\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 14\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 4.3728896e+07\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.64615612779e+09\n# HELP process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 7.65362176e+08\n# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.\n# TYPE process_virtual_memory_max_bytes gauge\nprocess_virtual_memory_max_bytes 1.8446744073709552e+19\n# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.\n# TYPE rest_client_requests_total counter\nrest_client_requests_total{code=\"200\",host=\"10.96.0.1:443\",method=\"GET\"} 114\nrest_client_requests_total{code=\"200\",host=\"10.96.0.1:443\",method=\"PUT\"} 4\n# HELP workqueue_adds_total Total 
number of adds handled by workqueue\n# TYPE workqueue_adds_total counter\nworkqueue_adds_total{name=\"authconfig\"} 19\nworkqueue_adds_total{name=\"secret\"} 1\n# HELP workqueue_depth Current depth of workqueue\n# TYPE workqueue_depth gauge\nworkqueue_depth{name=\"authconfig\"} 0\nworkqueue_depth{name=\"secret\"} 0\n# HELP workqueue_longest_running_processor_seconds How many seconds has the longest running processor for workqueue been running.\n# TYPE workqueue_longest_running_processor_seconds gauge\nworkqueue_longest_running_processor_seconds{name=\"authconfig\"} 0\nworkqueue_longest_running_processor_seconds{name=\"secret\"} 0\n# HELP workqueue_queue_duration_seconds How long in seconds an item stays in workqueue before being requested\n# TYPE workqueue_queue_duration_seconds histogram\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"1e-08\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"1e-07\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"1e-06\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"9.999999999999999e-06\"} 8\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"9.999999999999999e-05\"} 17\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"0.001\"} 17\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"0.01\"} 17\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"0.1\"} 18\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"1\"} 18\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"10\"} 19\nworkqueue_queue_duration_seconds_bucket{name=\"authconfig\",le=\"+Inf\"} 19\nworkqueue_queue_duration_seconds_sum{name=\"authconfig\"} 4.969016371\nworkqueue_queue_duration_seconds_count{name=\"authconfig\"} 19\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"1e-08\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"1e-07\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"1e-06\"} 0\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"9.999999999999999e-06\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"9.999999999999999e-05\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"0.001\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"0.01\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"0.1\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"1\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"10\"} 1\nworkqueue_queue_duration_seconds_bucket{name=\"secret\",le=\"+Inf\"} 1\nworkqueue_queue_duration_seconds_sum{name=\"secret\"} 4.67e-06\nworkqueue_queue_duration_seconds_count{name=\"secret\"} 1\n# HELP workqueue_retries_total Total number of retries handled by workqueue\n# TYPE workqueue_retries_total counter\nworkqueue_retries_total{name=\"authconfig\"} 12\nworkqueue_retries_total{name=\"secret\"} 0\n# HELP workqueue_unfinished_work_seconds How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. 
One can deduce the number of stuck threads by observing the rate at which this increases.\n# TYPE workqueue_unfinished_work_seconds gauge\nworkqueue_unfinished_work_seconds{name=\"authconfig\"} 0\nworkqueue_unfinished_work_seconds{name=\"secret\"} 0\n# HELP workqueue_work_duration_seconds How long in seconds processing an item from workqueue takes.\n# TYPE workqueue_work_duration_seconds histogram\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"1e-08\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"1e-07\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"1e-06\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"9.999999999999999e-06\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"9.999999999999999e-05\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"0.001\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"0.01\"} 11\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"0.1\"} 18\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"1\"} 18\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"10\"} 19\nworkqueue_work_duration_seconds_bucket{name=\"authconfig\",le=\"+Inf\"} 19\nworkqueue_work_duration_seconds_sum{name=\"authconfig\"} 5.171738079000001\nworkqueue_work_duration_seconds_count{name=\"authconfig\"} 19\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"1e-08\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"1e-07\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"1e-06\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"9.999999999999999e-06\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"9.999999999999999e-05\"} 0\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"0.001\"} 1\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"0.01\"} 1\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"0.1\"} 1\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"1\"} 1\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"10\"} 1\nworkqueue_work_duration_seconds_bucket{name=\"secret\",le=\"+Inf\"} 1\nworkqueue_work_duration_seconds_sum{name=\"secret\"} 0.000150956\nworkqueue_work_duration_seconds_count{name=\"secret\"} 1\n
Example of metrics exported at the /server-metrics endpoint
# HELP auth_server_authconfig_duration_seconds Response latency of authconfig enforced by the auth server (in seconds).\n# TYPE auth_server_authconfig_duration_seconds histogram\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.001\"} 0\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.051000000000000004\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.101\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.15100000000000002\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.201\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.251\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.301\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.351\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.40099999999999997\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.45099999999999996\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.501\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.551\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.6010000000000001\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.6510000000000001\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.7010000000000002\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.7510000000000002\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.8010000000000003\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.8510000000000003\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.9010000000000004\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"0.9510000000000004\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"edge-auth\",namespace=\"authorino\",le=\"+Inf\"} 1\nauth_server_authconfig_duration_seconds_sum{authconfig=\"edge-auth\",namespace=\"authorino\"} 0.001701795\nauth_server_authconfig_duration_seconds_count{authconfig=\"edge-auth\",namespace=\"authorino\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.001\"} 1\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.051000000000000004\"} 4\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.101\"} 4\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.15100000000000002\"} 
5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.201\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.251\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.301\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.351\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.40099999999999997\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.45099999999999996\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.501\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.551\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.6010000000000001\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.6510000000000001\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.7010000000000002\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.7510000000000002\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.8010000000000003\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.8510000000000003\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.9010000000000004\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"0.9510000000000004\"} 5\nauth_server_authconfig_duration_seconds_bucket{authconfig=\"talker-api-protection\",namespace=\"authorino\",le=\"+Inf\"} 5\nauth_server_authconfig_duration_seconds_sum{authconfig=\"talker-api-protection\",namespace=\"authorino\"} 0.26967658299999997\nauth_server_authconfig_duration_seconds_count{authconfig=\"talker-api-protection\",namespace=\"authorino\"} 5\n# HELP auth_server_authconfig_response_status Response status of authconfigs sent by the auth server, partitioned by authconfig.\n# TYPE auth_server_authconfig_response_status counter\nauth_server_authconfig_response_status{authconfig=\"edge-auth\",namespace=\"authorino\",status=\"OK\"} 1\nauth_server_authconfig_response_status{authconfig=\"talker-api-protection\",namespace=\"authorino\",status=\"OK\"} 2\nauth_server_authconfig_response_status{authconfig=\"talker-api-protection\",namespace=\"authorino\",status=\"PERMISSION_DENIED\"} 2\nauth_server_authconfig_response_status{authconfig=\"talker-api-protection\",namespace=\"authorino\",status=\"UNAUTHENTICATED\"} 1\n# HELP auth_server_authconfig_total Total number of authconfigs enforced by the auth server, partitioned by authconfig.\n# TYPE auth_server_authconfig_total counter\nauth_server_authconfig_total{authconfig=\"edge-auth\",namespace=\"authorino\"} 1\nauth_server_authconfig_total{authconfig=\"talker-api-protection\",namespace=\"authorino\"} 5\n# HELP 
auth_server_evaluator_duration_seconds Response latency of individual authconfig rule evaluated by the auth server (in seconds).\n# TYPE auth_server_evaluator_duration_seconds histogram\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.001\"} 0\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.051000000000000004\"} 3\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.101\"} 3\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.15100000000000002\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.201\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.251\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.301\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.351\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.40099999999999997\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.45099999999999996\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.501\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.551\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.6010000000000001\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.6510000000000001\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.7010000000000002\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.7510000000000002\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.8010000000000003\"} 
4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.8510000000000003\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.9010000000000004\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"0.9510000000000004\"} 4\nauth_server_evaluator_duration_seconds_bucket{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\",le=\"+Inf\"} 4\nauth_server_evaluator_duration_seconds_sum{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\"} 0.25800055\nauth_server_evaluator_duration_seconds_count{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\"} 4\n# HELP auth_server_evaluator_total Total number of evaluations of individual authconfig rule performed by the auth server.\n# TYPE auth_server_evaluator_total counter\nauth_server_evaluator_total{authconfig=\"talker-api-protection\",evaluator_name=\"geo\",evaluator_type=\"METADATA_GENERIC_HTTP\",namespace=\"authorino\"} 4\n# HELP auth_server_response_status Response status of authconfigs sent by the auth server.\n# TYPE auth_server_response_status counter\nauth_server_response_status{status=\"NOT_FOUND\"} 1\nauth_server_response_status{status=\"OK\"} 3\nauth_server_response_status{status=\"PERMISSION_DENIED\"} 2\nauth_server_response_status{status=\"UNAUTHENTICATED\"} 1\n# HELP go_gc_cycles_automatic_gc_cycles_total Count of completed GC cycles generated by the Go runtime.\n# TYPE go_gc_cycles_automatic_gc_cycles_total counter\ngo_gc_cycles_automatic_gc_cycles_total 11\n# HELP go_gc_cycles_forced_gc_cycles_total Count of completed GC cycles forced by the application.\n# TYPE go_gc_cycles_forced_gc_cycles_total counter\ngo_gc_cycles_forced_gc_cycles_total 0\n# HELP go_gc_cycles_total_gc_cycles_total Count of all completed GC cycles.\n# TYPE go_gc_cycles_total_gc_cycles_total counter\ngo_gc_cycles_total_gc_cycles_total 11\n# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 4.5971e-05\ngo_gc_duration_seconds{quantile=\"0.25\"} 5.69e-05\ngo_gc_duration_seconds{quantile=\"0.5\"} 0.000158594\ngo_gc_duration_seconds{quantile=\"0.75\"} 0.000324091\ngo_gc_duration_seconds{quantile=\"1\"} 0.001692423\ngo_gc_duration_seconds_sum 0.003546711\ngo_gc_duration_seconds_count 11\n# HELP go_gc_heap_allocs_by_size_bytes_total Distribution of heap allocations by approximate size. 
Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_allocs_by_size_bytes_total histogram\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"8.999999999999998\"} 6261\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"16.999999999999996\"} 42477\n[...]\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"32768.99999999999\"} 122133\ngo_gc_heap_allocs_by_size_bytes_total_bucket{le=\"+Inf\"} 122154\ngo_gc_heap_allocs_by_size_bytes_total_sum 1.455944e+07\ngo_gc_heap_allocs_by_size_bytes_total_count 122154\n# HELP go_gc_heap_allocs_bytes_total Cumulative sum of memory allocated to the heap by the application.\n# TYPE go_gc_heap_allocs_bytes_total counter\ngo_gc_heap_allocs_bytes_total 1.455944e+07\n# HELP go_gc_heap_allocs_objects_total Cumulative count of heap allocations triggered by the application. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_allocs_objects_total counter\ngo_gc_heap_allocs_objects_total 122154\n# HELP go_gc_heap_frees_by_size_bytes_total Distribution of freed heap allocations by approximate size. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_frees_by_size_bytes_total histogram\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"8.999999999999998\"} 3789\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"16.999999999999996\"} 31067\n[...]\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"32768.99999999999\"} 91013\ngo_gc_heap_frees_by_size_bytes_total_bucket{le=\"+Inf\"} 91021\ngo_gc_heap_frees_by_size_bytes_total_sum 9.399936e+06\ngo_gc_heap_frees_by_size_bytes_total_count 91021\n# HELP go_gc_heap_frees_bytes_total Cumulative sum of heap memory freed by the garbage collector.\n# TYPE go_gc_heap_frees_bytes_total counter\ngo_gc_heap_frees_bytes_total 9.399936e+06\n# HELP go_gc_heap_frees_objects_total Cumulative count of heap allocations whose storage was freed by the garbage collector. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.\n# TYPE go_gc_heap_frees_objects_total counter\ngo_gc_heap_frees_objects_total 91021\n# HELP go_gc_heap_goal_bytes Heap size target for the end of the GC cycle.\n# TYPE go_gc_heap_goal_bytes gauge\ngo_gc_heap_goal_bytes 9.601744e+06\n# HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory.\n# TYPE go_gc_heap_objects_objects gauge\ngo_gc_heap_objects_objects 31133\n# HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. 
Each block is already accounted for in allocs-by-size and frees-by-size.\n# TYPE go_gc_heap_tiny_allocs_objects_total counter\ngo_gc_heap_tiny_allocs_objects_total 9866\n# HELP go_gc_pauses_seconds_total Distribution individual GC-related stop-the-world pause latencies.\n# TYPE go_gc_pauses_seconds_total histogram\ngo_gc_pauses_seconds_total_bucket{le=\"9.999999999999999e-10\"} 0\ngo_gc_pauses_seconds_total_bucket{le=\"1.9999999999999997e-09\"} 0\n[...]\ngo_gc_pauses_seconds_total_bucket{le=\"206708.18602188796\"} 22\ngo_gc_pauses_seconds_total_bucket{le=\"+Inf\"} 22\ngo_gc_pauses_seconds_total_sum 0.0030393599999999996\ngo_gc_pauses_seconds_total_count 22\n# HELP go_goroutines Number of goroutines that currently exist.\n# TYPE go_goroutines gauge\ngo_goroutines 79\n# HELP go_info Information about the Go environment.\n# TYPE go_info gauge\ngo_info{version=\"go1.18.7\"} 1\n# HELP go_memory_classes_heap_free_bytes Memory that is completely free and eligible to be returned to the underlying system, but has not been. This metric is the runtime's estimate of free address space that is backed by physical memory.\n# TYPE go_memory_classes_heap_free_bytes gauge\ngo_memory_classes_heap_free_bytes 630784\n# HELP go_memory_classes_heap_objects_bytes Memory occupied by live objects and dead objects that have not yet been marked free by the garbage collector.\n# TYPE go_memory_classes_heap_objects_bytes gauge\ngo_memory_classes_heap_objects_bytes 5.159504e+06\n# HELP go_memory_classes_heap_released_bytes Memory that is completely free and has been returned to the underlying system. This metric is the runtime's estimate of free address space that is still mapped into the process, but is not backed by physical memory.\n# TYPE go_memory_classes_heap_released_bytes gauge\ngo_memory_classes_heap_released_bytes 3.858432e+06\n# HELP go_memory_classes_heap_stacks_bytes Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use.\n# TYPE go_memory_classes_heap_stacks_bytes gauge\ngo_memory_classes_heap_stacks_bytes 786432\n# HELP go_memory_classes_heap_unused_bytes Memory that is reserved for heap objects but is not currently used to hold heap objects.\n# TYPE go_memory_classes_heap_unused_bytes gauge\ngo_memory_classes_heap_unused_bytes 2.14776e+06\n# HELP go_memory_classes_metadata_mcache_free_bytes Memory that is reserved for runtime mcache structures, but not in-use.\n# TYPE go_memory_classes_metadata_mcache_free_bytes gauge\ngo_memory_classes_metadata_mcache_free_bytes 13984\n# HELP go_memory_classes_metadata_mcache_inuse_bytes Memory that is occupied by runtime mcache structures that are currently being used.\n# TYPE go_memory_classes_metadata_mcache_inuse_bytes gauge\ngo_memory_classes_metadata_mcache_inuse_bytes 2400\n# HELP go_memory_classes_metadata_mspan_free_bytes Memory that is reserved for runtime mspan structures, but not in-use.\n# TYPE go_memory_classes_metadata_mspan_free_bytes gauge\ngo_memory_classes_metadata_mspan_free_bytes 16696\n# HELP go_memory_classes_metadata_mspan_inuse_bytes Memory that is occupied by runtime mspan structures that are currently being used.\n# TYPE go_memory_classes_metadata_mspan_inuse_bytes gauge\ngo_memory_classes_metadata_mspan_inuse_bytes 114376\n# HELP go_memory_classes_metadata_other_bytes Memory that is reserved for or used to hold runtime metadata.\n# TYPE go_memory_classes_metadata_other_bytes gauge\ngo_memory_classes_metadata_other_bytes 5.544408e+06\n# HELP go_memory_classes_os_stacks_bytes Stack memory 
allocated by the underlying operating system.\n# TYPE go_memory_classes_os_stacks_bytes gauge\ngo_memory_classes_os_stacks_bytes 0\n# HELP go_memory_classes_other_bytes Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more.\n# TYPE go_memory_classes_other_bytes gauge\ngo_memory_classes_other_bytes 537777\n# HELP go_memory_classes_profiling_buckets_bytes Memory that is used by the stack trace hash map used for profiling.\n# TYPE go_memory_classes_profiling_buckets_bytes gauge\ngo_memory_classes_profiling_buckets_bytes 1.455487e+06\n# HELP go_memory_classes_total_bytes All memory mapped by the Go runtime into the current process as read-write. Note that this does not include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes.\n# TYPE go_memory_classes_total_bytes gauge\ngo_memory_classes_total_bytes 2.026804e+07\n# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.\n# TYPE go_memstats_alloc_bytes gauge\ngo_memstats_alloc_bytes 5.159504e+06\n# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.\n# TYPE go_memstats_alloc_bytes_total counter\ngo_memstats_alloc_bytes_total 1.455944e+07\n# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.\n# TYPE go_memstats_buck_hash_sys_bytes gauge\ngo_memstats_buck_hash_sys_bytes 1.455487e+06\n# HELP go_memstats_frees_total Total number of frees.\n# TYPE go_memstats_frees_total counter\ngo_memstats_frees_total 100887\n# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.\n# TYPE go_memstats_gc_cpu_fraction gauge\ngo_memstats_gc_cpu_fraction 0\n# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.\n# TYPE go_memstats_gc_sys_bytes gauge\ngo_memstats_gc_sys_bytes 5.544408e+06\n# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.\n# TYPE go_memstats_heap_alloc_bytes gauge\ngo_memstats_heap_alloc_bytes 5.159504e+06\n# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.\n# TYPE go_memstats_heap_idle_bytes gauge\ngo_memstats_heap_idle_bytes 4.489216e+06\n# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.\n# TYPE go_memstats_heap_inuse_bytes gauge\ngo_memstats_heap_inuse_bytes 7.307264e+06\n# HELP go_memstats_heap_objects Number of allocated objects.\n# TYPE go_memstats_heap_objects gauge\ngo_memstats_heap_objects 31133\n# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.\n# TYPE go_memstats_heap_released_bytes gauge\ngo_memstats_heap_released_bytes 3.858432e+06\n# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.\n# TYPE go_memstats_heap_sys_bytes gauge\ngo_memstats_heap_sys_bytes 1.179648e+07\n# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.\n# TYPE go_memstats_last_gc_time_seconds gauge\ngo_memstats_last_gc_time_seconds 1.6461569717723043e+09\n# HELP go_memstats_lookups_total Total number of pointer lookups.\n# TYPE go_memstats_lookups_total counter\ngo_memstats_lookups_total 0\n# HELP go_memstats_mallocs_total Total number of mallocs.\n# TYPE go_memstats_mallocs_total counter\ngo_memstats_mallocs_total 132020\n# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.\n# TYPE go_memstats_mcache_inuse_bytes gauge\ngo_memstats_mcache_inuse_bytes 
2400\n# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.\n# TYPE go_memstats_mcache_sys_bytes gauge\ngo_memstats_mcache_sys_bytes 16384\n# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.\n# TYPE go_memstats_mspan_inuse_bytes gauge\ngo_memstats_mspan_inuse_bytes 114376\n# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.\n# TYPE go_memstats_mspan_sys_bytes gauge\ngo_memstats_mspan_sys_bytes 131072\n# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.\n# TYPE go_memstats_next_gc_bytes gauge\ngo_memstats_next_gc_bytes 9.601744e+06\n# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.\n# TYPE go_memstats_other_sys_bytes gauge\ngo_memstats_other_sys_bytes 537777\n# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.\n# TYPE go_memstats_stack_inuse_bytes gauge\ngo_memstats_stack_inuse_bytes 786432\n# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.\n# TYPE go_memstats_stack_sys_bytes gauge\ngo_memstats_stack_sys_bytes 786432\n# HELP go_memstats_sys_bytes Number of bytes obtained from system.\n# TYPE go_memstats_sys_bytes gauge\ngo_memstats_sys_bytes 2.026804e+07\n# HELP go_sched_goroutines_goroutines Count of live goroutines.\n# TYPE go_sched_goroutines_goroutines gauge\ngo_sched_goroutines_goroutines 79\n# HELP go_sched_latencies_seconds Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running.\n# TYPE go_sched_latencies_seconds histogram\ngo_sched_latencies_seconds_bucket{le=\"9.999999999999999e-10\"} 225\ngo_sched_latencies_seconds_bucket{le=\"1.9999999999999997e-09\"} 225\n[...]\ngo_sched_latencies_seconds_bucket{le=\"206708.18602188796\"} 1916\ngo_sched_latencies_seconds_bucket{le=\"+Inf\"} 1916\ngo_sched_latencies_seconds_sum 0.18081453600000003\ngo_sched_latencies_seconds_count 1916\n# HELP go_threads Number of OS threads created.\n# TYPE go_threads gauge\ngo_threads 8\n# HELP grpc_server_handled_total Total number of RPCs completed on the server, regardless of success or failure.\n# TYPE grpc_server_handled_total counter\ngrpc_server_handled_total{grpc_code=\"Aborted\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Aborted\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Aborted\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"AlreadyExists\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"AlreadyExists\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"AlreadyExists\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Canceled\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Canceled\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 
0\ngrpc_server_handled_total{grpc_code=\"Canceled\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"DataLoss\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"DataLoss\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"DataLoss\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"DeadlineExceeded\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"DeadlineExceeded\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"DeadlineExceeded\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"FailedPrecondition\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"FailedPrecondition\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"FailedPrecondition\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Internal\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Internal\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Internal\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"InvalidArgument\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"InvalidArgument\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"InvalidArgument\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"NotFound\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"NotFound\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"NotFound\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"OK\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 7\ngrpc_server_handled_total{grpc_code=\"OK\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"OK\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"OutOfRange\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"OutOfRange\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 
0\ngrpc_server_handled_total{grpc_code=\"OutOfRange\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"PermissionDenied\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"PermissionDenied\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"PermissionDenied\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"ResourceExhausted\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"ResourceExhausted\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"ResourceExhausted\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Unauthenticated\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unauthenticated\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unauthenticated\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Unavailable\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unavailable\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unavailable\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Unimplemented\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unimplemented\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unimplemented\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\ngrpc_server_handled_total{grpc_code=\"Unknown\",grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unknown\",grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_handled_total{grpc_code=\"Unknown\",grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\n# HELP grpc_server_handling_seconds Histogram of response latency (seconds) of gRPC that had been application-level handled by the server.\n# TYPE grpc_server_handling_seconds histogram\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.005\"} 3\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.01\"} 3\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.025\"} 
3\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.05\"} 6\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.1\"} 6\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.25\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"0.5\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"1\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"2.5\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"5\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"10\"} 7\ngrpc_server_handling_seconds_bucket{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\",le=\"+Inf\"} 7\ngrpc_server_handling_seconds_sum{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 0.277605516\ngrpc_server_handling_seconds_count{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 7\n# HELP grpc_server_msg_received_total Total number of RPC stream messages received on the server.\n# TYPE grpc_server_msg_received_total counter\ngrpc_server_msg_received_total{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 7\ngrpc_server_msg_received_total{grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_msg_received_total{grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\n# HELP grpc_server_msg_sent_total Total number of gRPC stream messages sent by the server.\n# TYPE grpc_server_msg_sent_total counter\ngrpc_server_msg_sent_total{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 7\ngrpc_server_msg_sent_total{grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_msg_sent_total{grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\n# HELP grpc_server_started_total Total number of RPCs started on the server.\n# TYPE grpc_server_started_total counter\ngrpc_server_started_total{grpc_method=\"Check\",grpc_service=\"envoy.service.auth.v3.Authorization\",grpc_type=\"unary\"} 7\ngrpc_server_started_total{grpc_method=\"Check\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"unary\"} 0\ngrpc_server_started_total{grpc_method=\"Watch\",grpc_service=\"grpc.health.v1.Health\",grpc_type=\"server_stream\"} 0\n# HELP oidc_server_requests_total Number of get requests received on the OIDC (Festival Wristband) server.\n# TYPE oidc_server_requests_total counter\noidc_server_requests_total{authconfig=\"edge-auth\",namespace=\"authorino\",path=\"/.well-known/openid-configuration\",wristband=\"wristband\"} 1\noidc_server_requests_total{authconfig=\"edge-auth\",namespace=\"authorino\",path=\"/.well-known/openid-connect/certs\",wristband=\"wristband\"} 1\n# HELP oidc_server_response_status 
Status of HTTP response sent by the OIDC (Festival Wristband) server.\n# TYPE oidc_server_response_status counter\noidc_server_response_status{status=\"200\"} 2\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 1.42\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 14\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 4.370432e+07\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.64615612779e+09\n# HELP process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 7.65362176e+08\n# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.\n# TYPE process_virtual_memory_max_bytes gauge\nprocess_virtual_memory_max_bytes 1.8446744073709552e+19\n# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.\n# TYPE promhttp_metric_handler_requests_in_flight gauge\npromhttp_metric_handler_requests_in_flight 1\n# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.\n# TYPE promhttp_metric_handler_requests_total counter\npromhttp_metric_handler_requests_total{code=\"200\"} 1\npromhttp_metric_handler_requests_total{code=\"500\"} 0\npromhttp_metric_handler_requests_total{code=\"503\"} 0\n
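
Before wiring up a Prometheus scrape config, the metrics endpoints can be queried directly for a quick look. A minimal sketch, assuming the Authorino deployment is named authorino, that it serves the operator metrics at /metrics, and that it binds the metrics server to the default :8080 (the metrics-addr option shown in the logging examples further below):

kubectl port-forward deployment/authorino 8080:8080 &\ncurl http://localhost:8080/metrics\ncurl http://localhost:8080/server-metrics\n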
"},{"location":"authorino/docs/user-guides/observability/#readiness-check","title":"Readiness check","text":"

Authorino exposes two main endpoints for health and readiness checks of the AuthConfig controller: - /healthz: Health probe (ping) \u2013 reports \"ok\" if the controller is healthy. - /readyz: Readiness probe \u2013 reports \"ok\" if the controller is ready to reconcile AuthConfig-related events.

In general, the endpoints return either 200 (\"ok\", i.e. all checks have passed) or 500 (when one or more checks have failed).

The default binding network address is :8081, which can be changed by setting the command-line flag --health-probe-addr.
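
For a quick sanity check, the probe endpoints can be queried directly, e.g. via a port-forward. A minimal sketch, assuming the Authorino deployment is named authorino and binds the probes to the default :8081:

kubectl port-forward deployment/authorino 8081:8081 &\ncurl http://localhost:8081/healthz\n# ok\ncurl http://localhost:8081/readyz\n# ok\n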

The following additional subpath is available and its corresponding check can be aggregated into the response from the main readiness probe: - /readyz/authconfigs: Aggregated readiness status of the AuthConfigs \u2013 reports \"ok\" if all AuthConfigs watched by the reconciler have been marked as ready.

Important! The AuthConfig readiness check within the scope of the aggregated readiness probe endpoint is deactivated by default \u2013 i.e. this check is an opt-in check. Sending a request to the /readyz endpoint without explicitly opting in for the AuthConfigs check, by using the include parameter, will result in a response message that disregards the actual status of the watched AuthConfigs, possibly an \"ok\" message. To read the aggregated status of the watched AuthConfigs, either use the specific endpoint /readyz/authconfigs or opt in for the check in the aggregated endpoint by sending a request to /readyz?include=authconfigs.

Apart from include, used to add the aggregated status of the AuthConfigs, the following additional query string parameters are available: - verbose=true|false \u2013 provides more verbose response messages; - exclude=(check name) \u2013 to exclude a particular readiness check (reserved for future usage). An example combining these parameters follows below.
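
Combining the above, a sketch of opting in for the AuthConfigs check with verbose output, as well as querying the dedicated subpath (reusing the hypothetical port-forward to localhost:8081 from before):

curl \"http://localhost:8081/readyz?include=authconfigs&verbose=true\"\ncurl http://localhost:8081/readyz/authconfigs\n# ok, if all watched AuthConfigs are ready\n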

"},{"location":"authorino/docs/user-guides/observability/#logging","title":"Logging","text":"

Authorino outputs log messages to stdout either as structured JSON (\"production\" mode) or in a more user-friendly, human-readable format (\"development\" mode), with configurable levels of logging.

"},{"location":"authorino/docs/user-guides/observability/#log-levels-and-log-modes","title":"Log levels and log modes","text":"

Authorino outputs 3 levels of log messages: (from lowest to highest level) 1. debug 2. info (default) 3. error

info logging is restricted to high-level information about the gRPC and HTTP authorization services: messages are limited to incoming request and respective outgoing response logs, with reduced details about the corresponding objects (request payload and authorization result), and without any further detailed logs of the steps in between, except for errors.

Only debug logging will include processing details of each Auth Pipeline, such as intermediary requests to validate identities with external auth servers, and requests to external sources of auth metadata or authorization policies.

To configure the desired log level, set the spec.logLevel field of the Authorino custom resource (or --log-level command-line flag in the Authorino deployment), to one of the supported values listed above. Default log level is info.

Apart from log level, Authorino can output messages to the logs in 2 different formats: - production (default): each line is a parseable JSON object with properties {\"level\":string, \"ts\":int, \"msg\":string, \"logger\":string, extra values...} - development: more human-readable outputs, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\\t<log-level>\\t<logger>\\t<message>\\t{extra-values-as-json}
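
For illustration only, here is how the \"booting up authorino\" message (shown in production format in the examples further below) would look in development mode, following the template above. The timestamp is hypothetical; the version value is taken from the examples:

2022-11-23T16:33:28.929Z\\tinfo\\tauthorino\\tbooting up authorino\\t{\"version\": \"7688cfa32317a49f0461414e741c980e9c05dba3\"}\n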

To configure the desired log mode, set the spec.logMode field of the Authorino custom resource (or --log-mode command-line flag in the Authorino deployment), to one of the supported values listed above. Default log mode is production.

Example of Authorino custom resource with log level debug and log mode production:

apiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  logLevel: debug\n  logMode: production\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\n
"},{"location":"authorino/docs/user-guides/observability/#sensitive-data-output-to-the-logs","title":"Sensitive data output to the logs","text":"

Authorino will never output HTTP headers and query string parameters to info log messages, as such values usually include sensitive data (e.g. access tokens, API keys and Authorino Festival Wristbands). However, debug log messages may include such sensitive information and those are not redacted.

Therefore, DO NOT USE debug LOG LEVEL IN PRODUCTION! Instead, use either info or error.
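
To switch a running instance back to a safer level, it is enough to update the Authorino custom resource. A minimal sketch, assuming an Authorino CR named authorino in the current namespace:

kubectl patch authorino authorino --type=merge -p '{\"spec\":{\"logLevel\":\"info\"}}'\n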

"},{"location":"authorino/docs/user-guides/observability/#log-messages-printed-by-authorino","title":"Log messages printed by Authorino","text":"

Some log messages printed by Authorino and corresponding extra values included:

logger level message extra values authorino info \"setting instance base logger\" min level=info\\|debug, mode=production\\|development authorino info \"booting up authorino\" version authorino debug \"setting up with options\" auth-config-label-selector, deep-metrics-enabled, enable-leader-election, evaluator-cache-size, ext-auth-grpc-port, ext-auth-http-port, health-probe-addr, log-level, log-mode, max-http-request-body-size, metrics-addr, oidc-http-port, oidc-tls-cert, oidc-tls-cert-key, secret-label-selector, timeout, tls-cert, tls-cert-key, watch-namespace authorino info \"attempting to acquire leader lease <namespace>/cb88a58a.authorino.kuadrant.io...\\n\" authorino info \"successfully acquired lease <namespace>/cb88a58a.authorino.kuadrant.io\\n\" authorino info \"disabling grpc auth service\" authorino info \"starting grpc auth service\" port, tls authorino error \"failed to obtain port for the grpc auth service\" authorino error \"failed to load tls cert for the grpc auth\" authorino error \"failed to start grpc auth service\" authorino info \"disabling http auth service\" authorino info \"starting http auth service\" port, tls authorino error \"failed to obtain port for the http auth service\" authorino error \"failed to start http auth service\" authorino info \"disabling http oidc service\" authorino info \"starting http oidc service\" port, tls authorino error \"failed to obtain port for the http oidc service\" authorino error \"failed to start http oidc service\" authorino info \"starting manager\" authorino error \"unable to start manager\" authorino error \"unable to create controller\" controller=authconfig\\|secret\\|authconfigstatusupdate authorino error \"problem running manager\" authorino info \"starting status update manager\" authorino error \"unable to start status update manager\" authorino error \"problem running status update manager\" authorino.controller-runtime.metrics info \"metrics server is starting to listen\" addr authorino.controller-runtime.manager info \"starting metrics server\" path authorino.controller-runtime.manager.events debug \"Normal\" object={kind=ConfigMap, apiVersion=v1}, reason=LeaderElection, message=\"authorino-controller-manager-* became leader\" authorino.controller-runtime.manager.events debug \"Normal\" object={kind=Lease, apiVersion=coordination.k8s.io/v1}, reason=LeaderElection, message=\"authorino-controller-manager-* became leader\" authorino.controller-runtime.manager.controller.authconfig info \"resource reconciled\" authconfig authorino.controller-runtime.manager.controller.authconfig info \"host already taken\" authconfig, host authorino.controller-runtime.manager.controller.authconfig.statusupdater debug \"resource status did not change\" authconfig authorino.controller-runtime.manager.controller.authconfig.statusupdater debug \"resource status changed\" authconfig, authconfig/status authorino.controller-runtime.manager.controller.authconfig.statusupdater error \"failed to update the resource\" authconfig authorino.controller-runtime.manager.controller.authconfig.statusupdater info \"resource status updated\" authconfig authorino.controller-runtime.manager.controller.secret info \"resource reconciled\" authorino.controller-runtime.manager.controller.secret info \"could not reconcile authconfigs using api key authentication\" authorino.service.oidc info \"request received\" request id, url, realm, config, path authorino.service.oidc info \"response sent\" request id authorino.service.oidc error 
\"failed to serve oidc request\" authorino.service.auth info \"incoming authorization request\" request id, object authorino.service.auth debug \"incoming authorization request\" request id, object authorino.service.auth info \"outgoing authorization response\" request id, authorized, response, object authorino.service.auth debug \"outgoing authorization response\" request id, authorized, response, object authorino.service.auth error \"failed to create dynamic metadata\" request id, object authorino.service.auth.authpipeline debug \"skipping config\" request id, config, reason authorino.service.auth.authpipeline.identity debug \"identity validated\" request id, config, object authorino.service.auth.authpipeline.identity debug \"cannot validate identity\" request id, config, reason authorino.service.auth.authpipeline.identity error \"failed to extend identity object\" request id, config, object authorino.service.auth.authpipeline.identity.oidc error \"failed to discovery openid connect configuration\" endpoint authorino.service.auth.authpipeline.identity.oidc debug \"auto-refresh of openid connect configuration disabled\" endpoint, reason authorino.service.auth.authpipeline.identity.oidc debug \"openid connect configuration updated\" endpoint authorino.service.auth.authpipeline.identity.oauth2 debug \"sending token introspection request\" request id, url, data authorino.service.auth.authpipeline.identity.kubernetesauth debug \"calling kubernetes token review api\" request id, tokenreview authorino.service.auth.authpipeline.identity.apikey error \"Something went wrong fetching the authorized credentials\" authorino.service.auth.authpipeline.metadata debug \"fetched auth metadata\" request id, config, object authorino.service.auth.authpipeline.metadata debug \"cannot fetch metadata\" request id, config, reason authorino.service.auth.authpipeline.metadata.http debug \"sending request\" request id, method, url, headers authorino.service.auth.authpipeline.metadata.userinfo debug \"fetching user info\" request id, endpoint authorino.service.auth.authpipeline.metadata.uma debug \"requesting pat\" request id, url, data, headers authorino.service.auth.authpipeline.metadata.uma debug \"querying resources by uri\" request id, url authorino.service.auth.authpipeline.metadata.uma debug \"getting resource data\" request id, url authorino.service.auth.authpipeline.authorization debug \"evaluating for input\" request id, input authorino.service.auth.authpipeline.authorization debug \"access granted\" request id, config, object authorino.service.auth.authpipeline.authorization debug \"access denied\" request id, config, reason authorino.service.auth.authpipeline.authorization.opa error \"invalid response from policy evaluation\" policy authorino.service.auth.authpipeline.authorization.opa error \"failed to precompile policy\" policy authorino.service.auth.authpipeline.authorization.opa error \"failed to download policy from external registry\" policy, endpoint authorino.service.auth.authpipeline.authorization.opa error \"failed to refresh policy from external registry\" policy, endpoint authorino.service.auth.authpipeline.authorization.opa debug \"external policy unchanged\" policy, endpoint authorino.service.auth.authpipeline.authorization.opa debug \"auto-refresh of external policy disabled\" policy, endpoint, reason authorino.service.auth.authpipeline.authorization.opa info \"policy updated from external registry\" policy, endpoint authorino.service.auth.authpipeline.authorization.kubernetesauthz debug 
\"calling kubernetes subject access review api\" request id, subjectaccessreview authorino.service.auth.authpipeline.response debug \"dynamic response built\" request id, config, object authorino.service.auth.authpipeline.response debug \"cannot build dynamic response\" request id, config, reason authorino.service.auth.http debug \"bad request\" request id authorino.service.auth.http debug \"not found\" request id authorino.service.auth.http debug \"request body too large\" request id authorino.service.auth.http debug \"service unavailable\" request id"},{"location":"authorino/docs/user-guides/observability/#examples","title":"Examples","text":"

The examples below are all with --log-level=debug and --log-mode=production.

Booting up the service
{\"level\":\"info\",\"ts\":1669220526.929678,\"logger\":\"authorino\",\"msg\":\"setting instance base logger\",\"min level\":\"debug\",\"mode\":\"production\"}\n{\"level\":\"info\",\"ts\":1669220526.929718,\"logger\":\"authorino\",\"msg\":\"booting up authorino\",\"version\":\"7688cfa32317a49f0461414e741c980e9c05dba3\"}\n{\"level\":\"debug\",\"ts\":1669220526.9297278,\"logger\":\"authorino\",\"msg\":\"setting up with options\",\"auth-config-label-selector\":\"\",\"deep-metrics-enabled\":\"false\",\"enable-leader-election\":\"false\",\"evaluator-cache-size\":\"1\",\"ext-auth-grpc-port\":\"50051\",\"ext-auth-http-port\":\"5001\",\"health-probe-addr\":\":8081\",\"log-level\":\"debug\",\"log-mode\":\"production\",\"max-http-request-body-size\":\"8192\",\"metrics-addr\":\":8080\",\"oidc-http-port\":\"8083\",\"oidc-tls-cert\":\"/etc/ssl/certs/oidc.crt\",\"oidc-tls-cert-key\":\"/etc/ssl/private/oidc.key\",\"secret-label-selector\":\"authorino.kuadrant.io/managed-by=authorino\",\"timeout\":\"0\",\"tls-cert\":\"/etc/ssl/certs/tls.crt\",\"tls-cert-key\":\"/etc/ssl/private/tls.key\",\"watch-namespace\":\"default\"}\n{\"level\":\"info\",\"ts\":1669220527.9816976,\"logger\":\"authorino.controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n{\"level\":\"info\",\"ts\":1669220527.9823213,\"logger\":\"authorino\",\"msg\":\"starting grpc auth service\",\"port\":50051,\"tls\":true}\n{\"level\":\"info\",\"ts\":1669220527.9823658,\"logger\":\"authorino\",\"msg\":\"starting http auth service\",\"port\":5001,\"tls\":true}\n{\"level\":\"info\",\"ts\":1669220527.9824295,\"logger\":\"authorino\",\"msg\":\"starting http oidc service\",\"port\":8083,\"tls\":true}\n{\"level\":\"info\",\"ts\":1669220527.9825335,\"logger\":\"authorino\",\"msg\":\"starting manager\"}\n{\"level\":\"info\",\"ts\":1669220527.982721,\"logger\":\"authorino\",\"msg\":\"Starting server\",\"path\":\"/metrics\",\"kind\":\"metrics\",\"addr\":\"[::]:8080\"}\n{\"level\":\"info\",\"ts\":1669220527.982766,\"logger\":\"authorino\",\"msg\":\"Starting server\",\"kind\":\"health probe\",\"addr\":\"[::]:8081\"}\n{\"level\":\"info\",\"ts\":1669220527.9829438,\"logger\":\"authorino.controller.secret\",\"msg\":\"Starting EventSource\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"source\":\"kind source: *v1.Secret\"}\n{\"level\":\"info\",\"ts\":1669220527.9829693,\"logger\":\"authorino.controller.secret\",\"msg\":\"Starting Controller\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\"}\n{\"level\":\"info\",\"ts\":1669220527.9829714,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting EventSource\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\",\"source\":\"kind source: *v1beta1.AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669220527.9830208,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting Controller\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669220528.0834699,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting workers\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\",\"worker count\":1}\n{\"level\":\"info\",\"ts\":1669220528.0836608,\"logger\":\"authorino.controller.secret\",\"msg\":\"Starting workers\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"worker count\":1}\n{\"level\":\"info\",\"ts\":1669220529.041266,\"logger\":\"authorino\",\"msg\":\"starting status update 
manager\"}\n{\"level\":\"info\",\"ts\":1669220529.0418258,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting EventSource\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\",\"source\":\"kind source: *v1beta1.AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669220529.0418813,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting Controller\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669220529.1432905,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Starting workers\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\",\"worker count\":1}\n
Reconciling an AuthConfig and 2 related API key secrets
{\"level\":\"debug\",\"ts\":1669221208.7473805,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status changed\",\"authconfig\":\"default/talker-api-protection\",\"authconfig/status\":{\"conditions\":[{\"type\":\"Available\",\"status\":\"False\",\"lastTransitionTime\":\"2022-11-23T16:33:28Z\",\"reason\":\"HostsNotLinked\",\"message\":\"No hosts linked to the resource\"},{\"type\":\"Ready\",\"status\":\"False\",\"lastTransitionTime\":\"2022-11-23T16:33:28Z\",\"reason\":\"Unknown\"}],\"summary\":{\"ready\":false,\"hostsReady\":[],\"numHostsReady\":\"0/1\",\"numIdentitySources\":1,\"numMetadataSources\":0,\"numAuthorizationPolicies\":0,\"numResponseItems\":0,\"festivalWristbandEnabled\":false}}}\n{\"level\":\"info\",\"ts\":1669221208.7496614,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"info\",\"ts\":1669221208.7532616,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"debug\",\"ts\":1669221208.7535005,\"logger\":\"authorino.controller.secret\",\"msg\":\"adding k8s secret to the index\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"name\":\"api-key-1\",\"namespace\":\"default\",\"authconfig\":\"default/talker-api-protection\",\"config\":\"friends\"}\n{\"level\":\"debug\",\"ts\":1669221208.7535596,\"logger\":\"authorino.controller.secret.apikey\",\"msg\":\"api key added\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"name\":\"api-key-1\",\"namespace\":\"default\"}\n{\"level\":\"info\",\"ts\":1669221208.7536132,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"default/api-key-1\"}\n{\"level\":\"info\",\"ts\":1669221208.753772,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status updated\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"debug\",\"ts\":1669221208.753835,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status changed\",\"authconfig\":\"default/talker-api-protection\",\"authconfig/status\":{\"conditions\":[{\"type\":\"Available\",\"status\":\"True\",\"lastTransitionTime\":\"2022-11-23T16:33:28Z\",\"reason\":\"HostsLinked\"},{\"type\":\"Ready\",\"status\":\"True\",\"lastTransitionTime\":\"2022-11-23T16:33:28Z\",\"reason\":\"Reconciled\"}],\"summary\":{\"ready\":true,\"hostsReady\":[\"talker-api-authorino.127.0.0.1.nip.io\"],\"numHostsReady\":\"1/1\",\"numIdentitySources\":1,\"numMetadataSources\":0,\"numAuthorizationPolicies\":0,\"numResponseItems\":0,\"festivalWristbandEnabled\":false}}}\n{\"level\":\"info\",\"ts\":1669221208.7571108,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"info\",\"ts\":1669221208.7573664,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status updated\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"debug\",\"ts\":1669221208.757429,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status did not 
change\",\"authconfig\":\"default/talker-api-protection\"}\n{\"level\":\"debug\",\"ts\":1669221208.7586699,\"logger\":\"authorino.controller.secret\",\"msg\":\"adding k8s secret to the index\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"name\":\"api-key-2\",\"namespace\":\"default\",\"authconfig\":\"default/talker-api-protection\",\"config\":\"friends\"}\n{\"level\":\"debug\",\"ts\":1669221208.7586884,\"logger\":\"authorino.controller.secret.apikey\",\"msg\":\"api key added\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\",\"name\":\"api-key-2\",\"namespace\":\"default\"}\n{\"level\":\"info\",\"ts\":1669221208.7586913,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"default/api-key-2\"}\n{\"level\":\"debug\",\"ts\":1669221208.7597604,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status did not change\",\"authconfig\":\"default/talker-api-protection\"}\n
Enforcing an AuthConfig with authentication based on Kubernetes tokens: - identity: k8s-auth, oidc, oauth2, apikey - metadata: http, oidc userinfo - authorization: opa, k8s-authz - response: wristband
{\"level\":\"info\",\"ts\":1634830460.1486168,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"8157480586935853928\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":53144}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"http\":{\"id\":\"8157480586935853928\",\"method\":\"GET\",\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\"}}}}\n{\"level\":\"debug\",\"ts\":1634830460.1491194,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"8157480586935853928\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":53144}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"time\":{\"seconds\":1634830460,\"nanos\":147259000},\"http\":{\"id\":\"8157480586935853928\",\"method\":\"GET\",\"headers\":{\":authority\":\"talker-api\",\":method\":\"GET\",\":path\":\"/hello\",\":scheme\":\"http\",\"accept\":\"*/*\",\"authorization\":\"Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA\",\"user-agent\":\"curl/7.65.3\",\"x-envoy-internal\":\"true\",\"x-forwarded-for\":\"10.244.0.11\",\"x-forwarded-proto\":\"http\",\"x-request-id\":\"4c5d5c97-e15b-46a3-877a-d8188e09e08f\"},\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\",\"protocol\":\"HTTP/1.1\"}},\"context_extensions\":{\"virtual_host\":\"local_service\"},\"metadata_context\":{}}}\n{\"level\":\"debug\",\"ts\":1634830460.150506,\"logger\":\"authorino.service.auth.authpipeline.identity.kubernetesauth\",\"msg\":\"calling kubernetes token review api\",\"request 
id\":\"8157480586935853928\",\"tokenreview\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"token\":\"eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA\",\"audiences\":[\"talker-api\"]},\"status\":{\"user\":{}}}}\n{\"level\":\"debug\",\"ts\":1634830460.1509938,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"api-keys\",\"ExtendedProperties\":[{\"Name\":\"sub\",\"Value\":{\"Static\":null,\"Pattern\":\"auth.identity.metadata.annotations.userid\"}}],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":{\"AuthCredentials\":{\"KeySelector\":\"APIKEY\",\"In\":\"authorization_header\"},\"Name\":\"api-keys\",\"LabelSelectors\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"}},\"KubernetesAuth\":null},\"reason\":\"credential not found\"}\n{\"level\":\"debug\",\"ts\":1634830460.1517606,\"logger\":\"authorino.service.auth.authpipeline.identity.oauth2\",\"msg\":\"sending token introspection request\",\"request id\":\"8157480586935853928\",\"url\":\"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect\",\"data\":\"token=eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA&token_type_hint=requesting_party_token\"}\n{\"level\":\"debug\",\"ts\":1634830460.1620777,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"identity validated\",\"request 
id\":\"8157480586935853928\",\"config\":{\"Name\":\"k8s-service-accounts\",\"ExtendedProperties\":[],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"object\":{\"aud\":[\"talker-api\"],\"exp\":1634831051,\"iat\":1634830451,\"iss\":\"https://kubernetes.default.svc.cluster.local\",\"kubernetes.io\":{\"namespace\":\"authorino\",\"serviceaccount\":{\"name\":\"api-consumer-1\",\"uid\":\"b40f531c-ecab-4f31-a496-2ebc72add121\"}},\"nbf\":1634830451,\"sub\":\"system:serviceaccount:authorino:api-consumer-1\"}}\n{\"level\":\"debug\",\"ts\":1634830460.1622565,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"requesting pat\",\"request id\":\"8157480586935853928\",\"url\":\"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token\",\"data\":\"grant_type=client_credentials\",\"headers\":{\"Content-Type\":[\"application/x-www-form-urlencoded\"]}}\n{\"level\":\"debug\",\"ts\":1634830460.1670353,\"logger\":\"authorino.service.auth.authpipeline.metadata.http\",\"msg\":\"sending request\",\"request id\":\"8157480586935853928\",\"method\":\"GET\",\"url\":\"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path=/hello\",\"headers\":{\"Content-Type\":[\"text/plain\"]}}\n{\"level\":\"debug\",\"ts\":1634830460.169326,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"cannot fetch metadata\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"oidc-userinfo\",\"UserInfo\":{\"OIDC\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"},\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\"}},\"UMA\":null,\"GenericHTTP\":null},\"reason\":\"Missing identity for OIDC issuer http://keycloak:8080/auth/realms/kuadrant. 
Skipping related UserInfo metadata.\"}\n{\"level\":\"debug\",\"ts\":1634830460.1753876,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"fetched auth metadata\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"http-metadata\",\"UserInfo\":null,\"UMA\":null,\"GenericHTTP\":{\"Endpoint\":\"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path={context.request.http.path}\",\"Method\":\"GET\",\"Parameters\":[],\"ContentType\":\"application/x-www-form-urlencoded\",\"SharedSecret\":\"\",\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"object\":{\"body\":\"\",\"headers\":{\"Accept-Encoding\":\"gzip\",\"Content-Type\":\"text/plain\",\"Host\":\"talker-api.default.svc.cluster.local:3000\",\"User-Agent\":\"Go-http-client/1.1\",\"Version\":\"HTTP/1.1\"},\"method\":\"GET\",\"path\":\"/metadata\",\"query_string\":\"encoding=text/plain&original_path=/hello\",\"uuid\":\"1aa6ac66-3179-4351-b1a7-7f6a761d5b61\"}}\n{\"level\":\"debug\",\"ts\":1634830460.2331996,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"querying resources by uri\",\"request id\":\"8157480586935853928\",\"url\":\"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=/hello\"}\n{\"level\":\"debug\",\"ts\":1634830460.2495668,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"getting resource data\",\"request id\":\"8157480586935853928\",\"url\":\"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set/e20d194c-274c-4845-8c02-0ca413c9bf18\"}\n{\"level\":\"debug\",\"ts\":1634830460.2927864,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"fetched auth metadata\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"uma-resource-registry\",\"UserInfo\":null,\"UMA\":{\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\",\"ClientID\":\"talker-api\",\"ClientSecret\":\"523b92b6-625d-4e1e-a313-77e7a8ae4e88\"},\"GenericHTTP\":null},\"object\":[{\"_id\":\"e20d194c-274c-4845-8c02-0ca413c9bf18\",\"attributes\":{},\"displayName\":\"hello\",\"name\":\"hello\",\"owner\":{\"id\":\"57a645a5-fb67-438b-8be5-dfb971666dbc\"},\"ownerManagedAccess\":false,\"resource_scopes\":[],\"uris\":[\"/hi\",\"/hello\"]}]}\n{\"level\":\"debug\",\"ts\":1634830460.2930083,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"evaluating for input\",\"request id\":\"8157480586935853928\",\"input\":{\"context\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":53144}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"time\":{\"seconds\":1634830460,\"nanos\":147259000},\"http\":{\"id\":\"8157480586935853928\",\"method\":\"GET\",\"headers\":{\":authority\":\"talker-api\",\":method\":\"GET\",\":path\":\"/hello\",\":scheme\":\"http\",\"accept\":\"*/*\",\"authorization\":\"Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6IkRsVWJZMENyVy1sZ0tFMVRMd19pcTFUWGtTYUl6T0hyWks0VHhKYnpEZUUifQ.eyJhdWQiOlsidGFsa2VyLWFwaSJdLCJleHAiOjE2MzQ4MzEwNTEsImlhdCI6MTYzNDgzMDQ1MSwiaXNzIjoiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImF1dGhvcmlubyIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcGktY29uc3VtZXItMSIsInVpZCI6ImI0MGY1MzFjLWVjYWItNGYzMS1hNDk2LTJlYmM3MmFkZDEyMSJ9fSwibmJmIjoxNjM0ODMwNDUxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXV0aG9yaW5vOmFwaS1jb25zdW1lci0xIn0.PaP0vqdl5DPfErr84KfVhPdlsGAPgsw0NkDaA9rne1zXjzcO7KPPbXhFwZC-oIjSGG1HfRMSoQeCXbQz24PSATmX8l1T52a9IFeXgP7sQmXZIDbiPfTm3X09kIIlfPKHhK_f-jQwRIpMRqNgLntlZ-xXX3P1fOBBUYR8obTPAQ6NDDaLHxw2SAmHFTQWjM_DInPDemXX0mEm7nCPKifsNxHaQH4wx4CD3LCLGbCI9FHNf2Crid8mmGJXf4wzcH1VuKkpUlsmnlUgTG2bfT2lbhSF2lBmrrhTJyYk6_aA09DwL4Bf4kvG-JtCq0Bkd_XynViIsOtOnAhgmdSPkfr-oA\",\"user-agent\":\"curl/7.65.3\",\"x-envoy-internal\":\"true\",\"x-forwarded-for\":\"10.244.0.11\",\"x-forwarded-proto\":\"http\",\"x-request-id\":\"4c5d5c97-e15b-46a3-877a-d8188e09e08f\"},\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\",\"protocol\":\"HTTP/1.1\"}},\"context_extensions\":{\"virtual_host\":\"local_service\"},\"metadata_context\":{}},\"auth\":{\"identity\":{\"aud\":[\"talker-api\"],\"exp\":1634831051,\"iat\":1634830451,\"iss\":\"https://kubernetes.default.svc.cluster.local\",\"kubernetes.io\":{\"namespace\":\"authorino\",\"serviceaccount\":{\"name\":\"api-consumer-1\",\"uid\":\"b40f531c-ecab-4f31-a496-2ebc72add121\"}},\"nbf\":1634830451,\"sub\":\"system:serviceaccount:authorino:api-consumer-1\"},\"metadata\":{\"http-metadata\":{\"body\":\"\",\"headers\":{\"Accept-Encoding\":\"gzip\",\"Content-Type\":\"text/plain\",\"Host\":\"talker-api.default.svc.cluster.local:3000\",\"User-Agent\":\"Go-http-client/1.1\",\"Version\":\"HTTP/1.1\"},\"method\":\"GET\",\"path\":\"/metadata\",\"query_string\":\"encoding=text/plain&original_path=/hello\",\"uuid\":\"1aa6ac66-3179-4351-b1a7-7f6a761d5b61\"},\"uma-resource-registry\":[{\"_id\":\"e20d194c-274c-4845-8c02-0ca413c9bf18\",\"attributes\":{},\"displayName\":\"hello\",\"name\":\"hello\",\"owner\":{\"id\":\"57a645a5-fb67-438b-8be5-dfb971666dbc\"},\"ownerManagedAccess\":false,\"resource_scopes\":[],\"uris\":[\"/hi\",\"/hello\"]}]}}}}\n{\"level\":\"debug\",\"ts\":1634830460.2955465,\"logger\":\"authorino.service.auth.authpipeline.authorization.kubernetesauthz\",\"msg\":\"calling kubernetes subject access review api\",\"request id\":\"8157480586935853928\",\"subjectaccessreview\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"nonResourceAttributes\":{\"path\":\"/hello\",\"verb\":\"get\"},\"user\":\"system:serviceaccount:authorino:api-consumer-1\"},\"status\":{\"allowed\":false}}}\n{\"level\":\"debug\",\"ts\":1634830460.2986183,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"access granted\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"my-policy\",\"OPA\":{\"Rego\":\"fail := input.context.request.http.headers[\\\"x-ext-auth-mock\\\"] == \\\"FAIL\\\"\\nallow { not fail }\\n\",\"OPAExternalSource\":{\"Endpoint\":\"\",\"SharedSecret\":\"\",\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"JSON\":null,\"KubernetesAuthz\":null},\"object\":true}\n{\"level\":\"debug\",\"ts\":1634830460.3044975,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"access granted\",\"request 
id\":\"8157480586935853928\",\"config\":{\"Name\":\"kubernetes-rbac\",\"OPA\":null,\"JSON\":null,\"KubernetesAuthz\":{\"Conditions\":[],\"User\":{\"Static\":\"\",\"Pattern\":\"auth.identity.user.username\"},\"Groups\":null,\"ResourceAttributes\":null}},\"object\":true}\n{\"level\":\"debug\",\"ts\":1634830460.3052874,\"logger\":\"authorino.service.auth.authpipeline.response\",\"msg\":\"dynamic response built\",\"request id\":\"8157480586935853928\",\"config\":{\"Name\":\"wristband\",\"Wrapper\":\"httpHeader\",\"WrapperKey\":\"x-ext-auth-wristband\",\"Wristband\":{\"Issuer\":\"https://authorino-oidc.default.svc:8083/default/talker-api-protection/wristband\",\"CustomClaims\":[],\"TokenDuration\":300,\"SigningKeys\":[{\"use\":\"sig\",\"kty\":\"EC\",\"kid\":\"wristband-signing-key\",\"crv\":\"P-256\",\"alg\":\"ES256\",\"x\":\"TJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZw\",\"y\":\"SSg8rKBsJ3J1LxyLtt0oFvhHvZcUpmRoTuHk3UHisTA\",\"d\":\"Me-5_zWBWVYajSGZcZMCcD8dXEa4fy85zv_yN7BxW-o\"}]},\"DynamicJSON\":null},\"object\":\"eyJhbGciOiJFUzI1NiIsImtpZCI6IndyaXN0YmFuZC1zaWduaW5nLWtleSIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzQ4MzA3NjAsImlhdCI6MTYzNDgzMDQ2MCwiaXNzIjoiaHR0cHM6Ly9hdXRob3Jpbm8tb2lkYy5hdXRob3Jpbm8uc3ZjOjgwODMvYXV0aG9yaW5vL3RhbGtlci1hcGktcHJvdGVjdGlvbi93cmlzdGJhbmQiLCJzdWIiOiI4NDliMDk0ZDA4MzU0ZjM0MjA4ZGI3MjBmYWZmODlmNmM3NmYyOGY3MTcxOWI4NTQ3ZDk5NWNlNzAwMjU2ZGY4In0.Jn-VB5Q_0EX1ed1ji4KvhO4DlMqZeIl5H0qlukbTyYkp-Pgb4SnPGSbYWp5_uvG8xllsFAA5nuyBIXeba-dbkw\"}\n{\"level\":\"info\",\"ts\":1634830460.3054585,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"8157480586935853928\",\"authorized\":true,\"response\":\"OK\"}\n{\"level\":\"debug\",\"ts\":1634830460.305476,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"8157480586935853928\",\"authorized\":true,\"response\":\"OK\"}\n
Enforcing an AuthConfig with authentication based on API keys - identity: k8s-auth, oidc, oauth2, apikey - metadata: http, oidc userinfo - authorization: opa, k8s-authz - response: wristband
{\"level\":\"info\",\"ts\":1634830413.2425854,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"7199257136822741594\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":52702}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"http\":{\"id\":\"7199257136822741594\",\"method\":\"GET\",\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\"}}}}\n{\"level\":\"debug\",\"ts\":1634830413.2426975,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"7199257136822741594\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":52702}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"time\":{\"seconds\":1634830413,\"nanos\":240094000},\"http\":{\"id\":\"7199257136822741594\",\"method\":\"GET\",\"headers\":{\":authority\":\"talker-api\",\":method\":\"GET\",\":path\":\"/hello\",\":scheme\":\"http\",\"accept\":\"*/*\",\"authorization\":\"APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\",\"user-agent\":\"curl/7.65.3\",\"x-envoy-internal\":\"true\",\"x-forwarded-for\":\"10.244.0.11\",\"x-forwarded-proto\":\"http\",\"x-request-id\":\"d38f5e66-bd72-4733-95d1-3179315cdd60\"},\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\",\"protocol\":\"HTTP/1.1\"}},\"context_extensions\":{\"virtual_host\":\"local_service\"},\"metadata_context\":{}}}\n{\"level\":\"debug\",\"ts\":1634830413.2428744,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"k8s-service-accounts\",\"ExtendedProperties\":[],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"reason\":\"credential not found\"}\n{\"level\":\"debug\",\"ts\":1634830413.2434332,\"logger\":\"authorino.service.auth.authpipeline\",\"msg\":\"skipping config\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"keycloak-jwts\",\"ExtendedProperties\":[],\"OAuth2\":null,\"OIDC\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"},\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\"},\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":null},\"reason\":\"context canceled\"}\n{\"level\":\"debug\",\"ts\":1634830413.2479305,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"identity validated\",\"request 
id\":\"7199257136822741594\",\"config\":{\"Name\":\"api-keys\",\"ExtendedProperties\":[{\"Name\":\"sub\",\"Value\":{\"Static\":null,\"Pattern\":\"auth.identity.metadata.annotations.userid\"}}],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":{\"AuthCredentials\":{\"KeySelector\":\"APIKEY\",\"In\":\"authorization_header\"},\"Name\":\"api-keys\",\"LabelSelectors\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"}},\"KubernetesAuth\":null},\"object\":{\"apiVersion\":\"v1\",\"data\":{\"api_key\":\"bmR5QnpyZVV6RjR6cURRc3FTUE1Ia1JocmlFT3RjUng=\"},\"kind\":\"Secret\",\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Secret\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"userid\\\":\\\"john\\\"},\\\"labels\\\":{\\\"audience\\\":\\\"talker-api\\\",\\\"authorino.kuadrant.io/managed-by\\\":\\\"authorino\\\"},\\\"name\\\":\\\"api-key-1\\\",\\\"namespace\\\":\\\"authorino\\\"},\\\"stringData\\\":{\\\"api_key\\\":\\\"ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\\\"},\\\"type\\\":\\\"Opaque\\\"}\\n\",\"userid\":\"john\"},\"creationTimestamp\":\"2021-10-21T14:45:54Z\",\"labels\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"},\"managedFields\":[{\"apiVersion\":\"v1\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:data\":{\".\":{},\"f:api_key\":{}},\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubectl.kubernetes.io/last-applied-configuration\":{},\"f:userid\":{}},\"f:labels\":{\".\":{},\"f:audience\":{},\"f:authorino.kuadrant.io/managed-by\":{}}},\"f:type\":{}},\"manager\":\"kubectl-client-side-apply\",\"operation\":\"Update\",\"time\":\"2021-10-21T14:45:54Z\"}],\"name\":\"api-key-1\",\"namespace\":\"authorino\",\"resourceVersion\":\"8979\",\"uid\":\"c369852a-7e1a-43bd-94ca-e2b3f617052e\"},\"sub\":\"john\",\"type\":\"Opaque\"}}\n{\"level\":\"debug\",\"ts\":1634830413.248768,\"logger\":\"authorino.service.auth.authpipeline.metadata.http\",\"msg\":\"sending request\",\"request id\":\"7199257136822741594\",\"method\":\"GET\",\"url\":\"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path=/hello\",\"headers\":{\"Content-Type\":[\"text/plain\"]}}\n{\"level\":\"debug\",\"ts\":1634830413.2496722,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"cannot fetch metadata\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"oidc-userinfo\",\"UserInfo\":{\"OIDC\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"},\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\"}},\"UMA\":null,\"GenericHTTP\":null},\"reason\":\"Missing identity for OIDC issuer http://keycloak:8080/auth/realms/kuadrant. 
Skipping related UserInfo metadata.\"}\n{\"level\":\"debug\",\"ts\":1634830413.2497928,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"requesting pat\",\"request id\":\"7199257136822741594\",\"url\":\"http://talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88@keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token\",\"data\":\"grant_type=client_credentials\",\"headers\":{\"Content-Type\":[\"application/x-www-form-urlencoded\"]}}\n{\"level\":\"debug\",\"ts\":1634830413.258932,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"fetched auth metadata\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"http-metadata\",\"UserInfo\":null,\"UMA\":null,\"GenericHTTP\":{\"Endpoint\":\"http://talker-api.default.svc.cluster.local:3000/metadata?encoding=text/plain&original_path={context.request.http.path}\",\"Method\":\"GET\",\"Parameters\":[],\"ContentType\":\"application/x-www-form-urlencoded\",\"SharedSecret\":\"\",\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"object\":{\"body\":\"\",\"headers\":{\"Accept-Encoding\":\"gzip\",\"Content-Type\":\"text/plain\",\"Host\":\"talker-api.default.svc.cluster.local:3000\",\"User-Agent\":\"Go-http-client/1.1\",\"Version\":\"HTTP/1.1\"},\"method\":\"GET\",\"path\":\"/metadata\",\"query_string\":\"encoding=text/plain&original_path=/hello\",\"uuid\":\"97529f8c-587b-4121-a4db-cd90c63871fd\"}}\n{\"level\":\"debug\",\"ts\":1634830413.2945344,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"querying resources by uri\",\"request id\":\"7199257136822741594\",\"url\":\"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=/hello\"}\n{\"level\":\"debug\",\"ts\":1634830413.3123596,\"logger\":\"authorino.service.auth.authpipeline.metadata.uma\",\"msg\":\"getting resource data\",\"request id\":\"7199257136822741594\",\"url\":\"http://keycloak:8080/auth/realms/kuadrant/authz/protection/resource_set/e20d194c-274c-4845-8c02-0ca413c9bf18\"}\n{\"level\":\"debug\",\"ts\":1634830413.3340268,\"logger\":\"authorino.service.auth.authpipeline.metadata\",\"msg\":\"fetched auth metadata\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"uma-resource-registry\",\"UserInfo\":null,\"UMA\":{\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\",\"ClientID\":\"talker-api\",\"ClientSecret\":\"523b92b6-625d-4e1e-a313-77e7a8ae4e88\"},\"GenericHTTP\":null},\"object\":[{\"_id\":\"e20d194c-274c-4845-8c02-0ca413c9bf18\",\"attributes\":{},\"displayName\":\"hello\",\"name\":\"hello\",\"owner\":{\"id\":\"57a645a5-fb67-438b-8be5-dfb971666dbc\"},\"ownerManagedAccess\":false,\"resource_scopes\":[],\"uris\":[\"/hi\",\"/hello\"]}]}\n{\"level\":\"debug\",\"ts\":1634830413.3367748,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"evaluating for input\",\"request id\":\"7199257136822741594\",\"input\":{\"context\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":52702}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"time\":{\"seconds\":1634830413,\"nanos\":240094000},\"http\":{\"id\":\"7199257136822741594\",\"method\":\"GET\",\"headers\":{\":authority\":\"talker-api\",\":method\":\"GET\",\":path\":\"/hello\",\":scheme\":\"http\",\"accept\":\"*/*\",\"authorization\":\"APIKEY 
ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\",\"user-agent\":\"curl/7.65.3\",\"x-envoy-internal\":\"true\",\"x-forwarded-for\":\"10.244.0.11\",\"x-forwarded-proto\":\"http\",\"x-request-id\":\"d38f5e66-bd72-4733-95d1-3179315cdd60\"},\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\",\"protocol\":\"HTTP/1.1\"}},\"context_extensions\":{\"virtual_host\":\"local_service\"},\"metadata_context\":{}},\"auth\":{\"identity\":{\"apiVersion\":\"v1\",\"data\":{\"api_key\":\"bmR5QnpyZVV6RjR6cURRc3FTUE1Ia1JocmlFT3RjUng=\"},\"kind\":\"Secret\",\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Secret\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"userid\\\":\\\"john\\\"},\\\"labels\\\":{\\\"audience\\\":\\\"talker-api\\\",\\\"authorino.kuadrant.io/managed-by\\\":\\\"authorino\\\"},\\\"name\\\":\\\"api-key-1\\\",\\\"namespace\\\":\\\"authorino\\\"},\\\"stringData\\\":{\\\"api_key\\\":\\\"ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\\\"},\\\"type\\\":\\\"Opaque\\\"}\\n\",\"userid\":\"john\"},\"creationTimestamp\":\"2021-10-21T14:45:54Z\",\"labels\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"},\"managedFields\":[{\"apiVersion\":\"v1\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:data\":{\".\":{},\"f:api_key\":{}},\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubectl.kubernetes.io/last-applied-configuration\":{},\"f:userid\":{}},\"f:labels\":{\".\":{},\"f:audience\":{},\"f:authorino.kuadrant.io/managed-by\":{}}},\"f:type\":{}},\"manager\":\"kubectl-client-side-apply\",\"operation\":\"Update\",\"time\":\"2021-10-21T14:45:54Z\"}],\"name\":\"api-key-1\",\"namespace\":\"authorino\",\"resourceVersion\":\"8979\",\"uid\":\"c369852a-7e1a-43bd-94ca-e2b3f617052e\"},\"sub\":\"john\",\"type\":\"Opaque\"},\"metadata\":{\"http-metadata\":{\"body\":\"\",\"headers\":{\"Accept-Encoding\":\"gzip\",\"Content-Type\":\"text/plain\",\"Host\":\"talker-api.default.svc.cluster.local:3000\",\"User-Agent\":\"Go-http-client/1.1\",\"Version\":\"HTTP/1.1\"},\"method\":\"GET\",\"path\":\"/metadata\",\"query_string\":\"encoding=text/plain&original_path=/hello\",\"uuid\":\"97529f8c-587b-4121-a4db-cd90c63871fd\"},\"uma-resource-registry\":[{\"_id\":\"e20d194c-274c-4845-8c02-0ca413c9bf18\",\"attributes\":{},\"displayName\":\"hello\",\"name\":\"hello\",\"owner\":{\"id\":\"57a645a5-fb67-438b-8be5-dfb971666dbc\"},\"ownerManagedAccess\":false,\"resource_scopes\":[],\"uris\":[\"/hi\",\"/hello\"]}]}}}}\n{\"level\":\"debug\",\"ts\":1634830413.339894,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"access granted\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"my-policy\",\"OPA\":{\"Rego\":\"fail := input.context.request.http.headers[\\\"x-ext-auth-mock\\\"] == \\\"FAIL\\\"\\nallow { not fail }\\n\",\"OPAExternalSource\":{\"Endpoint\":\"\",\"SharedSecret\":\"\",\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"JSON\":null,\"KubernetesAuthz\":null},\"object\":true}\n{\"level\":\"debug\",\"ts\":1634830413.3444238,\"logger\":\"authorino.service.auth.authpipeline.authorization.kubernetesauthz\",\"msg\":\"calling kubernetes subject access review api\",\"request 
id\":\"7199257136822741594\",\"subjectaccessreview\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"nonResourceAttributes\":{\"path\":\"/hello\",\"verb\":\"get\"},\"user\":\"john\"},\"status\":{\"allowed\":false}}}\n{\"level\":\"debug\",\"ts\":1634830413.3547812,\"logger\":\"authorino.service.auth.authpipeline.authorization\",\"msg\":\"access granted\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"kubernetes-rbac\",\"OPA\":null,\"JSON\":null,\"KubernetesAuthz\":{\"Conditions\":[],\"User\":{\"Static\":\"\",\"Pattern\":\"auth.identity.user.username\"},\"Groups\":null,\"ResourceAttributes\":null}},\"object\":true}\n{\"level\":\"debug\",\"ts\":1634830413.3558292,\"logger\":\"authorino.service.auth.authpipeline.response\",\"msg\":\"dynamic response built\",\"request id\":\"7199257136822741594\",\"config\":{\"Name\":\"wristband\",\"Wrapper\":\"httpHeader\",\"WrapperKey\":\"x-ext-auth-wristband\",\"Wristband\":{\"Issuer\":\"https://authorino-oidc.default.svc:8083/default/talker-api-protection/wristband\",\"CustomClaims\":[],\"TokenDuration\":300,\"SigningKeys\":[{\"use\":\"sig\",\"kty\":\"EC\",\"kid\":\"wristband-signing-key\",\"crv\":\"P-256\",\"alg\":\"ES256\",\"x\":\"TJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZw\",\"y\":\"SSg8rKBsJ3J1LxyLtt0oFvhHvZcUpmRoTuHk3UHisTA\",\"d\":\"Me-5_zWBWVYajSGZcZMCcD8dXEa4fy85zv_yN7BxW-o\"}]},\"DynamicJSON\":null},\"object\":\"eyJhbGciOiJFUzI1NiIsImtpZCI6IndyaXN0YmFuZC1zaWduaW5nLWtleSIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzQ4MzA3MTMsImlhdCI6MTYzNDgzMDQxMywiaXNzIjoiaHR0cHM6Ly9hdXRob3Jpbm8tb2lkYy5hdXRob3Jpbm8uc3ZjOjgwODMvYXV0aG9yaW5vL3RhbGtlci1hcGktcHJvdGVjdGlvbi93cmlzdGJhbmQiLCJzdWIiOiI5NjhiZjViZjk3MDM3NWRiNjE0ZDFhMDgzZTg2NTBhYTVhMGVhMzAyOTdiYmJjMTBlNWVlMWZmYTkxYTYwZmY4In0.7G440sWgi2TIaxrGJf5KWR9UOFpNTjwVYeaJXFLzsLhVNICoMLbYzBAEo4M3ym1jipxxTVeE7anm4qDDc7cnVQ\"}\n{\"level\":\"info\",\"ts\":1634830413.3569078,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"7199257136822741594\",\"authorized\":true,\"response\":\"OK\"}\n{\"level\":\"debug\",\"ts\":1634830413.3569596,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"7199257136822741594\",\"authorized\":true,\"response\":\"OK\"}\n
Enforcing an AuthConfig with authentication based on API keys (invalid API key) - identity: k8s-auth, oidc, oauth2, apikey - metadata: http, oidc userinfo - authorization: opa, k8s-authz - response: wristband
{\"level\":\"info\",\"ts\":1634830373.2066543,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"12947265773116138711\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":52288}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"http\":{\"id\":\"12947265773116138711\",\"method\":\"GET\",\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\"}}}}\n{\"level\":\"debug\",\"ts\":1634830373.2068064,\"logger\":\"authorino.service.auth\",\"msg\":\"incoming authorization request\",\"request id\":\"12947265773116138711\",\"object\":{\"source\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":52288}}}}},\"destination\":{\"address\":{\"Address\":{\"SocketAddress\":{\"address\":\"127.0.0.1\",\"PortSpecifier\":{\"PortValue\":8000}}}}},\"request\":{\"time\":{\"seconds\":1634830373,\"nanos\":198329000},\"http\":{\"id\":\"12947265773116138711\",\"method\":\"GET\",\"headers\":{\":authority\":\"talker-api\",\":method\":\"GET\",\":path\":\"/hello\",\":scheme\":\"http\",\"accept\":\"*/*\",\"authorization\":\"APIKEY invalid\",\"user-agent\":\"curl/7.65.3\",\"x-envoy-internal\":\"true\",\"x-forwarded-for\":\"10.244.0.11\",\"x-forwarded-proto\":\"http\",\"x-request-id\":\"9e391846-afe4-489a-8716-23a2e1c1aa77\"},\"path\":\"/hello\",\"host\":\"talker-api\",\"scheme\":\"http\",\"protocol\":\"HTTP/1.1\"}},\"context_extensions\":{\"virtual_host\":\"local_service\"},\"metadata_context\":{}}}\n{\"level\":\"debug\",\"ts\":1634830373.2070816,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"12947265773116138711\",\"config\":{\"Name\":\"keycloak-opaque\",\"ExtendedProperties\":[],\"OAuth2\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"},\"TokenIntrospectionUrl\":\"http://keycloak:8080/auth/realms/kuadrant/protocol/openid-connect/token/introspect\",\"TokenTypeHint\":\"requesting_party_token\",\"ClientID\":\"talker-api\",\"ClientSecret\":\"523b92b6-625d-4e1e-a313-77e7a8ae4e88\"},\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":null},\"reason\":\"credential not found\"}\n{\"level\":\"debug\",\"ts\":1634830373.207225,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"12947265773116138711\",\"config\":{\"Name\":\"api-keys\",\"ExtendedProperties\":[{\"Name\":\"sub\",\"Value\":{\"Static\":null,\"Pattern\":\"auth.identity.metadata.annotations.userid\"}}],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":{\"AuthCredentials\":{\"KeySelector\":\"APIKEY\",\"In\":\"authorization_header\"},\"Name\":\"api-keys\",\"LabelSelectors\":{\"audience\":\"talker-api\",\"authorino.kuadrant.io/managed-by\":\"authorino\"}},\"KubernetesAuth\":null},\"reason\":\"the API Key provided is invalid\"}\n{\"level\":\"debug\",\"ts\":1634830373.2072473,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"12947265773116138711\",\"config\":{\"Name\":\"k8s-service-accounts\",\"ExtendedProperties\":[],\"OAuth2\":null,\"OIDC\":null,\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"}}},\"reason\":\"credential not 
found\"}\n{\"level\":\"debug\",\"ts\":1634830373.2072592,\"logger\":\"authorino.service.auth.authpipeline.identity\",\"msg\":\"cannot validate identity\",\"request id\":\"12947265773116138711\",\"config\":{\"Name\":\"keycloak-jwts\",\"ExtendedProperties\":[],\"OAuth2\":null,\"OIDC\":{\"AuthCredentials\":{\"KeySelector\":\"Bearer\",\"In\":\"authorization_header\"},\"Endpoint\":\"http://keycloak:8080/auth/realms/kuadrant\"},\"MTLS\":null,\"HMAC\":null,\"APIKey\":null,\"KubernetesAuth\":null},\"reason\":\"credential not found\"}\n{\"level\":\"info\",\"ts\":1634830373.2073083,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"12947265773116138711\",\"authorized\":false,\"response\":\"UNAUTHENTICATED\",\"object\":{\"code\":16,\"status\":302,\"message\":\"Redirecting to login\"}}\n{\"level\":\"debug\",\"ts\":1634830373.2073889,\"logger\":\"authorino.service.auth\",\"msg\":\"outgoing authorization response\",\"request id\":\"12947265773116138711\",\"authorized\":false,\"response\":\"UNAUTHENTICATED\",\"object\":{\"code\":16,\"status\":302,\"message\":\"Redirecting to login\",\"headers\":[{\"Location\":\"https://my-app.io/login\"}]}}\n
Deleting an AuthConfig and 2 related API key secrets
{\"level\":\"info\",\"ts\":1669221361.5032296,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"default/api-key-1\"}\n{\"level\":\"info\",\"ts\":1669221361.5057878,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"default/api-key-2\"}\n
Shutting down the service
{\"level\":\"info\",\"ts\":1669221635.0135982,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for non leader election runnables\"}\n{\"level\":\"info\",\"ts\":1669221635.0136683,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for leader election runnables\"}\n{\"level\":\"info\",\"ts\":1669221635.0135982,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for non leader election runnables\"}\n{\"level\":\"info\",\"ts\":1669221635.0136883,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for leader election runnables\"}\n{\"level\":\"info\",\"ts\":1669221635.0137057,\"logger\":\"authorino.controller.secret\",\"msg\":\"Shutdown signal received, waiting for all workers to finish\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\"}\n{\"level\":\"info\",\"ts\":1669221635.013724,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Shutdown signal received, waiting for all workers to finish\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669221635.01375,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"All workers finished\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669221635.013752,\"logger\":\"authorino.controller.secret\",\"msg\":\"All workers finished\",\"reconciler group\":\"\",\"reconciler kind\":\"Secret\"}\n{\"level\":\"info\",\"ts\":1669221635.0137632,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for caches\"}\n{\"level\":\"info\",\"ts\":1669221635.013751,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"Shutdown signal received, waiting for all workers to finish\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669221635.0137684,\"logger\":\"authorino.controller.authconfig\",\"msg\":\"All workers finished\",\"reconciler group\":\"authorino.kuadrant.io\",\"reconciler kind\":\"AuthConfig\"}\n{\"level\":\"info\",\"ts\":1669221635.0137722,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for caches\"}\n{\"level\":\"info\",\"ts\":1669221635.0138857,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for webhooks\"}\n{\"level\":\"info\",\"ts\":1669221635.0138955,\"logger\":\"authorino\",\"msg\":\"Wait completed, proceeding to shutdown the manager\"}\n{\"level\":\"info\",\"ts\":1669221635.0138893,\"logger\":\"authorino\",\"msg\":\"Stopping and waiting for webhooks\"}\n{\"level\":\"info\",\"ts\":1669221635.0139785,\"logger\":\"authorino\",\"msg\":\"Wait completed, proceeding to shutdown the manager\"}\n
"},{"location":"authorino/docs/user-guides/observability/#tracing","title":"Tracing","text":""},{"location":"authorino/docs/user-guides/observability/#request-id","title":"Request ID","text":"

Processes related to the authorization request are identified and linked together by a request ID. The request ID can be: * generated outside Authorino and passed in the authorization request \u2013 this is essentially the case of requests via the gRPC authorization interface initiated by Envoy; * generated by Authorino \u2013 the case of requests via the Raw HTTP Authorization interface.

"},{"location":"authorino/docs/user-guides/observability/#propagation","title":"Propagation","text":"

Authorino propagates trace identifiers compatible with the W3C Trace Context format (https://www.w3.org/TR/trace-context/) and user-defined baggage data in the W3C Baggage format (https://www.w3.org/TR/baggage).
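For example, a caller can hand an existing trace context to the gateway and on to Authorino by sending the standard headers along with the request (an illustrative sketch; the traceparent and baggage values below are made up):

curl -H \"traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01\" -H \"baggage: username=john\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n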

"},{"location":"authorino/docs/user-guides/observability/#log-tracing","title":"Log tracing","text":"

Most log messages associated with an authorization request include the request id value. This value can be used to match incoming request and corresponding outgoing response log messages, including at a deeper level when more fine-grained log details are enabled (debug log level).
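For example, to correlate all log entries of a single authorization request by its request id, the JSON log stream can be filtered with jq (a sketch, assuming Authorino logs to stdout and runs in the default namespace under a deployment named authorino):

kubectl logs deployment/authorino | jq 'select(.\"request id\" == \"8157480586935853928\")'\n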

"},{"location":"authorino/docs/user-guides/observability/#opentelemetry-integration","title":"OpenTelemetry integration","text":"

Integration with an OpenTelemetry collector can be enabled by supplying the --tracing-service-endpoint command-line flag (e.g. authorino server --tracing-service-endpoint=http://jaeger:14268/api/traces).

The additional --tracing-service-tag command-line flag allows specifying fixed agent-level key-value tags for the trace signals emitted by Authorino; repeat the flag for multiple tags (e.g. authorino server --tracing-service-endpoint=... --tracing-service-tag=key1=value1 --tracing-service-tag=key2=value2).

Traces related to authorization requests are additionally tagged with the authorino.request_id attribute.
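For illustration only, this is how those flags might look in the container args of a manually-managed Authorino Deployment (a hypothetical snippet; when deploying via the Authorino Operator, the args are rendered by the operator instead):

containers:\n- name: authorino\n  image: quay.io/kuadrant/authorino:latest\n  args:\n  - server\n  - --tracing-service-endpoint=http://jaeger:14268/api/traces\n  - --tracing-service-tag=env=test\n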

"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/","title":"User guide: OpenID Connect Discovery and authentication with JWTs","text":"

Validate JSON Web Tokens (JWT) issued and signed by an OpenID Connect server; leverage OpenID Connect Discovery to automatically fetch JSON Web Key Sets (JWKS).

Authorino features in this guide:
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
Authorino validates JSON Web Tokens (JWT) issued by an OpenID Connect server that implements OpenID Connect Discovery. Authorino fetches the OpenID Connect configuration and JSON Web Key Set (JWKS) from the issuer endpoint, and verifies the JSON Web Signature (JWS) and time validity of the token. _Important!_ Authorino does **not** implement [OAuth2 grants](https://datatracker.ietf.org/doc/html/rfc6749#section-4) nor [OIDC authentication flows](https://openid.net/specs/openid-connect-core-1_0.html#Authentication). As a general good practice, obtaining and refreshing access tokens should be negotiated by the clients directly with the auth servers and token issuers. Authorino will only validate those tokens, using the parameters provided by the trusted issuer authorities. For further details about Authorino features in general, check the [docs](./../features.md).
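For reference, the OpenID Connect configuration and JWKS URI that Authorino discovers live at the standard discovery path under the issuer endpoint. For the Keycloak realm used in this guide, reachable from the host once the port-forward further below is running:

curl http://localhost:8080/auth/realms/kuadrant/.well-known/openid-configuration\n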

"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to other architectures, such as a sidecar of the protected API), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\nEOF\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#6-obtain-an-access-token-with-the-keycloak-server","title":"6. Obtain an access token with the Keycloak server","text":"

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n

If your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly instead. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.
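To double-check which issuer a token was minted by, the iss claim can be decoded from the token payload with jq (a sketch; base64url padding quirks may require tweaks for some tokens):

echo $ACCESS_TOKEN | jq -rR 'split(\".\") | .[1] | gsub(\"-\";\"+\") | gsub(\"_\";\"/\") | @base64d | fromjson | .iss'\n# http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n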

"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#7-consume-the-api","title":"7. Consume the API","text":"

With a valid access token:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

With missing or invalid access token:

curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: Bearer realm=\"keycloak-kuadrant-realm\"\n# x-ext-auth-reason: credential not found\n
"},{"location":"authorino/docs/user-guides/oidc-jwt-authentication/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/","title":"User guide: OpenID Connect (OIDC) and Role-Based Access Control (RBAC) with Authorino and Keycloak","text":"

Combine OpenID Connect (OIDC) authentication and Role-Based Access Control (RBAC) authorization rules leveraging Keycloak and Authorino working together.

In this user guide, you will learn by example how to implement a simple Role-Based Access Control (RBAC) system to protect endpoints of an API, with roles assigned to users of an Identity Provider (Keycloak) and carried within the access tokens as JSON Web Token (JWT) claims. Users authenticate with the IdP via an OAuth2/OIDC flow and get their access tokens verified and validated by Authorino on every request. Moreover, Authorino reads the role bindings of the user and enforces the proper RBAC rules based on the context.

Authorino features in this guide:
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
  • Authorization \u2192 JSON pattern-matching authorization rules
Check out as well the user guides about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Simple pattern-matching authorization policies](./json-pattern-matching-authorization.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/oidc-rbac/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to other architectures, such as a sidecar of the protected API), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/oidc-rbac/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler, more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

In this example, the Keycloak realm defines a few users and two realm roles: 'member' and 'admin'. When users authenticate to the Keycloak server via any of the supported OAuth2/OIDC flows, Keycloak adds a claim \"realm_access\": { \"roles\": [...] } to the access token JWT, holding the list of roles assigned to the user. Authorino will verify the JWT on requests to the API and read from that claim to enforce the following RBAC rules:

| Path | Method | Role |
|------|--------|------|
| /resources[/*] | GET / POST / PUT | member |
| /resources/{id} | DELETE | admin |
| /admin[/*] | * | admin |
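For illustration, a decoded access token for a user assigned only the 'member' role would carry a payload like the following (hypothetical and trimmed to the relevant claim):

{\n  \"realm_access\": {\n    \"roles\": [ \"member\" ]\n  }\n}\n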

Apply the AuthConfig:

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  patterns:\n    member-role:\n    - selector: auth.identity.realm_access.roles\n      operator: incl\n      value: member\n    admin-role:\n    - selector: auth.identity.realm_access.roles\n      operator: incl\n      value: admin\n  authorization:\n  # RBAC rule: 'member' role required for requests to /resources[/*]\n  - name: rbac-resources-api\n    when:\n    - selector: context.request.http.path\n      operator: matches\n      value: ^/resources(/.*)?$\n    json:\n      rules:\n      - patternRef: member-role\n  # RBAC rule: 'admin' role required for DELETE requests to /resources/{id}\n  - name: rbac-delete-resource\n    when:\n    - selector: context.request.http.path\n      operator: matches\n      value: ^/resources/\\d+$\n    - selector: context.request.http.method\n      operator: eq\n      value: DELETE\n    json:\n      rules:\n      - patternRef: admin-role\n  # RBAC rule: 'admin' role required for requests to /admin[/*]\n  - name: rbac-admin-api\n    when:\n    - selector: context.request.http.path\n      operator: matches\n      value: ^/admin(/.*)?$\n    json:\n      rules:\n      - patternRef: admin-role\nEOF\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#6-obtain-an-access-token-and-consume-the-api","title":"6. Obtain an access token and consume the API","text":""},{"location":"authorino/docs/user-guides/oidc-rbac/#obtain-an-access-token-and-consume-the-api-as-john-member","title":"Obtain an access token and consume the API as John (member)","text":"

Obtain an access token with the Keycloak server for John:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user John, who is assigned to the 'member' role:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n

If your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly instead. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

As John, send a GET request to /resources:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/resources -i\n# HTTP/1.1 200 OK\n

As John, send a DELETE request to /resources/123:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/resources/123 -i\n# HTTP/1.1 403 Forbidden\n

As John, send a GET request to /admin/settings:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/admin/settings -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#obtain-an-access-token-and-consume-the-api-as-jane-memberadmin","title":"Obtain an access token and consume the API as Jane (member/admin)","text":"

Obtain an access token from within the cluster for the user Jane, who is assigned to the 'member' and 'admin' roles:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

As Jane, send a GET request to /resources:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/resources -i\n# HTTP/1.1 200 OK\n

As Jane, send a DELETE request to /resources/123:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/resources/123 -i\n# HTTP/1.1 200 OK\n

As Jane, send a GET request to /admin/settings:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/admin/settings -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/oidc-rbac/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/","title":"User guide: OpenID Connect UserInfo","text":"

Fetch user info for OpenID Connect ID tokens at request time, for extra metadata for your policies and online verification of token validity.

Authorino features in this guide:
  • External auth metadata \u2192 OIDC UserInfo
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
  • Authorization \u2192 JSON pattern-matching authorization rules
Apart from possibly complementing the information of the JWT, fetching OpenID Connect UserInfo at request time can be particularly useful for remotely checking the state of the session, as opposed to only verifying the JWT/JWS offline. The implementation requires an OpenID Connect issuer ([`spec.identity.oidc`](#openid-connect-oidc-jwtjose-verification-and-validation-identityoidc)) configured in the same `AuthConfig`. Check out as well the user guide about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/oidc-user-info/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/oidc-user-info/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"
kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  metadata:\n  - name: userinfo\n    userInfo:\n      identitySource: keycloak-kuadrant-realm\n  authorization:\n  - name: active-tokens-only\n    json:\n      rules:\n      - selector: \"auth.metadata.userinfo.email\" # user email expected from the userinfo instead of the jwt\n        operator: neq\n        value: \"\"\nEOF\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#6-obtain-an-access-token-with-the-keycloak-server","title":"6. Obtain an access token with the Keycloak server","text":"

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster:

export $(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r '\"ACCESS_TOKEN=\"+.access_token,\"REFRESH_TOKEN=\"+.refresh_token')\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.
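You can also reproduce, from within the cluster, roughly the UserInfo request Authorino performs at request time. A minimal sketch, assuming the UserInfo endpoint published by this Keycloak realm via OpenID Connect Discovery (the pod name is arbitrary):

kubectl run userinfo --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/userinfo -s -H \"Authorization: Bearer $ACCESS_TOKEN\"\n# expect a JSON document including the user's \"email\" claim\n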

"},{"location":"authorino/docs/user-guides/oidc-user-info/#7-consume-the-api","title":"7. Consume the API","text":"

With a valid access token:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Revoke the access token and try to consume the API again:

kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/logout -H \"Content-Type: application/x-www-form-urlencoded\" -d \"refresh_token=$REFRESH_TOKEN\" -d 'token_type_hint=requesting_party_token' -u demo:\n
curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/oidc-user-info/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/opa-authorization/","title":"User guide: Open Policy Agent (OPA) Rego policies","text":"

Leverage the power of Open Policy Agent (OPA) policies, evaluated against Authorino's Authorization JSON in a built-in runtime compiled together with Authorino; pre-cache policies defined inline in the Rego language or fetched from an external policy registry.

Authorino features in this guide:
  • Authorization \u2192 Open Policy Agent (OPA) Rego policies
  • Identity verification & authentication \u2192 API key
Authorino supports [Open Policy Agent](https://www.openpolicyagent.org) policies, either defined inline in the [Rego language](https://www.openpolicyagent.org/docs/latest/policy-language) as part of the `AuthConfig` or fetched from an external endpoint, such as an OPA Policy Registry. Authorino's built-in OPA module precompiles the policies at reconciliation time and caches them for fast evaluation at request time, where they receive the Authorization JSON as input. Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/opa-authorization/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/opa-authorization/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

In this example, we will use OPA to implement a read-only policy for requests coming from outside a trusted network (IP range 192.168.1.0/24).

The implementation relies on the X-Forwarded-For HTTP header to read the client's IP address.

Optional. Set use_remote_address: true in the Envoy route configuration, so the proxy will append its IP address instead of running in transparent mode. This setting will also ensure the real remote address of the client connection is passed in the x-envoy-external-address HTTP header, which can be used to simplify the read-only policy in remote environments.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n  authorization:\n  - name: read-only-outside\n    opa:\n      inlineRego: |\n        ips := split(input.context.request.http.headers[\"x-forwarded-for\"], \",\")\n        trusted_network { regex.match(`192\\.168\\.1\\.\\d+`, ips[0]) }\n        allow { trusted_network }\n        allow { not trusted_network; input.context.request.http.method == \"GET\" }\nEOF\n
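Optionally, you can exercise the same Rego rules locally with the OPA CLI before applying the AuthConfig. A minimal sketch, assuming an OPA release that accepts the pre-1.0 Rego syntax used above (file and package names here are illustrative):

cat > policy.rego <<'EOF'\npackage example\n\nips := split(input.context.request.http.headers[\"x-forwarded-for\"], \",\")\ntrusted_network { regex.match(`192\\.168\\.1\\.\\d+`, ips[0]) }\nallow { trusted_network }\nallow { not trusted_network; input.context.request.http.method == \"GET\" }\nEOF\n# a POST from outside the trusted network, mimicking Authorino's Authorization JSON\necho '{\"context\":{\"request\":{\"http\":{\"method\":\"POST\",\"headers\":{\"x-forwarded-for\":\"123.45.6.78\"}}}}}' > input.json\nopa eval -i input.json -d policy.rego 'data.example.allow'\n# no value returned: 'allow' is undefined, i.e. the request would be denied\n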
"},{"location":"authorino/docs/user-guides/opa-authorization/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#7-consume-the-api","title":"7. Consume the API","text":"

Inside the trusted network:

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 192.168.1.10' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 192.168.1.10' \\\n-X POST \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Outside the trusted network:

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 123.45.6.78' \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n
curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' \\\n-H 'X-Forwarded-For: 123.45.6.78' \\\n-X POST \\\nhttp://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 403 Forbidden\n# x-ext-auth-reason: Unauthorized\n
"},{"location":"authorino/docs/user-guides/opa-authorization/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/passing-credentials/","title":"User guide: Passing credentials (Authorization header, cookie headers and others)","text":"

Customize where credentials are supplied in the request by each trusted source of identity.

Authorino features in this guide:
  • Identity verification & authentication \u2192 Auth credentials
  • Identity verification & authentication \u2192 API key
Authentication tokens can be supplied in the `Authorization` header, in a custom header, cookie or query string parameter. Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/passing-credentials/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/passing-credentials/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

In this example, member users can authenticate supplying the API key in any of 4 different ways:
  • HTTP header Authorization: APIKEY <api-key>
  • HTTP header X-API-Key: <api-key>
  • Query string parameter api_key=<api-key>
  • Cookie Cookie: APIKEY=<api-key>;

admin API keys are only accepted in the (default) HTTP header Authorization: Bearer <api-key>.

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: members-authorization-header\n    apiKey:\n      selector:\n        matchLabels:\n          group: members\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY # instead of the default prefix 'Bearer'\n  - name: members-custom-header\n    apiKey:\n      selector:\n        matchLabels:\n          group: members\n    credentials:\n      in: custom_header\n      keySelector: X-API-Key\n  - name: members-query-string-param\n    apiKey:\n      selector:\n        matchLabels:\n          group: members\n    credentials:\n      in: query\n      keySelector: api_key\n  - name: members-cookie\n    apiKey:\n      selector:\n        matchLabels:\n          group: members\n    credentials:\n      in: cookie\n      keySelector: APIKEY\n  - name: admins\n    apiKey:\n      selector:\n        matchLabels:\n          group: admins\nEOF\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#6-create-a-couple-api-keys","title":"6. Create a couple API keys","text":"

For a member user:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: members\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n

For an admin user:

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-2\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: admins\nstringData:\n  api_key: 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#7-consume-the-api","title":"7. Consume the API","text":"

As member user, passing the API key in the Authorization header:

curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

As member user, passing the API key in the custom X-API-Key header:

curl -H 'X-API-Key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

As member user, passing the API key in the query string parameter api_key:

curl \"http://talker-api-authorino.127.0.0.1.nip.io:8000/hello?api_key=ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\"\n# HTTP/1.1 200 OK\n

As member user, passing the API key in the APIKEY cookie header:

curl -H 'Cookie: APIKEY=ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx;foo=bar' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

As admin user:

curl -H 'Authorization: Bearer 7BNaTmYGItSzXiwQLNHu82+x52p1XHgY' http://talker-api-authorino.127.0.0.1.nip.io:8000/hello\n# HTTP/1.1 200 OK\n

Missing the API key:

curl http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 401 Unauthorized\n# www-authenticate: APIKEY realm=\"members-authorization-header\"\n# www-authenticate: X-API-Key realm=\"members-custom-header\"\n# www-authenticate: api_key realm=\"members-query-string-param\"\n# www-authenticate: APIKEY realm=\"members-cookie\"\n# www-authenticate: Bearer realm=\"admins\"\n# x-ext-auth-reason: {\"admins\":\"credential not found\",\"members-authorization-header\":\"credential not found\",\"members-cookie\":\"credential not found\",\"members-custom-header\":\"credential not found\",\"members-query-string-param\":\"credential not found\"}\n
"},{"location":"authorino/docs/user-guides/passing-credentials/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete secret/api-key-2\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/","title":"User guide: Resource-level authorization with User-Managed Access (UMA) resource registry","text":"

Fetch resource metadata relevant for your authorization policies from Keycloak authorization clients, using User-Managed Access (UMA) protocol.

Authorino features in this guide:
  • External auth metadata \u2192 User-Managed Access (UMA) resource registry
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
  • Authorization \u2192 Open Policy Agent (OPA) Rego policies
Check out as well the user guides about [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Open Policy Agent (OPA) Rego policies](./opa-authorization.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n

Forward local requests to the instance of Keycloak running in the cluster:

kubectl -n keycloak port-forward deployment/keycloak 8080:8080 &\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

This user guide's implementation for resource-level authorization leverages part of Keycloak's User-Managed Access (UMA) support. Authorino will fetch resource attributes stored in a Keycloak resource server client.

The Keycloak server also provides the identities. The sub claim of the Keycloak-issued ID tokens must match the owner of the requested resource, identified by the URI of the request.

Create a required Secret, used by Authorino to authenticate with the UMA registry.

kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: talker-api-uma-credentials\nstringData:\n  clientID: talker-api\n  clientSecret: 523b92b6-625d-4e1e-a313-77e7a8ae4e88\ntype: Opaque\nEOF\n

Create the config:

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  metadata:\n  - name: resource-data\n    uma:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n      credentialsRef:\n        name: talker-api-uma-credentials\n  authorization:\n  - name: owned-resources\n    opa:\n      inlineRego: |\n        COLLECTIONS = [\"greetings\"]\n        http_request = input.context.request.http\n        http_method = http_request.method\n        requested_path_sections = split(trim_left(trim_right(http_request.path, \"/\"), \"/\"), \"/\")\n        get { http_method == \"GET\" }\n        post { http_method == \"POST\" }\n        put { http_method == \"PUT\" }\n        delete { http_method == \"DELETE\" }\n        valid_collection { COLLECTIONS[_] == requested_path_sections[0] }\n        collection_endpoint {\n          valid_collection\n          count(requested_path_sections) == 1\n        }\n        resource_endpoint {\n          valid_collection\n          some resource_id\n          requested_path_sections[1] = resource_id\n        }\n        identity_owns_the_resource {\n          identity := input.auth.identity\n          resource_attrs := object.get(input.auth.metadata, \"resource-data\", [])[0]\n          resource_owner := object.get(object.get(resource_attrs, \"owner\", {}), \"id\", \"\")\n          resource_owner == identity.sub\n        }\n        allow { get;    collection_endpoint }\n        allow { post;   collection_endpoint }\n        allow { get;    resource_endpoint; identity_owns_the_resource }\n        allow { put;    resource_endpoint; identity_owns_the_resource }\n        allow { delete; resource_endpoint; identity_owns_the_resource }\nEOF\n

The OPA policy owned-resources above enforces that all users can send GET and POST requests to /greetings, while only resource owners can send GET, PUT and DELETE requests to /greetings/{resource-id}.
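Optionally, you can peek at the resource data Authorino fetches from the UMA registry. A minimal sketch against Keycloak's Protection API, authenticating with the same client credentials stored in the Secret above (the pod names are arbitrary):

PAT=$(kubectl run pat --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=client_credentials' -u talker-api:523b92b6-625d-4e1e-a313-77e7a8ae4e88 | jq -r .access_token)\nRESOURCE_ID=$(kubectl run resource-id --attach --rm --restart=Never -q --image=curlimages/curl -- \"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/resource_set?uri=/greetings/1\" -s -H \"Authorization: Bearer $PAT\" | jq -r '.[0]')\nkubectl run resource-data --attach --rm --restart=Never -q --image=curlimages/curl -- \"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/authz/protection/resource_set/$RESOURCE_ID\" -s -H \"Authorization: Bearer $PAT\"\n# expect a resource representation including its \"owner\" attribute\n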

"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#6-obtain-access-tokens-with-the-keycloak-server-and-consume-the-api","title":"6. Obtain access tokens with the Keycloak server and consume the API","text":""},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#obtain-an-access-token-as-john-and-consume-the-api","title":"Obtain an access token as John and consume the API","text":"

Obtain an access token for user John (owner of the resource /greetings/1 in the UMA registry):

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

As John, send requests to the API:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings\n# HTTP/1.1 200 OK\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1\n# HTTP/1.1 200 OK\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1\n# HTTP/1.1 200 OK\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2 -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#obtain-an-access-token-as-jane-and-consume-the-api","title":"Obtain an access token as Jane and consume the API","text":"

Obtain an access token for user Jane (owner of the resource /greetings/2 in the UMA registry):

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

As Jane, send requests to the API:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings\n# HTTP/1.1 200 OK\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 403 Forbidden\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 403 Forbidden\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#obtain-an-access-token-as-peter-and-consume-the-api","title":"Obtain an access token as Peter and consume the API","text":"

Obtain an access token for user Peter (does not own any resource in the UMA registry):

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=peter' -d 'password=p' | jq -r .access_token)\n

As Peter, send requests to the API:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings\n# HTTP/1.1 200 OK\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 403 Forbidden\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/1 -i\n# HTTP/1.1 403 Forbidden\ncurl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/greetings/2 -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/resource-level-authorization-uma/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection\nkubectl delete secret/talker-api-uma-credentials\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/sharding/","title":"User guide: Reducing the operational space","text":"

By default, Authorino will watch events related to all AuthConfig custom resources in the reconciliation space (namespace or entire cluster). Instances can be configured, though, to only watch a subset of the resources, thus enabling use cases such as:
  • reducing noise and lowering memory usage inside instances meant for a restricted scope (e.g. Authorino deployed as a dedicated sidecar to protect only one host);
  • sharding auth config data across multiple instances;
  • hosting multiple environments (e.g. staging, production) inside of a same cluster/namespace;
  • providing managed instances of Authorino that all watch CRs cluster-wide, yet dedicated to organizations allowed to create and operate their own AuthConfigs across multiple namespaces.

Authorino features in this guide:
  • Sharding
  • Identity verification & authentication \u2192 API key
Check out as well the user guide about [Authentication with API keys](./api-key-authentication.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/sharding/#requirements","title":"Requirements","text":"
  • Kubernetes server

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n
"},{"location":"authorino/docs/user-guides/sharding/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/sharding/#2-deploy-a-couple-instances-of-authorino","title":"2. Deploy a couple instances of Authorino","text":"

Deploy an instance of Authorino dedicated to AuthConfigs and API key Secrets labeled with authorino/environment=staging:

kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino-staging\nspec:\n  clusterWide: true\n  authConfigLabelSelectors: authorino/environment=staging\n  secretLabelSelectors: authorino/environment=staging\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

Deploy an instance of Authorino dedicated to AuthConfigs and API key Secrets labeled with authorino/environment=production, and NOT labeled disabled:

kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino-production\nspec:\n  clusterWide: true\n  authConfigLabelSelectors: authorino/environment=production,!disabled\n  secretLabelSelectors: authorino/environment=production,!disabled\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The commands above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in cluster-wide reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/sharding/#3-create-a-namespace-for-user-resources","title":"3. Create a namespace for user resources","text":"
kubectl create namespace myapp\n
"},{"location":"authorino/docs/user-guides/sharding/#4-create-authconfigs-and-api-key-secrets-for-both-instances","title":"4. Create AuthConfigs and API key Secrets for both instances","text":""},{"location":"authorino/docs/user-guides/sharding/#create-resources-for-authorino-staging","title":"Create resources for authorino-staging","text":"

Create an AuthConfig:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: auth-config-1\n  labels:\n    authorino/environment: staging\nspec:\n  hosts:\n  - my-host.staging.io\n  identity:\n  - name: api-key\n    apiKey:\n      selector:\n        matchLabels:\n          authorino/api-key: \"true\"\n          authorino/environment: staging\nEOF\n

Create an API key Secret:

kubectl -n myapp apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino/api-key: \"true\"\n    authorino/environment: staging\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n

Verify in the logs that only the authorino-staging instance adds the resources to the index:

kubectl logs $(kubectl get pods -l authorino-resource=authorino-staging -o name)\n# {\"level\":\"info\",\"ts\":1638382989.8327162,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"myapp/auth-config-1\"}\n# {\"level\":\"info\",\"ts\":1638382989.837424,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status updated\",\"authconfig/status\":\"myapp/auth-config-1\"}\n# {\"level\":\"info\",\"ts\":1638383144.9486837,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"myapp/api-key-1\"}\n
"},{"location":"authorino/docs/user-guides/sharding/#create-resources-for-authorino-production","title":"Create resources for authorino-production","text":"

Create an AuthConfig:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: auth-config-2\n  labels:\n    authorino/environment: production\nspec:\n  hosts:\n  - my-host.io\n  identity:\n  - name: api-key\n    apiKey:\n      selector:\n        matchLabels:\n          authorino/api-key: \"true\"\n          authorino/environment: production\nEOF\n

Create an API key Secret:

kubectl -n myapp apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-2\n  labels:\n    authorino/api-key: \"true\"\n    authorino/environment: production\nstringData:\n  api_key: MUWdeBte7AbSWxl6CcvYNJ+3yEIm5CaL\ntype: Opaque\nEOF\n

Verify in the logs that only the authorino-production instance adds the resources to the index:

kubectl logs $(kubectl get pods -l authorino-resource=authorino-production -o name)\n# {\"level\":\"info\",\"ts\":1638383423.86086,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig.statusupdater\",\"msg\":\"resource status updated\",\"authconfig/status\":\"myapp/auth-config-2\"}\n# {\"level\":\"info\",\"ts\":1638383423.8608105,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"myapp/auth-config-2\"}\n# {\"level\":\"info\",\"ts\":1638383460.3515081,\"logger\":\"authorino.controller-runtime.manager.controller.secret\",\"msg\":\"resource reconciled\",\"secret\":\"myapp/api-key-2\"}\n
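The effective sharding can also be previewed with kubectl, using the same label selectors the instances were configured with:

kubectl get authconfigs --all-namespaces -l 'authorino/environment=staging'\n# lists auth-config-1 only\nkubectl get authconfigs --all-namespaces -l 'authorino/environment=production,!disabled'\n# lists auth-config-2 only\n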
"},{"location":"authorino/docs/user-guides/sharding/#9-remove-a-resource-from-scope","title":"9. Remove a resource from scope","text":"
kubectl -n myapp label authconfig/auth-config-2 disabled=true\n# authconfig.authorino.kuadrant.io/auth-config-2 labeled\n

Verify in the logs that the authorino-production instance reconciles the change, removing the resource from its index:

kubectl logs $(kubectl get pods -l authorino-resource=authorino-production -o name)\n# {\"level\":\"info\",\"ts\":1638383515.6428752,\"logger\":\"authorino.controller-runtime.manager.controller.authconfig\",\"msg\":\"resource reconciled\",\"authconfig\":\"myapp/auth-config-2\"}\n
"},{"location":"authorino/docs/user-guides/sharding/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete authorino/authorino-staging\nkubectl delete authorino/authorino-production\nkubectl delete namespace myapp\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/token-normalization/","title":"User guide: Token normalization","text":"

Broadly, the term token normalization in authentication systems usually implies the exchange of an authentication token, as provided by the user in a given format, and/or its associated identity claims, for another freshly issued token/set of claims, of a given (normalized) structure or format.

The most typical use case for token normalization involves accepting tokens issued by multiple trusted sources, often of varied authentication protocols, while ensuring that the possibly different data structures adopted by each of those sources are normalized, thus simplifying policies and authorization checks that depend on those values. In general, however, any modification to the identity claims can be for the purpose of normalization.

This user guide focuses on mutating the identity claims resolved from an authentication token, casting them to a certain data format and/or extending them, so that required attributes can thereafter be trusted to be present among the claims, in a desired form. To that end, Authorino allows extending resolved identity objects with custom attributes (custom claims), whose values can either be static or fetched from the Authorization JSON.

To not only normalize the identity claims for the purpose of writing simpler authorization checks and policies, but also have Authorino issue a new token in a normalized format, check out the Festival Wristband tokens feature.

Authorino features in this guide:
  • Identity verification & authentication \u2192 Identity extension
  • Identity verification & authentication \u2192 API key
  • Identity verification & authentication \u2192 OpenID Connect (OIDC) JWT/JOSE verification and validation
  • Authorization \u2192 JSON pattern-matching authorization rules
Check out as well the user guides about [Authentication with API keys](./api-key-authentication.md), [OpenID Connect Discovery and authentication with JWTs](./oidc-jwt-authentication.md) and [Simple pattern-matching authorization policies](./json-pattern-matching-authorization.md). For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/token-normalization/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/token-normalization/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/token-normalization/#2-deploy-the-talker-api","title":"2. Deploy the Talker API","text":"

The Talker API is just an echo API, included in the Authorino examples. We will use it in this guide as the service to be protected with Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/token-normalization/#3-deploy-authorino","title":"3. Deploy Authorino","text":"
kubectl apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API, among other possible architectures), in namespaced reconciliation mode, and with TLS termination disabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/token-normalization/#4-setup-envoy","title":"4. Setup Envoy","text":"

The following bundle from the Authorino examples (manifest referred to in the command below) applies the Envoy configuration and deploys the Envoy proxy, wiring up the Talker API behind the reverse proxy and the external authorization with the Authorino instance.

For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. For a simpler and more straightforward way to manage an API, without having to manually install or configure Envoy and Authorino, check out Kuadrant.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\n

The bundle also creates an Ingress with host name talker-api-authorino.127.0.0.1.nip.io, but if you are using a local Kubernetes cluster created with Kind, you need to forward requests on port 8000 into the cluster in order to actually reach the Envoy service:

kubectl port-forward deployment/envoy 8000:8000 &\n
"},{"location":"authorino/docs/user-guides/token-normalization/#5-create-the-authconfig","title":"5. Create the AuthConfig","text":"

This example implements a policy that only users bound to the admin role can send DELETE requests.

The config trusts access tokens issued by a Keycloak realm as well as API keys labeled as belonging to a selected group (friends). The roles of the identities handled by Keycloak are managed in Keycloak, as realm roles. Particularly, users john and peter are bound to the member role, while user jane is bound to the roles member and admin. As for the users authenticating with API key, they are all bound to the admin role.

Without normalizing identity claims from these two different sources, the policy would have to handle the differences of data formats with additional ifs-and-elses. Instead, the config here uses the identity.extendedProperties option to ensure a custom roles (Array) claim is always present in the identity object. In the case of Keycloak ID tokens, the value is extracted from the realm_access.roles claim; for API key-resolved objects, the custom claim is set to the static value [\"admin\"].

kubectl apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: talker-api-protection\nspec:\n  hosts:\n  - talker-api-authorino.127.0.0.1.nip.io\n  identity:\n  - name: keycloak-kuadrant-realm\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n    extendedProperties:\n    - name: roles\n      valueFrom:\n        authJSON: auth.identity.realm_access.roles\n  - name: api-key-friends\n    apiKey:\n      selector:\n        matchLabels:\n          group: friends\n    credentials:\n      in: authorization_header\n      keySelector: APIKEY\n    extendedProperties:\n    - name: roles\n      value: [\"admin\"]\n  authorization:\n  - name: only-admins-can-delete\n    when:\n    - selector: context.request.http.method\n      operator: eq\n      value: DELETE\n    json:\n      rules:\n      - selector: auth.identity.roles\n        operator: incl\n        value: admin\nEOF\n
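With the claim normalized, the only-admins-can-delete policy boils down to a single membership check, whichever the identity source. Illustratively, in plain jq, mimicking the incl operator over a normalized identity object (a hypothetical abbreviated object, not actual Authorino output):

echo '{\"auth\":{\"identity\":{\"roles\":[\"admin\"]}}}' | jq '.auth.identity.roles | index(\"admin\") != null'\n# true\n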
"},{"location":"authorino/docs/user-guides/token-normalization/#6-create-an-api-key","title":"6. Create an API key","text":"
kubectl apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: api-key-1\n  labels:\n    authorino.kuadrant.io/managed-by: authorino\n    group: friends\nstringData:\n  api_key: ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\ntype: Opaque\nEOF\n
"},{"location":"authorino/docs/user-guides/token-normalization/#7-consume-the-api","title":"7. Consume the API","text":""},{"location":"authorino/docs/user-guides/token-normalization/#obtain-an-access-token-and-consume-the-api-as-jane-admin","title":"Obtain an access token and consume the API as Jane (admin)","text":"

Obtain an access token with the Keycloak server for Jane:

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.

Obtain an access token from within the cluster for the user Jane, who is bound to the roles member and admin:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' | jq -r .access_token)\n

If, on the other hand, your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

Consume the API as Jane:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/token-normalization/#obtain-an-access-token-and-consume-the-api-as-john-member","title":"Obtain an access token and consume the API as John (member)","text":"

Obtain an access token with the Keycloak server for John:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' | jq -r .access_token)\n

Consume the API as John:

curl -H \"Authorization: Bearer $ACCESS_TOKEN\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 403 Forbidden\n
"},{"location":"authorino/docs/user-guides/token-normalization/#consume-the-api-using-the-api-key-to-authenticate-admin","title":"Consume the API using the API key to authenticate (admin)","text":"
curl -H \"Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx\" -X DELETE http://talker-api-authorino.127.0.0.1.nip.io:8000/hello -i\n# HTTP/1.1 200 OK\n
"},{"location":"authorino/docs/user-guides/token-normalization/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete secret/api-key-1\nkubectl delete authconfig/talker-api-protection\nkubectl delete authorino/authorino\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml\nkubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/validating-webhook/","title":"User guide: Using Authorino as ValidatingWebhook service","text":"

Authorino provides an interface for raw HTTP external authorization requests. This interface can be used for integrations other than the typical Envoy gRPC protocol, such as (though not limited to) using Authorino as a generic Kubernetes ValidatingWebhook service.

The rules to validate a request to the Kubernetes API (typically a POST, PUT or DELETE request targeting a particular Kubernetes resource or collection), according to which the change is either accepted or rejected, are written in an Authorino AuthConfig custom resource. Authentication and authorization are performed by the Kubernetes API server as usual, with the auth features of Authorino implementing the additional validation within the scope of an AdmissionReview request.

This user guide provides an example of using Authorino as a Kubernetes ValidatingWebhook service that validates requests to CREATE and UPDATE Authorino AuthConfig resources. In other words, we will use Authorino as a validator inside the cluster that decides what is a valid AuthConfig for any application which wants to rely on Authorino to protect itself.

The AuthConfig to validate other AuthConfigs will enforce the following rules:
  • Authorino features that cannot be used by any application in their security schemes:
    • Anonymous Access
    • Plain identity object extracted from context
    • Kubernetes authentication (TokenReview)
    • Kubernetes authorization (SubjectAccessReview)
    • Festival Wristband tokens
  • Authorino features that require a RoleBinding to a specific ClusterRole in the 'authorino' namespace to be used in an AuthConfig:
    • Authorino API key authentication
  • All metadata pulled from external sources must be cached for precisely 5 minutes (300 seconds)

For convenience, the same instance of Authorino used to enforce the AuthConfig associated with the validating webhook will also be targeted for the sample AuthConfigs created to test the validation. For using different instances of Authorino for the validating webhook and for protecting applications behind a proxy, check out the section about sharding in the docs. There is also a user guide on the topic, with concrete examples.

Authorino features in this guide:
  • Identity verification & authentication → Plain
  • Identity verification & authentication → Kubernetes TokenReview
  • Identity verification & authentication → API key
  • External auth metadata → HTTP GET/GET-by-POST
  • Authorization → Kubernetes SubjectAccessReview
  • Authorization → Open Policy Agent (OPA) Rego policies
  • Dynamic response → Festival Wristband tokens
  • Common feature → Conditions
  • Common feature → Priorities
For further details about Authorino features in general, check the [docs](./../features.md).

"},{"location":"authorino/docs/user-guides/validating-webhook/#requirements","title":"Requirements","text":"
  • Kubernetes server
  • cert-manager
  • Auth server / Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)

Create a containerized Kubernetes server locally using Kind:

kind create cluster --name authorino-tutorial\n

Install cert-manager:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml\n

Deploy a Keycloak server preloaded with all the realm settings required for this guide:

kubectl create namespace keycloak\nkubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#1-install-the-authorino-operator","title":"1. Install the Authorino Operator","text":"
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#2-deploy-authorino","title":"2. Deploy Authorino","text":"

Create the namespace:

kubectl create namespace authorino\n

Create the TLS certificates:

curl -sSL https://raw.githubusercontent.com/Kuadrant/authorino/main/deploy/certs.yaml | sed \"s/\\$(AUTHORINO_INSTANCE)/authorino/g;s/\\$(NAMESPACE)/authorino/g\" | kubectl -n authorino apply -f -\n

Create the Authorino instance:

kubectl -n authorino apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  clusterWide: true\n  listener:\n    ports:\n      grpc: 50051\n      http: 5001 # for admissionreview requests sent by the kubernetes api server\n    tls:\n      certSecretRef:\n        name: authorino-server-cert\n  oidcServer:\n    tls:\n      certSecretRef:\n        name: authorino-oidc-server-cert\nEOF\n

The command above will deploy Authorino as a separate service (as opposed to a sidecar of the protected API and other architectures), in cluster-wide reconciliation mode, and with TLS termination enabled. For other variants and deployment options, check out the Getting Started section of the docs, the Architecture page, and the spec for the Authorino CRD in the Authorino Operator repo.

"},{"location":"authorino/docs/user-guides/validating-webhook/#3-create-the-authconfig-and-related-clusterrole","title":"3. Create the AuthConfig and related ClusterRole","text":"

Create the AuthConfig:

kubectl -n authorino apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: authconfig-validator\nspec:\n  # admissionreview requests will be sent to this host name\n  hosts:\n  - authorino-authorino-authorization.authorino.svc\n  # because we're using a single authorino instance for the validating webhook and to protect the user applications,\n  # skip operations related to this one authconfig in the 'authorino' namespace\n  when:\n  - selector: context.request.http.body.@fromstr|request.object.metadata.namespace\n    operator: neq\n    value: authorino\n  # kubernetes admissionreviews carry info about the authenticated user\n  identity:\n  - name: k8s-userinfo\n    plain:\n      authJSON: context.request.http.body.@fromstr|request.userInfo\n  authorization:\n  - name: features\n    opa:\n      inlineRego: |\n        authconfig = json.unmarshal(input.context.request.http.body).request.object\n        forbidden { count(object.get(authconfig.spec, \"identity\", [])) == 0 }\n        forbidden { authconfig.spec.identity[_].anonymous }\n        forbidden { authconfig.spec.identity[_].kubernetes }\n        forbidden { authconfig.spec.identity[_].plain }\n        forbidden { authconfig.spec.authorization[_].kubernetes }\n        forbidden { authconfig.spec.response[_].wristband }\n        apiKey { authconfig.spec.identity[_].apiKey }\n        allow { count(authconfig.spec.identity) > 0; not forbidden }\n      allValues: true\n  - name: apikey-authn-requires-k8s-role-binding\n    priority: 1\n    when:\n    - selector: auth.authorization.features.apiKey\n      operator: eq\n      value: \"true\"\n    kubernetes:\n      user:\n        valueFrom: { authJSON: auth.identity.username }\n      resourceAttributes:\n        namespace: { value: authorino }\n        group: { value: authorino.kuadrant.io }\n        resource: { value: authconfigs-with-apikeys }\n        verb: { value: create }\n  - name: metadata-cache-ttl\n    priority: 1\n    opa:\n      inlineRego: |\n        invalid_ttl = input.auth.authorization.features.authconfig.spec.metadata[_].cache.ttl != 300\n        allow { not invalid_ttl }\nEOF\n
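For reference, the when and identity selectors above parse the AdmissionReview body that the Kubernetes API server posts to the webhook. Trimmed down to the fields used by this AuthConfig, the payload looks roughly like this (the values shown are illustrative):

{\n  \"apiVersion\": \"admission.k8s.io/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"request\": {\n    \"userInfo\": {\"username\": \"kubernetes-admin\", \"groups\": [\"system:masters\"]},\n    \"object\": {\n      \"apiVersion\": \"authorino.kuadrant.io/v1beta1\",\n      \"kind\": \"AuthConfig\",\n      \"metadata\": {\"name\": \"myapp-protection\", \"namespace\": \"myapp\"},\n      \"spec\": {...}\n    }\n  }\n}\n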

Define a ClusterRole to control the usage of protected features of Authorino:

kubectl apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: authorino-apikey\nrules:\n- apiGroups: [\"authorino.kuadrant.io\"]\n  resources: [\"authconfigs-with-apikeys\"] # not a real k8s resource\n  verbs: [\"create\"]\nEOF\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#4-create-the-validatingwebhookconfiguration","title":"4. Create the ValidatingWebhookConfiguration","text":"
kubectl -n authorino apply -f -<<EOF\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n  name: authconfig-authz\n  annotations:\n    cert-manager.io/inject-ca-from: authorino/authorino-ca-cert\nwebhooks:\n- name: check-authconfig.authorino.kuadrant.io\n  clientConfig:\n    service:\n      namespace: authorino\n      name: authorino-authorino-authorization\n      port: 5001\n      path: /check\n  rules:\n  - apiGroups: [\"authorino.kuadrant.io\"]\n    apiVersions: [\"v1beta1\"]\n    resources: [\"authconfigs\"]\n    operations: [\"CREATE\", \"UPDATE\"]\n    scope: Namespaced\n  sideEffects: None\n  admissionReviewVersions: [\"v1\"]\nEOF\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#5-try-it-out","title":"5. Try it out","text":"

Create a namespace:

kubectl create namespace myapp\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#with-a-valid-authconfig","title":"With a valid AuthConfig","text":"
kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\nEOF\n# authconfig.authorino.kuadrant.io/myapp-protection created\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#with-forbidden-features","title":"With forbidden features","text":"

Anonymous access:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"]}}\\n\"}},\"spec\":{\"identity\":null}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n
kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: anonymous-access\n    anonymous: {}\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"anonymous\\\":{},\\\"name\\\":\\\"anonymous-access\\\"}]}}\\n\"}},\"spec\":{\"identity\":[{\"anonymous\":{},\"name\":\"anonymous-access\"}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n

Kubernetes TokenReview:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: k8s-tokenreview\n    kubernetes:\n      audiences: [\"myapp\"]\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"kubernetes\\\":{\\\"audiences\\\":[\\\"myapp\\\"]},\\\"name\\\":\\\"k8s-tokenreview\\\"}]}}\\n\"}},\"spec\":{\"identity\":[{\"kubernetes\":{\"audiences\":[\"myapp\"]},\"name\":\"k8s-tokenreview\"}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n

Plain identity extracted from context:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: envoy-jwt-authn\n    plain:\n      authJSON: context.metadata_context.filter_metadata.envoy\\.filters\\.http\\.jwt_authn|verified_jwt\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"name\\\":\\\"envoy-jwt-authn\\\",\\\"plain\\\":{\\\"authJSON\\\":\\\"context.metadata_context.filter_metadata.envoy\\\\\\\\.filters\\\\\\\\.http\\\\\\\\.jwt_authn|verified_jwt\\\"}}]}}\\n\"}},\"spec\":{\"identity\":[{\"name\":\"envoy-jwt-authn\",\"plain\":{\"authJSON\":\"context.metadata_context.filter_metadata.envoy\\\\.filters\\\\.http\\\\.jwt_authn|verified_jwt\"}}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n

Kubernetes SubjectAccessReview:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  authorization:\n  - name: k8s-subjectaccessreview\n    kubernetes:\n      user:\n        valueFrom: { authJSON: auth.identity.sub }\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"authorization\\\":[{\\\"kubernetes\\\":{\\\"user\\\":{\\\"valueFrom\\\":{\\\"authJSON\\\":\\\"auth.identity.sub\\\"}}},\\\"name\\\":\\\"k8s-subjectaccessreview\\\"}],\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"name\\\":\\\"keycloak\\\",\\\"oidc\\\":{\\\"endpoint\\\":\\\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\\\"}}]}}\\n\"}},\"spec\":{\"authorization\":[{\"kubernetes\":{\"user\":{\"valueFrom\":{\"authJSON\":\"auth.identity.sub\"}}},\"name\":\"k8s-subjectaccessreview\"}],\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n

Festival Wristband tokens:

kubectl -n myapp apply -f -<<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: wristband-signing-key\nstringData:\n  key.pem: |\n    -----BEGIN EC PRIVATE KEY-----\n    MHcCAQEEIDHvuf81gVlWGo0hmXGTAnA/HVxGuH8vOc7/8jewcVvqoAoGCCqGSM49\n    AwEHoUQDQgAETJf5NLVKplSYp95TOfhVPqvxvEibRyjrUZwwtpDuQZxJKDysoGwn\n    cnUvHIu23SgW+Ee9lxSmZGhO4eTdQeKxMA==\n    -----END EC PRIVATE KEY-----\ntype: Opaque\n---\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  response:\n  - name: wristband\n    wristband:\n      issuer: http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband\n      signingKeyRefs:\n      - algorithm: ES256\n        name: wristband-signing-key\nEOF\n# secret/wristband-signing-key created\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"name\\\":\\\"keycloak\\\",\\\"oidc\\\":{\\\"endpoint\\\":\\\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\\\"}}],\\\"response\\\":[{\\\"name\\\":\\\"wristband\\\",\\\"wristband\\\":{\\\"issuer\\\":\\\"http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband\\\",\\\"signingKeyRefs\\\":[{\\\"algorithm\\\":\\\"ES256\\\",\\\"name\\\":\\\"wristband-signing-key\\\"}]}}]}}\\n\"}},\"spec\":{\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}],\"response\":[{\"name\":\"wristband\",\"wristband\":{\"issuer\":\"http://authorino-authorino-oidc.authorino.svc.cluster.local:8083/myapp/myapp-protection/wristband\",\"signingKeyRefs\":[{\"algorithm\":\"ES256\",\"name\":\"wristband-signing-key\"}]}}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#with-features-that-require-additional-permissions","title":"With features that require additional permissions","text":"

Before adding the required permissions:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: api-key\n    apiKey:\n      selector:\n        matchLabels: { app: myapp }\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"apiKey\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"myapp\\\"}}},\\\"name\\\":\\\"api-key\\\"}]}}\\n\"}},\"spec\":{\"identity\":[{\"apiKey\":{\"selector\":{\"matchLabels\":{\"app\":\"myapp\"}}},\"name\":\"api-key\"}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Not authorized: unknown reason\n

Add the required permissions:

kubectl -n authorino apply -f -<<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: authorino-apikey\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: authorino-apikey\nsubjects:\n- kind: User\n  name: kubernetes-admin\nEOF\n# rolebinding.rbac.authorization.k8s.io/authorino-apikey created\n
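To sanity-check the binding before retrying, you can ask the API server directly whether the user (kubernetes-admin is assumed here, as in the RoleBinding above) now holds the permission modeled by the ClusterRole:

kubectl auth can-i create authconfigs-with-apikeys.authorino.kuadrant.io -n authorino --as kubernetes-admin\n# yes\n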

After adding the required permissions:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: api-key\n    apiKey:\n      selector:\n        matchLabels: { app: myapp }\nEOF\n# authconfig.authorino.kuadrant.io/myapp-protection configured\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#with-features-that-require-specific-property-validation","title":"With features that require specific property validation","text":"

Invalid:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  metadata:\n  - name: external-source\n    http:\n      endpoint: http://metadata.io\n      method: GET\n    cache:\n      key: { value: global }\n      ttl: 60\nEOF\n# Error from server: error when applying patch:\n# {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"authorino.kuadrant.io/v1beta1\\\",\\\"kind\\\":\\\"AuthConfig\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"myapp-protection\\\",\\\"namespace\\\":\\\"myapp\\\"},\\\"spec\\\":{\\\"hosts\\\":[\\\"myapp.io\\\"],\\\"identity\\\":[{\\\"name\\\":\\\"keycloak\\\",\\\"oidc\\\":{\\\"endpoint\\\":\\\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\\\"}}],\\\"metadata\\\":[{\\\"cache\\\":{\\\"key\\\":{\\\"value\\\":\\\"global\\\"},\\\"ttl\\\":60},\\\"http\\\":{\\\"endpoint\\\":\\\"http://metadata.io\\\",\\\"method\\\":\\\"GET\\\"},\\\"name\\\":\\\"external-source\\\"}]}}\\n\"}},\"spec\":{\"identity\":[{\"name\":\"keycloak\",\"oidc\":{\"endpoint\":\"http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\"}}],\"metadata\":[{\"cache\":{\"key\":{\"value\":\"global\"},\"ttl\":60},\"http\":{\"endpoint\":\"http://metadata.io\",\"method\":\"GET\"},\"name\":\"external-source\"}]}}\n# to:\n# Resource: \"authorino.kuadrant.io/v1beta1, Resource=authconfigs\", GroupVersionKind: \"authorino.kuadrant.io/v1beta1, Kind=AuthConfig\"\n# Name: \"myapp-protection\", Namespace: \"myapp\"\n# for: \"STDIN\": admission webhook \"check-authconfig.authorino.kuadrant.io\" denied the request: Unauthorized\n

Valid:

kubectl -n myapp apply -f -<<EOF\napiVersion: authorino.kuadrant.io/v1beta1\nkind: AuthConfig\nmetadata:\n  name: myapp-protection\nspec:\n  hosts:\n  - myapp.io\n  identity:\n  - name: keycloak\n    oidc:\n      endpoint: http://keycloak.keycloak.svc.cluster.local:8080/auth/realms/kuadrant\n  metadata:\n  - name: external-source\n    http:\n      endpoint: http://metadata.io\n      method: GET\n    cache:\n      key: { value: global }\n      ttl: 300\nEOF\n# authconfig.authorino.kuadrant.io/myapp-protection configured\n
"},{"location":"authorino/docs/user-guides/validating-webhook/#cleanup","title":"Cleanup","text":"

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial\n

Otherwise, delete the resources created in each step:

kubectl delete namespace myapp\nkubectl delete namespace authorino\nkubectl delete clusterrole/authorino-apikey\nkubectl delete namespace keycloak\n

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml\n
"},{"location":"authorino-operator/","title":"Authorino Operator","text":"

A Kubernetes Operator to manage Authorino instances.

"},{"location":"authorino-operator/#installation","title":"Installation","text":"

The Operator can be installed by applying the manifests to the Kubernetes cluster or by using the Operator Lifecycle Manager (OLM).

"},{"location":"authorino-operator/#applying-the-manifests-to-the-cluster","title":"Applying the manifests to the cluster","text":"
  1. Create the namespace for the Operator
kubectl create namespace authorino-operator\n
  2. Install the Operator manifests
make install\n
  3. Deploy the Operator
make deploy\n
Tip: Deploy a custom image of the Operator. To deploy an image of the Operator other than the default quay.io/kuadrant/authorino-operator:latest, set the OPERATOR_IMAGE parameter. E.g.:
make deploy OPERATOR_IMAGE=authorino-operator:local\n
"},{"location":"authorino-operator/#installing-via-olm","title":"Installing via OLM","text":"

To install the Operator using the Operator Lifecycle Manager, you need to make the Operator CSVs available in the cluster by creating a CatalogSource resource.

The bundle and catalog images of the Operator are available in Quay.io:

  • Bundle: quay.io/kuadrant/authorino-operator-bundle
  • Catalog: quay.io/kuadrant/authorino-operator-catalog
  1. Create the namespace for the Operator
kubectl create namespace authorino-operator\n
  2. Create the CatalogSource resource pointing to one of the images in the Operator's catalog repo:
kubectl -n authorino-operator apply -f -<<EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: CatalogSource\nmetadata:\n  name: operatorhubio-catalog\n  namespace: authorino-operator\nspec:\n  sourceType: grpc\n  image: quay.io/kuadrant/authorino-operator-catalog:latest\n  displayName: Authorino Operator\nEOF\n
"},{"location":"authorino-operator/#requesting-an-authorino-instance","title":"Requesting an Authorino instance","text":"

Once the Operator is up and running, you can request instances of Authorino by creating Authorino CRs. E.g.:

kubectl -n default apply -f -<<EOF\napiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  listener:\n    tls:\n      enabled: false\n  oidcServer:\n    tls:\n      enabled: false\nEOF\n
"},{"location":"authorino-operator/#the-authorino-custom-resource-definition-crd","title":"The Authorino Custom Resource Definition (CRD)","text":"

API to install, manage and configure Authorino authorization services.

Each Authorino Custom Resource (CR) represents an instance of Authorino deployed to the cluster. The Authorino Operator will reconcile the state of the Kubernetes Deployment and associated resources, based on the state of the CR.
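For example, after creating the CR from the previous section, you can watch the reconciliation happen; the commands below assume the managed Deployment inherits the CR's name (authorino):

kubectl -n default get authorinos.operator.authorino.kuadrant.io\nkubectl -n default get deployment authorino\n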

"},{"location":"authorino-operator/#api-specification","title":"API Specification","text":"Field Type Description Required/Default spec AuthorinoSpec Specification of the Authorino deployment. Required"},{"location":"authorino-operator/#authorinospec","title":"AuthorinoSpec","text":"Field Type Description Required/Default clusterWide Boolean Sets the Authorino instance's watching scope \u2013 cluster-wide or namespaced. Default: true (cluster-wide) authConfigLabelSelectors String Label selectors used by the Authorino instance to filter AuthConfig-related reconciliation events. Default: empty (all AuthConfigs are watched) secretLabelSelectors String Label selectors used by the Authorino instance to filter Secret-related reconciliation events (API key and mTLS authentication methods). Default: authorino.kuadrant.io/managed-by=authorino replicas Integer Number of replicas desired for the Authorino instance. Values greater than 1 enable leader election in the Authorino service, where the leader updates the statuses of the AuthConfig CRs). Default: 1 evaluatorCacheSize Integer Cache size (in megabytes) of each Authorino evaluator (when enabled in an AuthConfig). Default: 1 image String Authorino image to be deployed (for dev/testing purpose only). Default: quay.io/kuadrant/authorino:latest imagePullPolicy String Sets the imagePullPolicy of the Authorino Deployment (for dev/testing purpose only). Default: k8s default logLevel String Defines the level of log you want to enable in Authorino (debug, info and error). Default: info logMode String Defines the log mode in Authorino (development or production). Default: production listener Listener Specification of the authorization service (gRPC interface). Required oidcServer OIDCServer Specification of the OIDC service. Required tracing Tracing Configuration of the OpenTelemetry tracing exporter. Optional metrics Metrics Configuration of the metrics server (port, level). Optional healthz Healthz Configuration of the health/readiness probe (port). Optional volumes VolumesSpec Additional volumes to be mounted in the Authorino pods. Optional"},{"location":"authorino-operator/#listener","title":"Listener","text":"

Configuration of the authorization server (gRPC and raw HTTP interfaces).

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| port | Integer | Port number of the authorization server (gRPC interface). | DEPRECATED. Use ports instead. |
| ports | Ports | Port numbers of the authorization server (gRPC and raw HTTP interfaces). | Optional |
| tls | TLS | TLS configuration of the authorization server (gRPC and HTTP interfaces). | Required |
| timeout | Integer | Timeout of the external authorization request (in milliseconds), controlled internally by the authorization server. | Default: 0 (disabled) |

"},{"location":"authorino-operator/#oidcserver","title":"OIDCServer","text":"

Configuration of the OIDC Discovery server for Festival Wristband tokens.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| port | Integer | Port number of the OIDC Discovery server for Festival Wristband tokens. | Default: 8083 |
| tls | TLS | TLS configuration of the OIDC Discovery server for Festival Wristband tokens. | Required |

"},{"location":"authorino-operator/#tls","title":"TLS","text":"

TLS configuration of a server. Appears in both listener and oidcServer.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| enabled | Boolean | Whether TLS is enabled or disabled for the server. | Default: true |
| certSecretRef | LocalObjectReference | Reference to the secret that contains the TLS certificates tls.crt and tls.key. | Required when enabled: true |

"},{"location":"authorino-operator/#ports","title":"Ports","text":"

Port numbers of the authorization server.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| grpc | Integer | Port number of the gRPC interface of the authorization server. Set to 0 to disable this interface. | Default: 50001 |
| http | Integer | Port number of the raw HTTP interface of the authorization server. Set to 0 to disable this interface. | Default: 5001 |

"},{"location":"authorino-operator/#tracing","title":"Tracing","text":"

Configuration of the OpenTelemetry tracing exporter.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| endpoint | String | Full endpoint of the OpenTelemetry tracing collector service (e.g. http://jaeger:14268/api/traces). | Required |
| tags | Map | Key-value map of fixed tags to add to all OpenTelemetry traces emitted by Authorino. | Optional |

"},{"location":"authorino-operator/#metrics","title":"Metrics","text":"

Configuration of the metrics server.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| port | Integer | Port number of the metrics server. | Default: 8080 |
| deep | Boolean | Enable/disable metrics at the level of each evaluator config (if requested in the AuthConfig) exported by the metrics server. | Default: false |

"},{"location":"authorino-operator/#healthz","title":"Healthz","text":"

Configuration of the health/readiness probe (port).

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| port | Integer | Port number of the health/readiness probe. | Default: 8081 |

"},{"location":"authorino-operator/#volumesspec","title":"VolumesSpec","text":"

Additional volumes to project in the Authorino pods. Useful for validating TLS self-signed certificates of external services that Authorino needs to contact at runtime.

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| items | []VolumeSpec | List of additional volume items to project. | Optional |
| defaultMode | Integer | Mode bits used to set permissions on the files. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. | Optional |

"},{"location":"authorino-operator/#volumespec","title":"VolumeSpec","text":"

| Field | Type | Description | Required/Default |
|-------|------|-------------|------------------|
| name | String | Name of the volume and volume mount within the Deployment. Must be unique in the CR. | Optional |
| mountPath | String | Absolute path where to mount all the items. | Required |
| configMaps | []String | List of Kubernetes ConfigMap names to mount. | Required exactly one of: configMaps, secrets |
| secrets | []String | List of Kubernetes Secret names to mount. | Required exactly one of: configMaps, secrets |
| items | []KeyToPath | Mount details for selecting specific ConfigMap or Secret entries. | Optional |

"},{"location":"authorino-operator/#full-example","title":"Full example","text":"
apiVersion: operator.authorino.kuadrant.io/v1beta1\nkind: Authorino\nmetadata:\n  name: authorino\nspec:\n  clusterWide: true\n  authConfigLabelSelectors: environment=production\n  secretLabelSelectors: authorino.kuadrant.io/component=authorino,environment=production\n  replicas: 2\n  evaluatorCacheSize: 2 # mb\n  image: quay.io/kuadrant/authorino:latest\n  imagePullPolicy: Always\n  logLevel: debug\n  logMode: production\n  listener:\n    ports:\n      grpc: 50001\n      http: 5001\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-server-cert # secret must contain `tls.crt` and `tls.key` entries\n  oidcServer:\n    port: 8083\n    tls:\n      enabled: true\n      certSecretRef:\n        name: authorino-oidc-server-cert # secret must contain `tls.crt` and `tls.key` entries\n  metrics:\n    port: 8080\n    deep: true\n  volumes:\n    items:\n    - name: keycloak-tls-cert\n      mountPath: /etc/ssl/certs\n      configMaps:\n      - keycloak-tls-cert\n      items: # details to mount the k8s configmap in the authorino pods\n      - key: keycloak.crt\n        path: keycloak.crt\n    defaultMode: 420\n
"},{"location":"limitador/","title":"Limitador","text":"

Limitador is a generic rate limiter written in Rust. It can be used as a library or as a service. The service exposes HTTP endpoints to apply and observe limits. Limitador can be used with Envoy because it also exposes a gRPC service, on a different port, that implements the Envoy Rate Limit protocol (v3).

  • Getting started
  • How it works
  • Development
  • Testing Environment
  • Kubernetes
  • License

Limitador is under active development, and its API has not been stabilized yet.

"},{"location":"limitador/#getting-started","title":"Getting started","text":"
  • Rust library
  • Server
"},{"location":"limitador/#rust-library","title":"Rust library","text":"

Add this to your Cargo.toml:

[dependencies]\nlimitador = { version = \"0.3.0\" }\n

For more information, see the README of the crate

"},{"location":"limitador/#server","title":"Server","text":"

Run with Docker (replace v1.0.0 with the version you want):

docker run --rm --net=host -it quay.io/kuadrant/limitador:v1.0.0\n

Run locally:

cargo run --release --bin limitador-server -- --help\n

Refer to the help message on how to start up the server. More information is available in the server's README.md.

"},{"location":"limitador/#development","title":"Development","text":""},{"location":"limitador/#build","title":"Build","text":"
cargo build\n
"},{"location":"limitador/#run-the-tests","title":"Run the tests","text":"

Some tests need a Redis instance deployed at localhost:6379. You can run it in Docker with:

docker run --rm -p 6379:6379 -it redis\n

Some tests need an Infinispan server deployed at localhost:11222. You can run it in Docker with:

docker run --rm -p 11222:11222 -it -e USER=username -e PASS=password infinispan/server:11.0.9.Final\n

Then, run the tests:

cargo test --all-features\n

or you can run tests disabling the \"redis storage\" feature:

cd limitador; cargo test --no-default-features\n

"},{"location":"limitador/#license","title":"License","text":"

Apache 2.0 License

"},{"location":"limitador/doc/how-it-works/","title":"How it works","text":""},{"location":"limitador/doc/how-it-works/#how-it-works","title":"How it works","text":"

Limitador ensures that the most restrictive limit configuration will apply.

Limitador will try to match each incoming descriptor against the conditions and variables of the counters in the same namespace. The namespace for the descriptors is defined by the domain field, whereas for the rate limit configuration the namespace field is used. For each matching counter, the counter is increased and the limits checked.

One example to illustrate:

Let's say we have 1 rate limit configuration (one counter per config):

conditions: [\"KEY_A == 'VALUE_A'\"]\nmax_value: 1\nseconds: 60\nvariables: []\nnamespace: example.org\n

Limitador receives one descriptor with two entries:

domain: example.org\ndescriptors:\n- entries:\n- KEY_A: VALUE_A\n- OTHER_KEY: OTHER_VALUE\n

The counter's condition will match. Then, the counter will be increased and the limit checked. If the limit is exceeded, the request will be rejected with 429 Too Many Requests, otherwise accepted.

Note that the counter is activated even though it does not match all the entries of the descriptor. The same rule applies to the variables field.

Currently, the implementation of condition only allows for the equal (==) and not equal (!=) operators. More operators will be implemented based on the use cases for them.

The variables field is a list of keys. The matching rule is defined simply as the existence of descriptor entries with those keys. If variables is variables: [A, B, C], a descriptor matches if it has at least three entries with the keys A, B and C.

A few examples to illustrate.

Having the following descriptors:

domain: example.org\ndescriptors:\n- entries:\n- KEY_A: VALUE_A\n- OTHER_KEY: OTHER_VALUE\n

the following counters would not be activated.

conditions: [\"KEY_B == 'VALUE_B'\"]\nmax_value: 1\nseconds: 60\nvariables: []\nnamespace: example.org\n
Reason: the key used in conditions (KEY_B) does not exist in the descriptor

conditions:\n- \"KEY_A == 'VALUE_A'\"\n- \"OTHER_KEY == 'WRONG_VALUE'\"\nmax_value: 1\nseconds: 60\nvariables: []\nnamespace: example.org\n
Reason: not all the conditions match

conditions: []\nmax_value: 1\nseconds: 60\nvariables: [\"MY_VAR\"]\nnamespace: example.org\n
Reason: the variable key (MY_VAR) does not exist in the descriptor

conditions: [\"KEY_B == 'VALUE_B'\"]\nmax_value: 1\nseconds: 60\nvariables: [\"MY_VAR\"]\nnamespace: example.org\n
Reason: both variables and conditions must match. In this particular case, only the conditions match

"},{"location":"limitador/doc/topologies/","title":"Deployment topologies","text":""},{"location":"limitador/doc/topologies/#in-memory","title":"In-memory","text":""},{"location":"limitador/doc/topologies/#redis","title":"Redis","text":""},{"location":"limitador/doc/topologies/#redis-active-active-storage","title":"Redis active-active storage","text":"

The RedisLabs version of Redis supports active-active replication. Limitador is compatible with that deployment mode, but there are a few things to take into account regarding limit accuracy.

"},{"location":"limitador/doc/topologies/#considerations","title":"Considerations","text":"

With an active-active deployment, the data needs to be replicated between instances. An update in one instance takes a short time to be reflected in the others. That time lag depends mainly on the network speed between the Redis instances, and it affects the accuracy of the rate limiting performed by Limitador, because requests can go over the limits while the counter updates are being replicated.

The impact of that greatly depends on the use case. With limits of a few seconds and a low number of hits, we could easily go over the limits. On the other hand, with limits defined over a high number of hits and a long period, the effect will be basically negligible. For example, if we define a limit of one hour, and we know that the data takes around one second to be replicated, the accuracy loss is going to be negligible.

"},{"location":"limitador/doc/topologies/#set-up","title":"Set up","text":"

In order to try active-active replication, you can follow this tutorial from RedisLabs.

"},{"location":"limitador/doc/topologies/#disk","title":"Disk","text":"

Disk storage using RocksDB. Counters are held on disk (persistent).

"},{"location":"limitador/doc/migrations/conditions/","title":"New condition syntax","text":"

With limitador-server version 1.0.0 (and the limitador crate version 0.3.0), the syntax for conditions within limit definitions has changed.

"},{"location":"limitador/doc/migrations/conditions/#changes","title":"Changes","text":""},{"location":"limitador/doc/migrations/conditions/#the-new-syntax","title":"The new syntax","text":"

The new syntax formalizes which part of an expression is the identifier and which is the value to test against. Identifiers are simple string values, while string literals are demarcated by single quotes (') or double quotes (\"), so that foo == \" bar\" now makes it explicit that the value is prefixed with a space character.
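A short sketch of conditions written in the new syntax (the identifier names are made up for illustration):

role == 'admin'\npath != \"/metrics\"\nuser == ' bob' # the leading space in the value is now explicit\n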

A few remarks:
  • Only string values are supported, as that's what they really are
  • There is no escape character sequence supported in string literals
  • A new operator has been added, !=

"},{"location":"limitador/doc/migrations/conditions/#the-issue-with-the-deprecated-syntax","title":"The issue with the deprecated syntax","text":"

The previous syntax wouldn't differentiate between values and the identifier, so that foo == bar was valid: here foo was the identifier of the variable, while bar was the value to evaluate it against. Whitespace before and after the operator == was equally significant, so that any trailing whitespace after the identifier, or whitespace prefixing the value, would have been evaluated as part of them.

"},{"location":"limitador/doc/migrations/conditions/#server-binary-users","title":"Server binary users","text":"

The server still allows for the deprecated syntax, but warns about its usage. You can easily migrate your limits file, using the following command:

limitador-server --validate old_limits.yaml > updated_limits.yaml\n

This should output Deprecated syntax for conditions corrected! to stderr, while stdout will contain the limits using the new syntax. It is recommended you manually verify the resulting LIMITS_FILE.
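If in doubt, the migrated file can itself be passed through --validate to confirm it now parses cleanly:

limitador-server --validate updated_limits.yaml\n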

"},{"location":"limitador/doc/migrations/conditions/#crate-users","title":"Crate users","text":"

A feature lenient_conditions has been added, which lets you use the syntax from previous versions of the crate. The function limitador::limit::check_deprecated_syntax_usages_and_reset() lets you verify whether the deprecated syntax has been used, as limit::Limits are created with their condition strings using the deprecated syntax.

"},{"location":"limitador/doc/server/configuration/","title":"Limitador configuration","text":""},{"location":"limitador/doc/server/configuration/#command-line-configuration","title":"Command line configuration","text":"

The preferred way of starting and configuring the Limitador server is using the command line:

Rate Limiting Server\n\nUsage: limitador-server [OPTIONS] <LIMITS_FILE> [STORAGE]\n\nSTORAGES:\n  memory        Counters are held in Limitador (ephemeral)\n  disk          Counters are held on disk (persistent)\n  redis         Uses Redis to store counters\n  redis_cached  Uses Redis to store counters, with an in-memory cache\n\nArguments:\n  <LIMITS_FILE>  The limit file to use\n\nOptions:\n  -b, --rls-ip <ip>\n          The IP to listen on for RLS [default: 0.0.0.0]\n  -p, --rls-port <port>\n          The port to listen on for RLS [default: 8081]\n  -B, --http-ip <http_ip>\n          The IP to listen on for HTTP [default: 0.0.0.0]\n  -P, --http-port <http_port>\n          The port to listen on for HTTP [default: 8080]\n  -l, --limit-name-in-labels\n          Include the Limit Name in prometheus label\n  -v...\n          Sets the level of verbosity\n      --validate\n          Validates the LIMITS_FILE and exits\n  -H, --rate-limit-headers <rate_limit_headers>\n          Enables rate limit response headers [default: NONE] [possible values: NONE, DRAFT_VERSION_03]\n  -h, --help\n          Print help\n  -V, --version\n          Print version\n

The values set on the command line take precedence over any independently set environment variables.

"},{"location":"limitador/doc/server/configuration/#limit-definitions","title":"Limit definitions","text":"

The LIMITS_FILE provided is the source of truth for all the limits that will be enforced. The server monitors the file location for changes and hot-reloads it. If the changes are invalid, they will be ignored on hot reload; at startup, an invalid file will cause the server to fail to start.

"},{"location":"limitador/doc/server/configuration/#the-limits_files-format","title":"The LIMITS_FILE's format","text":"

When starting the server, you point it to a LIMITS_FILE, which is expected to be a yaml file with an array of limit definitions, with the following format:

---\n\"$schema\": http://json-schema.org/draft-04/schema#\ntype: object\nproperties:\nname:\ntype: string\nnamespace:\ntype: string\nseconds:\ntype: integer\nmax_value:\ntype: integer\nconditions:\ntype: array\nitems:\n- type: string\nvariables:\ntype: array\nitems:\n- type: string\nrequired:\n- namespace\n- seconds\n- max_value\n- conditions\n- variables\n

Here is an example of such a limit definition:

namespace: example.org\nmax_value: 10\nseconds: 60\nconditions:\n- \"req.method == 'GET'\"\nvariables:\n- user_id\n
  • namespace namespaces the limit, will generally be the domain, see here
  • seconds is the duration for which the limit applies, in seconds: e.g. 60 is a span of one minute
  • max_value is the actual limit, e.g. 100 would limit to 100 requests
  • name lets the user optionally name the limit
  • variables is an array of variables which, once resolved, will be used to qualify counters for the limit, e.g. api_key to limit per API key
  • conditions is an array of conditions, which once evaluated will decide whether to apply the limit or not
"},{"location":"limitador/doc/server/configuration/#condition-syntax","title":"condition syntax","text":"

Each condition is an expression producing a boolean value (true or false). All conditions must evaluate to true for the limit to be applied on a request.

Expressions follow the syntax $IDENTIFIER $OP $STRING_LITERAL, where:

  • $IDENTIFIER will be used to resolve the value at evaluation time, e.g. role
  • $OP is an operator, either == or !=
  • $STRING_LITERAL is a literal string value, \" or ' demarcated, e.g. \"admin\"

So that role != \"admin\" would apply the limit to requests from all users but admins.

"},{"location":"limitador/doc/server/configuration/#counter-storages","title":"Counter storages","text":"

Limitador will load all the limit definitions from the LIMITS_FILE and keep them in memory. To enforce these limits, Limitador needs to track requests in the form of counters. There will be at least one counter per limit, but that number grows when variables are used to qualify counters by arbitrary values.

"},{"location":"limitador/doc/server/configuration/#memory","title":"memory","text":"

As the name implies, Limitador will keep all counters in memory. This yields the best results in terms of both latency and accuracy. By default, only up to 1000 \"concurrent\" counters will be kept around, evicting the oldest entries. \"Concurrent\" in this context means counters that need to exist at the \"same time\", based on the period of the limit, as \"expired\" counters are discarded.

This storage is ephemeral: if the process is restarted, all the counters are lost, effectively \"resetting\" all the limits as if no traffic had been rate limited. That can be fine for short-lived limits, less so for longer-lived ones.
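A minimal invocation using this storage might look like this (limits.yaml is an assumed file name):

limitador-server limits.yaml memory\n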

"},{"location":"limitador/doc/server/configuration/#redis","title":"redis","text":"

When you want persistence of your counters, such as for disaster recovery or across restarts, using redis will store the counters in a Redis instance at the provided URL. Increments to individual counters are made within Redis itself, which keeps them accurate, though races can occur when multiple Limitador servers are used against a single Redis with \"stacked\" limits (i.e. limits over different periods). Latency is also impacted, as maintaining the counters requires one additional hop to talk to Redis.

Uses Redis to store counters\n\nUsage: limitador-server <LIMITS_FILE> redis <URL>\n\nArguments:\n  <URL>  Redis URL to use\n\nOptions:\n  -h, --help  Print help\n
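For example, a concrete invocation against a local Redis (the file name and URL are assumptions):

limitador-server limits.yaml redis redis://127.0.0.1:6379\n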
"},{"location":"limitador/doc/server/configuration/#redis_cached","title":"redis_cached","text":"

In order to avoid some of the communication overhead to Redis, redis_cached adds an in-memory caching layer within the Limitador servers. This lowers the latency, but sacrifices some accuracy, as it will not only cache counters but also coalesce counter updates to Redis over time. See this configuration option for more information.

Uses Redis to store counters, with an in-memory cache\n\nUsage: limitador-server <LIMITS_FILE> redis_cached [OPTIONS] <URL>\n\nArguments:\n  <URL>  Redis URL to use\n\nOptions:\n      --ttl <TTL>             TTL for cached counters in milliseconds [default: 5000]\n      --ratio <ratio>         Ratio to apply to the TTL from Redis on cached counters [default: 10000]\n      --flush-period <flush>  Flushing period for counters in milliseconds [default: 1000]\n      --max-cached <max>      Maximum amount of counters cached [default: 10000]\n  -h, --help                  Print help\n
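For example, the following starts the server with the defaults listed above made explicit (the file name and Redis URL are assumptions):

limitador-server limits.yaml redis_cached --ttl 5000 --flush-period 1000 --max-cached 10000 redis://127.0.0.1:6379\n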
"},{"location":"limitador/doc/server/configuration/#disk","title":"disk","text":"

Disk storage using RocksDB. Counters are held on disk (persistent).

Counters are held on disk (persistent)\n\nUsage: limitador-server <LIMITS_FILE> disk [OPTIONS] <PATH>\n\nArguments:\n  <PATH>  Path to counter DB\n\nOptions:\n      --optimize <OPTIMIZE>  Optimizes either to save disk space or higher throughput [default: throughput] [possible values: throughput, disk]\n  -h, --help                 Print help\n
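For example (the database path is an assumption):

limitador-server limits.yaml disk --optimize throughput /var/lib/limitador/counters\n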
"},{"location":"limitador/doc/server/configuration/#infinispan-optional-storage-experimental","title":"infinispan optional storage - experimental","text":"

The default binary does not support Infinispan as a storage backend for counters. If you want to give it a try, you will need to build your own binary of the server using:

cargo build --release --features=infinispan\n

This will add infinispan to the supported STORAGES.

USAGE:\n    limitador-server <LIMITS_FILE> infinispan [OPTIONS] <URL>\n\nARGS:\n    <URL>    Infinispan URL to use\n\nOPTIONS:\n    -n, --cache-name <cache name>      Name of the cache to store counters in [default: limitador]\n    -c, --consistency <consistency>    The consistency to use to read from the cache [default:\n                                       Strong] [possible values: Strong, Weak]\n    -h, --help                         Print help information\n

For an in-depth coverage of the different topologies supported and how they affect the behavior, see the topologies' document.

"},{"location":"limitador/doc/server/configuration/#configuration-using-environment-variables","title":"Configuration using environment variables","text":"

The Limitador server has some options that can be configured with environment variables. These will override the default values the server uses. Any argument used when starting the server will prevail over the environment variables.
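For example, the following sets a few of the variables documented below; the positional limits.yaml argument is an assumed file name, and any explicit command-line flag would still win over these values:

ENVOY_RLS_PORT=9081 HTTP_API_PORT=9080 RUST_LOG=debug limitador-server limits.yaml\n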

"},{"location":"limitador/doc/server/configuration/#envoy_rls_host","title":"ENVOY_RLS_HOST","text":"
  • Host where the Envoy RLS server listens.
  • Optional. Defaults to \"0.0.0.0\".
  • Format: string.
"},{"location":"limitador/doc/server/configuration/#envoy_rls_port","title":"ENVOY_RLS_PORT","text":"
  • Port where the Envoy RLS server listens.
  • Optional. Defaults to 8081.
  • Format: integer.
"},{"location":"limitador/doc/server/configuration/#http_api_host","title":"HTTP_API_HOST","text":"
  • Host where the HTTP server listens.
  • Optional. Defaults to \"0.0.0.0\".
  • Format: string.
"},{"location":"limitador/doc/server/configuration/#http_api_port","title":"HTTP_API_PORT","text":"
  • Port where the HTTP API listens.
  • Optional. Defaults to 8080.
  • Format: integer.
"},{"location":"limitador/doc/server/configuration/#limits_file","title":"LIMITS_FILE","text":"
  • YAML file that contains the limits to create when Limitador boots. If the limits specified already have counters associated, Limitador will not delete them. Changes to the file will be picked up by the running server.
  • Required. No default.
  • Format: string, file path.
"},{"location":"limitador/doc/server/configuration/#limit_name_in_prometheus_labels","title":"LIMIT_NAME_IN_PROMETHEUS_LABELS","text":"
  • Enables using limit names as labels in Prometheus metrics. This is disabled by default because for a few limits it should be fine, but it could become a problem when defining lots of limits. See the caution note in the Prometheus docs
  • Optional. Disabled by default.
  • Format: bool, set to \"1\" to enable.
"},{"location":"limitador/doc/server/configuration/#redis_local_cache_enabled","title":"REDIS_LOCAL_CACHE_ENABLED","text":"
  • Enables a storage implementation that uses Redis, but also caches some data in memory. The idea is to improve throughput and latencies by caching the counters in memory to reduce the number of accesses to Redis. To achieve that, this mode sacrifices some rate-limit accuracy. This mode does two things:
    • Batches counter updates. Instead of updating the counters on every request, it updates them in memory and commits them to Redis in batches. The flushing interval can be configured with the REDIS_LOCAL_CACHE_FLUSHING_PERIOD_MS env. The trade-off is that when running several instances of Limitador, other instances will not become aware of the counter updates until they're committed to Redis.
    • Caches counters. Instead of fetching the value of a counter every time it's needed, the value is cached for a configurable period. The trade-off is that when running several instances of Limitador, an instance will not become aware of the counter updates other instances make while the value is cached. When a counter is already at 0 (limit exceeded), it's cached until it expires in Redis. In this case, no matter what other instances do, we know that the quota will not be reestablished until the key expires in Redis, so rate-limit accuracy is not affected. When a counter still has some quota remaining the situation is different, and that's why we can tune how long it will be cached. The formula is as follows: MIN(ttl_in_redis/REDIS_LOCAL_CACHE_TTL_RATIO_CACHED_COUNTERS, REDIS_LOCAL_CACHE_MAX_TTL_CACHED_COUNTERS_MS). For example, let's imagine that the current TTL (time remaining until the limit resets) in Redis for a counter is 10 seconds, we set the ratio to 2, and the max time to 30s. In this case, the counter will be cached for 5s (min(10/2, 30)). During those 5s, Limitador will not fetch the value of that counter from Redis, so it will answer faster, but it will also miss the updates made by other instances, so it can go over the limits in that 5s interval.
  • Optional. Disabled by default.
  • Format: set to \"1\" to enable.
  • Note: \"REDIS_URL\" needs to be set.
"},{"location":"limitador/doc/server/configuration/#redis_local_cache_flushing_period_ms","title":"REDIS_LOCAL_CACHE_FLUSHING_PERIOD_MS","text":"
  • Used to configure the local cache when using Redis. See REDIS_LOCAL_CACHE_ENABLED. This env only applies when \"REDIS_LOCAL_CACHE_ENABLED\" == 1.
  • Optional. Defaults to 1000.
  • Format: integer. Duration in milliseconds.
"},{"location":"limitador/doc/server/configuration/#redis_local_cache_max_ttl_cached_counters_ms","title":"REDIS_LOCAL_CACHE_MAX_TTL_CACHED_COUNTERS_MS","text":"
  • Used to configure the local cache when using Redis. See REDIS_LOCAL_CACHE_ENABLED. This env only applies when \"REDIS_LOCAL_CACHE_ENABLED\" == 1.
  • Optional. Defaults to 5000.
  • Format: integer. Duration in milliseconds.
"},{"location":"limitador/doc/server/configuration/#redis_local_cache_ttl_ratio_cached_counters","title":"REDIS_LOCAL_CACHE_TTL_RATIO_CACHED_COUNTERS","text":"
  • Used to configure the local cache when using Redis. See REDIS_LOCAL_CACHE_ENABLED. This env only applies when \"REDIS_LOCAL_CACHE_ENABLED\" == 1.
  • Optional. Defaults to 10.
  • Format: integer.
"},{"location":"limitador/doc/server/configuration/#redis_url","title":"REDIS_URL","text":"
  • Redis URL. Required only when you want to use Redis to store the limits.
  • Optional. By default, Limitador stores the limits in memory and does not require Redis.
  • Format: string, URL in the format of \"redis://127.0.0.1:6379\".
"},{"location":"limitador/doc/server/configuration/#rust_log","title":"RUST_LOG","text":"
  • Defines the log level.
  • Optional. Defaults to \"error\".
  • Format: enum: \"debug\", \"error\", \"info\", \"warn\", or \"trace\".
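
For example (illustrative):

RUST_LOG=debug limitador-server ./examples/limits.yaml\n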
"},{"location":"limitador/doc/server/configuration/#when-built-with-the-infinispan-feature-experimental","title":"When built with the infinispan feature - experimental","text":""},{"location":"limitador/doc/server/configuration/#infinispan_cache_name","title":"INFINISPAN_CACHE_NAME","text":"
  • The name of the Infinispan cache that Limitador will use to store limits and counters. This variable applies only when INFINISPAN_URL is set.
  • Optional. By default, Limitador will use a cache called \"limitador\".
  • Format: string.
"},{"location":"limitador/doc/server/configuration/#infinispan_counters_consistency","title":"INFINISPAN_COUNTERS_CONSISTENCY","text":"
  • Defines the consistency mode for the Infinispan counters created by Limitador. This variable applies only when INFINISPAN_URL is set.
  • Optional. Defaults to \"strong\".
  • Format: enum: \"Strong\" or \"Weak\".
"},{"location":"limitador/doc/server/configuration/#infinispan_url","title":"INFINISPAN_URL","text":"
  • Infinispan URL. Required only when you want to use Infinispan to store the limits.
  • Optional. By default, Limitador stores the limits in memory and does not require Infinispan.
  • Format: URL, in the format of http://username:password@127.0.0.1:11222.
"},{"location":"limitador/doc/server/configuration/#rate_limit_headers","title":"RATE_LIMIT_HEADERS","text":"
  • Enables rate limit response headers. Only supported by the RLS server.
  • Optional. Defaults to \"NONE\".
  • Must be one of:
  • \"NONE\" - Does not add any additional headers to the http response.
  • \"DRAFT_VERSION_03\". Adds response headers per https://datatracker.ietf.org/doc/id/draft-polli-ratelimit-headers-03.html
"},{"location":"limitador/limitador/","title":"Limitador (library)","text":"

An embeddable rate-limiter library supporting in-memory, Redis and Infinispan data stores. Limitador can also be compiled to WebAssembly.

For the complete documentation of the crate's API, please refer to docs.rs

"},{"location":"limitador/limitador/#features","title":"Features","text":"
  • redis_storage: support for using Redis as the data storage backend.
  • infinispan_storage: support for using Infinispan as the data storage backend.
  • lenient_conditions: support for the deprecated syntax of Conditions
  • default: redis_storage.
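
As a sketch, a specific feature set could be selected when adding the dependency, for example swapping the default Redis storage for Infinispan support:

cargo add limitador --no-default-features --features infinispan_storage\n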
"},{"location":"limitador/limitador/#webassembly-support","title":"WebAssembly support","text":"

To use Limitador in a project that compiles to WASM, there are some features that need to be disabled. Add this to your Cargo.toml instead:

[dependencies]\nlimitador = { version = \"0.3.0\", default-features = false }\n
"},{"location":"limitador/limitador-server/","title":"Limitador (server)","text":"

By default, Limitador starts the HTTP server in localhost:8080 and the grpc service that implements the Envoy Rate Limit protocol in localhost:8081. That can be configured with these ENVs: ENVOY_RLS_HOST, ENVOY_RLS_PORT, HTTP_API_HOST, and HTTP_API_PORT.
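
For example, to override the default ports via environment variables (a sketch; the values are illustrative):

ENVOY_RLS_PORT=9081 HTTP_API_PORT=9080 limitador-server ./examples/limits.yaml\n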

Or using the command line arguments:

Rate Limiting Server\n\nUsage: limitador-server [OPTIONS] <LIMITS_FILE> [STORAGE]\n\nSTORAGES:\n  memory        Counters are held in Limitador (ephemeral)\n  disk          Counters are held on disk (persistent)\n  redis         Uses Redis to store counters\n  redis_cached  Uses Redis to store counters, with an in-memory cache\n\nArguments:\n  <LIMITS_FILE>  The limit file to use\n\nOptions:\n  -b, --rls-ip <ip>\n          The IP to listen on for RLS [default: 0.0.0.0]\n  -p, --rls-port <port>\n          The port to listen on for RLS [default: 8081]\n  -B, --http-ip <http_ip>\n          The IP to listen on for HTTP [default: 0.0.0.0]\n  -P, --http-port <http_port>\n          The port to listen on for HTTP [default: 8080]\n  -l, --limit-name-in-labels\n          Include the Limit Name in prometheus label\n  -v...\n          Sets the level of verbosity\n      --validate\n          Validates the LIMITS_FILE and exits\n  -H, --rate-limit-headers <rate_limit_headers>\n          Enables rate limit response headers [default: NONE] [possible values: NONE, DRAFT_VERSION_03]\n  -h, --help\n          Print help\n  -V, --version\n          Print version\n

Environment variables, when used, override the defaults, while environment variables are themselves overridden by any command line arguments provided. See the individual STORAGES help for more options relative to each of the storages.

The OpenAPI spec of the HTTP service is here.

Limitador has to be started with a YAML file that has some limits defined. There's an example file that allows 10 requests per minute and per user_id when the HTTP method is \"GET\" and 5 when it is a \"POST\". You can run it with Docker (replace latest with the version you want):
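
A sketch of what such a file might look like, based on the limit fields described in these docs (namespace, max_value, seconds, conditions, variables); the actual examples/limits.yaml in the repository may differ slightly:

# a sketch; field values are illustrative\ncat <<EOF > my_limits.yaml\n---\n- namespace: test_namespace\n  max_value: 10\n  seconds: 60\n  conditions:\n    - \"req.method == 'GET'\"\n  variables:\n    - user_id\n- namespace: test_namespace\n  max_value: 5\n  seconds: 60\n  conditions:\n    - \"req.method == 'POST'\"\n  variables:\n    - user_id\nEOF\n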

docker run --rm --net=host -it -v $(pwd)/examples/limits.yaml:/home/limitador/my_limits.yaml:ro quay.io/kuadrant/limitador:latest limitador-server /home/limitador/my_limits.yaml\n

You can also use the YAML file when running locally:

cargo run --release --bin limitador-server ./examples/limits.yaml\n

If you want to use Limitador with Envoy, there's a minimal Envoy config for testing purposes here. The config forwards the \"userid\" header and the request method to Limitador. It assumes that there's an upstream API deployed on port 1323. You can use echo, for example.

Limitador has several options that can be configured via ENV. This doc specifies them.

"},{"location":"limitador/limitador-server/#limits-storage","title":"Limits storage","text":"

Limitador can store its limits and counters in memory, on disk, or in Redis. In-memory is faster, but the limits are applied per instance. When using Redis, multiple instances of Limitador can share the same limits, but it's slower.
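
For example, the storage backend is chosen via the positional STORAGE argument shown in the help output above (a sketch; the options for each storage can be inspected with its individual --help):

# in-memory (ephemeral)\nlimitador-server ./examples/limits.yaml memory\n# Redis-backed; assumes a Redis instance on localhost\nlimitador-server ./examples/limits.yaml redis redis://127.0.0.1:6379\n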

"},{"location":"limitador/limitador-server/docs/sandbox/","title":"Sandbox","text":""},{"location":"limitador/limitador-server/docs/sandbox/#testing-environment","title":"Testing Environment","text":""},{"location":"limitador/limitador-server/docs/sandbox/#requirements","title":"Requirements","text":"
  • docker
  • docker-compose
"},{"location":"limitador/limitador-server/docs/sandbox/#setup","title":"Setup","text":"

Clone the project

git clone https://github.com/Kuadrant/limitador.git\ncd limitador/limitador-server/sandbox\n

Check out make help for all the targets.

"},{"location":"limitador/limitador-server/docs/sandbox/#deployment-options","title":"Deployment options","text":"Limitador's configuration Command Info In-memory configuration make deploy-in-memory Counters are held in Limitador (ephemeral) Redis make deploy-redis Uses Redis to store counters Redis Cached make deploy-redis-cached Uses Redis to store counters, with an in-memory cache Infinispan make deploy-infinispan Uses Infinispan to store counters"},{"location":"limitador/limitador-server/docs/sandbox/#limitadors-admin-http-endpoint","title":"Limitador's admin HTTP endpoint","text":"
curl -i http://127.0.0.1:18080/limits/test_namespace\n
"},{"location":"limitador/limitador-server/docs/sandbox/#downstream-traffic","title":"Downstream traffic","text":"

Upstream service implemented by httpbin.org

curl -i -H \"Host: example.com\" http://127.0.0.1:18000/get\n
"},{"location":"limitador/limitador-server/docs/sandbox/#limitador-image","title":"Limitador Image","text":"

By default, the sandbox will run Limitador's limitador-testing:latest image.

Building limitador-testing:latest image

You can easily build Limitador's image from the current workspace code base with:

make build\n

The image will be tagged with limitador-testing:latest

Using a custom Limitador image

The LIMITADOR_IMAGE environment variable overrides the default image. For example:

make deploy-in-memory LIMITADOR_IMAGE=quay.io/kuadrant/limitador:latest\n
"},{"location":"limitador/limitador-server/docs/sandbox/#tear-down","title":"Tear Down","text":"
make tear-down\n
"},{"location":"limitador-operator/","title":"Limitador Operator","text":""},{"location":"limitador-operator/#overview","title":"Overview","text":"

The Operator to manage Limitador deployments.

"},{"location":"limitador-operator/#customresourcedefinitions","title":"CustomResourceDefinitions","text":"
  • Limitador, which defines a desired Limitador deployment.
"},{"location":"limitador-operator/#limitador-crd","title":"Limitador CRD","text":"

Limitador v1alpha1 API reference

Example:

---\napiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  listener:\n    http:\n      port: 8080\n    grpc:\n      port: 8081\n  limits:\n    - conditions: [\"get_toy == 'yes'\"]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n
"},{"location":"limitador-operator/#features","title":"Features","text":"
  • Storage Options
  • Rate Limit Headers
  • Logging
"},{"location":"limitador-operator/#contributing","title":"Contributing","text":"

The Development guide describes how to build the operator and how to test your changes before submitting a patch or opening a PR.

Join us on kuadrant.slack.com for live discussions about the roadmap and more.

"},{"location":"limitador-operator/#licensing","title":"Licensing","text":"

This software is licensed under the Apache 2.0 license.

See the LICENSE and NOTICE files that should have been provided along with this software for details.

"},{"location":"limitador-operator/doc/development/","title":"Development Guide","text":"
  • Technology stack required for development
  • Build
  • Run locally
  • Deploy the operator in a deployment object
  • Deploy the operator using OLM
  • Build custom OLM catalog
    • Build operator bundle image
    • Build custom catalog
  • Cleaning up
  • Run tests
    • Lint tests
  • (Un)Install Limitador CRD
"},{"location":"limitador-operator/doc/development/#technology-stack-required-for-development","title":"Technology stack required for development","text":"
  • operator-sdk version v1.28.1
  • kind version v0.20.0
  • git
  • go version 1.20+
  • kubernetes version v1.26+
  • kubectl version v1.26+
"},{"location":"limitador-operator/doc/development/#build","title":"Build","text":"
make\n
"},{"location":"limitador-operator/doc/development/#run-locally","title":"Run locally","text":"

You need an active session open to a kubernetes cluster.

Optionally, run kind with local-env-setup.

make local-env-setup\n

Then, run the operator locally

make run\n
"},{"location":"limitador-operator/doc/development/#deploy-the-operator-in-a-deployment-object","title":"Deploy the operator in a deployment object","text":"
make local-setup\n
"},{"location":"limitador-operator/doc/development/#deploy-the-operator-using-olm","title":"Deploy the operator using OLM","text":"

You can deploy the operator using OLM by running just a few commands; there is no need to build any image. The Kuadrant engineering team provides latest and released version tagged images, available in the quay.io/kuadrant image repository.

Create kind cluster

make kind-create-cluster\n

Deploy OLM system

make install-olm\n

Deploy the operator using OLM. The make deploy-catalog target accepts the following variables:

  • CATALOG_IMG: Catalog image URL (default: quay.io/kuadrant/limitador-operator-catalog:latest)
make deploy-catalog [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]\n
"},{"location":"limitador-operator/doc/development/#build-custom-olm-catalog","title":"Build custom OLM catalog","text":"

If you want to deploy (using OLM) a custom limitador operator, you need to build your own catalog.

"},{"location":"limitador-operator/doc/development/#build-operator-bundle-image","title":"Build operator bundle image","text":"

The make bundle target accepts the following variables:

  • IMG: Operator image URL (default: quay.io/kuadrant/limitador-operator:latest)
  • VERSION: Bundle version (default: 0.0.0)
  • RELATED_IMAGE_LIMITADOR: Limitador image URL (default: quay.io/kuadrant/limitador:latest). The LIMITADOR_VERSION variable can be used to build this URL by providing the tag.
  • Build the bundle manifests
make bundle [IMG=quay.io/kuadrant/limitador-operator:latest] \\\n[VERSION=0.0.0] \\\n[RELATED_IMAGE_LIMITADOR=quay.io/kuadrant/limitador:latest]\n
  • Build the bundle image from the manifests
  • BUNDLE_IMG: Operator bundle image URL (default: quay.io/kuadrant/limitador-operator-bundle:latest)
make bundle-build [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]\n
  • Push the bundle image to a registry
  • BUNDLE_IMG: Operator bundle image URL (default: quay.io/kuadrant/limitador-operator-bundle:latest)
make bundle-push [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]\n
"},{"location":"limitador-operator/doc/development/#build-custom-catalog","title":"Build custom catalog","text":"

The catalog format will be File-based Catalog.

Make sure all the required bundles are pushed to the registry. It is required by the opm tool.

The make catalog target accepts the following variables:

  • BUNDLE_IMG: Operator bundle image URL (default: quay.io/kuadrant/limitador-operator-bundle:latest)
make catalog [BUNDLE_IMG=quay.io/kuadrant/limitador-operator-bundle:latest]\n
  • Build the catalog image from the manifests
  • CATALOG_IMG: Operator catalog image URL (default: quay.io/kuadrant/limitador-operator-catalog:latest)
make catalog-build [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]\n
  • Push the catalog image to a registry
make catalog-push [CATALOG_IMG=quay.io/kuadrant/limitador-operator-catalog:latest]\n

You can try out your custom catalog image following the steps of the Deploy the operator using OLM section.

"},{"location":"limitador-operator/doc/development/#cleaning-up","title":"Cleaning up","text":"
make local-cleanup\n
"},{"location":"limitador-operator/doc/development/#run-tests","title":"Run tests","text":""},{"location":"limitador-operator/doc/development/#unittests","title":"Unittests","text":"
make test-unit\n

Optionally, add the TEST_NAME makefile variable to run a specific test

make test-unit TEST_NAME=TestConstants\n

or even subtest

make test-unit TEST_NAME=TestLimitIndexEquals/empty_indexes_are_equal\n
"},{"location":"limitador-operator/doc/development/#integration-tests","title":"Integration tests","text":"

Run integration tests

make test-integration\n
"},{"location":"limitador-operator/doc/development/#all-tests","title":"All tests","text":"

Run all tests

make test\n
"},{"location":"limitador-operator/doc/development/#lint-tests","title":"Lint tests","text":"
make run-lint\n
"},{"location":"limitador-operator/doc/development/#uninstall-limitador-crd","title":"(Un)Install Limitador CRD","text":"

You need an active session open to a kubernetes cluster.

Remove CRDs

make uninstall\n
"},{"location":"limitador-operator/doc/logging/","title":"Logging","text":"

The limitador operator outputs 3 levels of log messages (from lowest to highest level):
  1. debug
  2. info (default)
  3. error

info logging is restricted to high-level information. Actions like creating, deleting or updating kubernetes resources will be logged with reduced details about the corresponding objects, and without any further detailed logs of the steps in between, except for errors.

Only debug logging will include processing details.

To configure the desired log level, set the environment variable LOG_LEVEL to one of the supported values listed above. Default log level is info.

Apart from log level, the controller can output messages to the logs in 2 different formats:
  • production (default): each line is a parseable JSON object with properties {\"level\":string, \"ts\":int, \"msg\":string, \"logger\":string, extra values...}
  • development: more human-readable outputs, extra stack traces and logging info, plus extra values output as JSON, in the format: <timestamp-iso-8601>\t<log-level>\t<logger>\t<message>\t{extra-values-as-json}

To configure the desired log mode, set the environment variable LOG_MODE to one of the supported values listed above. Default log mode is production.
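
For example, assuming the operator runs as a deployment named limitador-operator-controller-manager (a hypothetical name; check your install), both settings could be changed with:

# the deployment name below is illustrative and may differ in your cluster\nkubectl set env deployment/limitador-operator-controller-manager LOG_LEVEL=debug LOG_MODE=development\n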

"},{"location":"limitador-operator/doc/rate-limit-headers/","title":"Rate Limit Headers","text":"

This feature enables RateLimit Header Fields for HTTP, as specified in the Rate Limit Headers Draft.

apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  rateLimitHeaders: DRAFT_VERSION_03\n

Current valid values are:
  • DRAFT_VERSION_03 (ref: https://datatracker.ietf.org/doc/id/draft-polli-ratelimit-headers-03.html)
  • NONE

By default, when spec.rateLimitHeaders is null, the --rate-limit-headers command line arg is not included in Limitador's deployment.

"},{"location":"limitador-operator/doc/resource-requirements/","title":"Resource Requirements","text":"

The default resource requirement for Limitador deployments is specified in Limitador v1alpha1 API reference and will be applied if the resource requirement is not set in the spec.

apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  listener:\n    http:\n      port: 8080\n    grpc:\n      port: 8081\n  limits:\n    - conditions: [\"get_toy == 'yes'\"]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n
  • Field: ResourceRequirements
  • json/yaml field: resourceRequirements
  • Type: *corev1.ResourceRequirements
  • Required: No
  • Default value: {\"limits\": {\"cpu\": \"500m\",\"memory\": \"64Mi\"},\"requests\": {\"cpu\": \"250m\",\"memory\": \"32Mi\"}}
  • Description: Limitador deployment resource requirements
"},{"location":"limitador-operator/doc/resource-requirements/#example-with-resource-limits","title":"Example with resource limits","text":"

The resource requests and limits for the deployment can be set like the following:

apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  listener:\n    http:\n      port: 8080\n    grpc:\n      port: 8081\n  limits:\n    - conditions: [\"get_toy == 'yes'\"]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n  resourceRequirements:\n    limits:\n      cpu: 200m\n      memory: 400Mi\n    requests:\n      cpu: 101m\n      memory: 201Mi\n

To specify the deployment without resource requests or limits, set an empty struct {} to the field:

apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  listener:\n    http:\n      port: 8080\n    grpc:\n      port: 8081\n  limits:\n    - conditions: [ \"get_toy == 'yes'\" ]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n  resourceRequirements: {}\n

"},{"location":"limitador-operator/doc/storage/","title":"Storage","text":"

The default storage for Limitador's limits counters is in memory, which requires no configuration. In order to configure a Redis data structure store, there are currently 2 alternatives:

  • Redis
  • Redis Cached

For any of those, one should store the URL of the Redis service inside a K8s opaque Secret.

apiVersion: v1\nkind: Secret\nmetadata:\n  name: redisconfig\nstringData:\n  URL: redis://127.0.0.1/a # Redis URL of its running instance\ntype: Opaque\n
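
Equivalently, the same Secret could be created imperatively (kubectl create secret generic produces an Opaque secret by default; the URL is illustrative):

kubectl create secret generic redisconfig --from-literal=URL=redis://127.0.0.1/a\n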

It's also required to set up Spec.Storage

"},{"location":"limitador-operator/doc/storage/#redis","title":"Redis","text":"
apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  storage:\n    redis:\n      configSecretRef: # The secret reference storing the URL for Redis\n        name: redisconfig\n        namespace: default # optional\n  limits:\n    - conditions: [\"get_toy == 'yes'\"]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n
"},{"location":"limitador-operator/doc/storage/#redis-cached","title":"Redis Cached","text":""},{"location":"limitador-operator/doc/storage/#options","title":"Options","text":"Option Description ttl TTL for cached counters in milliseconds [default: 5000] ratio Ratio to apply to the TTL from Redis on cached counters [default: flush-period Flushing period for counters in milliseconds [default: 1000] max-cached Maximum amount of counters cached [default: 10000]
apiVersion: limitador.kuadrant.io/v1alpha1\nkind: Limitador\nmetadata:\n  name: limitador-sample\nspec:\n  storage:\n    redis-cached:\n      configSecretRef: # The secret reference storing the URL for Redis\n        name: redisconfig\n        namespace: default # optional\n      options: # Every option is optional\n        ttl: 1000\n  limits:\n    - conditions: [\"get_toy == 'yes'\"]\n      max_value: 2\n      namespace: toystore-app\n      seconds: 30\n      variables: []\n
"},{"location":"multicluster-gateway-controller/","title":"multicluster-gateway-controller","text":""},{"location":"multicluster-gateway-controller/#description","title":"Description:","text":"

The multi-cluster gateway controller leverages the Gateway API standard and Open Cluster Management to provide multi-cluster connectivity and global load balancing.

Key Features:

  • Central Gateway Definition that can then be distributed to multiple clusters
  • Automatic TLS and cert distribution for HTTPS based listeners
  • DNSPolicy to decide how North-South based traffic should be balanced and reach the gateways
  • Health checks to detect and take remedial action against unhealthy endpoints
  • Cloud DNS provider integrations (AWS route 53) with new ones being added (google DNS)

When deploying the multicluster gateway controller using the make targets, the following will be created:
  • Kind cluster(s)
  • Gateway API CRDs in the control plane cluster
  • Ingress controller
  • Cert manager
  • ArgoCD instance
  • K8s Dashboard
  • LetsEncrypt certs

"},{"location":"multicluster-gateway-controller/#prerequisites","title":"Prerequisites:","text":"
  • AWS or GCP
  • Various dependencies installed into $(pwd)/bin e.g. kind, yq etc.
  • Run make dependencies
  • openssl>=3
    • On macOS a later version is available with brew install openssl. You'll need to update your PATH as macOS provides an older version via libressl as well
    • On Fedora use dnf install openssl
  • go >= 1.20
"},{"location":"multicluster-gateway-controller/#1-running-the-controller-in-the-cluster","title":"1. Running the controller in the cluster:","text":"
  1. Set up your DNS Provider by following these steps

  2. Setup your local environment

    make local-setup MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER>\n

  3. Build the controller image and load it into the control plane

    kubectl config use-context kind-mgc-control-plane\nmake kind-load-controller\n

  4. Deploy the controller to the control plane cluster

    make deploy-controller\n

  5. (Optional) View the logs of the deployed controller

    kubectl logs -f $(kubectl get pods -n multi-cluster-gateways | grep \"mgc-\" | awk '{print $1}') -n multi-cluster-gateways\n

"},{"location":"multicluster-gateway-controller/#2-running-the-controller-locally","title":"2. Running the controller locally:","text":"
  1. Set up your DNS Provider by following these steps

  2. Setup your local environment

    make local-setup MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER>\n
  3. Run the controller locally:

    kubectl config use-context kind-mgc-control-plane\nmake build-controller install run-controller\n

"},{"location":"multicluster-gateway-controller/#3-running-the-agent-in-the-cluster","title":"3. Running the agent in the cluster:","text":"
  1. Build the agent image and load it into the workload cluster

    kubectl config use-context kind-mgc-workload-1 make kind-load-agent\n

  2. Deploy the agent to the workload cluster

    make deploy-agent\n

"},{"location":"multicluster-gateway-controller/#4-running-the-agent-locally","title":"4. Running the agent locally","text":"
  1. Target the workload cluster you wish to run on:
    export KUBECONFIG=./tmp/kubeconfigs/mgc-workload-1.kubeconfig\n
  2. Run the agent locally:
    make build-agent run-agent\n
"},{"location":"multicluster-gateway-controller/#5-clean-up-local-environment","title":"5. Clean up local environment","text":"

In any terminal window, target the control plane cluster with:

kubectl config use-context kind-mgc-control-plane\n

If you want to wipe everything clean, consider using:

make local-cleanup # Remove kind clusters created locally and cleanup any generated local files.\n

If the intention is to clean up the kind clusters and prepare them for re-installation, consider using:

make local-cleanup-mgc MGC_WORKLOAD_CLUSTERS_COUNT=<NUMBER_WORKLOAD_CLUSTER> # prepares clusters for make local-setup-mgc\n

"},{"location":"multicluster-gateway-controller/#license","title":"License","text":"

Copyright 2022 Red Hat.

Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0\n

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"multicluster-gateway-controller/docs/getting-started/","title":"Getting Started","text":""},{"location":"multicluster-gateway-controller/docs/getting-started/#getting-started","title":"Getting Started","text":""},{"location":"multicluster-gateway-controller/docs/getting-started/#prerequisites","title":"Prerequisites","text":"
  • Docker
  • Kind
  • Kubectl
  • OpenSSL >= 3
  • AWS account with Route 53 enabled
  • Docker Mac Net Connect (macOS users only)
"},{"location":"multicluster-gateway-controller/docs/getting-started/#config","title":"Config","text":"

Export environment variables with the keys listed below. Fill in your own values as appropriate. Note that you will need to have created a root domain in AWS using Route 53:

  • MGC_ZONE_ROOT_DOMAIN (e.g. jbloggs.hcpapps.net): Hostname for the root domain
  • MGC_AWS_DNS_PUBLIC_ZONE_ID (e.g. Z01234567US0IQE3YLO00): AWS Route 53 Zone ID for the specified MGC_ZONE_ROOT_DOMAIN
  • MGC_AWS_ACCESS_KEY_ID (e.g. AKIA1234567890000000): Access Key ID, with access to resources in Route 53
  • MGC_AWS_SECRET_ACCESS_KEY (e.g. Z01234567US0000000): Secret Access Key, with access to resources in Route 53
  • MGC_AWS_REGION (e.g. eu-west-1): AWS Region
  • MGC_SUB_DOMAIN (e.g. myapp.jbloggs.hcpapps.net): Sub domain of the root domain to use for the application
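
For example, using the sample values above (replace them with your own):

export MGC_ZONE_ROOT_DOMAIN=jbloggs.hcpapps.net\nexport MGC_AWS_DNS_PUBLIC_ZONE_ID=Z01234567US0IQE3YLO00\nexport MGC_AWS_ACCESS_KEY_ID=AKIA1234567890000000\nexport MGC_AWS_SECRET_ACCESS_KEY=Z01234567US0000000\nexport MGC_AWS_REGION=eu-west-1\nexport MGC_SUB_DOMAIN=myapp.jbloggs.hcpapps.net\n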

Alternatively, to set defaults, add the above environment variables to your .zshrc or .bash_profile.

"},{"location":"multicluster-gateway-controller/docs/getting-started/#set-up-clusters-and-multicluster-gateway-controller","title":"Set Up Clusters and Multicluster Gateway Controller","text":"
 curl https://raw.githubusercontent.com/kuadrant/multicluster-gateway-controller/main/hack/quickstart-setup.sh | bash\n
"},{"location":"multicluster-gateway-controller/docs/getting-started/#whats-next","title":"What's Next","text":"

Now that you have two Kind clusters configured with the Multicluster Gateway Controller installed, you are ready to begin the Multicluster Gateways walkthrough.

"},{"location":"multicluster-gateway-controller/docs/contribution/vscode-debugging/","title":"Debugging in VS code","text":""},{"location":"multicluster-gateway-controller/docs/contribution/vscode-debugging/#introduction","title":"Introduction","text":"

The following document will show how to set up debugging for the multicluster gateway controller.

There is an included VSCode launch.json.

"},{"location":"multicluster-gateway-controller/docs/contribution/vscode-debugging/#starting-the-controller","title":"Starting the controller","text":"

Instead of starting the Gateway Controller via something like:

make build-controller install run-controller\n

You can now simply hit F5 in VSCode. The controller will launch with the following config:

{\n  \"version\": \"0.2.0\",\n  \"configurations\": [\n    {\n      \"name\": \"Debug\",\n      \"type\": \"go\",\n      \"request\": \"launch\",\n      \"mode\": \"auto\",\n      \"program\": \"./cmd/controller/main.go\",\n      \"args\": [\n        \"--metrics-bind-address=:8080\",\n        \"--health-probe-bind-address=:8081\"\n      ]\n    }\n  ]\n}\n
"},{"location":"multicluster-gateway-controller/docs/contribution/vscode-debugging/#running-debugger","title":"Running Debugger","text":""},{"location":"multicluster-gateway-controller/docs/contribution/vscode-debugging/#debugging-tests","title":"Debugging Tests","text":""},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/","title":"Kuadrant DNSPolicy Demo","text":""},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#goals","title":"Goals","text":"
  • Show changes in how MGC manages DNS resources through a direct attachment DNS policy
  • Show changes to the DNS Record structure
  • Show weighted load balancing strategy and how it can be configured
  • Show geo load balancing strategy and how it can be configured
"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#setup","title":"Setup","text":"
# make local-setup OCM_SINGLE=true MGC_WORKLOAD_CLUSTERS_COUNT=2\n
./install.sh\n(export $(cat ./controller-config.env | xargs) && export $(cat ./aws-credentials.env | xargs) && make build-controller install run-controller)\n
"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#preamble","title":"Preamble","text":"

Three managed clusters labeled as ingress clusters

kubectl get managedclusters --show-labels\n

Show managed zone

kubectl get managedzones -n multi-cluster-gateways\n

Show gateway created on the hub

kubectl get gateway -n multi-cluster-gateways\n
Show gateways
# Check gateways\nkubectl --context kind-mgc-control-plane get gateways -A\nkubectl --context kind-mgc-workload-1 get gateways -A\nkubectl --context kind-mgc-workload-2 get gateways -A\n

Show application deployed to each cluster

curl -k -s -o /dev/null -w \"%{http_code}\\n\" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.200.0'\ncurl -k -s -o /dev/null -w \"%{http_code}\\n\" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.201.0'\ncurl -k -s -o /dev/null -w \"%{http_code}\\n\" https://bfa.jm.hcpapps.net --resolve 'bfa.jm.hcpapps.net:443:172.31.202.0'\n

Show status of gateway on the hub:

kubectl get gateway prod-web -n multi-cluster-gateways -o=jsonpath='{.status}'\n

"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#dnspolicy-using-direct-attachment","title":"DNSPolicy using direct attachment","text":"

Explain the changes that have been made to the dns reconciliation: it now uses direct policy attachment, and a DNSPolicy must be created and attached to a target gateway before any dns updates will be made for that gateway.

Show no dnsrecord

kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways\n

Show no response for host

# Warning, will cache for 5 mins!!!!!!\ncurl -k https://bfa.jm.hcpapps.net\n

Show no dnspolicy

kubectl --context kind-mgc-control-plane get dnspolicy -n multi-cluster-gateways\n

Create dnspolicy

cat resources/dnspolicy_prod-web-default.yaml\nkubectl --context kind-mgc-control-plane apply -f resources/dnspolicy_prod-web-default.yaml -n multi-cluster-gateways\n

# Check policy attachment\nkubectl --context kind-mgc-control-plane get gateway prod-web -n multi-cluster-gateways -o=jsonpath='{.metadata.annotations}'\n

Show dnsrecord created

kubectl --context kind-mgc-control-plane get dnsrecord -n multi-cluster-gateways\n

Show response for host

curl -k https://bfa.jm.hcpapps.net\n

"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#dns-record-structure","title":"DNS Record Structure","text":"

Show the new record structure

kubectl get dnsrecord prod-web-api -n multi-cluster-gateways -o=jsonpath='{.spec.endpoints}'\n
"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#weighted-loadbalancing-by-default","title":"Weighted loadbalancing by default","text":"

Show and update the default weight in the policy (show results in Route53)

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways\n

\"A DNSPolicy with an empty loadBalancing spec, or with a loadBalancing.weighted.defaultWeight set and nothing else produces a set of records grouped and weighted to produce a Round Robin routing strategy where all target clusters will have an equal chance of being returned in DNS queries.\"

"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#custom-weighting","title":"Custom Weighting","text":"

Edit dnsPolicy and add custom weights:

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways\n

spec:\n  loadBalancing:\n    weighted:\n      custom:\n        - value: AWS\n          weight: 200\n        - value: GCP\n          weight: 10\n      defaultWeight: 100\n

Add custom weight labels

kubectl get managedclusters --show-labels\nkubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS\nkubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP\n

"},{"location":"multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/#geo-load-balancing","title":"Geo load balancing","text":"

Edit dnsPolicy and add default geo:

kubectl --context kind-mgc-control-plane edit dnspolicy prod-web -n multi-cluster-gateways\n

spec:\n  loadBalancing:\n    geo:\n      defaultGeo: US\n    weighted:\n      custom:\n        - value: AWS\n          weight: 20\n        - value: GCP\n          weight: 200\n      defaultWeight: 100\n

Add geo labels:

kubectl get managedclusters --show-labels\nkubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=FR\nkubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=ES\n

Check out the DNS:

https://www.whatsmydns.net/#A/bfa.jm.hcpapps.net

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/","title":"DNS Health Checks","text":"

DNS Health Checks are a crucial tool for ensuring the availability and reliability of your multi-cluster applications. Kuadrant offers a powerful feature known as DNSPolicy, which allows you to configure and verify health checks for DNS endpoints. This guide provides a comprehensive overview of how to set up, utilize, and understand DNS health checks.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/#what-are-dns-health-checks","title":"What are DNS Health Checks?","text":"

DNS Health Checks are a way to assess the availability and health of DNS endpoints associated with your applications. These checks involve sending periodic requests to the specified endpoints to determine their responsiveness and health status. By configuring these checks via the DNSPolicy, you can ensure that your applications are correctly registered, operational, and serving traffic as expected.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/#configuration-of-health-checks","title":"Configuration of Health Checks","text":"

Note: By default, health checks occur at 60-second intervals.

To configure a DNS health check, you need to specify the healthCheck section of the DNSPolicy, which includes important properties such as:

  • allowInsecureCertificates: Added for development environments, allows health probes to not fail when finding an invalid (e.g. self-signed) certificate.
  • additionalHeadersRef: This refers to a secret that holds extra headers, often containing important elements like authentication tokens.
  • endpoint: This is the path where the health checks take place, usually represented as '/healthz' or something similar.
  • expectedResponses: This setting lets you specify the expected HTTP response codes. If you don't set this, the default values assumed are 200 and 201.
  • failureThreshold: It's the number of times the health check can fail for the endpoint before it's marked as unhealthy.
  • interval: This property allows you to specify the time interval between consecutive health checks. The minimum allowed value is 5 seconds.
  • port: Specific port for the connection to be checked.
  • protocol: Type of protocol being used, like HTTP or HTTPS. (Required)

kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  healthCheck:\n    allowInsecureCertificates: true\n    endpoint: /\n    expectedResponses:\n      - 200\n      - 201\n      - 301\n    failureThreshold: 5\n    port: 443\n    protocol: https\nEOF\n
This configuration sets up a DNS health check by creating DNSHealthCheckProbes for the specified prod-web Gateway endpoints.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/#how-to-validate-dns-health-checks","title":"How to Validate DNS Health Checks","text":"

After setting up DNS Health Checks to improve application reliability, it is important to verify their effectiveness. This guide provides a simple validation process to ensure that health checks are working properly and improving the operation of your applications.

  1. Verify Configuration: The first step in the validation process is to verify that the probes were created. Notice the label kuadrant.io/gateway=prod-web that only shows DNSHealthCheckProbes for the specified prod-web Gateway.

    kubectl get -l kuadrant.io/gateway=prod-web dnshealthcheckprobes -A\n

  2. Monitor Health Status: The next step is to monitor the health status of the designated endpoints. This can be done by analyzing logs, the metrics generated, or the health check probes' status. By reviewing this data, you can confirm that endpoints are being actively monitored and that their status is being reported accurately.

The following metrics can be used to check all the attempts and failures for a listener.

mgc_dns_health_check_failures_total\nmgc_dns_health_check_attempts_total\n

  3. Test Failure Scenarios: To gain a better understanding of how your system responds to failures, you can deliberately create endpoint failures. This can be done by stopping applications running on the endpoint, by blocking traffic, or, for instance, by deliberately omitting the expected 200 response code. This will allow you to see how DNS Health Checks dynamically redirect traffic to healthy endpoints and demonstrate their routing capabilities.

  4. Monitor Recovery: After inducing failures, it is important to monitor how your system recovers. Make sure that traffic is being redirected correctly and that applications are resuming normal operation.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/#what-happens-when-a-health-check-fails","title":"What Happens When a Health Check Fails","text":"

A pivotal aspect of DNS Health Checks is understanding of a health check failure. When a health check detects an endpoint as unhealthy, it triggers a series of strategic actions to mitigate potential disruptions:

  1. The health check probe identifies an endpoint as \"unhealthy\" once its number of consecutive failures exceeds the failure threshold.

  2. The system reacts by immediately removing the unhealthy endpoint from the list of available endpoints; any endpoint that doesn't have at least 1 healthy child will also be removed.

  3. This removal causes traffic to automatically get redirected to the remaining healthy endpoints.

  4. The health check continues monitoring the endpoint's status. If it becomes healthy again, the endpoint is added back to the list of available endpoints.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/#limitations","title":"Limitations","text":"
  1. Delayed Detection: DNS health checks are not immediate; they depend on the check intervals. Immediate issues might not be detected promptly.

  2. No Wildcard Listeners: DNS health checks do not cover wildcard listeners and are unsuitable for dynamic domain resolution; each endpoint must be explicitly defined.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/","title":"DNS Policy","text":"

The DNSPolicy is a GatewayAPI policy that uses Direct Policy Attachment as defined in the policy attachment mechanism standard. This policy is used to provide dns management for gateway listeners by managing the lifecycle of dns records in external dns providers such as AWS Route53 and Google DNS.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#terms","title":"Terms","text":"
  • GatewayAPI: resources that model service networking in Kubernetes.
  • Gateway: Kubernetes Gateway resource.
  • ManagedZone: Kuadrant resource representing a Zone Apex in a dns provider.
  • DNSPolicy: Kuadrant policy for managing gateway dns.
  • DNSRecord: Kuadrant resource representing a set of records in a managed zone.
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#dns-provider-setup","title":"DNS Provider Setup","text":"

A DNSPolicy acts against a target Gateway by processing its listeners for hostnames that it can create dns records for. In order for it to do this, it must know about dns providers, and what domains these dns providers are currently hosting. This is done through the creation of ManagedZones and dns provider secrets containing the credentials for the dns provider account.

If for example a Gateway is created with a listener with a hostname of echo.apps.hcpapps.net:

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\n  listeners:\n    - allowedRoutes:\n        namespaces:\n          from: All\n      name: api\n      hostname: echo.apps.hcpapps.net\n      port: 80\n      protocol: HTTP\n

In order for the DNSPolicy to act upon that listener, a ManagedZone must exist for that hostname's domain.

A secret containing the provider credentials must first be created:

kubectl create secret generic my-aws-credentials --type=kuadrant.io/aws --from-env-file=./aws-credentials.env -n multi-cluster-gateways\nkubectl get secrets my-aws-credentials -n multi-cluster-gateways -o yaml\napiVersion: v1\ndata:\n  AWS_ACCESS_KEY_ID: <AWS_ACCESS_KEY_ID>\n  AWS_REGION: <AWS_REGION>\n  AWS_SECRET_ACCESS_KEY: <AWS_SECRET_ACCESS_KEY>\nkind: Secret\nmetadata:\n  name: my-aws-credentials\n  namespace: multi-cluster-gateways\ntype: kuadrant.io/aws\n

And then a ManagedZone can be added for the desired domain referencing the provider credentials:

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: apps.hcpapps.net\n  namespace: multi-cluster-gateways\nspec:\n  domainName: apps.hcpapps.net\n  description: \"apps.hcpapps.net managed domain\"\n  dnsProviderSecretRef:\n    name: my-aws-credentials\n    namespace: multi-cluster-gateways\n

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#dnspolicy-creation-and-attachment","title":"DNSPolicy creation and attachment","text":"

Once an appropriate ManagedZone is configured for a Gateway's listener hostname, we can now create and attach a DNSPolicy to start managing dns for it.

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  healthCheck:\n    allowInsecureCertificates: true\n    additionalHeadersRef:\n      name: probe-headers\n    endpoint: /\n    expectedResponses:\n      - 200\n      - 201\n      - 301\n    failureThreshold: 5\n    port: 80\n    protocol: http\n
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#target-reference","title":"Target Reference","text":"

targetRef field is taken from policy attachment's target reference API. It can only target one resource at a time. Fields included inside:
  • Group is the group of the target resource. The only valid option is gateway.networking.k8s.io.
  • Kind is the kind of the target resource. The only valid option is Gateway.
  • Name is the name of the target resource.
  • Namespace is the namespace of the referent. Currently only local objects can be referred to, so the value is ignored.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#health-check","title":"Health Check","text":"

The health check section is optional, the following fields are available:

  • allowInsecureCertificates: Added for development environments, allows health probes to not fail when finding an invalid (e.g. self-signed) certificate.
  • additionalHeadersRef: A reference to a secret which contains additional headers such as an authentication token
  • endpoint: The path to specify for these health checks, e.g. /healthz
  • expectedResponses: Defaults to 200 or 201, this allows other responses to be considered valid
  • failureThreshold: How many consecutive fails are required to consider this endpoint unhealthy
  • port: The port to connect to
  • protocol: The protocol to use for this connection
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#checking-status-of-health-checks","title":"Checking status of health checks","text":"

To list all health checks:

kubectl get dnshealthcheckprobes -A\n
This will list all probes in the hub cluster, and whether they are currently healthy or not.

To find more information on why a specific health check is failing, look at the status of that probe:

kubectl get dnshealthcheckprobe <name> -n <namespace> -o yaml\n

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#dnsrecord-resources","title":"DNSRecord Resources","text":"

The DNSPolicy will create a DNSRecord resource for each listener hostname with a suitable ManagedZone configured. The DNSPolicy resource uses the status of the Gateway to determine what dns records need to be created based on the clusters it has been placed onto.

Given the following Gateway status:

status:\n  addresses:\n    - type: kuadrant.io/MultiClusterIPAddress\n      value: kind-mgc-workload-1/172.31.201.1\n    - type: kuadrant.io/MultiClusterIPAddress\n      value: kind-mgc-workload-2/172.31.202.1\n  conditions:\n    - lastTransitionTime: \"2023-07-24T19:09:54Z\"\n      message: Handled by kuadrant.io/mgc-gw-controller\n      observedGeneration: 1\n      reason: Accepted\n      status: \"True\"\n      type: Accepted\n    - lastTransitionTime: \"2023-07-24T19:09:55Z\"\n      message: 'gateway placed on clusters [kind-mgc-workload-1 kind-mgc-workload-2] '\n      observedGeneration: 1\n      reason: Programmed\n      status: \"True\"\n      type: Programmed\n  listeners:\n    - attachedRoutes: 1\n      conditions: []\n      name: kind-mgc-workload-1.api\n      supportedKinds: []\n    - attachedRoutes: 1\n      conditions: []\n      name: kind-mgc-workload-2.api\n      supportedKinds: []\n

The example DNSPolicy shown above would create a DNSRecord like the following:

apiVersion: kuadrant.io/v1alpha1\nkind: DNSRecord\nmetadata:\n  creationTimestamp: \"2023-07-24T19:09:56Z\"\n  finalizers:\n    - kuadrant.io/dns-record\n  generation: 3\n  labels:\n    kuadrant.io/Gateway-uid: 0877f97c-f3a6-4f30-97f4-e0d7f25cc401\n    kuadrant.io/record-id: echo\n  name: echo.apps.hcpapps.net\n  namespace: multi-cluster-gateways\n  ownerReferences:\n    - apiVersion: gateway.networking.k8s.io/v1beta1\n      kind: Gateway\n      name: echo-app\n      uid: 0877f97c-f3a6-4f30-97f4-e0d7f25cc401\n    - apiVersion: kuadrant.io/v1alpha1\n      blockOwnerDeletion: true\n      controller: true\n      kind: ManagedZone\n      name: apps.hcpapps.net\n      uid: 26a06799-acff-476b-a1a3-c831fd19dcc7\n  resourceVersion: \"25464\"\n  uid: 365bf57f-10b4-42e8-a8e7-abb6dce93985\nspec:\n  endpoints:\n    - dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n      recordTTL: 60\n      recordType: A\n      targets:\n        - 172.31.202.1\n    - dnsName: default.lb-2903yb.echo.apps.hcpapps.net\n      providerSpecific:\n        - name: weight\n          value: \"120\"\n      recordTTL: 60\n      recordType: CNAME\n      setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n      targets:\n        - 24osuu.lb-2903yb.echo.apps.hcpapps.net\n    - dnsName: default.lb-2903yb.echo.apps.hcpapps.net\n      providerSpecific:\n        - name: weight\n          value: \"120\"\n      recordTTL: 60\n      recordType: CNAME\n      setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n      targets:\n        - lrnse3.lb-2903yb.echo.apps.hcpapps.net\n    - dnsName: echo.apps.hcpapps.net\n      recordTTL: 300\n      recordType: CNAME\n      targets:\n        - lb-2903yb.echo.apps.hcpapps.net\n    - dnsName: lb-2903yb.echo.apps.hcpapps.net\n      providerSpecific:\n        - name: geo-country-code\n          value: '*'\n      recordTTL: 300\n      recordType: CNAME\n      setIdentifier: default\n      targets:\n        - default.lb-2903yb.echo.apps.hcpapps.net\n    - dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n      recordTTL: 60\n      recordType: A\n      targets:\n        - 172.31.201.1\n  managedZone:\n    name: apps.hcpapps.net\n

Which results in the following records being created in AWS Route53 (the provider we used in our example ManagedZone above).

The listener hostname is now resolvable through dns:

dig echo.apps.hcpapps.net +short\nlb-2903yb.echo.apps.hcpapps.net.\ndefault.lb-2903yb.echo.apps.hcpapps.net.\nlrnse3.lb-2903yb.echo.apps.hcpapps.net.\n172.31.201.1\n

More information about the dns record structure can be found in the DNSRecord structure document.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#load-balancing","title":"Load Balancing","text":"

Configuration of DNS Load Balancing features is done through the loadBalancing field in the DNSPolicy spec.

loadBalancing field contains the specification of how dns will be configured in order to provide balancing of load across multiple clusters. Fields included inside:
  • weighted describes how weighting will be applied to weighted dns records. Fields included inside:
    • defaultWeight: arbitrary weight value that will be applied to weighted dns records by default. Integer greater than 0 and no larger than the maximum value accepted by the target dns provider.
    • custom: array of custom weights to apply when custom attribute values match.
  • geo enables the geo routing strategy. Fields included inside:
    • defaultGeo: geo code to apply to geo dns records by default. The values accepted are determined by the target dns provider.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#weighted","title":"Weighted","text":"

A DNSPolicy with an empty loadBalancing spec, or with a loadBalancing.weighted.defaultWeight set and nothing else produces a set of records grouped and weighted to produce a Round Robin routing strategy where all target clusters will have an equal chance of being returned in DNS queries.

If we apply the following update to the DNSPolicy:

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  loadBalancing:\n    weighted:\n      defaultWeight: 100 # <--- New Default Weight being added\n

The weight of all records is adjusted to reflect the new defaultWeight value of 100. This will still produce the same Round Robin routing strategy as before since all records still have equal weight values.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#custom-weights","title":"Custom Weights","text":"

In order to manipulate how much traffic individual clusters receive, custom weights can be added to the DNSPolicy.

If we apply the following update to the DNSPolicy:

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  loadBalancing:\n    weighted:\n      defaultWeight: 120\n      custom: # <--- New Custom Weights being added\n        - weight: 255\n          selector:\n            matchLabels:\n              kuadrant.io/lb-attribute-custom-weight: AWS\n        - weight: 10\n          selector:\n            matchLabels:\n              kuadrant.io/lb-attribute-custom-weight: GCP\n

And apply custom-weight labels to each of our managed cluster resources:

kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-custom-weight=AWS\nkubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-custom-weight=GCP\n

The DNSRecord for our listener host gets updated, and the weighted records are adjusted to have the new values:

kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints\n- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.202.1\n- dnsName: default.lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: weight\n      value: \"10\" # <--- Weight is updated\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - 24osuu.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: default.lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: weight\n      value: \"255\" # <--- Weight is updated\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - lrnse3.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: echo.apps.hcpapps.net\n  recordTTL: 300\n  recordType: CNAME\n  targets:\n    - lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: geo-country-code\n      value: '*'\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: default\n  targets:\n    - default.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.201.1\n

In the above scenario the managed cluster kind-mgc-workload-2 (GCP) IP address will be returned far less frequently in DNS queries than kind-mgc-workload-1 (AWS)

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#geo","title":"Geo","text":"

To enable Geo Load balancing, the loadBalancing.geo.defaultGeo field should be added. This informs the DNSPolicy that we now want to start making use of Geo Location features in our target provider. This will change the single record set group created from default (what is created for weighted-only load balancing) to a geo-specific one based on the value of defaultGeo.

If we apply the following update to the DNSPolicy:

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  loadBalancing:\n    weighted:\n      defaultWeight: 120\n      custom:\n        - weight: 255\n          selector:\n            matchLabels:\n              kuadrant.io/lb-attribute-custom-weight: AWS\n        - weight: 10\n          selector:\n            matchLabels:\n              kuadrant.io/lb-attribute-custom-weight: GCP\n    geo:\n      defaultGeo: US # <--- New `geo.defaultGeo` added for `US` (United States)\n

The DNSRecord for our listener host gets updated, and the default geo is replaced with the one we specified:

kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints\n- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.202.1\n- dnsName: echo.apps.hcpapps.net\n  recordTTL: 300\n  recordType: CNAME\n  targets:\n    - lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net # <--- New `us` geo location CNAME is created\n  providerSpecific:\n    - name: geo-country-code\n      value: US\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: US\n  targets:\n    - us.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: geo-country-code\n      value: '*'\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: default\n  targets:\n    - us.lb-2903yb.echo.apps.hcpapps.net # <--- Default catch all CNAME is updated to point to `us` target\n- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.201.1\n- dnsName: us.lb-2903yb.echo.apps.hcpapps.net # <--- Gateway default group is now `us`\n  providerSpecific:\n    - name: weight\n      value: \"10\"\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - 24osuu.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: us.lb-2903yb.echo.apps.hcpapps.net # <--- Gateway default group is now `us`\n  providerSpecific:\n    - name: weight\n      value: \"255\"\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - lrnse3.lb-2903yb.echo.apps.hcpapps.net\n

The listener hostname is still resolvable, but now routed through the us record set:

dig echo.apps.hcpapps.net +short\nlb-2903yb.echo.apps.hcpapps.net.\nus.lb-2903yb.echo.apps.hcpapps.net. # <--- `us` CNAME now in the chain\nlrnse3.lb-2903yb.echo.apps.hcpapps.net.\n172.31.201.1\n
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#configuring-cluster-geo-locations","title":"Configuring Cluster Geo Locations","text":"

The defaultGeo as described above puts all clusters into the same geo group, but for geo to be useful we need to mark our clusters as being in different locations. We can do this by adding geo-code attributes on the ManagedCluster to show which country each cluster is in. The values that can be used are determined by the DNS provider (see below).

Apply geo-code labels to each of our managed cluster resources:

kubectl label --overwrite managedcluster kind-mgc-workload-1 kuadrant.io/lb-attribute-geo-code=US\nkubectl label --overwrite managedcluster kind-mgc-workload-2 kuadrant.io/lb-attribute-geo-code=ES\n

The above indicates that kind-mgc-workload-1 is located in the US (United States), which is the same as our current default geo, and kind-mgc-workload-2 is in ES (Spain).

The DNSRecord for our listener host gets updated, and records are now divided into two groups, us and es:

kubectl get dnsrecord echo.apps.hcpapps.net -n multi-cluster-gateways -o yaml | yq .spec.endpoints\n- dnsName: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.202.1\n- dnsName: echo.apps.hcpapps.net\n  recordTTL: 300\n  recordType: CNAME\n  targets:\n    - lb-2903yb.echo.apps.hcpapps.net\n- dnsName: es.lb-2903yb.echo.apps.hcpapps.net # <--- kind-mgc-workload-2 target now added to `es` group\n  providerSpecific:\n    - name: weight\n      value: \"10\"\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: 24osuu.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - 24osuu.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net # <--- New `es` geo location CNAME is created\n  providerSpecific:\n    - name: geo-country-code\n      value: ES\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: ES\n  targets:\n    - es.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: geo-country-code\n      value: US\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: US\n  targets:\n    - us.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: geo-country-code\n      value: '*'\n  recordTTL: 300\n  recordType: CNAME\n  setIdentifier: default\n  targets:\n    - us.lb-2903yb.echo.apps.hcpapps.net\n- dnsName: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  recordTTL: 60\n  recordType: A\n  targets:\n    - 172.31.201.1\n- dnsName: us.lb-2903yb.echo.apps.hcpapps.net\n  providerSpecific:\n    - name: weight\n      value: \"255\"\n  recordTTL: 60\n  recordType: CNAME\n  setIdentifier: lrnse3.lb-2903yb.echo.apps.hcpapps.net\n  targets:\n    - lrnse3.lb-2903yb.echo.apps.hcpapps.net\n

In the above scenario, any requests made in Spain will be returned the IP address of kind-mgc-workload-2, and requests made from anywhere else in the world will be returned the IP address of kind-mgc-workload-1. Weighting of records is still enforced between clusters in the same geo group; in the case above, however, it has no effect since there is only one cluster in each group.
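
If your DNS provider honours the EDNS Client Subnet extension (Route 53 does for geolocation routing), you can approximate a client in a particular geo from anywhere by passing a subnet hint to dig. A sketch; the subnet below is a hypothetical Spanish client range:

dig echo.apps.hcpapps.net +short +subnet=83.32.0.0/16\n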

If an unsupported value is given to a provider, DNS records will not be created, so please choose carefully. For more information on which location is right for your needs, please read that provider's documentation (see links below).

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#locations-supported-per-dns-provider","title":"Locations supported per DNS provider","text":"Supported AWS GCP Continents Country codes States Regions"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#continents-and-country-codes-supported-by-aws-route-53","title":"Continents and country codes supported by AWS Route 53","text":"

Note: For more information, please see the official AWS documentation.

To see all regions supported by AWS Route 53, please see the official documentation: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-geo.html

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-policy/#regions-supported-by-gcp-cloud-dns","title":"Regions supported by GCP CLoud DNS","text":"

To see all regions supported by GCP Cloud DNS, please see the official documentation: https://cloud.google.com/compute/docs/regions-zones

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/","title":"Configuring a DNS Provider","text":"

In order to be able to interact with supported DNS providers, Kuadrant needs a credential that it can use. This credential is leveraged by the multi-cluster gateway controller in order to create and manage DNS records within zones used by the listeners defined in your gateways.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#supported-providers","title":"Supported Providers","text":"

Kuadrant currently supports the following DNS providers:

  • AWS route 53 (aws)
  • Google DNS (gcp)
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#configuring-an-aws-route-53-provider","title":"Configuring an AWS Route 53 provider","text":"

Kuadrant expects a secret containing a credential. Below is an example for AWS Route 53. It is important to set the secret type to kuadrant.io/aws:

apiVersion: v1\ndata:\n  AWS_ACCESS_KEY_ID: XXXXX\n  AWS_REGION: XXXXX\n  AWS_SECRET_ACCESS_KEY: XXXXX\nkind: Secret\nmetadata:\n  name: aws-credentials\n  namespace: multicluster-gateway-controller-system\ntype: kuadrant.io/aws\n
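
Note that values under data must be base64-encoded. Alternatively, the same secret can be created with kubectl, which handles the encoding for you; a sketch with placeholder credential values:

kubectl create secret generic aws-credentials --namespace=multicluster-gateway-controller-system --type=kuadrant.io/aws --from-literal=AWS_ACCESS_KEY_ID=XXXXX --from-literal=AWS_SECRET_ACCESS_KEY=XXXXX --from-literal=AWS_REGION=XXXXX\n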
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#iam-permissions-required","title":"IAM permissions required","text":"

We have tested using the available policy AmazonRoute53FullAccess; however, it should also be possible to restrict the credential down to a particular zone. More info can be found in the AWS docs: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/access-control-managing-permissions.html

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#configuring-a-google-dns-provider","title":"Configuring a Google DNS provider","text":"

Kuadrant expects a secret containing a credential. Below is an example for Google DNS. It is important to set the secret type to kuadrant.io/gcp:

apiVersion: v1\ndata:\n  GOOGLE: {\"client_id\": \"00000000-00000000000000.apps.googleusercontent.com\",\"client_secret\": \"d-FL95Q00000000000000\",\"refresh_token\": \"00000aaaaa00000000-AAAAAAAAAAAAKFGJFJDFKDK\",\"type\": \"authorized_user\"}\n  PROJECT_ID: \"my-project\"\nkind: Secret\nmetadata:\n  name: gcp-credentials\n  namespace: multicluster-gateway-controller-system\ntype: kuadrant.io/gcp\n
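
Equivalently with kubectl, assuming the Google credential JSON has been saved to a local file (the file name here is hypothetical):

kubectl create secret generic gcp-credentials --namespace=multicluster-gateway-controller-system --type=kuadrant.io/gcp --from-file=GOOGLE=./gcp-credential.json --from-literal=PROJECT_ID=my-project\n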
"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#access-permissions-required","title":"Access permissions required","text":"

https://cloud.google.com/dns/docs/access-control#dns.admin

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#where-to-create-the-secret","title":"Where to create the secret.","text":"

It is recommended that you create the secret in the same namespace as your ManagedZones. Now that the credential is created, we have a DNS provider ready to go and can start using it.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dns-provider/#using-a-credential","title":"Using a credential","text":"

Once a secret like the one shown above is created, it needs to be associated with a ManagedZone in order to be used.

See ManagedZone
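
As a minimal sketch of that association (resource names here are illustrative; see the ManagedZone guide for the full spec), the ManagedZone references the secret via dnsProviderSecretRef:

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: example-zone\n  namespace: multicluster-gateway-controller-system\nspec:\n  domainName: example.hcpapps.net\n  dnsProviderSecretRef:\n    name: aws-credentials\n    namespace: multicluster-gateway-controller-system\n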

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/","title":"Defining a basic DNSPolicy","text":""},{"location":"multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/#what-is-a-dnspolicy","title":"What is a DNSPolicy","text":"

DNSPolicy is a Custom Resource Definition supported by the Multi-Cluster Gateway Controller (MGC) that follows the policy attachment model, which allows users to enable and configure DNS against the Gateway leveraging an existing cloud based DNS provider.

This document describes how to enable DNS by creating a basic DNSPolicy.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/#pre-requisites","title":"Pre-requisites","text":"
  • A ManagedZone has been created
  • A Gateway has been created
  • A HTTPRoute has been created and attached to the Gateway (Note: It's not a requirement to create the HTTPRoute beforehand, but DNS records will only be created once a DNSPolicy has been created)

See the Multicluster Gateways walkthrough for step-by-step instructions on deploying these with a simple application.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/#steps","title":"Steps","text":"

The DNSPolicy will target the existing Multi Cluster Gateway, resulting in the creation of DNS records for each of the Gateway listeners backed by a managed zone. This ensures traffic reaches the correct gateway instances and is balanced across them, and optionally enables DNS health checks and load balancing.

In order to enable basic DNS, create a minimal DNSPolicy resource:

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: basic-dnspolicy\n  namespace: <Gateway namespace>\nspec:\n  targetRef:\n    name: <Gateway name>\n    group: gateway.networking.k8s.io\n    kind: Gateway\n

Once created, the multi-cluster Gateway Controller will reconcile the DNS records. By default it will set up a round-robin / evenly weighted set of records to ensure a balance of traffic across each provisioned gateway instance. You can see the status by querying the DNSRecord resources.

kubectl get dnsrecords -A\n

The DNS records will be propagated in a few minutes, and the application will be available through the defined hosts.
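
To verify propagation, you can resolve one of the defined hosts directly; a sketch, assuming the listener host used earlier in these docs:

dig echo.apps.hcpapps.net +short\n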

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/#advanced-dns-configuration","title":"Advanced DNS configuration","text":"

The DNSPolicy supports other optional configuration options like geographic and weighted load balancing and health checks. For more detailed information about these options, see DNSPolicy reference

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/","title":"Creating and using a ManagedZone resource.","text":""},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#what-is-a-managedzone","title":"What is a ManagedZone","text":"

A ManagedZone is a reference to a DNS zone. By creating a ManagedZone we are instructing the MGC about a domain or subdomain that can be used as a host by any gateways in the same namespace. These gateways can use a subdomain of the ManagedZone.

If a gateway attempts to use a domain as a host, and there is no matching ManagedZone for that host, then that host on that gateway will fail to function.

A gateway's host will be matched to any ManagedZone that the host is a subdomain of, e.g. test.api.hcpapps.net will be matched by any ManagedZone (in the same namespace) of: test.api.hcpapps.net, api.hcpapps.net or hcpapps.net.

When MGC wants to create the DNS records for a host, it will create them in the most specific matching ManagedZone, e.g. given the zones hcpapps.net and api.hcpapps.net, the DNS records for the host test.api.hcpapps.net will be created in the api.hcpapps.net zone.
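
To illustrate, given the two ManagedZones sketched below (metadata names are illustrative), DNS records for the host test.api.hcpapps.net would be created in the api.hcpapps.net zone, since it is the more specific match:

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: hcpapps-net\nspec:\n  domainName: hcpapps.net\n---\napiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: api-hcpapps-net\nspec:\n  domainName: api.hcpapps.net\n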

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#delegation","title":"Delegation","text":"

Delegation allows you to give control of a subdomain of a root domain to MGC while the root domain has its DNS zone elsewhere.

In the scenario where a root domain has a zone outside Route53, e.g. external.com, and a ManagedZone for delegated.external.com is required, the following steps can be taken:
  • Create the ManagedZone for delegated.external.com and wait until the status is updated with an array of nameservers (e.g. ns1.hcpapps.net, ns2.hcpapps.net).
  • Copy these nameservers to your root zone for external.com; you can create an NS record for each nameserver against the delegated.external.com record.

For example:

delegated.external.com. 3600 IN NS ns1.hcpapps.net.\ndelegated.external.com. 3600 IN NS ns2.hcpapps.net.\n

Now, when MGC creates a DNS record in its Route53 zone for delegated.external.com, it will be resolved correctly.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#walkthrough","title":"Walkthrough","text":"

There is an existing walkthrough, which involves using a managed zone.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#current-limitations","title":"Current limitations","text":"

At the moment the MGC is given credentials to connect to the DNS provider at startup using environment variables. Because of that, MGC is limited to one provider type (Route53), and all zones must be in the same Route53 account.

There are plans to make this more customizable and dynamic in the future; work is tracked here.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#spec-of-a-managedzone","title":"Spec of a ManagedZone","text":"

The ManagedZone is a simple resource with an uncomplicated API, see a sample here.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#mandatory-fields","title":"Mandatory fields","text":"

The ManagedZone spec has 1 required field domainName:

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: testmz.hcpapps.net\nspec:\n  domainName: testmz.hcpapps.net\n  dnsProviderSecretRef:\n    name: my-credential\n    namespace: ns\n

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#secret-ref","title":"Secret Ref","text":"

This is a reference to a secret that contains a credential for accessing the DNS Provider. See DNSProvider for more details.

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#optional-fields","title":"Optional fields","text":"

The following fields are optional:

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#id","title":"ID","text":"

By setting the ID, you are referring to an existing zone in the DNS provider which MGC will use to manage the DNS of this zone. By leaving the ID empty, MGC will create a zone in the DNS provider, and store the reference in this field.
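
For example, to adopt an existing zone rather than have MGC create one, set the id field; a sketch, where the value shown is a hypothetical Route 53 hosted zone ID:

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: existing.hcpapps.net\nspec:\n  id: Z0123456789EXAMPLE\n  domainName: existing.hcpapps.net\n  dnsProviderSecretRef:\n    name: my-credential\n    namespace: ns\n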

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#description","title":"Description","text":"

This is simply a human-readable description of this resource (e.g. \"Use this zone for the staging environment\")

"},{"location":"multicluster-gateway-controller/docs/dnspolicy/managed-zone/#parentmanagedzone","title":"ParentManagedZone","text":"

This allows a zone to be owned by another zone (e.g test.api.domain.com could be owned by api.domain.com), MGC will use this owner relationship to manage the NS values for the subdomain in the parent domain. Note that for this to work, both the owned and owner zones must be in the Route53 account accessible by MGC.
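
A sketch of that owner relationship, assuming the parent zone is referenced by name (the exact field layout may differ; check the ManagedZone CRD):

apiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: test.api.domain.com\nspec:\n  domainName: test.api.domain.com\n  parentManagedZone:\n    name: api.domain.com\n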

"},{"location":"multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/","title":"Skupper proof of concept: 2 clusters & gateways, resiliency walkthrough","text":""},{"location":"multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/#introduction","title":"Introduction","text":"

This walkthrough shows how Skupper can be used to provide service resiliency across 2 clusters. Each cluster is running a Gateway with a HttpRoute in front of an application Service. By leveraging Skupper, the application Service can be exposed (using the skupper cli) from either cluster. If the Service is unavailable on the local cluster, it will be routed to another cluster that has exposed that Service.

"},{"location":"multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/#requirements","title":"Requirements","text":"
  • Local environment has been set up with a hub and spoke cluster, as per the Multicluster Gateways Walkthrough.
  • The example multi-cluster Gateway has been deployed to both clusters
  • The example echo HttpRoute, Service and Deployment have been deployed to both clusters in the default namespace, and the MGC_SUB_DOMAIN env var set in your terminal
  • Skupper CLI has been installed.
"},{"location":"multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/#skupper-setup","title":"Skupper Setup","text":"

Continuing on from the previous walkthrough, in the first terminal, T1, install Skupper on the hub & spoke clusters using the following command:

make skupper-setup\n

In T1 expose the Service in the default namespace:

skupper expose deployment/echo --port 8080\n

Do the same in the workload cluster T2:

skupper expose deployment/echo --port 8080\n

Verify the application route can be hit, taking note of the pod name in the response:

curl -k https://$MGC_SUB_DOMAIN\nRequest served by <POD_NAME>\n

Locate the pod that is currently serving requests. It is either in the hub or the spoke cluster. The goal is to scale down the deployment to 0 replicas. Check in both T1 and T2:

kubectl get po -n default | grep echo\n

Run this command to scale down the deployment in the right cluster:

kubectl scale deployment echo --replicas=0 -n default\n

Verify the application route can still be hit, and the pod name matches the one that has not been scaled down.

curl -k https://$MGC_SUB_DOMAIN\n

You can also force resolve the DNS result to alternate between the 2 Gateway clusters to verify requests get routed across the Skupper network.

curl -k --resolve $MGC_SUB_DOMAIN:443:172.31.200.2 https://$MGC_SUB_DOMAIN\ncurl -k --resolve $MGC_SUB_DOMAIN:443:172.31.201.2 https://$MGC_SUB_DOMAIN\n
"},{"location":"multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/#known-issues","title":"Known Issues","text":"

If you get an error response no healthy upstream from curl, there may be a problem with the skupper network or link. Check back on the output from earlier commands for any indication of problems setting up the network or link. The skupper router & service controller logs can be checked in the default namespace in both clusters.

You may see an error like the below when running the make skupper-setup command.

Error: Failed to create token: Policy validation error: Timed out trying to communicate with the API: context deadline exceeded\n
This may be a timing issue or a platform-specific problem. Either way, you can try installing a different version of the Skupper CLI. This problem was seen on at least one setup when using skupper v1.4.2, but didn't happen after dropping back to 1.3.0.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/","title":"Submariner proof of concept 2 clusters & gateways resiliency walkthrough","text":""},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#introduction","title":"Introduction","text":"

This walkthrough shows how Submariner can be used to provide service resiliency across 2 clusters. Each cluster is running a Gateway with a HttpRoute in front of an application Service. By leveraging Submariner (and the Multi Cluster Services API), the application Service can be exported (via a ServiceExport resource) from either cluster, and imported (via a ServiceImport resource) to either cluster. This provides a clusterset hostname for the service in either cluster, e.g. echo.default.svc.clusterset.local. The HttpRoute has a backendRef to a Service that points to this hostname. If the Service is unavailable on the local cluster, it will be routed to another cluster that has exported that Service.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#requirements","title":"Requirements","text":"
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone

Note: this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

Note: replace.this is a placeholder that you will need to replace with your own domain

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#installation-and-setup","title":"Installation and Setup","text":"

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

Open three windows, which we'll refer to throughout this walkthrough as:

  • T1 (Hub Cluster)
  • T2 (Where we'll run our controller locally)
  • T3 (Workloads cluster)

To set up a local instance with Submariner, in T1, create kind clusters by running:

make local-setup-kind MGC_WORKLOAD_CLUSTERS_COUNT=1\n
And deploy onto them by running:
make local-setup-mgc OCM_SINGLE=true SUBMARINER=true MGC_WORKLOAD_CLUSTERS_COUNT=1\n

In the hub cluster (T1) we are going to label the control plane managed cluster as an Ingress cluster:

kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true\nkubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true\n

Next, in T1, create the ManagedClusterSet that uses the ingress label to select clusters:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta2\nkind: ManagedClusterSet\nmetadata:\n  name: gateway-clusters\nspec:\n  clusterSelector:\n    labelSelector: \n      matchLabels:\n        ingress-cluster: \"true\"\n    selectorType: LabelSelector\nEOF\n

Next, in T1 we need to bind this cluster set to our multi-cluster-gateways namespace so that we can use those clusters to place Gateways on:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta2\nkind: ManagedClusterSetBinding\nmetadata:\n  name: gateway-clusters\n  namespace: multi-cluster-gateways\nspec:\n  clusterSet: gateway-clusters\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#create-a-placement-for-our-gateways","title":"Create a placement for our Gateways","text":"

In order to place our Gateways onto clusters, we need to set up a placement resource. Again, in T1, run:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta1\nkind: Placement\nmetadata:\n  name: http-gateway\n  namespace: multi-cluster-gateways\nspec:\n  numberOfClusters: 2\n  clusterSets:\n    - gateway-clusters\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#create-the-gateway-class","title":"Create the Gateway class","text":"

Lastly, we will set up our multi-cluster GatewayClass. In T1, run:

kubectl create -f hack/ocm/gatewayclass.yaml\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#start-the-gateway-controller","title":"Start the Gateway Controller","text":"

In T2 run the following to start the Gateway Controller:

make build-controller install run-controller\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#create-a-gateway","title":"Create a Gateway","text":"

We will now create a multi-cluster Gateway definition in the hub cluster. In T1, run the following:

Important: Make sure to replace sub.replace.this with a subdomain of your root domain.

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\n  listeners:\n  - allowedRoutes:\n      namespaces:\n        from: All\n    name: api\n    hostname: sub.replace.this\n    port: 443\n    protocol: HTTPS\n    tls:\n      mode: Terminate\n      certificateRefs:\n        - name: apps-hcpapps-tls\n          kind: Secret\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#enable-tls","title":"Enable TLS","text":"
  1. In T1, create a TLSPolicy and attach it to your Gateway:

    kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: TLSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  issuerRef:\n    group: cert-manager.io\n    kind: ClusterIssuer\n    name: glbc-ca   \nEOF\n
  2. You should now see a Certificate resource in the hub cluster. In T1, run:

    kubectl get certificates -A\n
    you'll see the following:

NAMESPACE                NAME               READY   SECRET             AGE\nmulti-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m\n

It is also possible to use a Let's Encrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#place-the-gateway","title":"Place the Gateway","text":"

To place the Gateway, we need to add a placement label to the Gateway resource to instruct the Gateway controller where we want this Gateway instantiated. In T1, run:

kubectl label gateways.gateway.networking.k8s.io prod-web \"cluster.open-cluster-management.io/placement\"=\"http-gateway\" -n multi-cluster-gateways\n

Now on the hub cluster you should find there is a configured Gateway and an instantiated Gateway. In T1, run:

kubectl get gateways.gateway.networking.k8s.io -A\n
kuadrant-multi-cluster-gateways   prod-web   istio                                         172.31.200.0                29s\nmulti-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#create-and-attach-a-httproute","title":"Create and attach a HTTPRoute","text":"

Let's create a simple echo app with a HTTPRoute and 2 Services (one that routes to the app, and one that uses an externalName) in the first cluster. Remember to replace the hostnames. Again we are creating this in the single hub cluster for now. In T1, run:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n  - kind: Gateway\n    name: prod-web\n    namespace: kuadrant-multi-cluster-gateways\n  hostnames:\n  - \"sub.replace.this\"  \n  rules:\n  - backendRefs:\n    - name: echo-import-proxy\n      port: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo-import-proxy\nspec:\n  type: ExternalName\n  externalName: echo.default.svc.clusterset.local\n  ports:\n  - port: 8080\n    targetPort: 8080\n    protocol: TCP\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP   \nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#enable-dns","title":"Enable DNS","text":"
  1. In T1, create a DNSPolicy and attach it to your Gateway:
kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway     \nEOF\n

Once this is done, the Kuadrant multi-cluster Gateway controller will pick up that a HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that Gateway for the host defined in that listener.

You should now see a DNSRecord with only 1 endpoint added, which corresponds to the address assigned to the Gateway where the HTTPRoute was created. In T1, run:

kubectl get dnsrecord -n multi-cluster-gateways -o=yaml\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#introducing-the-second-cluster","title":"Introducing the second cluster","text":"

In T3, targeting the second Gateway cluster, go ahead and create the HTTPRoute, Services and Deployment there too.

kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/local/kube/workload1.yaml && export KUBECONFIG=$(pwd)/local/kube/workload1.yaml\n\nkubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n  - kind: Gateway\n    name: prod-web\n    namespace: kuadrant-multi-cluster-gateways\n  hostnames:\n  - \"sub.replace.this\"  \n  rules:\n  - backendRefs:\n    - name: echo-import-proxy\n      port: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo-import-proxy\nspec:\n  type: ExternalName\n  externalName: echo.default.svc.clusterset.local\n  ports:\n  - port: 8080\n    targetPort: 8080\n    protocol: TCP\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP   \nEOF\n

Now if you move back to the hub context in T1 and take a look at the dnsrecord, you will see we now have two A records configured:

kubectl get dnsrecord -n multi-cluster-gateways -o=yaml\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#create-the-serviceexports-and-serviceimports","title":"Create the ServiceExports and ServiceImports","text":"

In T1, export the Apps echo service from cluster 1 to cluster 2, and vice versa.

./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig --namespace default echo\n./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig --namespace default echo\n

In T1, verify the ServiceExport was created on cluster 1 and cluster 2

kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceexport echo\nkubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceexport echo\n

In T1, verify the ServiceImport was created on both clusters

kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceimport echo\nkubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceimport echo\n

At this point you should get a 200 response. It might take a minute for DNS to propagate internally after importing the services above.

curl -Ik https://sub.replace.this\n

You can force resolve the IP to either cluster and verify a 200 is returned when routed to both cluster Gateways.

curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this\ncurl -Ik --resolve sub.replace.this:443:172.31.201.0 https://sub.replace.this\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#testing-resiliency","title":"Testing resiliency","text":"

In T1, stop the echo pod on cluster 2

kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig scale deployment/echo --replicas=0\n

Verify a 200 is still returned when routed to either cluster

curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this\ncurl -Ik --resolve sub.replace.this:443:172.31.201.0 https://sub.replace.this\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/#known-issues","title":"Known issues","text":"

At the time of writing, Istio does not support adding a ServiceImport as a backendRef directly as per the Gateway API proposal - GEP-1748. This is why the walkthrough uses a Service of type ExternalName to route to the clusterset host instead. There is an issue questioning the current state of support.

The installation of the subctl CLI fails on Macs with ARM architecture. The error is curl: (22) The requested URL returned error: 404. A workaround for this is to download the amd64 darwin release manually from the releases page and extract it to the ./bin directory.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/","title":"Submariner proof of concept with a Hub Gateway & 2 Workload Clusters","text":""},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#introduction","title":"Introduction","text":"

This walkthrough shows how Submariner can be used to provide service resiliency across 2 clusters with a hub cluster as the Gateway. The hub cluster is running a Gateway with a HttpRoute in front of an application Service. By leveraging Submariner (and the Multi Cluster Services API), the application Service can be exported (via a ServiceExport resource) from the 2 workload clusters, and imported (via a ServiceImport resource) to the hub cluster. This provides a clusterset hostname for the service in the hub cluster, e.g. echo.kuadrant-multi-cluster-gateways.svc.clusterset.local. The HttpRoute has a backendRef to a Service that points to this hostname. If the Service is unavailable in either workload cluster, it will be routed to the other workload cluster.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#requirements","title":"Requirements","text":"
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone

Note: this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

Note: replace.this is a placeholder that you will need to replace with your own domain

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#installation-and-setup","title":"Installation and Setup","text":"

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

Open four windows, which we'll refer to throughout this walkthrough as:

  • T1 (Hub Cluster)
  • T2 (Where we'll run our controller locally)
  • T3 (Workload cluster 1)
  • T4 (Workload cluster 2)

To set up a local instance with Submariner, in T1, create kind clusters:

make local-setup-kind MGC_WORKLOAD_CLUSTERS_COUNT=2\n
And deploy onto them using:
make local-setup-mgc OCM_SINGLE=true SUBMARINER=true MGC_WORKLOAD_CLUSTERS_COUNT=2\n

In the hub cluster (T1) we are going to label the control plane managed cluster as an Ingress cluster:

kubectl label managedcluster kind-mgc-control-plane ingress-cluster=true\n

Next, in T1, create the ManagedClusterSet that uses the ingress label to select clusters:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta2\nkind: ManagedClusterSet\nmetadata:\n  name: gateway-clusters\nspec:\n  clusterSelector:\n    labelSelector: \n      matchLabels:\n        ingress-cluster: \"true\"\n    selectorType: LabelSelector\nEOF\n

Next, in T1 we need to bind this cluster set to our multi-cluster-gateways namespace so that we can use that cluster to place a Gateway on:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta2\nkind: ManagedClusterSetBinding\nmetadata:\n  name: gateway-clusters\n  namespace: multi-cluster-gateways\nspec:\n  clusterSet: gateway-clusters\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#create-a-placement-for-our-gateway","title":"Create a placement for our Gateway","text":"

In order to place our Gateway onto the hub cluster, we need to set up a placement resource. Again, in T1, run:

kubectl apply -f - <<EOF\napiVersion: cluster.open-cluster-management.io/v1beta1\nkind: Placement\nmetadata:\n  name: http-gateway\n  namespace: multi-cluster-gateways\nspec:\n  numberOfClusters: 1\n  clusterSets:\n    - gateway-clusters\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#create-the-gatewayclass","title":"Create the GatewayClass","text":"

Lastly, we will set up our multi-cluster GatewayClass. In T1, run:

kubectl create -f hack/ocm/gatewayclass.yaml\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#start-the-gateway-controller","title":"Start the Gateway Controller","text":"

In T2 run the following to start the Gateway Controller:

kind export kubeconfig --name=mgc-control-plane --kubeconfig=$(pwd)/local/kube/control-plane.yaml && export KUBECONFIG=$(pwd)/local/kube/control-plane.yaml\nmake build-controller install run-controller\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#create-a-gateway","title":"Create a Gateway","text":"

We will now create a multi-cluster Gateway definition in the hub cluster. In T1, run the following:

Important: Make sure to replace sub.replace.this with a subdomain of your root domain.

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\n  listeners:\n  - allowedRoutes:\n      namespaces:\n        from: All\n    name: api\n    hostname: sub.replace.this\n    port: 443\n    protocol: HTTPS\n    tls:\n      mode: Terminate\n      certificateRefs:\n        - name: apps-hcpapps-tls\n          kind: Secret\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#enable-tls","title":"Enable TLS","text":"
  1. In T1, create a TLSPolicy and attach it to your Gateway:

    kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: TLSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  issuerRef:\n    group: cert-manager.io\n    kind: ClusterIssuer\n    name: glbc-ca   \nEOF\n
  2. You should now see a Certificate resource in the hub cluster. In T1, run:

    kubectl get certificates -A\n
    you'll see the following:

NAMESPACE                NAME               READY   SECRET             AGE\nmulti-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m\n

It is also possible to use a Let's Encrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#place-the-gateway","title":"Place the gateway","text":"

To place the Gateway, we need to add a placement label to the Gateway resource to instruct the Gateway controller where we want this Gateway instantiated. In T1, run:

kubectl label gateways.gateway.networking.k8s.io prod-web \"cluster.open-cluster-management.io/placement\"=\"http-gateway\" -n multi-cluster-gateways\n

Now on the hub cluster you should find there is a configured Gateway and an instantiated Gateway. In T1, run:

kubectl get gateways.gateway.networking.k8s.io -A\n
kuadrant-multi-cluster-gateways   prod-web   istio                                         172.31.200.0                29s\nmulti-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#deploy-the-app-to-the-2-workload-clusters","title":"Deploy the App to the 2 workload clusters","text":"

We do this before the HttpRoute is created for the purposes of the walkthrough. If we don't do it in this order, there may be negative DNS caching of the ServiceImport clusterset host, resulting in 503 responses. In T3, targeting the 1st workload cluster, go ahead and create the Service and Deployment.

kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/local/kube/workload1.yaml && export KUBECONFIG=$(pwd)/local/kube/workload1.yaml\nkubectl create namespace kuadrant-multi-cluster-gateways\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP   \nEOF\n

In T4, targeting the 2nd workload cluster, go ahead and create the Service and Deployment there too.

kind export kubeconfig --name=mgc-workload-2 --kubeconfig=$(pwd)/local/kube/workload2.yaml && export KUBECONFIG=$(pwd)/local/kube/workload2.yaml\nkubectl create namespace kuadrant-multi-cluster-gateways\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP   \nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#create-the-serviceexports-and-serviceimports","title":"Create the ServiceExports and ServiceImports","text":"

In T1, export the Apps echo service from the workload clusters.

./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig --namespace kuadrant-multi-cluster-gateways echo\n./bin/subctl export service --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-2.kubeconfig --namespace kuadrant-multi-cluster-gateways echo\n

In T1, verify the ServiceExport was created on both clusters

kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-1.kubeconfig get serviceexport echo -n kuadrant-multi-cluster-gateways\nkubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-workload-2.kubeconfig get serviceexport echo -n kuadrant-multi-cluster-gateways\n

In T1, verify the ServiceImport was created in the hub

kubectl --kubeconfig ./tmp/kubeconfigs/external/mgc-control-plane.kubeconfig get serviceimport echo -n kuadrant-multi-cluster-gateways\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#create-and-attach-a-httproute-and-service","title":"Create and attach a HTTPRoute and Service","text":"

Let's create a HTTPRoute and a Service (that uses an externalName) in the hub cluster. Remember to replace the hostnames. In T1, run:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: my-route\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  parentRefs:\n  - kind: Gateway\n    name: prod-web\n    namespace: kuadrant-multi-cluster-gateways\n  hostnames:\n  - \"sub.replace.this\"  \n  rules:\n  - backendRefs:\n    - name: echo-import-proxy\n      port: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo-import-proxy\n  namespace: kuadrant-multi-cluster-gateways\nspec:\n  type: ExternalName\n  externalName: echo.kuadrant-multi-cluster-gateways.svc.clusterset.local\n  ports:\n  - port: 8080\n    targetPort: 8080\n    protocol: TCP\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#enable-dns","title":"Enable DNS","text":"
  1. In T1, create a DNSPolicy and attach it to your Gateway:
kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway     \nEOF\n

Once this is done, the Kuadrant multi-cluster Gateway controller will pick up that a HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that Gateway for the host defined in that listener.

You should now see a DNSRecord with only 1 endpoint added, which corresponds to the address assigned to the Gateway where the HTTPRoute was created. In T1, run:

kubectl get dnsrecord -n multi-cluster-gateways -o=yaml\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#verify-the-httproute-works","title":"Verify the HttpRoute works","text":"

At this point you should get a 200 response. It might take a minute for DNS to propagate internally via Submariner after importing the services above.

curl -Ik https://sub.replace.this\n

If DNS is not resolving for you yet, you may get a 503. In that case you can force resolve the IP to the hub cluster and verify a 200 is returned.

curl -Ik --resolve sub.replace.this:443:172.31.200.0 https://sub.replace.this\n
"},{"location":"multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/#known-issues","title":"Known issues","text":"

At the time of writing, Istio does not support adding a ServiceImport as a backendRef directly as per the Gateway API proposal - GEP-1748. This is why the walkthrough uses a Service of type ExternalName to route to the clusterset host instead. There is an issue questioning the current state of support.

The installation of the subctl CLI fails on Macs with ARM architecture. The error is curl: (22) The requested URL returned error: 404. A workaround for this is to download the amd64 darwin release manually from the releases page and extract it to the ./bin directory.

"},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/","title":"Defining and Distributing Multicluster Gateways with OCM","text":""},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/#define-and-place-gateways","title":"Define and Place Gateways","text":"

In this guide, we will go through defining a Gateway in the OCM hub cluster that can then be distributed to and instantiated on a set of managed spoke clusters.

"},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/#pre-requisites","title":"Pre Requisites","text":"
  • Go through the getting started guide.

You should start this guide with OCM installed, 1 or more spoke clusters registered with the hub and Kuadrant installed into the hub.

Going through the installation will also ensure that a supported GatewayClass is registered in the hub cluster that the Kuadrant multi-cluster gateway controller will handle.

"},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/#defining-a-gateway","title":"Defining a Gateway","text":"

Once you have Kuadrant installed into the OCM hub cluster, you can begin defining and placing Gateways across your OCM managed infrastructure.

To define a Gateway and have it managed by the multi-cluster gateway controller, we need to do the following things:

  • Create a Gateway API Gateway resource in the Hub cluster
  • Ensure that gateway resource specifies the correct gateway class so that it will be picked up and managed by the multi-cluster gateway controller

So really there is very little difference from setting up a gateway in a non-OCM cluster. The key difference here is that this gateway definition represents a \"template\" gateway that will then be distributed and provisioned on chosen spoke clusters. The actual provider for this Gateway instance defaults to Istio. This is because Kuadrant also offers APIs that integrate at the gateway provider level, and the gateway provider we currently support is Istio.

The Gateway API CRDs will have been installed into your hub as part of the installation of Kuadrant into the hub. Below is an example gateway. More Examples. Assuming you have the correct RBAC permissions and a namespace, the key thing is to define the correct GatewayClass name to use and a listener host.

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster # this needs to be set in your gateway definition\n  listeners:\n  - allowedRoutes:\n      namespaces:\n        from: All\n    name: specific\n    hostname: 'some.domain.example.com'\n    port: 443\n    protocol: HTTP\n
"},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/#placing-a-gateway","title":"Placing a Gateway","text":"

To place a gateway, we will need to create a Placement resource.

Below is an example placement resource. To learn more about placement check out the OCM docs placement

 apiVersion: cluster.open-cluster-management.io/v1beta1\n  kind: Placement\n  metadata:\n    name: http-gateway-placement\n    namespace: multi-cluster-gateways\n  spec:\n    clusterSets:\n    - gateway-clusters # defines which ManagedClusterSet to use. https://open-cluster-management.io/concepts/managedclusterset/ \n    numberOfClusters: 2 #defines how many clusters to select from the chosen clusterSets\n

Finally, in order to actually have the Gateway instances deployed to your spoke clusters so that they can start receiving traffic, you need to label the hub gateway with a placement label. In the above example, we would add the following label to the gateway:

cluster.open-cluster-management.io/placement: http-gateway #this value should match the name of your placement.\n
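
For example, with kubectl, using the placement name from the example above:

kubectl label gateways.gateway.networking.k8s.io prod-web \"cluster.open-cluster-management.io/placement\"=\"http-gateway-placement\" -n multi-cluster-gateways\n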
"},{"location":"multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/#what-if-you-want-to-use-a-different-gateway-provider","title":"What if you want to use a different gateway provider?","text":"

While we recommend using Istio as the gateway provider, as that is how you will get access to the full suite of policy APIs, it is possible to use another provider if you choose to; however, this will result in a reduced set of applicable policy objects.

If you are only using the DNSPolicy and TLSPolicy resources, you can use these APIs with any Gateway provider. To change the underlying provider, you need to set the gatewayclass param downstreamClass. To do this, create the following ConfigMap:

apiVersion: v1\ndata:\n  params: |\n    {\n      \"downstreamClass\": \"eg\" #this is the class for envoy gateway used as an example\n    }\nkind: ConfigMap\nmetadata:\n  name: gateway-params\n  namespace: multi-cluster-gateways\n

Once this has been created, any gateway created from that gateway class will result in a downstream gateway being provisioned with the configured downstreamClass.
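
The gateway class picks up these params via its parametersRef. A sketch of what that reference looks like (the controllerName value here is illustrative; see hack/ocm/gatewayclass.yaml for the actual definition):

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: GatewayClass\nmetadata:\n  name: kuadrant-multi-cluster-gateway-instance-per-cluster\nspec:\n  controllerName: kuadrant.io/mgc-gw-controller # illustrative value\n  parametersRef:\n    group: \"\"\n    kind: ConfigMap\n    name: gateway-params\n    namespace: multi-cluster-gateways\n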

"},{"location":"multicluster-gateway-controller/docs/gateways/gateway-deletion/","title":"Gateway Deletion","text":""},{"location":"multicluster-gateway-controller/docs/gateways/gateway-deletion/#gateway-deletion","title":"Gateway deletion","text":"

When deleting a gateway, it should ONLY be deleted in the control plane cluster. This will then trigger the following events:

"},{"location":"multicluster-gateway-controller/docs/gateways/gateway-deletion/#workload-clusters","title":"Workload cluster(s):","text":"
  1. The corresponding gateway in the workload clusters will also be deleted.
"},{"location":"multicluster-gateway-controller/docs/gateways/gateway-deletion/#control-plane-clusters","title":"Control plane cluster(s):","text":"
  1. DNS Record deletion:

Gateways and DNS records have a strictly 1:1 relationship: when a gateway gets deleted, the corresponding DNS record also gets marked for deletion. This then triggers the DNS record to be removed from the managed zone in the DNS provider (currently only Route 53 is supported). 2. Certs and secrets deletion:

When a gateway is created, a cert is also created for the host in the gateway; this is also removed when the gateway is deleted.
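
For example, to delete the hub Gateway used in the walkthroughs above, run the following against the control plane cluster only:

kubectl delete gateway prod-web -n multi-cluster-gateways\n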

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/","title":"Metrics Federation (WIP)","text":""},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#introduction","title":"Introduction","text":"

This walkthrough shows how to install a metrics federation stack locally and query Istio metrics from the hub.

Note: this walkthrough is incomplete. It will be updated as issues from https://github.com/Kuadrant/multicluster-gateway-controller/issues/197 land

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#requirements","title":"Requirements","text":"
  • Local development environment has been set up as per the main README i.e. local env files have been created with AWS credentials & a zone

Note: this walkthrough will setup a zone in your AWS account and make changes to it for DNS purposes

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#installation-and-setup","title":"Installation and Setup","text":"

To set up a local instance with metrics federation, run:

make local-setup OCM_SINGLE=true METRICS_FEDERATION=true MGC_WORKLOAD_CLUSTERS_COUNT=1\n

Once complete, you should see something like the below in the output (you may need to scroll)

    Connect to Thanos Query UI\n\n        URL : https://thanos-query.172.31.0.2.nip.io\n

Open the URL in a browser, accepting the non-CA-signed certificate. In the Thanos UI query box, enter the below query and press 'Execute':

sum(rate(container_cpu_usage_seconds_total{namespace=\"monitoring\",container=\"prometheus\"}[5m]))\n

You should see a response in the table view. In the Graph view you should see some data over time as well.

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#istio-metrics","title":"Istio Metrics","text":""},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#thanos-query-ui","title":"Thanos Query UI","text":"

To query Istio workload metrics, you should first deploy a Gateway & HttpRoute, and send traffic to it. The easiest way to do this is by following the steps in the OCM Walkthrough. Before going through the walkthrough, there are two things to note: Firstly, you do not need to re-run the make local-setup step, as that should have already been run with the METRICS_FEDERATION flag above. Secondly, you should set METRICS=true when it comes to the step to start and deploy the gateway controller, i.e:

make build-controller kind-load-controller deploy-controller METRICS=true\n

After completing the OCM walkthrough, use curl to send some traffic to the application

while true; do curl -k https://$MGC_SUB_DOMAIN && sleep 5; done\n

Open the Thanos Query UI again and try the below query:

sum(rate(istio_requests_total{}[5m])) by(destination_workload)\n

In the graph view you should see something that looks like the graph below. This shows the rate of requests (per second) for each Istio workload. In this case, there is 1 workload, balanced across 2 clusters.

To see the rate of requests per cluster (actually per pod across all clusters), use the query below. Over long periods of time, this graph can show traffic load balancing between application instances.

sum(rate(istio_requests_total{}[5m])) by(pod)\n

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-federation/#grafana-ui","title":"Grafana UI","text":"

In the output from local-setup, you should see something like the below (you may need to scroll)

    Connect to Grafana Query UI\n\n        URL : https://grafana.172.31.0.2.nip.io\n

Open Grafana in a browser, accepting the non-CA-signed certificate. The default login is admin/admin.

Using the left sidebar in the Grafana UI, navigate to Dashboards > Browse and click on the Istio Workload Dashboard.

You should be able to see the following layout, which will include data from the curl command you ran in the previous section.

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/","title":"Metrics Walkthrough","text":""},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/#installation-and-configuration-of-metrics","title":"Installation and Configuration of Metrics","text":"

This document will guide you in installing metrics for your application and provide directions on where to access them. Additionally, it will include dashboards set up to display these metrics.

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/#requirementsprerequisites","title":"Requirements/prerequisites","text":"

Before starting the metrics installation, make sure you have completed the initial getting started guide, available at the following link: Getting Started Guide.

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/#setting-up-metrics","title":"Setting Up Metrics","text":"

To set up metrics, run the following script in your terminal:

    curl https://raw.githubusercontent.com/kuadrant/multicluster-gateway-controller/main/hack/quickstart-metrics.sh | bash\n

This script will initiate the setup process for your metrics configuration. After the script finishes running, you should see something like:

Connect to Thanos Query UI\n    URL: https://thanos-query.172.31.0.2.nip.io\n\nConnect to Grafana UI\n    URL: https://grafana.172.31.0.2.nip.io\n

You can visit the Grafana dashboard by accessing the provided URL for Grafana UI. (you may need to scroll)

"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/#monitoring-operational-status-in-grafana-dashboard","title":"Monitoring Operational Status in Grafana Dashboard","text":"

After setting up metrics, you can monitor the operational status of your system using the Grafana dashboard.

To generate traffic to the application, use curl as follows:

while true; do curl -k https://$MGC_SUB_DOMAIN && sleep 5; done\n
"},{"location":"multicluster-gateway-controller/docs/how-to/metrics-walkthrough/#accessing-the-grafana-dashboard","title":"Accessing the Grafana Dashboard","text":"

To view the operational metrics and status, proceed with the following steps:

  1. Access the Grafana dashboard by clicking or entering the provided URL for the Grafana UI in your web browser.
https://grafana.172.31.0.2.nip.io\n

Note: The default login credentials for Grafana are admin/admin. You may need to accept the non-CA signed certificate to proceed.

  2. Navigate to the included Grafana Dashboard

Using the left sidebar in the Grafana UI, navigate to Dashboards > Browse and select either the Istio Workload Dashboard or MGC SRE Dashboard.

In Istio Workload Dashboard you should be able to see the following layout, which will include data from the curl command you ran in the previous section.

The MGC SRE Dashboard displays real-time insights and visualizations of the resources managed by the multicluster-gateway-controller, e.g. DNSPolicy, TLSPolicy, DNSRecord, etc.
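
If you'd like to cross-check the dashboard against the underlying resources, you can list them directly from the hub cluster, for example:

kubectl get dnspolicy,tlspolicy,dnsrecord -A\n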

The Grafana dashboard will provide you with real-time insights and visualizations of your gateway's performance and metrics.

By utilizing the Grafana dashboard, you can effectively monitor the health and behavior of your system, making informed decisions based on the displayed data. This monitoring capability enables you to proactively identify and address any potential issues to ensure the smooth operation of your environment.

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/","title":"Multicluster Gateways Walkthrough","text":""},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#introduction","title":"Introduction","text":"

This document will walk you through using Open Cluster Management (OCM) and Kuadrant to configure and deploy a multi-cluster gateway.

You will also deploy a simple application that uses that gateway for ingress and protects that application's endpoints with a rate limit policy.

We will start with a single cluster and move to multiple clusters to illustrate how a single gateway definition can be used across multiple clusters, and to highlight the automatic TLS integration and the automatic DNS load balancing between gateway instances.

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#requirements","title":"Requirements","text":"
  • Complete the Getting Started Guide
"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#open-terminal-sessions-and-set-cluster-context","title":"Open terminal sessions and set cluster context","text":"

For this walkthrough, we're going to use multiple terminal sessions/windows.

Open two windows, which we'll refer to throughout this walkthrough as:

  • T1 (Hub Cluster)
  • T2 (Workloads cluster)

Set the kubecontext for each terminal; refer back to these commands if re-configuration is needed.

In T1 run kind export kubeconfig --name=mgc-control-plane --kubeconfig=$(pwd)/control-plane.yaml && export KUBECONFIG=$(pwd)/control-plane.yaml

In T2 run kind export kubeconfig --name=mgc-workload-1 --kubeconfig=$(pwd)/workload1.yaml && export KUBECONFIG=$(pwd)/workload1.yaml

export MGC_SUB_DOMAIN in each terminal if you haven't already added it to your .zshrc or .bash_profile.
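
For example, assuming a zone root domain of test.hcpapps.net (as in the ManagedZone shown later), a hypothetical value would be:

export MGC_SUB_DOMAIN=myapp.test.hcpapps.net\n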

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#create-a-gateway","title":"Create a gateway","text":""},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#check-the-managed-zone","title":"Check the managed zone","text":"
  1. First, let's ensure the ManagedZone is present. In T1, run the following:

    kubectl get managedzone -n multi-cluster-gateways\n
    You should see the following:
    NAME          DOMAIN NAME      ID                                  RECORD COUNT   NAMESERVERS                                                                                        READY\nmgc-dev-mz   test.hcpapps.net   /hostedzone/Z08224701SVEG4XHW89W0   7              [\"ns-1414.awsdns-48.org\",\"ns-1623.awsdns-10.co.uk\",\"ns-684.awsdns-21.net\",\"ns-80.awsdns-10.com\"]   True\n

You are now ready to begin creating a gateway!

  2. We will now create a multi-cluster gateway definition in the hub cluster. In T1, run the following:
kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\n  listeners:\n  - allowedRoutes:\n      namespaces:\n        from: All\n    name: api\n    hostname: $MGC_SUB_DOMAIN\n    port: 443\n    protocol: HTTPS\n    tls:\n      mode: Terminate\n      certificateRefs:\n        - name: apps-hcpapps-tls\n          kind: Secret\nEOF\n
"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#enable-tls","title":"Enable TLS","text":"
  1. In T1, create a TLSPolicy and attach it to your Gateway:

    kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: TLSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway\n  issuerRef:\n    group: cert-manager.io\n    kind: ClusterIssuer\n    name: glbc-ca   \nEOF\n
  2. You should now see a Certificate resource in the hub cluster. In T1, run:

    kubectl get certificates -A\n
    you'll see the following:

NAMESPACE                NAME               READY   SECRET             AGE\nmulti-cluster-gateways   apps-hcpapps-tls   True    apps-hcpapps-tls   12m\n

It is also possible to use a Let's Encrypt certificate, but for simplicity in this walkthrough we are using a self-signed cert.

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#place-the-gateway","title":"Place the gateway","text":"

In the hub cluster there will be a single gateway definition but no actual gateway for handling traffic yet.

This is because we haven't placed the gateway onto any of our ingress clusters yet (in this case, the hub and ingress cluster are the same).

  1. To place the gateway, we need to add a placement label to the gateway resource to instruct the gateway controller where we want this gateway instantiated. In T1, run:

    kubectl label gateway prod-web \"cluster.open-cluster-management.io/placement\"=\"http-gateway\" -n multi-cluster-gateways\n
  2. Now on the hub cluster, you should find there is both a configured gateway and an instantiated gateway. In T1, run:

    kubectl get gateway -A\n
    you'll see the following:

    kuadrant-multi-cluster-gateways   prod-web   istio                                         172.31.200.0                29s\nmulti-cluster-gateways            prod-web   kuadrant-multi-cluster-gateway-instance-per-cluster                  True         2m42s\n

    The instantiated gateway in this case is handled by Istio and has been assigned the 172.x address. The gateway definition you created lives in the multi-cluster-gateways namespace. As we are in a single cluster, you can see both. Later on we will add another ingress cluster, and in that case you will only see the instantiated gateway.

    Additionally, you should be able to see a secret containing a self-signed certificate.

  3. In T1, run:

    kubectl get secrets -n kuadrant-multi-cluster-gateways\n
    you'll see the following:
    NAME               TYPE                DATA   AGE\napps-hcpapps-tls   kubernetes.io/tls   3      13m\n

The listener is also configured to use this TLS secret. So now our gateway has been placed and is running in the right locations with the right configuration, and TLS has been set up for the HTTPS listeners.

So what about DNS? How do we bring traffic to these gateways?

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#create-and-attach-a-httproute","title":"Create and attach a HTTPRoute","text":"
  1. In T1, run the following command in the hub cluster; you will see we currently have no DNSRecord resources:

    kubectl get dnsrecord -A\n
    No resources found\n

  2. Let's create a simple echo app with an HTTPRoute in one of the gateway clusters. Remember to replace the hostnames. Again, we are creating this in the single hub cluster for now. In T1, run:

    kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n  - kind: Gateway\n    name: prod-web\n    namespace: kuadrant-multi-cluster-gateways\n  hostnames:\n  - \"$MGC_SUB_DOMAIN\"  \n  rules:\n  - backendRefs:\n    - name: echo\n      port: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo     \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP       \nEOF\n
"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#enable-dns","title":"Enable DNS","text":"
  1. In T1, create a DNSPolicy and attach it to your Gateway:

    kubectl apply -f - <<EOF\napiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\n  name: prod-web\n  namespace: multi-cluster-gateways\nspec:\n  targetRef:\n    name: prod-web\n    group: gateway.networking.k8s.io\n    kind: Gateway     \nEOF\n

Once this is done, the Kuadrant multi-cluster gateway controller will pick up that an HTTPRoute has been attached to the Gateway it is managing from the hub, and it will set up a DNS record to start bringing traffic to that gateway for the host defined in that listener.

  2. You should now see a DNSRecord resource in the hub cluster. In T1, run:

    kubectl get dnsrecord -A\n
    NAMESPACE                NAME                 READY\nmulti-cluster-gateways   prod-web-api         True\n

  3. You should also be able to see there is only 1 endpoint added, which corresponds to the address assigned to the gateway where the HTTPRoute was created. In T1, run:

    kubectl get dnsrecord -n multi-cluster-gateways -o=yaml\n
  4. Give DNS a minute or two to update. You should then be able to execute the following and get back the correct A record:

    dig $MGC_SUB_DOMAIN\n
  5. You should also be able to curl that endpoint:

    curl -k https://$MGC_SUB_DOMAIN\n# Request served by echo-XXX-XXX\n
"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#introducing-the-second-cluster","title":"Introducing the second cluster","text":"

So now we have a working gateway with DNS and TLS configured. Let's place this gateway on a second cluster and bring traffic to that gateway also.

  1. First add the second cluster to the clusterset, by running the following in T1:

    kubectl label managedcluster kind-mgc-workload-1 ingress-cluster=true\n
  2. This adds our workload-1 cluster to the ingress clusterset. Next, we need to modify our placement to update numberOfClusters to 2. To patch the placement, in T1, run:

    kubectl patch placement http-gateway -n multi-cluster-gateways --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/numberOfClusters\", \"value\": 2}]'\n
  3. In the T2 window, execute the following to see the gateway on the workload-1 cluster:

    kubectl get gateways -A\n
    You'll see the following
    NAMESPACE                         NAME       CLASS   ADDRESS        PROGRAMMED   AGE\nkuadrant-multi-cluster-gateways   prod-web   istio   172.31.201.0                90s\n

    So now we have a second ingress cluster configured with the same Gateway.

  4. In T2, targeting the second cluster, go ahead and create the HTTPRoute in the second gateway cluster.

    Note: Ensure the MGC_SUB_DOMAIN environment variable has been exported in this terminal session before applying this config.

    kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n  - kind: Gateway\n    name: prod-web\n    namespace: kuadrant-multi-cluster-gateways\n  hostnames:\n  - \"$MGC_SUB_DOMAIN\"  \n  rules:\n  - backendRefs:\n    - name: echo\n      port: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\nspec:\n  ports:\n    - name: http-port\n      port: 8080\n      targetPort: http-port\n      protocol: TCP\n  selector:\n    app: echo     \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: echo\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n        - name: echo\n          image: docker.io/jmalloc/echo-server\n          ports:\n            - name: http-port\n              containerPort: 8080\n              protocol: TCP       \nEOF\n
  5. Now if you move back to the hub context in T1 and take a look at the dnsrecord, you will see we now have two A records configured:

kubectl get dnsrecord -n multi-cluster-gateways -o=yaml\n
"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#watching-dns-changes","title":"Watching DNS changes","text":"

If you want, you can use watch dig $MGC_SUB_DOMAIN to see the DNS switching between the two addresses.
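
For example, leave the following running in a separate terminal:

watch dig $MGC_SUB_DOMAIN\n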

"},{"location":"multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/#follow-on-walkthroughs","title":"Follow on Walkthroughs","text":"

Some good follow-on walkthroughs that build on this walkthrough:

  • Deploying/Configuring Metrics.
"},{"location":"multicluster-gateway-controller/docs/how-to/template/","title":"Title","text":""},{"location":"multicluster-gateway-controller/docs/how-to/template/#introduction","title":"Introduction","text":"

blah blah amazing and wonderful feature blah blah gateway blah blah DNS

"},{"location":"multicluster-gateway-controller/docs/how-to/template/#requirements","title":"Requirements","text":"
  • A computer
  • Electricity
  • Kind
  • AWS Account
  • Route 53 enabled
  • Other Walkthroughs

Installation and Setup

  1. Clone this repo locally
  2. Set up a ./controller-config.env file in the root of the repo with the following key values:

# this sets up your default managed zone\nAWS_DNS_PUBLIC_ZONE_ID=<AWS ZONE ID>\n# this is the domain at the root of your zone (foo.example.com)\nZONE_ROOT_DOMAIN=<replace.this>\nLOG_LEVEL=1\n
  3. Set up a ./aws-credentials.env file with credentials to access Route 53

    For example:

    AWS_ACCESS_KEY_ID=<access_key_id>\nAWS_SECRET_ACCESS_KEY=<secret_access_key>\nAWS_REGION=eu-west-1\n
"},{"location":"multicluster-gateway-controller/docs/how-to/template/#open-terminal-sessions","title":"Open terminal sessions","text":"

For this walkthrough, we're going to use multiple terminal sessions/windows, all using multicluster-gateway-controller as the pwd.

Open three windows, which we'll refer to throughout this walkthrough as:

  • T1 (Hub Cluster)
  • T2 (Where we'll run our controller locally)
  • T3 (Workloads cluster)

To set up a local instance, in T1, run:

"},{"location":"multicluster-gateway-controller/docs/how-to/template/#known-bugs","title":"Known bugs","text":"

buzzzzz

"},{"location":"multicluster-gateway-controller/docs/how-to/template/#follow-on-walkthroughs","title":"Follow on Walkthroughs","text":"

Some good follow-on walkthroughs that build on this walkthrough:

"},{"location":"multicluster-gateway-controller/docs/how-to/template/#helpful-symbols-for-dev-use","title":"Helpful symbols (For dev use)","text":"

For more, see https://gist.github.com/rxaviers/7360908

"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/","title":"Setting up MGC in Existing OCM Clusters","text":"

This guide will show you how to install and configure the Multi-Cluster Gateway Controller in preexisting Open Cluster Management-configured clusters.

"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#prerequisites","title":"Prerequisites","text":"
  • A hub cluster running the OCM control plane (v0.11.0 or greater)
  • Any number of additional spoke clusters that have been configured as OCM ManagedClusters
  • Kubectl (>= v1.14.0)
  • Either a preexisting cert-manager installation or the Kustomize and Helm CLIs
"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#configure-ocm-with-rawfeedbackjsonstring-feature-gate","title":"Configure OCM with RawFeedbackJsonString Feature Gate","text":"

All OCM spoke clusters must be configured with the RawFeedbackJsonString feature gate enabled. This can be done in two ways:

  1. When running the clusteradm join command that joins the spoke cluster to the hub:
clusteradm join <snip> --feature-gates=RawFeedbackJsonString=true\n
  2. By patching each spoke cluster's klusterlet in an existing OCM install:
kubectl patch klusterlet klusterlet --type merge --patch '{\"spec\": {\"workConfiguration\": {\"featureGates\": [{\"feature\": \"RawFeedbackJsonString\", \"mode\": \"Enable\"}]}}}' --context <EACH_SPOKE_CLUSTER>\n
"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#installing-mgc","title":"Installing MGC","text":"

First, run the following command in the context of your hub cluster to install the Gateway API CRDs:

kubectl apply -k \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.2\"\n

We can then add a wait to verify the CRDs have been established:

kubectl wait --timeout=5m crd/gatewayclasses.gateway.networking.k8s.io crd/gateways.gateway.networking.k8s.io crd/httproutes.gateway.networking.k8s.io --for=condition=Established\n
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io condition met\ncustomresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io condition met\ncustomresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io condition met\n

Then run the following command to install the MGC:

kubectl apply -k \"github.com/kuadrant/multicluster-gateway-controller.git/config/mgc-install-guide?ref=main\"\n

In addition to the MGC, this will also install the Kuadrant add-on manager and a GatewayClass from which MGC-managed Gateways can be instantiated.

After the configuration has been applied, you can verify that the MGC and add-on manager have been installed and are running:

kubectl wait --timeout=5m -n multicluster-gateway-controller-system deployment/mgc-controller-manager deployment/mgc-kuadrant-add-on-manager --for=condition=Available\n
deployment.apps/mgc-controller-manager condition met\ndeployment.apps/mgc-kuadrant-add-on-manager condition met\n

We can also verify that the GatewayClass has been accepted by the MGC:

kubectl wait --timeout=5m gatewayclass/kuadrant-multi-cluster-gateway-instance-per-cluster --for=condition=Accepted\n
gatewayclass.gateway.networking.k8s.io/kuadrant-multi-cluster-gateway-instance-per-cluster condition met\n

"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#creating-a-managedzone","title":"Creating a ManagedZone","text":"

To manage the creation of DNS records, MGC uses ManagedZone resources. A ManagedZone can be configured to use DNS zones on either AWS (Route53) or GCP. We will now create a ManagedZone on the cluster using AWS credentials.

First, export the environment variables detailed here in a terminal session.
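
For example (placeholder values; these are the variables referenced by the manifests below):

export MGC_AWS_ACCESS_KEY_ID=<access_key_id>\nexport MGC_AWS_SECRET_ACCESS_KEY=<secret_access_key>\nexport MGC_AWS_REGION=eu-west-1\nexport MGC_AWS_DNS_PUBLIC_ZONE_ID=<hosted_zone_id>\nexport MGC_ZONE_ROOT_DOMAIN=<zone_root_domain>\n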

Next, create a secret containing the AWS credentials. We'll also create a namespace for your MGC configs:

cat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: multi-cluster-gateways\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mgc-aws-credentials\n  namespace: multi-cluster-gateways\ntype: \"kuadrant.io/aws\"\nstringData:\n  AWS_ACCESS_KEY_ID: ${MGC_AWS_ACCESS_KEY_ID}\n  AWS_SECRET_ACCESS_KEY: ${MGC_AWS_SECRET_ACCESS_KEY}\n  AWS_REGION: ${MGC_AWS_REGION}\nEOF\n

A ManagedZone can then be created:

cat <<EOF | kubectl apply -f -\napiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: mgc-dev-mz\n  namespace: multi-cluster-gateways\nspec:\n  id: ${MGC_AWS_DNS_PUBLIC_ZONE_ID}\n  domainName: ${MGC_ZONE_ROOT_DOMAIN}\n  description: \"Dev Managed Zone\"\n  dnsProviderSecretRef:\n    name: mgc-aws-credentials\n    namespace: multi-cluster-gateways\nEOF\n

You can now verify that the ManagedZone has been created and is in a ready state:

kubectl get managedzone -n multi-cluster-gateways\n
NAME         DOMAIN NAME      ID                                  RECORD COUNT   NAMESERVERS                                                                                         READY\nmgc-dev-mz   ef.hcpapps.net   /hostedzone/Z06419551EM30QQYMZN7F   2              [\"ns-1547.awsdns-01.co.uk\",\"ns-533.awsdns-02.net\",\"ns-200.awsdns-25.com\",\"ns-1369.awsdns-43.org\"]   True\n

"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#creating-a-cert-issuer","title":"Creating a Cert Issuer","text":"

To create a cert issuer, cert-manager first needs to be installed on your hub cluster. If it has not previously been installed on the cluster, you can run the command below to do so:

kustomize --load-restrictor LoadRestrictionsNone build \"github.com/kuadrant/multicluster-gateway-controller.git/config/mgc-install-guide/cert-manager?ref=main\" --enable-helm | kubectl apply -f -\n

We will now create a ClusterIssuer to be used with cert-manager. For simplicity, we will create a self-signed cert issuer here, but other issuers can also be configured.

cat <<EOF | kubectl apply -f -\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: mgc-ca\n  namespace: cert-manager\nspec:\n  selfSigned: {}\nEOF\n

Verify that the ClusterIssuer is ready:

kubectl wait --timeout=5m -n cert-manager clusterissuer/mgc-ca --for=condition=Ready\n
clusterissuer.cert-manager.io/mgc-ca condition met\n

"},{"location":"multicluster-gateway-controller/docs/installation/control-plane-installation/#next-steps","title":"Next Steps","text":"

Now that you have MGC installed and configured in your hub cluster, you can continue with any of these follow-on guides:

  • Installing the Kuadrant Service Protection components
"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/","title":"Installing Kuadrant Service Protection into an existing OCM Managed Cluster","text":""},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#introduction","title":"Introduction","text":"

This walkthrough will show you how to install and set up the Kuadrant Operator into an OCM Managed Cluster.

"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#prerequisites","title":"Prerequisites","text":"
  • Access to an Open Cluster Management (>= v0.11.0) Managed Cluster, which has already been bootstrapped and registered with a hub cluster
  • We have a guide which covers this in detail
  • Also see:
    • https://open-cluster-management.io/getting-started/quick-start/
    • https://open-cluster-management.io/concepts/managedcluster/
  • OLM will need to be installed into the ManagedCluster where you want to run the Kuadrant Service Protection components
  • See https://olm.operatorframework.io/docs/getting-started/
  • Kuadrant uses Istio as a Gateway API provider - this will need to be installed into the data plane clusters
  • We recommend installing Istio 1.17.0, including Gateway API v0.6.2
  • kubectl apply -k \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.2\"
  • See also: https://istio.io/v1.17/blog/2022/getting-started-gtwapi/

Alternatively, if you'd like to quickly get started locally, without having to worry too much about the prerequisites, take a look at our Quickstart Guide. It will get you set up with Kind, OLM, OCM & Kuadrant in a few short steps.

"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#install-the-kuadrant-ocm-add-on","title":"Install the Kuadrant OCM Add-On","text":"

Note: if you've run our Getting Started Guide, you'll be set to run this command as-is.

To install the Kuadrant Service Protection components into a ManagedCluster, target your OCM hub cluster with kubectl and run:

kubectl apply -k \"github.com/kuadrant/multicluster-gateway-controller.git/config/service-protection-install-guide?ref=main\" -n <your-managed-cluster>

The above command will install the ManagedClusterAddOn resource needed to install the Kuadrant addon into the specified namespace, and install the Kuadrant data-plane components into the open-cluster-management-agent-addon namespace.

The Kuadrant addon will install:

  • the Kuadrant Operator
  • Limitador (and its associated operator)
  • Authorino (and its associated operator)

For more details, see the Kuadrant components installed by the [kuadrant-operator](https://github.com/Kuadrant/kuadrant-operator#kuadrant-components).

"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#existing-istio-installations-and-changing-the-default-istio-operator-name","title":"Existing Istio installations and changing the default Istio Operator name","text":"

In the case where you have an existing Istio installation on a cluster, you may encounter an issue where the Kuadrant Operator expects Istio's Operator to be named istiocontrolplane.

The istioctl command saves the IstioOperator CR that was used to install Istio in a copy of the CR named installed-state.

To let the Kuadrant operator use this existing installation, set the following:

kubectl annotate managedclusteraddon kuadrant-addon \"addon.open-cluster-management.io/values\"='{\"IstioOperator\":\"installed-state\"}' -n <managed spoke cluster>

This will propagate down and update the Kuadrant Operator used by the Kuadrant OCM Addon.

"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#verify-the-kuadrant-addon-installation","title":"Verify the Kuadrant addon installation","text":"

To verify the Kuadrant OCM addon has installed correctly, run:

kubectl wait --timeout=5m -n kuadrant-system kuadrant/kuadrant-sample --for=condition=Ready\n

You should see the namespace kuadrant-system, and the following pods come up:

  • authorino-value
  • authorino-operator-value
  • kuadrant-operator-controller-manager-value
  • limitador-value
  • limitador-operator-controller-manager-value
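
For example, you can list the pods (a sketch; the actual pod name suffixes vary per install):

kubectl get pods -n kuadrant-system\n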

"},{"location":"multicluster-gateway-controller/docs/installation/service-protection-installation/#further-reading","title":"Further Reading","text":"

With the Kuadrant data plane components installed, here is some further reading material to help you utilise Authorino and Limitador:

  • Getting started with Authorino
  • Getting started with Limitador

"},{"location":"multicluster-gateway-controller/docs/proposals/","title":"Index","text":""},{"location":"multicluster-gateway-controller/docs/proposals/#proposals","title":"Proposals","text":"

This directory contains proposals accepted into the MGC. The template for adding a proposal is located in this directory. Make a copy of the template and use it to define your own proposal.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/","title":"DNS Policy","text":""},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#problem","title":"Problem","text":"

Gateway admins need a way to define the DNS policy for a gateway distributed across multiple clusters in order to control how much and which traffic reaches these gateways. Ideally, we would allow them to express the strategy they want to use without needing to get into the details of each provider, and without needing to create and maintain DNS record structure and individual records for all the different gateways that may be within their infrastructure.

Use Cases

As a gateway admin, I want to be able to reduce latency for my users by routing traffic based on the GEO location of the client. I want this strategy to automatically expand and adjust as my gateway topology grows and changes.

As a gateway admin, I have a discount with a particular cloud provider and want to send more of my traffic to the gateways hosted in that provider's infrastructure, and as I add more gateways I want that balance to be maintained and to evolve to include my new gateways.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#goals","title":"Goals","text":"
  • Allow definition of a DNS load balancing strategy to decide how traffic should be weighted across multiple gateway instances from the central control plane.
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#none-goals","title":"None Goals","text":"
  • Allow different DNS policies for different listeners. Although this may be something we look to support in the future, currently policy attachment does not allow for this type of targeting. This means a DNSPolicy is applied for the whole gateway currently.
  • Define how health checks should work, this will be part of a separate proposal
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#terms","title":"Terms","text":"
  • managed listener: This is a listener with a host backed by a DNS zone managed by the multi-cluster gateway controller
  • hub cluster: the control plane cluster that manages one or more spokes
  • spoke cluster: a cluster managed by the hub control plane cluster. This is where gateways are instantiated
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#proposal","title":"Proposal","text":"

Provide a control plane DNSPolicy API that uses the idea of direct policy attachment from Gateway API and allows a load balancing strategy to be applied to the DNS record structure for any managed listeners being served by the data plane instances of this gateway. The DNSPolicy also covers health checks that inform the DNS response, but that is not covered in this document.

Below is a draft API for what we anticipate the DNSPolicy to look like

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  health:\n    ...\n  loadBalancing:\n    weighted:\n      defaultWeight: 10\n      custom: #optional\n      - value: AWS  #optional with both GEO and weighted. With GEO the custom weight is applied to gateways within a Geographic region\n        weight: 10\n      - value: GCP\n        weight: 20\n    GEO: #optional\n      defaultGeo: IE # required with GEO. Chooses a default DNS response when no particular response is defined for a request from an unknown GEO.\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#available-load-balancing-strategies","title":"Available Load Balancing Strategies","text":"

GEO and weighted load balancing are well-understood strategies, and this API effectively allows a complex requirement to be expressed relatively simply and executed by the gateway controller in the chosen DNS provider. Our default policy will execute a \"Round Robin\" weighted strategy, which reflects the current default behaviour.

With the above API we can provide weighted, GEO, and weighted-within-a-GEO load balancing. A weighted strategy, with a minimum of a default weight, is always required and is the simplest type of policy. The multi-cluster gateway controller will set up a default policy when a gateway is discovered (shown below). This policy can be replaced or modified by the user. A weighted strategy can be complemented with a GEO strategy, i.e. they can be used together in order to provide GEO and weighted (within a GEO) load balancing. By defining a GEO section, you are indicating that you want to use a GEO-based strategy (how this works is covered below).

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nname: default-policy\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  loadBalancing:\n    weighted: # required\n      defaultWeight: 10  #required, all records created get this weight\n  health:\n    ...

In order to provide GEO-based DNS and allow customisation of the weighting, we need some additional information to be provided by the gateway / cluster admin about where this gateway has been placed. For example, if they want to use a GEO-based DNS strategy, we need to know what GEO identifier(s) to use for each record we create and a default GEO to use as a catch-all. Also, if the desired load balancing approach is to provide custom weighting and no longer simply use Round Robin, we will need a way to identify which records to apply that custom weighting to, based on the clusters the gateway is placed on.

To solve this we will allow two new attributes to be added to the ManagedCluster resource as labels:

   kuadrant.io/lb-attribute-geo-code: \"IE\"\n   kuadrant.io/lb-attribute-custom-weight: \"GCP\"\n

These two labels allow setting values in the DNSPolicy that will be reflected into DNS records for gateways placed on that cluster, depending on the strategies used. See the first DNSPolicy definition above for how these values are used, or take a look at the examples at the bottom.

Example:

apiVersion: cluster.open-cluster-management.io/v1\nkind: ManagedCluster\nmetadata:\n  labels:\n    kuadrant.io/lb-attribute-geo-code: \"IE\"\n    kuadrant.io/lb-attribute-custom-weight: \"GCP\"\nspec:

The attributes provide the key and value we need in order to understand how to define records for a given LB address based on the DNSPolicy targeting the gateway.

The kuadrant.io/lb-attribute-geo-code attribute value is provider specific; using an invalid code will result in an error status condition in the DNSRecord resource.
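
For illustration, the DNSRecord status might then carry a condition along these lines (a sketch; the exact reason and message wording will vary):

status:\n  conditions:\n  - type: Ready\n    status: \"False\"\n    reason: ProviderError # illustrative\n    message: \"geo code 'XYZ' is not a valid geolocation code\"\n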

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#dns-record-structure","title":"DNS Record Structure","text":"

This is an advanced topic and so is broken out into its own proposal doc DNS Record Structure

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#custom-weighting","title":"Custom Weighting","text":"

Custom weighting will use the associated custom-weight attribute set on the ManagedCluster to decide which records should get a specific weight. The value of this attribute is up to the end user.

example:

apiVersion: cluster.open-cluster-management.io/v1\nkind: ManagedCluster\nmetadata:\n  labels:\n    kuadrant.io/lb-attribute-custom-weight: \"GCP\"\n

The above is then used in the DNSPolicy to set custom weights for the records associated with the target gateway.

    - value: GCP\n      weight: 20\n

So any gateway targeted by a DNSPolicy with the above definition, and placed on a ManagedCluster with the kuadrant.io/lb-attribute-custom-weight label set to a value of GCP, will get an A record with a weight of 20.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#status","title":"Status","text":"

DNSPolicy should have a ready condition that reflects that the DNSRecords have been created and configured as expected. In the case that there is an invalid policy, the status message should reflect this and indicate to the user that the old DNS has been preserved.

We will also want to add a status condition to the gateway status indicating it is affected by this policy. Gateway API recommends the following status condition:

- type: gateway.networking.k8s.io/PolicyAffected\n  status: True\n  message: \"DNSPolicy has been applied\"\n  reason: PolicyApplied\n...\n

https://github.com/kubernetes-sigs/gateway-api/pull/2128/files#diff-afe84021d0647e83f420f99f5d18b392abe5ec82d68f03156c7534de9f19a30aR888

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#example-policies","title":"Example Policies","text":""},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#round-robin-the-default-policy","title":"Round Robin (the default policy)","text":"
apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nname: RoundRobinPolicy\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  loadBalancing:\n    weighted:\n      defaultWeight: 10\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#geo-round-robin","title":"GEO (Round Robin)","text":"
apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nname: GEODNS\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  loadBalancing:\n    weighted:\n      defaultWeight: 10\n    GEO:\n      defaultGeo: IE\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#custom","title":"Custom","text":"
apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nname: SendMoreToAzure\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  loadBalancing:\n    weighted:\n      defaultWeight: 10\n      custom:\n      - attribute: cloud\n        value: Azure #any record associated with a gateway on a cluster without this value gets the default\n        weight: 30\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#geo-with-custom-weights","title":"GEO with Custom Weights","text":"
apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nname: GEODNSAndSendMoreToAzure\nspec:\n  targetRef: # defaults to gateway gvk and current namespace\n    name: gateway-name\n  loadBalancing:\n    weighted:\n      defaultWeight: 10\n      custom:\n      - attribute: cloud\n        value: Azure\n        weight: 30\n    GEO:\n      defaultGeo: IE\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#considerations-and-limitations","title":"Considerations and Limitations","text":"

You cannot have a different load balancing strategy for each listener within a gateway. So, in the following gateway definition:

spec:\n  gatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\n  listeners:\n  - allowedRoutes:\n      namespaces:\n        from: All\n    hostname: myapp.hcpapps.net\n    name: api\n    port: 443\n    protocol: HTTPS\n  - allowedRoutes:\n      namespaces:\n        from: All\n    hostname: other.hcpapps.net\n    name: api\n    port: 443\n    protocol: HTTPS

The DNS policy targeting this gateway will apply to both myapp.hcpapps.net and other.hcpapps.net

However, there is still significant value even with this limitation. This limitation is something we will likely revisit in the future

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSPolicy/#background-docs","title":"Background Docs","text":"

DNS Provider Support

AWS DNS

Google DNS

Azure DNS

Direct Policy Attachment

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/","title":"DNSRecordStructure","text":"

DNSRecord is our API for expressing DNS endpoints via a kube CRD based API. It is managed by the multi-cluster gateway controller based on the desired state expressed in higher level APIs such as the Gateway or a DNSPolicy. In order to provide our feature set, we need to carefully consider how we structure our records and the types of records we need. This document proposes a particular structure based on the requirements and feature set we have.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#requirements","title":"Requirements","text":"

We want to be able to support Gateway definitions that use the following listener definitions:

  • wildcard: *.example.com and fully qualified listener host www.example.com definitions, with the notable exception of fully wildcarded (i.e. *), as we cannot provide any DNS or TLS for something with no defined hostname.
  • listeners that have an HTTPRoute defined on fewer than all of the clusters where the listener is available, i.e. we don't want to send traffic to clusters where there is no HTTPRoute attached to the listener.
  • Gateway instances that provide IPs, deployed alongside instances on different infra that provide hostnames, causing the address types of the gateway instances to differ (IPAddress or HostAddress).
  • We want to provide GEO based DNS as a feature of DNSPolicy and so our DNSRecord structure must support this.
  • We want to offer default weighted and custom weighted DNS as part of DNSPolicy
  • We want to allow root or apex domain to be used as listener hosts
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#diagram","title":"Diagram","text":"

https://lucid.app/lucidchart/2f95c9c9-8ddf-4609-af37-48145c02ef7f/edit?viewport_loc=-188%2C-61%2C2400%2C1183%2C0_0&invitationId=inv_d5f35eb7-16a9-40ec-b568-38556de9b568

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#proposal","title":"Proposal","text":"

For each listener defined in a gateway, we will create a set of records with the following rules.

Non-apex domains:

We will have a generated lb (load balancer) DNS name that we will use as a CNAME for the listener hostname. This DNS name is not intended for use within an HTTPRoute, but is instead just a DNS construct. This will allow us to set up additional CNAME records for that DNS name in the future that are returned based on GEO location. These DNS records will also be CNAMEs pointing to specific gateway DNS names; this will allow us to set up a weighted response. So the first layer of CNAMEs handles balancing based on geo, and the second layer handles balancing based on weighting.

                                        shop.example.com\n                                        |             |\n                                      (IE)          (AUS)\n                                CNAME lb.shop..      lb.shop..\n                                    |     |         |      |\n                                 (w 100) (w 200)   (w 100) (w100)\n                                CNAME g1.lb.. g2.lb..   g3.lb..  g4.lb..\n                                A 192..   A 81..  CNAME  aws.lb   A 82..\n

When there is no geo strategy defined within the DNSPolicy, we will put everything into a default geo (i.e. a catch-all record) default.lb-{guid}.{listenerHost}, but set the routing policy to GEO, allowing us to add more geo-based records in the future if the gateway admin decides to move to a geo strategy as their needs grow.

To ensure this lb DNS name is unique and does not clash, we will use a short guid as part of the subdomain, so lb-{guid}.{listenerHost}. This guid will be based on the gateway name and gateway namespace in the control plane.

For a geo strategy we will add a geo record with a prefix to the lb subdomain based on the geo code. When there is no geo, we will use default as the prefix: {geo-code}.lb-{guid}.{listenerHost}. Finally, for each gateway instance on a target cluster, we will add a {spokeClusterName}.lb-{guid}.{listenerHost} record.

To allow for a mix of hostname and IP address types, we will always use a CNAME. So we will create a DNS name for an IPAddress with the following structure: {guid}.lb-{guid}.{listenerHost}, where the first guid will be based on the cluster name where the gateway is placed.
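
Putting these naming rules together for a hypothetical listener host shop.example.com (guids are illustrative, matching the examples later in this document):

shop.example.com               # listener host, CNAME to the generated lb name\nlb-a1b2.shop.example.com       # lb name; guid derived from gateway name + namespace\nie.lb-a1b2.shop.example.com    # geo prefix (default. when no geo strategy is set)\nab1.lb-a1b2.shop.example.com   # per-cluster name; guid derived from the cluster name, holds the A records\n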

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#apex-domains","title":"Apex Domains","text":"

An apex domain is the domain at the apex or root of a zone. These are handled differently by DNS as they often have NS and SOA records. Generally, it is not possible to set up a CNAME for an apex domain (although some providers allow it).

If a listener is added to a gateway that is an apex domain, we can only add A records for that domain to keep ourselves compliant with as many providers as possible. If a listener is the apex domain, we will set up A records for that domain (favouring gateways with an IP address or resolving the IP behind a host), but there will be no special balancing/weighting done. Instead, we expect the owner of that domain to set up an HTTPRoute with a 301 permanent redirect sending users from the apex domain, e.g. example.com, to something like www.example.com, where the www subdomain-based listener would use the rules of the non-apex domains and be where advanced geo and weighted strategies are applied.

  • gateway listener host name : example.com
    • example.com A 81.17.241.20
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#geo-agnostic-everything-is-in-a-default-geo-catch-all","title":"Geo Agnostic (everything is in a default * geo catch all)","text":"

This is the type of DNS Record structure that would back our default DNSPolicy.

  • gateway listener host name : www.example.com

    DNSRecords:
    - www.example.com CNAME lb-1ab1.www.example.com
    - lb-1ab1.www.example.com CNAME geolocation * default.lb-1ab1.www.example.com
    - default.lb-1ab1.www.example.com CNAME weighted 100 1bc1.lb-1ab1.www.example.com
    - default.lb-1ab1.www.example.com CNAME weighted 100 aws.lb.com
    - 1bc1.lb-1ab1.www.example.com A 192.22.2.1

So in the above example, working up from the bottom, we have a mix of hostname- and IP-based addresses for the gateway instances. We have 2 evenly weighted records that balance between the two available gateways; next we have the geo-based record, which is set to a default catch-all as no geo has been specified; and finally we have the actual listener hostname, which points at our DNS-based load balancer name.

DNSRecord Yaml

apiVersion: kuadrant.io/v1alpha1\nkind: DNSRecord\nmetadata:\n  name: {gateway-name}-{listenerName}\n  namespace: multi-cluster-gateways\nspec:\n  dnsName: www.example.com\n  managedZone:\n    name: mgc-dev-mz\n  endpoints:\n  - dnsName: www.example.com\n    recordTTL: 300\n    recordType: CNAME\n    targets:\n    - lb-1ab1.www.example.com\n  - dnsName: lb-1ab1.www.example.com\n    recordTTL: 300\n    recordType: CNAME\n    setIdentifier: mygateway-multicluster-gateways\n    providerSpecific:\n    - name: \"geolocation-country-code\"\n      value: \"*\"\n    targets:\n    - default.lb-1ab1.www.example.com\n  - dnsName: default.lb-1ab1.www.example.com\n    recordTTL: 300\n    recordType: CNAME\n    setIdentifier: cluster1\n    providerSpecific:\n    - name: \"weight\"\n      value: \"100\"\n    targets:\n    - 1bc1.lb-1ab1.www.example.com\n  - dnsName: default.lb-a1b2.shop.example.com\n    recordTTL: 300\n    recordType: CNAME\n    setIdentifier: cluster2\n    providerSpecific:\n    - name: \"weight\"\n      value: \"100\"\n    targets:\n    - aws.lb.com\n  - dnsName: 1bc1.lb-1ab1.www.example.com\n    recordTTL: 60\n    recordType: A\n    targets:\n    - 192.22.2.1\n
"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#geo-specific","title":"geo specific","text":"

Once the end user selects a geo strategy via the DNSPolicy, we then need to restructure our DNS to add in our geo-specific records. Here, the default record is kept in place (see below).

The lb short code is {gw name + gw namespace}; the gw short code is {cluster name}.

  • gateway listener host : shop.example.com

    DNSRecords:
    - shop.example.com CNAME lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com CNAME geolocation ireland ie.lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com geolocation australia aus.lb-a1b2.shop.example.com
    - lb-a1b2.shop.example.com geolocation default ie.lb-a1b2.shop.example.com (set by the default geo option)
    - ie.lb-a1b2.shop.example.com CNAME weighted 100 ab1.lb-a1b2.shop.example.com
    - ie.lb-a1b2.shop.example.com CNAME weighted 100 aws.lb.com
    - aus.lb-a1b2.shop.example.com CNAME weighted 100 ab2.lb-a1b2.shop.example.com
    - aus.lb-a1b2.shop.example.com CNAME weighted 100 ab3.lb-a1b2.shop.example.com
    - ab1.lb-a1b2.shop.example.com A 192.22.2.1 192.22.2.5
    - ab2.lb-a1b2.shop.example.com A 192.22.2.3
    - ab3.lb-a1b2.shop.example.com A 192.22.2.4

In the above example we move from a default catch-all to a geo-specific setup, based on a DNSPolicy that specifies IE as the default geo location. We leave the default subdomain in place to allow for clients that may still be using it, and set up geo-specific subdomains that allow us to route traffic based on its origin. In this example we are load balancing across 2 geos and 4 clusters.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#wildcards","title":"WildCards","text":"

In the examples we have used fully qualified domain names; however, sometimes it may be required to use a wildcard subdomain. For example:

  • gateway listener host : *.example.com

To support these, we need to change the name of the DNSRecord away from the name of the listener, as the k8s resource does not allow * in the name.

To do this, we will set the DNSRecord resource name to be a combination of {gateway-name}-{listenerName}.

To keep a record of the host this is for, we will set a top-level property named dnsName. You can see an example in the DNSRecord above.

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#pros","title":"Pros","text":"

This setup allows us a powerful set of features and flexibility

"},{"location":"multicluster-gateway-controller/docs/proposals/DNSRecordStructure/#cons","title":"Cons","text":"

With this CNAME-based approach we are increasing the number of DNS lookups required to get to an IP, which will increase cost and add a small amount of latency. To counteract this, we will set a reasonably high TTL (at least 5 mins) for our CNAMEs and 2 mins for A records.
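
For example, resolving the listener host from the geo-agnostic example above walks a chain like the following (illustrative dig-style answers using the TTLs proposed here):

www.example.com.                  300 IN CNAME lb-1ab1.www.example.com.\nlb-1ab1.www.example.com.          300 IN CNAME default.lb-1ab1.www.example.com.\ndefault.lb-1ab1.www.example.com.  300 IN CNAME 1bc1.lb-1ab1.www.example.com.\n1bc1.lb-1ab1.www.example.com.      60 IN A     192.22.2.1\n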

"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/","title":"Multiple DNS Provider Support","text":"

Authors: Michael Nairn @mikenairn

Epic: https://github.com/Kuadrant/multicluster-gateway-controller/issues/189

Date: 25th May 2023

"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#job-stories","title":"Job Stories","text":"
  • As a developer, I want to use MGC with a domain hosted in one of the major cloud DNS providers (Google Cloud DNS, Azure DNS or AWS Route53)
  • As a developer, I want to use multiple domains with a single instance of MGC, each hosted on different cloud providers
"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#goals","title":"Goals","text":"
  • Add ManagedZone and DNSRecord support for Google Cloud DNS
  • Add ManagedZone and DNSRecord support for Azure DNS
  • Add DNSRecord support for CoreDNS (Default for development environment)
  • Update ManagedZone and DNSRecord support for AWS Route53
  • Add support for multiple providers with a single instance of MGC
"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#non-goals","title":"Non Goals","text":"
  • Support for every DNS provider
  • Support for health checks
"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#current-approach","title":"Current Approach","text":"

Currently, MGC only supports AWS Route53 as a DNS provider. A single instance of a DNSProvider resource is created per MGC instance, configured with AWS config loaded from the environment. This provider is loaded into all controllers requiring DNS access (ManagedZone and DNSRecord reconciliations), allowing a single instance of MGC to operate against a single account on a single DNS provider.

"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#proposed-solution","title":"Proposed Solution","text":"

MGC requires three features of any DNS provider in order to offer full support: DNSRecord management, zone management, and DNS health checks. We do not, however, want to limit ourselves to providers that offer all of this functionality, so to add support for a provider, the minimum that provider should offer is API access to manage DNS records. MGC will continue to provide zone management and DNS health check support on a per-provider basis.

Support will be added for AWS (Route53), Google (Google Cloud DNS) and Azure, with an investigation into possibly adding CoreDNS (intended for local dev purposes), with the following proposed initial support:

Provider           DNS Records   DNS Zones   DNS Health
AWS Route53        X             X           X
Google Cloud DNS   X             X           -
AzureDNS           X             X           -
CoreDNS            X             -           -

Add DNSProvider as an API for MGC which contains all the required config for that particular provider, including the credentials. This can be thought of in a similar way to a cert-manager Issuer.

Update ManagedZone to add a reference to a DNSProvider. This will be a required field on the ManagedZone, and a DNSProvider must exist before a ManagedZone can be created.

Update all controllers to load the DNSProvider directly from the ManagedZone during reconciliation loops, and remove the single controller-wide instance.

Add new provider implementations for Google, Azure and CoreDNS:

  • All provider constructors should accept a single struct containing all required config for that particular provider.
  • Providers must be configured from credentials passed in the config and must not rely on environment variables.
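
As a rough sketch of how this could look (resource shape and field names are illustrative assumptions, not a finalised API):

apiVersion: kuadrant.io/v1alpha1\nkind: DNSProvider\nmetadata:\n  name: mgc-aws-provider\n  namespace: multi-cluster-gateways\nspec:\n  # all config for this provider, including a reference to its credentials;\n  # nothing is read from the environment\n  credentials:\n    secretRef:\n      name: mgc-aws-credentials\n---\napiVersion: kuadrant.io/v1alpha1\nkind: ManagedZone\nmetadata:\n  name: mgc-dev-mz\n  namespace: multi-cluster-gateways\nspec:\n  dnsProviderRef: # required; the referenced DNSProvider must already exist\n    name: mgc-aws-provider\n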

"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#other-solutions-investigated","title":"Other Solutions investigated","text":"

Investigation was carried out into the suitability of External DNS (https://github.com/kubernetes-sigs/external-dns) as the sole means of managing DNS resources. Unfortunately, while External DNS does offer support for basic DNS record management with a wide range of providers, there were too many features missing, making it unsuitable at this time for integration.

"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#external-dns-as-a-separate-controller","title":"External DNS as a separate controller","text":"

Run External DNS, as intended, as a separate controller alongside MGC, and pass all responsibility for reconciling DNSRecord resources to it. All DNSRecord reconciliation is removed from MGC.

Issues:

  • A single instance of External DNS will only work with a single provider and a single set of credentials. As such, in order to support more than one provider, more than one External DNS instance would need to be created, one for each provider/account pair.
  • Geo and Weighted routing policies are not implemented for any provider other than AWS Route53.
  • Only supports basic DNS record management (A, CNAME, NS records, etc.), with no support for managed zones or health checks.
"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#external-dns-as-a-module-dependency","title":"External DNS as a module dependency","text":"

Add external dns as a module dependency in order to make use of their DNS Providers, but continue to reconcile DNSRecords in MGC.

Issues:

  • External DNS providers all create clients using the current environment. Extensive refactoring would be required to modify each provider to optionally be constructed using static credentials.
  • Clients are all internal, making it impossible, without modification, to use the upstream code to extend the provider behaviour to support additional functionality such as managed zone creation.
"},{"location":"multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/#checklist","title":"Checklist","text":"
  • [ ] An epic has been created and linked to
  • [ ] Reviewers have been added. It is important that the right reviewers are selected.
"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/","title":"Provider agnostic DNS Health checks","text":""},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#introduction","title":"Introduction","text":"

MGC has the ability to extend the DNS configuration of the gateway with the DNSPolicy resource. This resource allows users to configure health checks. As a result of configuring health checks, the controller creates the health checks in Route53, attaching them to the related DNS records. This has the benefit of automatically disabling an endpoint if it becomes unhealthy, and enabling it again when it recovers.

This feature has a few shortfalls:
  1. It\u2019s tightly coupled with Route53. If other DNS providers are supported, they must either provide a similar feature, or health checks will not be supported.
  2. It lacks the ability to reach endpoints in private networks.
  3. It requires the gateway controller to implement, maintain and test support for multiple providers.

This document describes a proposal to extend the current health check implementation to overcome these shortfalls.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#goals","title":"Goals","text":"
  • Ability to configure health checks in the DNSPolicy associated to a Gateway
  • DNS records are disabled when the associated health check fails
  • Current status of the defined health checks is visible to the end user
"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#nongoals","title":"Nongoals","text":"
  • Ability for the health checks to reach endpoints in separate private networks
  • Transparently keep support for other health check providers like Route53
  • Having health checks for wildcard listeners
"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#use-cases","title":"Use-cases","text":"
  • As a gateway administrator, I would like to define a health check that each service sitting behind a particular listener across the production clusters has to implement to ensure we can automatically respond, failover and mitigate a failing instance of the service
"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#proposal","title":"Proposal","text":"

Initially, this functionality will be added to the existing MGC and executed within that component. It will be created with the knowledge that it may need to be split out into an external component in the future.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#dnspolicy-resource","title":"DNSPolicy resource","text":"

The presence of the healthCheck section means that, for every DNS endpoint (that is either an A record, or a CNAME to an external host), a health check is created based on the health check configuration in the DNSPolicy.

A failureThreshold field will be added to the health spec, allowing users to configure a number of consecutive health check failures that must be observed before the endpoint is considered unhealthy.

Example DNS Policy with a defined health check.

apiVersion: kuadrant.io/v1alpha1\nkind: DNSPolicy\nmetadata:\nname: prod-web\nnamespace: multi-cluster-gateways\nspec:\nhealthCheck:\nendpoint: /health\nfailureThreshold: 5\nport: 443\nprotocol: https\nadditionalHeaders: <SecretRef>\nexpectedResponses:\n- 200\n- 301\n- 302\n- 407\nAllowInsecureCertificates: true\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: Gateway\nname: prod-web\nnamespace: multi-cluster-gateways\n

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#dnshealthcheckprobe-resource","title":"DNSHealthCheckProbe resource","text":"

The DNSHealthCheckProbe resource configures a health probe in the controller to perform the health checks against an identified final A or CNAME endpoint. When created by the controller as a result of a DNS Policy, this will have an owner ref of the DNS Policy that caused it to be created.

apiVersion: kuadrant.io/v1alpha1\nkind: DNSHealthCheckProbe\nmetadata:\nname: example-probe\nspec:\nport: \"...\"\nhost: \"...\"\naddress: \"...\"\npath: \"...\"\nprotocol: \"...\"\ninterval: \"...\"\nadditionalHeaders: <SecretRef>\nexpectedResponses:\n- 200\n- 201\n- 301\nAllowInsecureCertificate: true\nstatus:\nhealthy: true\nconsecutiveFailures: 0\nreason: \"\"\nlastCheck: \"...\"\n
"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#spec-fields-definition","title":"Spec Fields Definition","text":"
  • Port The port to use
  • Address The address to connect to (e.g. IP address or hostname of a clusters loadbalancer)
  • Host The host to request in the Host header
  • Path The path to request
  • Protocol The protocol to use for this request
  • Interval How frequently this check would ideally be executed.
  • AdditionalHeaders Optional secret ref containing key/value pairs of headers and their values, which can be specified to ensure the health check is successful.
  • ExpectedResponses Optional HTTP response codes that should be considered healthy (defaults are 200 and 201).
  • AllowInsecureCertificate Optional flag to allow using invalid (e.g. self-signed) certificates, default is false.

The reconciliation of this resource results in the configuration of a health probe, which targets the endpoint and updates the status. The status is propagated to the providerSpecific status of the equivalent endpoint in the DNSRecord.
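
As a sketch of that propagation, the equivalent DNSRecord endpoint could carry the probe result as a providerSpecific property; the property names used here (health, failure-count) are illustrative assumptions, not a settled convention:

endpoints:\n- dnsName: cluster1-gw1.hcpapps.net\n  recordType: CNAME\n  targets:\n  - cluster1-gw1.aws.com\n  providerSpecific:\n  - name: health # assumed property name\n    value: \"false\"\n  - name: failure-count # assumed property name\n    value: \"5\"\n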

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#changes-to-current-controllers","title":"Changes to current controllers","text":"

In order to support this new feature, the following changes in the behaviour of the controllers are proposed.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#dnspolicy-controller","title":"DNSPolicy controller","text":"

Currently, the reconciliation loop of this controller creates health checks in the configured DNS provider (Route53 currently) based on the spec of the DNSPolicy, separately from the reconciliation of the DNSRecords. The proposed change is to reconcile health check probe CRs based on the combination of DNS Records and DNS Policies.

Instead of Route53 health checks, the controller will create DNSHealthCheckProbe resources.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#dnsrecord-controller","title":"DNSRecord controller","text":"

When reconciling a DNS Record, the DNS Record reconciler will retrieve the relevant DNSHealthCheckProbe CRs and consult their status when determining what value to assign to a particular endpoint's weight.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#dns-record-structure-diagram","title":"DNS Record Structure Diagram:","text":"

https://lucid.app/lucidchart/2f95c9c9-8ddf-4609-af37-48145c02ef7f/edit?viewport_loc=-188%2C-61%2C2400%2C1183%2C0_0&invitationId=inv_d5f35eb7-16a9-40ec-b568-38556de9b568

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#removing-unhealthy-endpoints","title":"Removing unhealthy Endpoints","text":"

When a DNS health check probe is failing, it will update the DNS Record CR with a custom field on that endpoint to mark it as failing.

There are then 3 scenarios which we need to consider:
  1. All endpoints are healthy.
  2. All endpoints are unhealthy.
  3. Some endpoints are healthy and some are unhealthy.

In cases 1 and 2, the result should be the same: all records are published to the DNS Provider.

When scenario 3 is encountered the following process should be followed:

For each gateway IP or CNAME: this should be omitted if unhealthy.\nFor each managed gateway CNAME: This should be omitted if all child records are unhealthy.\nFor each GEO CNAME: This should be omitted if all the managed gateway CNAMEs have been omitted.\nLoad balancer CNAME: This should never be omitted.\n

If we consider the DNS record to be a hierarchy of parents and children, then whenever any parent has no healthy children that parent is also considered unhealthy. No unhealthy elements are to be included in the DNS Record.

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#removal-process","title":"Removal Process","text":"

When removing DNS records, we will want to avoid any NXDOMAIN responses from the DNS service as this will cause the resolver to cache this missed domain for a while (30 minutes or more). The NXDOMAIN response is triggered when the resolver attempts to resolve a host that does not have any records in the zone file.

The situation that would cause this to occur is when we have removed a record but still refer to it from other records.

As we wish to avoid any NXDOMAIN responses from the nameserver (which would cause the resolver to cache the missed response), we will need to ensure that any time a DNS record (CNAME or A) is removed, we also remove any records that refer to the removed record. For example, when the gateway A record is removed, we will also need to remove the managed gateway CNAME that refers to that A record.
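
For illustration, a query for a dangling name would come back with an NXDOMAIN status that the resolver may then cache; the output below is abbreviated and hypothetical:

dig host.example.com\n\n;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 12345\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\n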

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#removal-example","title":"Removal Example","text":"

Given the following DNS Records (simplified hosts used in example):

01 host.example.com. 300 IN CNAME lb.hcpapps.net.\n02 lb.hcpapps.net. 60 IN CNAME default-geo.hcpapps.net.\n03 default-geo.hcpapps.net. 120 IN CNAME cluster1.hcpapps.net.\n04 default-geo.hcpapps.net. 120 IN CNAME cluster2.hcpapps.net.\n05 cluster1.hcpapps.net. 300 IN CNAME cluster1-gw1.hcpapps.net.\n06 cluster1.hcpapps.net. 300 IN CNAME cluster1-gw2.hcpapps.net.\n07 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw1.hcpapps.net.\n08 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw2.hcpapps.net.\n09 cluster1-gw1.hcpapps.net. 60 IN CNAME cluster1-gw1.aws.com.\n10 cluster1-gw2.hcpapps.net. 60 IN CNAME cluster1-gw2.aws.com.\n11 cluster2-gw1.hcpapps.net. 60 IN CNAME cluster2-gw1.aws.com.\n12 cluster2-gw2.hcpapps.net. 60 IN CNAME cluster2-gw2.aws.com.\n
Cases:
  • Record 09 becomes unhealthy: remove records 09 and 05.
  • Records 09 and 10 become unhealthy: remove records 09, 10, 05, 06 and 03.
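
For instance, in the first case the zone after removal would contain the following (records 05 and 09 dropped, everything else untouched):

01 host.example.com. 300 IN CNAME lb.hcpapps.net.\n02 lb.hcpapps.net. 60 IN CNAME default-geo.hcpapps.net.\n03 default-geo.hcpapps.net. 120 IN CNAME cluster1.hcpapps.net.\n04 default-geo.hcpapps.net. 120 IN CNAME cluster2.hcpapps.net.\n06 cluster1.hcpapps.net. 300 IN CNAME cluster1-gw2.hcpapps.net.\n07 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw1.hcpapps.net.\n08 cluster2.hcpapps.net. 300 IN CNAME cluster2-gw2.hcpapps.net.\n10 cluster1-gw2.hcpapps.net. 60 IN CNAME cluster1-gw2.aws.com.\n11 cluster2-gw1.hcpapps.net. 60 IN CNAME cluster2-gw1.aws.com.\n12 cluster2-gw2.hcpapps.net. 60 IN CNAME cluster2-gw2.aws.com.\n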

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#further-reading","title":"Further reading","text":"

Domain Names RFC: https://datatracker.ietf.org/doc/html/rfc1034

"},{"location":"multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/#executing-the-probes","title":"Executing the probes","text":"

A DNSHealthCheckProbe CR controller will be added to MGC. This controller will create an instance of a HealthMonitor; the HealthMonitor ensures that each DNSHealthCheckProbe CR has a matching ProbeQueuer object running. It will also handle updating the ProbeQueuer on CR updates and removing ProbeQueuers when a DNSHealthCheckProbe is removed.

The ProbeQueuer will add a health check request to a queue at a configured interval; this queue is consumed by a ProbeWorker. Each ProbeQueuer works on its own goroutine.

The ProbeWorker is responsible for actually executing the probe and updating the DNSHealthCheckProbe CR status. Each ProbeWorker also executes on its own goroutine.

"},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/","title":"Proposal: Aggregation of Status Conditions","text":""},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/#background","title":"Background","text":"

Status conditions are used to represent the current state of a resource and provide information about any problems or issues that might be affecting it. They are defined as an array of Condition objects within the status section of a resource's YAML definition.

"},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/#problem-statement","title":"Problem Statement","text":"

When multiple instances of a resource (e.g. a Gateway) are running across multiple clusters, it can be difficult to know the current state of each instance without checking each one individually. This can be time-consuming and error-prone, especially when there are a large number of clusters or resources.

"},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/#proposal","title":"Proposal","text":"

To solve this problem, I'm proposing we leverage the status block in the control plane instance of that resource, aggregating the statuses to convey the necessary information.

"},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/#status-conditions","title":"Status Conditions","text":"

For example, if the Ready status condition type of a Gateway is True for all instances of the Gateway resource across all clusters, then the Gateway in the control plane will have the Ready status condition type also set to True.

status:\nconditions:\n- type: Ready\nstatus: True\nmessage: All listeners are valid\n

If the Ready status condition type of some instances is not True, the Ready status condition type of the Gateway in the control plane will be False.

status:\nconditions:\n- type: Ready\nstatus: False\n

In addition, if the Ready status condition type is False, the Gateway in the control plane should include a status message for each Gateway instance where Ready is False. This message would indicate the reason why the condition is not true for each Gateway.

status:\nconditions:\n- type: Ready\nstatus: False\nmessage: \"gateway-1 Listener certificate is expired; gateway-3 No listener configured for port 80\"\n

In this example, the Ready status condition type is False because two of the three Gateway instances (gateway-1 and gateway-3) have issues with their listeners. For gateway-1, the reason for the False condition is that the listener certificate is expired, and for gateway-3, the reason is that no listener is configured for port 80. These reasons are included as status messages in the Gateway resource in the control plane.

As there may be different reasons for the condition being False across different clusters, it doesn't make sense to aggregate the reason field. The reason field is intended to be a programmatic identifier, while the message field allows for a human-readable message, i.e. a semicolon-separated list of messages.

The lastTransitionTime and observedGeneration fields will behave as normal for the resource in the control plane.

"},{"location":"multicluster-gateway-controller/docs/proposals/status-aggregation/#addresses-and-listeners-status","title":"Addresses and Listeners status","text":"

The Gateway status can include information about addresses, like load balancer IP Addresses assigned to the Gateway, and listeners, such as the number of attached routes for each listener. This information is useful at the control plane level. For example, a DNS Record should only exist as long as there is at least 1 attached route for a listener. It can also be more complicated than that when it comes to multi-cluster gateways. A DNS Record should only include the IP Addresses of the Gateway instances where the listener has at least 1 attached route. This is important when the initial setup of DNS Records happens as applications start. It doesn't make sense to route traffic to a Gateway where a listener isn't ready/attached yet. It also comes into play when a Gateway is displaced, either due to a changing placement decision or removal.

In summary, the IP Addresses and number of attached routes per listener per Gateway instance are needed in the control plane to manage DNS effectively. This proposal adds that information to the hub Gateway status block. This will ensure a decoupling of the DNS logic from the underlying resource/status syncing implementation (i.e. ManifestWork status feedback rules).

First, here are 2 instances of a multi-cluster Gateway in 2 separate spoke clusters. The YAML is shortened to highlight the status block.

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\nname: gateway\nstatus:\naddresses:\n- type: IPAddress\nvalue: 172.31.200.0\n- type: IPAddress\nvalue: 172.31.201.0\nlisteners:\n- attachedRoutes: 0\nconditions:\nname: api\n- attachedRoutes: 1\nconditions:\nname: web\n---\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\nname: gateway\nstatus:\naddresses:\n- type: IPAddress\nvalue: 172.31.202.0\n- type: IPAddress\nvalue: 172.31.203.0\nlisteners:\n- attachedRoutes: 1\nname: api\n- attachedRoutes: 1\nname: web\n

And here is the proposed status aggregation in the hub Gateway:

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\nname: gateway\nstatus:\naddresses:\n- type: kuadrant.io/MultiClusterIPAddress\nvalue: cluster_1/172.31.200.0\n- type: kuadrant.io/MultiClusterIPAddress\nvalue: cluster_1/172.31.201.0\n- type: kuadrant.io/MultiClusterIPAddress\nvalue: cluster_2/172.31.202.0\n- type: kuadrant.io/MultiClusterIPAddress\nvalue: cluster_2/172.31.203.0\nlisteners:\n- attachedRoutes: 0\nname: cluster_1.api\n- attachedRoutes: 1\nname: cluster_1.web\n- attachedRoutes: 1\nname: cluster_2.api\n- attachedRoutes: 1\nname: cluster_2.web\n

The MultiCluster Gateway Controller will use a custom implementation of the addresses and listeners fields. The address type is of type AddressType, where the type is a domain-prefixed string identifier. The value can be split on the forward slash, /, to give the cluster name and the underlying Gateway IPAddress value of type IPAddress. Both the IPAddress and Hostname types will be supported. The type strings for either will be kuadrant.io/MultiClusterIPAddress and kuadrant.io/MultiClusterHostname.

The listener name is of type SectionName, with validation on allowed characters and a max length of 253. The name can be split on the period, ., to give the cluster name and the underlying listener name. As there are limits on the character length of the name field, this places an upper limit on the length of the cluster names and listener names used, which must be respected to ensure proper operation of this status aggregation. If the validation fails, a status condition showing a validation error should be included in the hub Gateway status block.
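
A sketch of such a validation error surfaced as a condition; the type, reason and message values used here are assumptions for illustration:

status:\nconditions:\n- type: Ready\nstatus: \"False\"\nreason: InvalidListenerName # assumed programmatic identifier\nmessage: \"cluster_1.some-long-listener-name exceeds the SectionName max length of 253\"\n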

"},{"location":"multicluster-gateway-controller/docs/proposals/template/","title":"Proposal Template","text":"

Authors: {authors' names}
Epic: {issue of type epic this relates to}
Date: {date proposed}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#job-stories","title":"Job Stories","text":"

{ A bullet point list of stories this proposal solves}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#goals","title":"Goals","text":"

{A bullet point list of the goals this will achieve}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#non-goals","title":"Non Goals","text":"

{A bullet point list of goals that this will not achieve, i.e. scoping}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#current-approach","title":"Current Approach","text":"

{outline the current approach if any}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#proposed-solution","title":"Proposed Solution","text":"

{outline the proposed solution, links to diagrams and PRs can go here along with the details of your solution}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#testing","title":"Testing","text":"

{outline any testing considerations. Does this need some form of load/performance test. Are there any considerations when thinking about an e2e test}

"},{"location":"multicluster-gateway-controller/docs/proposals/template/#checklist","title":"Checklist","text":"
  • [ ] An epic has been created and linked to
  • [ ] Reviewers have been added. It is important that the right reviewers are selected.
"},{"location":"multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/aws/aws/","title":"Aws","text":"

AWS supports Weighted (Weighted Round Robin) and Geolocation routing policies, see https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html. Both of these can be configured directly on records in AWS Route 53.

Diagrams: GEO Weighted routing; Weighted routing.

"},{"location":"multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/","title":"Azure","text":""},{"location":"multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/#azure","title":"Azure","text":"

https://portal.azure.com/

Azure supports Weighted and Geolocation routing policies, but requires records to alias to a Traffic Manager resource that must also be created in the user's account, see https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods.

Notes:

  • A Traffic Manager Profile is created per record set, with a routing method (Weighted or Geographic): https://portal.azure.com/#view/Microsoft_Azure_Network/LoadBalancingHubMenuBlade/~/TrafficManagers
  • Only a single IP can be added to a DNS record set. A Traffic Manager profile must be created and aliased from a DNS record set for anything that involves more than a single target.
  • Significantly more resources to manage in order to achieve functionality comparable with Google and AWS.
  • The modelling of the records is significantly different from AWS Route53, but the current DNSRecord spec could still work. The Azure implementation will have to process the endpoint list and create Traffic Manager profiles as required to satisfy the record set.

Given the example DNSRecord here describing a record set for a geo location routing policy with four clusters, two in each of two regions (North America and Europe), the following Azure resources are required.

Three DNSRecords, each aliased to a different traffic manager:

  • dnsrecord-geo-azure-hcpapps-net (dnsrecord-geo.azure.hcpapps.net) aliased to Traffic Manager Profile 1 (dnsrecord-geo-azure-hcpapps-net)
  • dnsrecord-geo-na.azure-hcpapps-net (dnsrecord-geo.na.azure.hcpapps.net) aliased to Traffic Manager Profile 2 (dnsrecord-geo-na-azure-hcpapps-net)
  • dnsrecord-geo-eu.azure-hcpapps-net (dnsrecord-geo.eu.azure.hcpapps.net) aliased to Traffic Manager Profile 3 (dnsrecord-geo-eu-azure-hcpapps-net)

Three Traffic Manager Profiles:

  • Traffic Manager Profile 1 (dnsrecord-geo-azure-hcpapps-net): Geolocation routing policy with two region specific FQDN targets (dnsrecord-geo.eu.azure.hcpapps.net and dnsrecord-geo.na.azure.hcpapps.net).
  • Traffic Manager Profile 2 (dnsrecord-geo-na-azure-hcpapps-net): Weighted routed policy with two IP address endpoints (172.31.0.1 and 172.31.0.2) with equal weighting.
  • Traffic Manager Profile 3 (dnsrecord-geo-eu-azure-hcpapps-net): Weighted routed policy with two IP address endpoints (172.31.0.3 and 172.31.0.4) with equal weighting.
dig dnsrecord-geo.azure.hcpapps.net\n\n; <<>> DiG 9.18.12 <<>> dnsrecord-geo.azure.hcpapps.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16236\n;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 65494\n;; QUESTION SECTION:\n;dnsrecord-geo.azure.hcpapps.net. IN    A\n\n;; ANSWER SECTION:\ndnsrecord-geo.azure.hcpapps.net. 60 IN  CNAME   dnsrecord-geo-azure-hcpapps-net.trafficmanager.net.\ndnsrecord-geo-azure-hcpapps-net.trafficmanager.net. 60 IN CNAME dnsrecord-geo.eu.azure.hcpapps.net.\ndnsrecord-geo.eu.azure.hcpapps.net. 60 IN A     172.31.0.3\n\n;; Query time: 88 msec\n;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)\n;; WHEN: Tue May 30 15:05:07 IST 2023\n;; MSG SIZE  rcvd: 168\n
"},{"location":"multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/","title":"Google","text":""},{"location":"multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/#google","title":"Google","text":"

https://console.cloud.google.com/net-services/dns/zones

Google supports Weighted (Weighted Round Robin) and Geolocation routing policies, see https://cloud.google.com/dns/docs/zones/manage-routing-policies. Both of these can be configured directly on records in Google Cloud DNS, and no secondary Traffic Management resource is required.

Notes:

  • Record sets are modelled as a single endpoint with routing policy embedded. This is a different approach to Route53 where each individual A/CNAME would have its own record entry.
  • Weight must be an integer between 0 - 10000
  • There are no continent options for region, only finer grained regions such as us-east1, europe-west-1 etc...
  • There appears to be no way to set a default region; Google just routes requests to the nearest supported region.
  • The current approach used in AWS Route53 for geo routing will work in the same way on Google DNS. A single CNAME record with geo routing policy specifying multiple geo specific A record entries as targets.
  • Geo and weighted routing can be combined, as with AWS Route53, allowing traffic within a region to be routed using weightings.
  • The modelling of the records is slightly different from AWS, but the current DNSRecord spec could still work. The Google implementation of AddRecords will have to process the list of endpoints, grouping related endpoints in order to build up the required API request. In this case there would not be a 1:1 mapping between an endpoint in a DNSRecord and a record in the DNS provider, but the DNSRecord contents would be kept consistent across all providers and compatibility with external-dns would be maintained.

Example request for Geo CNAME record:

POST https://dns.googleapis.com/dns/v1beta2/projects/it-cloud-gcp-rd-midd-san/managedZones/google-hcpapps-net/rrsets

{\n\"name\": \"dnsrecord-geo.google.hcpapps.net.\",\n\"routingPolicy\": {\n\"geo\": {\n\"item\": [\n{\n\"location\": \"us-east1\",\n\"rrdata\": [\n\"dnsrecord-geo.na.google.hcpapps.net.\"\n]\n},\n{\n\"location\": \"europe-west1\",\n\"rrdata\": [\n\"dnsrecord-geo.eu.google.hcpapps.net.\"\n]\n}\n],\n\"enableFencing\": false\n}\n},\n\"ttl\": 60,\n\"type\": \"CNAME\"\n}\n

Example request for Weighted A record:

POST https://dns.googleapis.com/dns/v1beta2/projects/it-cloud-gcp-rd-midd-san/managedZones/google-hcpapps-net/rrsets

{\n\"name\": \"dnsrecord-geo.na.google.hcpapps.net.\",\n\"routingPolicy\": {\n\"wrr\": {\n\"item\": [\n{\n\"weight\": 60.0,\n\"rrdata\": [\n\"172.31.0.1\"\n]\n},\n{\n\"weight\": 60.0,\n\"rrdata\": [\n\"172.31.0.2\"\n]\n}\n]\n}\n},\n\"ttl\": 60,\n\"type\": \"A\"\n}\n

Given the example DNSRecord here describing a record set for a geo location routing policy with four clusters, two in each of two regions (North America and Europe), the following resources are required.

Three DNSRecords, one CNAME (dnsrecord-geo.google.hcpapps.net) and 2 A records (dnsrecord-geo.na.google.hcpapps.net and dnsrecord-geo.eu.google.hcpapps.net)

dig dnsrecord-geo.google.hcpapps.net\n\n; <<>> DiG 9.18.12 <<>> dnsrecord-geo.google.hcpapps.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22504\n;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 65494\n;; QUESTION SECTION:\n;dnsrecord-geo.google.hcpapps.net. IN   A\n\n;; ANSWER SECTION:\ndnsrecord-geo.google.hcpapps.net. 60 IN CNAME   dnsrecord-geo.eu.google.hcpapps.net.\ndnsrecord-geo.eu.google.hcpapps.net. 60 IN A    172.31.0.4\n\n;; Query time: 33 msec\n;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)\n;; WHEN: Tue May 30 15:05:25 IST 2023\n;; MSG SIZE  rcvd: 108\n
"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/","title":"TLS Policy","text":"

The TLSPolicy is a GatewayAPI policy that uses Direct Policy Attachment as defined in the policy attachment mechanism standard. This policy is used to provide TLS for gateway listeners by managing the lifecycle of TLS certificates using CertManager, and is a policy implementation for securing gateway resources.

"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#terms","title":"Terms","text":"
  • GatewayAPI: resources that model service networking in Kubernetes.
  • Gateway: Kubernetes Gateway resource.
  • CertManager: X.509 certificate management for Kubernetes and OpenShift.
  • TLSPolicy: Kuadrant policy for managing TLS certificates with CertManager.
"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#tls-provider-setup","title":"TLS Provider Setup","text":"

A TLSPolicy acts against a target Gateway by processing its listeners for appropriately configured TLS sections.

If for example a Gateway is created with a listener with a hostname of echo.apps.hcpapps.net:

apiVersion: gateway.networking.k8s.io/v1beta1\nkind: Gateway\nmetadata:\nname: prod-web\nnamespace: multi-cluster-gateways\nspec:\ngatewayClassName: kuadrant-multi-cluster-gateway-instance-per-cluster\nlisteners:\n- allowedRoutes:\nnamespaces:\nfrom: All\nname: api\nhostname: echo.apps.hcpapps.net\nport: 443\nprotocol: HTTPS\ntls:\nmode: Terminate\ncertificateRefs:\n- name: apps-hcpapps-tls\nkind: Secret\n

"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#tlspolicy-creation-and-attachment","title":"TLSPolicy creation and attachment","text":"

The TLSPolicy requires a reference to an existing [CertManager Issuer](https://cert-manager.io/docs/configuration/). If we create a self-signed cluster issuer with the following:

apiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\nname: selfsigned-cluster-issuer\nspec:\nselfSigned: {}\n

We can then create and attach a TLSPolicy to start managing TLS certificates for it:

apiVersion: kuadrant.io/v1alpha1\nkind: TLSPolicy\nmetadata:\nname: prod-web\nnamespace: multi-cluster-gateways\nspec:\ntargetRef:\nname: prod-web\ngroup: gateway.networking.k8s.io\nkind: Gateway\nissuerRef:\ngroup: cert-manager.io\nkind: ClusterIssuer\nname: selfsigned-cluster-issuer\n
"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#target-reference","title":"Target Reference","text":"

targetRef field is taken from policy attachment's target reference API. It can only target one resource at a time. Fields included inside:

  • Group is the group of the target resource. The only valid option is gateway.networking.k8s.io.
  • Kind is the kind of the target resource. The only valid option is Gateway.
  • Name is the name of the target resource.
  • Namespace is the namespace of the referent. Currently only local objects can be referred to, so the value is ignored.

"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#issuer-reference","title":"Issuer Reference","text":"

issuerRef field is required and is a reference to a [CertManager Issuer](https://cert-manager.io/docs/configuration/). Fields included inside:

  • Group is the group of the issuer. The only valid option is cert-manager.io.
  • Kind is the kind of the issuer. The only valid options are Issuer and ClusterIssuer.
  • Name is the name of the target issuer.

The example TLSPolicy shown above would create a CertManager Certificate like the following:

apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\nlabels:\ngateway: prod-web\ngateway-namespace: multi-cluster-gateways\nkuadrant.io/tlspolicy: prod-web\nkuadrant.io/tlspolicy-namespace: multi-cluster-gateways\nname: apps-hcpapps-tls\nnamespace: multi-cluster-gateways\nspec:\ndnsNames:\n- echo.apps.hcpapps.net\nissuerRef:\ngroup: cert-manager.io\nkind: ClusterIssuer\nname: selfsigned-cluster-issuer\nsecretName: apps-hcpapps-tls\nsecretTemplate:\nlabels:\ngateway: prod-web\ngateway-namespace: multi-cluster-gateways\nkuadrant.io/tlspolicy: prod-web\nkuadrant.io/tlspolicy-namespace: multi-cluster-gateways\nusages:\n- digital signature\n- key encipherment\n

And valid TLS secrets are generated and synced out to workload clusters:

kubectl get secrets -A | grep apps-hcpapps-tls\nkuadrant-multi-cluster-gateways   apps-hcpapps-tls                    kubernetes.io/tls               3      6m42s\nmulti-cluster-gateways            apps-hcpapps-tls                    kubernetes.io/tls               3      7m12s\n
"},{"location":"multicluster-gateway-controller/docs/tlspolicy/tls-policy/#lets-encrypt-issuer-for-route53-hosted-domain","title":"Let's Encrypt Issuer for Route53 hosted domain","text":"

Any type of Issuer that is supported by CertManager can be referenced in the TLSPolicy. The following shows how you would create a TLSPolicy that uses Let's Encrypt to create production certs for a domain hosted in AWS Route53.

Create a secret containing AWS access key and secret:

kubectl create secret generic mgc-aws-credentials --from-literal=AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> --from-literal=AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> -n multi-cluster-gateways\n

Create a new Issuer:

apiVersion: cert-manager.io/v1\nkind: Issuer\nmetadata:\nname: le-production\nspec:\nacme:\nemail: <YOUR EMAIL>\npreferredChain: \"\"\nprivateKeySecretRef:\nname: le-production\nserver: https://acme-v02.api.letsencrypt.org/directory\nsolvers:\n- dns01:\nroute53:\nhostedZoneID: <YOUR HOSTED ZONE ID>\nregion: us-east-1\naccessKeyID: <AWS_ACCESS_KEY_ID>\nsecretAccessKeySecretRef:\nkey: AWS_SECRET_ACCESS_KEY\nname: mgc-aws-credentials\n

Create a TLSPolicy:

apiVersion: kuadrant.io/v1alpha1\nkind: TLSPolicy\nmetadata:\nname: prod-web\nnamespace: multi-cluster-gateways\nspec:\ntargetRef:\nname: prod-web\ngroup: gateway.networking.k8s.io\nkind: Gateway\nissuerRef:\ngroup: cert-manager.io\nkind: Issuer\nname: le-production\n

"},{"location":"multicluster-gateway-controller/docs/versioning/olm/","title":"Olm","text":""},{"location":"multicluster-gateway-controller/docs/versioning/olm/#how-to-create-a-mgc-olm-bundle-catalog-and-how-to-install-mgc-via-olm","title":"How to create a MGC OLM bundle, catalog and how to install MGC via OLM","text":"

NOTE: You can supply different env vars to the following make commands; these include:

* Version using the env var VERSION \n* Tag via the env var IMAGE_TAG for tags not following the semantic format.\n* Image registry via the env var REGISTRY\n* Registry org via the env var ORG\n\nFor example\n

make bundle-build-push VERSION=2.0.1\nmake catalog-build-push IMAGE_TAG=asdf\n

"},{"location":"multicluster-gateway-controller/docs/versioning/olm/#creating-the-bundle","title":"Creating the bundle","text":"
  1. To generate, build and push the OLM bundle manifests for MGC, run the following make target:
    make bundle-build-push\n
"},{"location":"multicluster-gateway-controller/docs/versioning/olm/#creating-the-catalog","title":"Creating the catalog","text":"
  1. To build and push the catalog image, run:
    make catalog-build-push\n
"},{"location":"multicluster-gateway-controller/docs/versioning/olm/#installing-the-operator-via-olm-catalog","title":"Installing the operator via OLM catalog","text":"
  1. Create a namespace:

       cat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: multi-cluster-gateways-system\nEOF\n

  2. Create a catalog source:

       cat <<EOF | kubectl apply -f -\napiVersion: operators.coreos.com/v1alpha1\nkind: CatalogSource\nmetadata:\n  name: mgc-catalog\n  namespace: olm\nspec:\n  sourceType: grpc\n  image: quay.io/kuadrant/multicluster-gateway-controller-catalog:v6.5.4\n  grpcPodConfig:\n    securityContextConfig: restricted\n  displayName: mgc-catalog\n  publisher: Red Hat\nEOF\n

  3. Create a subscription:

       cat <<EOF | kubectl apply -f -\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n  name: multicluster-gateway-controller\n  namespace: multi-cluster-gateways-system\nspec:\n  channel: alpha\n  name: multicluster-gateway-controller\n  source: mgc-catalog\n  sourceNamespace: olm\n  installPlanApproval: Automatic\nEOF\n

  4. Create an operator group:

       cat <<EOF | kubectl apply -f -\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n  name: og-mgc\n  namespace: multi-cluster-gateways-system\nEOF\n

For more information on each of these OLM resources please see the official docs.
"},{"location":"architecture/rfcs/0001-rlp-v2/","title":"RateLimitPolicy API v2","text":"
  • Feature Name: rlp-v2
  • Start Date: 2023-02-02
  • RFC PR: Kuadrant/architecture#12
  • Issue tracking: Kuadrant/architecture#13
"},{"location":"architecture/rfcs/0001-rlp-v2/#summary","title":"Summary","text":"

Proposal of new API for the Kuadrant's RateLimitPolicy (RLP) CRD, for improved UX.

"},{"location":"architecture/rfcs/0001-rlp-v2/#motivation","title":"Motivation","text":"

The RateLimitPolicy API (v1beta1), particularly its RateLimit type used in ratelimitpolicy.spec.rateLimits, designed in part to fit the underlying implementation based on the Envoy Rate limit filter, has proven to be complex, as well as somewhat limiting for the extension of the API to other platforms and/or for supporting use cases not contemplated in the original design.

Users of the RateLimitPolicy will immediately recognize elements of Envoy's Rate limit API in the definitions of the RateLimit type, with almost 1:1 correspondence between the Configuration type and its counterpart in the Envoy configuration. Although compatibility between those continues to be desired, leaking such implementation details to the level of the API can be avoided to provide a better abstraction for activators (\"matchers\") and payload (\"descriptors\"), stated by users in a seamless way.

Furthermore, the Limit type \u2013 used as well in the RLP's RateLimit type \u2013 implies presently a logical relationship between its inner concepts \u2013 i.e. conditions and variables on one side, and limits themselves on the other \u2013 that otherwise could be shaped in a different manner, to provide clearer understanding of the meaning of these concepts by the user and avoid repetition. I.e., one limit definition contains multiple rate limits, and not the other way around.

"},{"location":"architecture/rfcs/0001-rlp-v2/#goals","title":"Goals","text":"
  1. Decouple the API from the underlying implementation - i.e. provide a more generic and more user-friendly abstraction
  2. Prepare the API for upcoming changes in the Gateway API Policy Attachment specification
  3. Improve consistency of the API with respect to Kuadrant's AuthPolicy CRD - i.e. same language, similar UX
"},{"location":"architecture/rfcs/0001-rlp-v2/#current-wip-to-consider","title":"Current WIP to consider","text":"
  1. Policy attachment update (kubernetes-sigs/gateway-api#1565)
  2. No merging of policies (kuadrant/architecture#10)
  3. A single Policy scoped to HTTPRoutes and HTTPRouteRule (kuadrant/architecture#4) - future
  4. Implement skip_if_absent for the RequestHeaders action (kuadrant/wasm-shim#29)
"},{"location":"architecture/rfcs/0001-rlp-v2/#highlights","title":"Highlights","text":"
  • spec.rateLimits[] replaced with spec.limits{<limit-name>: <limit-definition>}
  • spec.rateLimits.limits replaced with spec.limits.<limit-name>.rates
  • spec.rateLimits.limits.maxValue replaced with spec.limits.<limit-name>.rates.limit
  • spec.rateLimits.limits.seconds replaced with spec.limits.<limit-name>.rates.duration + spec.limits.<limit-name>.rates.unit
  • spec.rateLimits.limits.conditions replaced with spec.limits.<limit-name>.when, structured field based on well-known selectors, mainly for expressing conditions not related to the HTTP route (although not exclusively)
  • spec.rateLimits.limits.variables replaced with spec.limits.<limit-name>.counters, based on well-known selectors
  • spec.rateLimits.rules replaced with spec.limits.<limit-name>.routeSelectors, for selecting (or \"sub-targeting\") HTTPRouteRules that trigger the limit
  • new matcher spec.limits.<limit-name>.routeSelectors.hostnames[]
  • spec.rateLimits.configurations removed \u2013 descriptor actions configuration (previously spec.rateLimits.configurations.actions) generated from spec.limits.<limit-name>.when.selector \u222a spec.limits.<limit-name>.counters and unique identifier of the limit (associated with spec.limits.<limit-name>.routeSelectors)
  • Limitador conditions composed of \"soft\" spec.limits.<limit-name>.when conditions + a \"hard\" condition that binds the limit to its trigger HTTPRouteRules

For detailed differences between the current and new RLP APIs, see Comparison to current RateLimitPolicy.
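
As a rough before/after sketch of these renames (the v1beta1 shape below is abbreviated and reconstructed from the mapping above, not copied from a real policy):

# v1beta1 (abbreviated, illustrative)\nspec:\n  rateLimits:\n  - limits:\n    - maxValue: 50\n      seconds: 60\n      conditions:\n      - \"auth.identity.group != admin\"\n      variables:\n      - \"auth.identity.username\"\n---\n# v2beta1 equivalent\nspec:\n  limits:\n    toys: # user-defined limit name\n      rates:\n      - limit: 50\n        duration: 1\n        unit: minute\n      when:\n      - selector: auth.identity.group\n        operator: neq\n        value: admin\n      counters:\n      - auth.identity.username\n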

"},{"location":"architecture/rfcs/0001-rlp-v2/#guide-level-explanation","title":"Guide-level explanation","text":""},{"location":"architecture/rfcs/0001-rlp-v2/#examples-of-rlps-based-on-the-new-api","title":"Examples of RLPs based on the new API","text":"

Given the following network resources:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: Gateway\nmetadata:\nname: istio-ingressgateway\nnamespace: istio-system\nspec:\ngatewayClassName: istio\nlisteners:\n- hostname:\n- \"*.acme.com\"\n---\napiVersion: gateway.networking.k8s.io/v1alpha2\nkind: HTTPRoute\nmetadata:\nname: toystore\nnamespace: toystore\nspec:\nparentRefs:\n- name: istio-ingressgateway\nnamespace: istio-system\nhostnames:\n- \"*.toystore.acme.com\"\nrules:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nbackendRefs:\n- name: toystore\nport: 80\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nbackendRefs:\n- name: toystore\nport: 80\nfilters:\n- type: ResponseHeaderModifier\nresponseHeaderModifier:\nset:\n- name: Cache-Control\nvalue: \"max-age=31536000, immutable\"\n

The following are examples of RLPs targeting the route and the gateway. Each example is independent from the other.

"},{"location":"architecture/rfcs/0001-rlp-v2/#example-1-minimal-example-network-resource-targeted-entirely-without-filtering-unconditional-and-unqualified-rate-limiting","title":"Example 1. Minimal example - network resource targeted entirely without filtering, unconditional and unqualified rate limiting","text":"

In this example, all traffic to *.toystore.acme.com will be limited to 5rps, regardless of any other attribute of the HTTP request (method, path, headers, etc), without any extra \"soft\" conditions (conditions non-related to the HTTP route), across all consumers of the API (unqualified rate limiting).

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-infra-rl\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\nbase: # user-defined name of the limit definition - future use for handling hierarchical policy attachment\n- rates: # at least one rate limit required\n- limit: 5\nunit: second\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/assets/*\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-infra-rl/base\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- toystore/toystore-infra-rl/base == \"1\"\nmax_value: 5\nseconds: 1\nnamespace: TDB\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-2-targeting-specific-route-rules-with-counter-qualifiers-multiple-rates-per-limit-definition-and-soft-conditions","title":"Example 2. Targeting specific route rules, with counter qualifiers, multiple rates per limit definition and \"soft\" conditions","text":"

In this example, a distinct limit will be associated (\"bound\") to each individual HTTPRouteRule of the targeted HTTPRoute, by using the routeSelectors field for selecting (or \"sub-targeting\") the HTTPRouteRule.

The following limit definitions will be bound to each HTTPRouteRule:

  • /toys* \u2192 50rpm, enforced per username (counter qualifier) and only in case the user is not an admin (\"soft\" condition).
  • /assets/* \u2192 5rpm / 100rp12h

Each set of trigger matches in the RLP will be matched to all HTTPRouteRules whose HTTPRouteMatches is a superset of the set of trigger matches in the RLP. For every HTTPRouteRule matched, the HTTPRouteRule will be bound to the corresponding limit definition that specifies that trigger. In case no HTTPRouteRule is found containing at least one HTTPRouteMatch that is identical to some set of matching rules of a particular limit definition, the limit definition is considered invalid and reported as such in the status of RLP.

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-per-endpoint\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ntoys:\nrates:\n- limit: 50\nduration: 1\nunit: minute\ncounters:\n- auth.identity.username\nrouteSelectors:\n- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nwhen:\n- selector: auth.identity.group\noperator: neq\nvalue: admin\nassets:\nrates:\n- limit: 5\nduration: 1\nunit: minute\n- limit: 100\nduration: 12\nunit: hour\nrouteSelectors:\n- matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-endpoint/toys\"\ndescriptor_value: \"1\"\n- metadata:\ndescriptor_key: \"auth.identity.group\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"group\"\n- metadata:\ndescriptor_key: \"auth.identity.username\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"username\"\n- rules:\n- paths: [\"/assets/*\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-endpoint/assets\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- toystore/toystore-per-endpoint/toys == \"1\"\n- auth.identity.group != \"admin\"\nvariables:\n- auth.identity.username\nmax_value: 50\nseconds: 60\nnamespace: kuadrant\n- conditions:\n- toystore/toystore-per-endpoint/assets == \"1\"\nmax_value: 5\nseconds: 60\nnamespace: kuadrant\n- conditions:\n- toystore/toystore-per-endpoint/assets == \"1\"\nmax_value: 100\nseconds: 43200 # 12 hours\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-3-targeting-a-subset-of-an-httprouterule-httproutematch-missing","title":"Example 3. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch missing","text":"

Consider a 150rps rate limit set on requests to GET /toys/special. Such a specific application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, to avoid binding limits to HTTPRouteRules that are more permissive than the actual intended scope of the limit, the RateLimitPolicy controller requires trigger matches to find identical matching rules explicitly defined amongst the sets of HTTPRouteMatches of the HTTPRouteRules potentially targeted.

As a consequence, by simply defining a trigger match for GET /toys/special in the RLP, the GET|POST /toys* HTTPRouteRule will NOT be bound to the limit definition. In order to ensure the limit definition is properly bound to a routing rule that strictly covers the GET /toys/special application endpoint, first the user has to modify the spec of the HTTPRoute by adding an explicit HTTPRouteRule for this case:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: HTTPRoute\nmetadata:\nname: toystore\nnamespace: toystore\nspec:\nparentRefs:\n- name: istio-ingressgateway\nnamespace: istio-system\nhostnames:\n- \"*.toystore.acme.com\"\nrules:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nbackendRefs:\n- name: toystore\nport: 80\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nbackendRefs:\n- name: toystore\nport: 80\nfilters:\n- type: ResponseHeaderModifier\nresponseHeaderModifier:\nset:\n- name: Cache-Control\nvalue: \"max-age=31536000, immutable\"\n- matches: # new (more specific) HTTPRouteRule added\n- path:\ntype: Exact\nvalue: \"/toys/special\"\nmethod: GET\nbackendRefs:\n- name: toystore\nport: 80\n

After that, the RLP can target the new HTTPRouteRule strictly:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-special-toys\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\nspecialToys:\nrates:\n- limit: 150\nunit: second\nrouteSelectors:\n- matches: # matches the new HTTPRouteRule (i.e. GET /toys/special)\n- path:\ntype: Exact\nvalue: \"/toys/special\"\nmethod: GET\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys/special\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-special-toys/specialToys\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- toystore/toystore-special-toys/specialToys == \"1\"\nmax_value: 150\nseconds: 1\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-4-targeting-a-subset-of-an-httprouterule-httproutematch-found","title":"Example 4. Targeting a subset of an HTTPRouteRule - HTTPRouteMatch found","text":"

This example is similar to Example 3. Consider the use case of setting a 150rpm rate limit on requests to GET /toys*.

The targeted application endpoint is covered by the first HTTPRouteRule in the HTTPRoute (as a subset of GET or POST to any path that starts with /toys). However, unlike in the previous example where, at first, no HTTPRouteRule included an explicit HTTPRouteMatch for GET /toys/special, in this example the HTTPRouteMatch for the targeted application endpoint GET /toys* does exist explicitly in one of the HTTPRouteRules, thus the RateLimitPolicy controller would have no problem binding the limit definition to the HTTPRouteRule. That would nonetheless cause an unexpected behavior: the limit would be triggered not strictly for GET /toys*, but also for POST /toys*.

To avoid extending the scope of the limit beyond what is desired, with no extra \"soft\" conditions, again the user must modify the spec of the HTTPRoute, so an exclusive HTTPRouteRule exists for the GET /toys* application endpoint:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: HTTPRoute\nmetadata:\nname: toystore\nnamespace: toystore\nspec:\nparentRefs:\n- name: istio-ingressgateway\nnamespace: istio-system\nhostnames:\n- \"*.toystore.acme.com\"\nrules:\n- matches: # first HTTPRouteRule split into two \u2013 one for GET /toys*, other for POST /toys*\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\nbackendRefs:\n- name: toystore\nport: 80\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nbackendRefs:\n- name: toystore\nport: 80\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nbackendRefs:\n- name: toystore\nport: 80\nfilters:\n- type: ResponseHeaderModifier\nresponseHeaderModifier:\nset:\n- name: Cache-Control\nvalue: \"max-age=31536000, immutable\"\n

The RLP can then target the new HTTPRouteRule strictly:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toy-readers\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ntoyReaders:\nrates:\n- limit: 150\nunit: second\nrouteSelectors:\n- matches: # matches the new more specific HTTPRouteRule (i.e. GET /toys*)\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toy-readers/toyReaders\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- toystore/toy-readers/toyReaders == \"1\"\nmax_value: 150\nseconds: 1\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-5-one-limit-triggered-by-multiple-httprouterules","title":"Example 5. One limit triggered by multiple HTTPRouteRules","text":"

In this example, both HTTPRouteRules, i.e. GET|POST /toys* and /assets/*, are targeted by the same limit of 50rpm per username.

Because the HTTPRoute has no other rule, this is technically equivalent to targeting the entire HTTPRoute and therefore similar to Example 1. However, if the HTTPRoute had other rules or got other rules added afterwards, this would ensure the limit applies only to the two original route rules.

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-per-user\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ntoysOrAssetsPerUsername:\nrates:\n- limit: 50\nduration: 1\nunit: minute\ncounters:\n- auth.identity.username\nrouteSelectors:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/assets/*\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-user/toysOrAssetsPerUsername\"\ndescriptor_value: \"1\"\n- metadata:\ndescriptor_key: \"auth.identity.username\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"username\"\n
limits:\n- conditions:\n- toystore/toystore-per-user/toysOrAssetsPerUsername == \"1\"\nvariables:\n- auth.identity.username\nmax_value: 50\nseconds: 60\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-6-multiple-limit-definitions-targeting-the-same-httprouterule","title":"Example 6. Multiple limit definitions targeting the same HTTPRouteRule","text":"

In case multiple limit definitions target a same HTTPRouteRule, all those limit definitions will be bound to the HTTPRouteRule. No limit \"shadowing\" will be enforced by the RLP controller. Due to how things work as of today in Limitador nonetheless (i.e. the rule of the most restrictive limit wins), in some cases, across multiple limits triggered, one limit ends up \"shadowing\" others, depending on further qualification of the counters and the actual RL values.

E.g., the following RLP intends to set 50rps per username on GET /toys*, and 100rps on POST /toys* or /assets/*:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-per-endpoint\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\nreadToys:\nrates:\n- limit: 50\nunit: second\ncounters:\n- auth.identity.username\nrouteSelectors:\n- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\npostToysOrAssets:\nrates:\n- limit: 100\nunit: second\nrouteSelectors:\n- matches: # matches the 1st HTTPRouteRule (i.e. GET or POST to /toys*)\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\n- matches: # matches the 2nd HTTPRouteRule (i.e. /assets/*)\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-endpoint/readToys\"\ndescriptor_value: \"1\"\n- metadata:\ndescriptor_key: \"auth.identity.username\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"username\"\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/assets/*\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-endpoint/readToys\"\ndescriptor_value: \"1\"\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-endpoint/postToysOrAssets\"\ndescriptor_value: \"1\"\n
limits:\n- conditions: # actually applies to GET|POST /toys*\n- toystore/toystore-per-endpoint/readToys == \"1\"\nvariables:\n- auth.identity.username\nmax_value: 50\nseconds: 1\nnamespace: kuadrant\n- conditions: # actually applies to GET|POST /toys* and /assets/*\n- toystore/toystore-per-endpoint/postToysOrAssets == \"1\"\nmax_value: 100\nseconds: 1\nnamespace: kuadrant\n

This example was written in this way only to highlight that it is possible for multiple limit definitions to select the same HTTPRouteRule. To avoid over-limiting between GET|POST /toys* and thus ensure the originally intended limit definitions for each of these routes apply, the HTTPRouteRule should be split into two, as done in Example 4.
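
For illustration, a minimal sketch of the split (assuming the same toystore backend; the /assets/* rule would remain unchanged):

rules:\n- matches: # 1st rule after the split: GET /toys*\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\nbackendRefs:\n- name: toystore\nport: 80\n- matches: # 2nd rule after the split: POST /toys*\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nbackendRefs:\n- name: toystore\nport: 80\n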

"},{"location":"architecture/rfcs/0001-rlp-v2/#example-7-limits-triggered-for-specific-hostnames","title":"Example 7. Limits triggered for specific hostnames","text":"

In the previous examples, the limit definitions and therefore the counters were set indistinctly for all hostnames \u2013 i.e. no matter if the request is sent to games.toystore.acme.com or dolls.toystore.acme.com, the same counters are expected to be affected. In this example on the other hand, a 1000rpd rate limit is set for requests to /assets/* only when the hostname matches games.toystore.acme.com.

First, the user needs to edit the HTTPRoute to make the targeted hostname games.toystore.acme.com explicit:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: HTTPRoute\nmetadata:\nname: toystore\nnamespace: toystore\nspec:\nparentRefs:\n- name: istio-ingressgateway\nnamespace: istio-system\nhostnames:\n- \"*.toystore.acme.com\"\n- games.toystore.acme.com # new (more specific) hostname added\nrules:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nbackendRefs:\n- name: toystore\nport: 80\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nbackendRefs:\n- name: toystore\nport: 80\nfilters:\n- type: ResponseHeaderModifier\nresponseHeaderModifier:\nset:\n- name: Cache-Control\nvalue: \"max-age=31536000, immutable\"\n

After that, the RLP can target specifically the newly added hostname:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-per-hostname\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ngames:\nrates:\n- limit: 1000\nunit: day\nrouteSelectors:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nhostnames:\n- games.toystore.acme.com\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/assets/*\"]\nhosts: [\"games.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-per-hostname/games\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- toystore/toystore-per-hostname/games == \"1\"\nmax_value: 1000\nseconds: 86400 # 1 day\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#example-8-targeting-the-gateway","title":"Example 8. Targeting the Gateway","text":"

Note: Additional meaning and context may be given to this use case in the future, when discussing defaults and overrides.

Targeting a Gateway is a shortcut to targeting all individual HTTPRoutes referencing the gateway as parent. This differs from Example 1 nonetheless because, by targeting the gateway rather than an individual HTTPRoute, the RLP applies automatically to all HTTPRoutes pointing to the gateway, including routes created before and after the creation of the RLP. Moreover, all those routes will share the same limit counters specified in the RLP.

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: gw-rl\nnamespace: istio-system # the namespace of the policy \u2013 matches the descriptor key \"istio-system/gw-rl/base\" below\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: Gateway\nname: istio-ingressgateway\nlimits:\nbase:\nrates:\n- limit: 5\nunit: second\n
How is this RLP implemented under the hood?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/assets/*\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"istio-system/gw-rl/base\"\ndescriptor_value: \"1\"\n
limits:\n- conditions:\n- istio-system/gw-rl/base == \"1\"\nmax_value: 5\nseconds: 1\nnamespace: TBD\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#comparison-to-current-ratelimitpolicy","title":"Comparison to current RateLimitPolicy","text":"Current New Reason 1:1 relation between Limit (the object) and the actual Rate limit (the value) (spec.rateLimits.limits) Rate limit becomes a detail of Limit where each limit may define one or more rates (1:N) (spec.limits.<limit-name>.rates)
  • It allows to reuse when conditions and counters for groups of rate limits
Parsed spec.rateLimits.limits.conditions field, directly exposing the Limitador's API Structured spec.limits.<limit-name>.when condition field composed of 3 well-defined properties: selector, operator and value
  • Feels more K8s-native
  • Consistent with github.com/kuadrant/authorino/api/v1beta1#JSONPatternExpression
  • No need for a parser (only if implemented by Limitador)
spec.rateLimits.configurations as a list of \"variables assignments\" and direct exposure of Envoy's RL descriptor actions API Descriptor actions composed from selectors used in the limit definitions (spec.limits.<limit-name>.when.selector and spec.limits.<limit-name>.counters) plus a fixed identifier of the route rules (spec.limits.<limit-name>.routeSelectors)
  • Abstract the Envoy-specific concepts of \"actions\" and \"descriptors\"
  • No risk of mismatching descriptors keys between \"actions\" and actual usage in the limits
  • No user-defined generic descriptors (e.g. \"limited = 1\")
  • Source value of the selectors defined from an implicit \"context\" data structure
Key-value descriptors Structured descriptors referring to a contextual well-known data structure
  • Consistent with Authorino's Authorization JSON (#context)
Limitador conditions independent from the route rules Artificial Limitador condition injected to bind routes and corresponding limits
  • Ensure the limit is enforced only for corresponding selected HTTPRouteRules
translate(spec.rateLimits.rules) \u2282 httproute.spec.rules spec.limits.<limit-name>.routeSelectors.matches \u2286 httproute.spec.rules.matches
  • HTTPRouteRule selector (via HTTPRouteMatch subset)
  • Gateway API language
  • Preparation for inherited policies and defaults & overrides
spec.rateLimits.limits.seconds spec.limits.<limit-name>.rates.duration and spec.limits.<limit-name>.rates.unit
  • Support for more units beyond seconds
  • duration: 1 by default
spec.rateLimits.limits.variables spec.limits.<limit-name>.counters
  • Improved (more specific) naming
spec.rateLimits.limits.maxValue spec.limits.<limit-name>.rates.limit
  • Improved (more generic) naming
"},{"location":"architecture/rfcs/0001-rlp-v2/#reference-level-explanation","title":"Reference-level explanation","text":"

By completely dropping the configurations field from the RLP, the RL descriptor actions are now composed essentially from the selectors listed in the when conditions and the counters, plus an artificial condition used to bind the HTTPRouteRules to the corresponding limits to trigger in Limitador.

The descriptor actions composed from the selectors in the \"soft\" when conditions and counter qualifiers originate from the direct references these selectors make to paths within a well-known data structure that stores information about the context (HTTP request and ext-authz filter). These selectors in \"soft\" when conditions and counter qualifiers are thereby called well-known selectors.

Other descriptor actions might be composed by the RLP controller to define additional RL conditions to bind HTTPRouteRules and corresponding limits.

"},{"location":"architecture/rfcs/0001-rlp-v2/#well-known-selectors","title":"Well-known selectors","text":"

Each selector used in a when condition or counter qualifier is a direct reference to a path within a well-known data structure that stores information about the context (L4 and L7 data of the original request handled by the proxy), as well as auth data (dynamic metadata occasionally exported by the external authorization filter and injected by the proxy into the rate-limit filter).

The well-known data structure for building RL descriptor actions resembles Authorino's \"Authorization JSON\", whose context component consists of Envoy's AttributeContext type of the external authorization API (marshalled as JSON). Compared to the more generic RateLimitRequest struct, the AttributeContext provides a more structured and arguably more intuitive relation between the data sources for the RL descriptor actions and their corresponding key names through which the values are referred to within the RLP, in a context of predominantly serving HTTP applications.

To keep compatibility with the Envoy Rate Limit API, the well-known data structure can optionally be extended with the RateLimitRequest, thus resulting in the following final structure.

context: # Envoy's Ext-Authz `CheckRequest.AttributeContext` type\nsource:\naddress: \u2026\nservice: \u2026\n\u2026\ndestination:\naddress: \u2026\nservice: \u2026\n\u2026\nrequest:\nhttp:\nhost: \u2026\npath: \u2026\nmethod: \u2026\nheaders: {\u2026}\nauth: # Dynamic metadata exported by the external authorization service\nratelimit: # Envoy's Rate Limit `RateLimitRequest` type\ndomain: \u2026 # generated by the Kuadrant controller\ndescriptors: {\u2026} # descriptors configured by the user directly in the proxy (not generated by the Kuadrant controller, if allowed)\nhitsAddend: \u2026 # only in case we want to allow users to refer to this value in a policy\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#mechanics-of-generating-rl-descriptor-actions","title":"Mechanics of generating RL descriptor actions","text":"

From the perspective of a user who writes a RLP, the selectors used in the when and counters fields are paths to the well-known data structure (see Well-known selectors). While designing a policy, the user intuitively pictures the well-known data structure and states each limit definition having in mind the possible values assumed by each of those paths in the data plane. For example,

The user story:

Each distinct user (auth.identity.username) can send no more than 1rps to the same HTTP path (context.request.http.path).

...materializes as the following RLP:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ndolls:\nrates:\n- limit: 1\nunit: second\ncounters:\n- auth.identity.username\n- context.request.http.path\n

The following selectors are to be interpreted by the RLP controller: - auth.identity.username - context.request.http.path

The RLP controller uses a map to translate each selector into its corresponding descriptor action. (Roughly described:)

context.source.address    \u2192 source_cluster(...) # TBC\ncontext.source.service    \u2192 source_cluster(...) # TBC\ncontext.destination...    \u2192 destination_cluster(...)\ncontext.destination...    \u2192 destination_cluster(...)\ncontext.request.http.<X>  \u2192 request_headers(header_name: \":<X>\")\ncontext.request...        \u2192 ...\nauth.<X>                  \u2192 metadata(key: \"envoy.filters.http.ext_authz\", path: <X>)\nratelimit.domain          \u2192 <hostname>\n

...to yield effectively:

rate_limits:\n- actions:\n- metadata:\ndescriptor_key: \"auth.identity.username\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"username\"\n- request_headers:\ndescriptor_key: \"context.request.http.path\"\nheader_name: \":path\"\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#artificial-limitador-condition-for-routeselectors","title":"Artificial Limitador condition for routeSelectors","text":"

For each limit definition that explicitly or implicitly defines a routeSelectors field, the RLP controller will generate an artificial Limitador condition that ensures the limit applies only when the filtered rules are honoured when serving the request. This can be implemented with a 2-step procedure: 1. generate a unique identifier of the limit \u2013 i.e. <policy-namespace>/<policy-name>/<limit-name>; 2. associate a generic_key type descriptor action with each HTTPRouteRule targeted by the limit \u2013 i.e. { descriptor_key: <unique identifier of the limit>, descriptor_value: \"1\" }.

For example, given the following RLP:

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-non-admin-users\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\ntoys:\nrouteSelectors:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: POST\nrates:\n- limit: 50\nduration: 1\nunit: minute\nwhen:\n- selector: auth.identity.group\noperator: neq\nvalue: admin\nassets:\nrouteSelectors:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/assets/\"\nrates:\n- limit: 5\nduration: 1\nunit: minute\nwhen:\n- selector: auth.identity.group\noperator: neq\nvalue: admin\n

Apart from the following descriptor action associated with both routes:

- metadata:\ndescriptor_key: \"auth.identity.group\"\nmetadata_key:\nkey: \"envoy.filters.http.ext_authz\"\npath:\n- segment:\nkey: \"identity\"\n- segment:\nkey: \"group\"\n

...and its corresponding Limitador condition:

auth.identity.group != \"admin\"\n

The following additional artificial descriptor actions will be generated:

# associated with route rule GET|POST /toys*\n- generic_key:\ndescriptor_key: \"toystore/toystore-non-admin-users/toys\"\ndescriptor_value: \"1\"\n# associated with route rule /assets/*\n- generic_key:\ndescriptor_key: \"toystore/toystore-non-admin-users/assets\"\ndescriptor_value: \"1\"\n

...and their corresponding Limitador conditions.

In the end, the following Limitador configuration is yielded:

- conditions:\n- toystore/toystore-non-admin-users/toys == \"1\"\n- auth.identity.group != \"admin\"\nmax_value: 50\nseconds: 60\nnamespace: kuadrant\n- conditions:\n- toystore/toystore-non-admin-users/assets == \"1\"\n- auth.identity.group != \"admin\"\nmax_value: 5\nseconds: 60\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#support-in-wasm-shim-and-envoy-rl-api","title":"Support in wasm shim and Envoy RL API","text":"

This proposal tries to keep compatibility with the Envoy API for rate limiting and does not introduce any new requirement that would strictly require the wasm shim to be implemented.

In the case of implementation of this proposal in the wasm shim, all types of matchers supported by the HTTPRouteMatch type of Gateway API must also be supported in the rate_limit_policies.gateway_actions.rules field of the wasm plugin configuration. These include matchers based on path (prefix, exact), headers, query string parameters and method.
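
For reference, a minimal sketch of a single Gateway API HTTPRouteMatch combining all four matcher kinds (the header and query parameter names are illustrative):

matches:\n- path:\ntype: PathPrefix # or Exact\nvalue: \"/toys\"\nmethod: GET\nheaders:\n- type: Exact\nname: x-canary\nvalue: \"true\"\nqueryParams:\n- type: Exact\nname: page # illustrative query parameter\nvalue: \"1\"\n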

"},{"location":"architecture/rfcs/0001-rlp-v2/#drawbacks","title":"Drawbacks","text":"

HTTPRoute editing occasionally required: rules that don't explicitly include a matcher wanted for the policy need to be duplicated, so that matcher can be added as a special case for each of those rules.

Risk of over-targeting: some HTTPRouteRules might need to be split into more specific ones so a limit definition is not bound beyond what is intended (e.g. targeting method: GET when the route matches method: POST|GET).

Prone to consistency issues: typos and updates to the HTTPRoute can easily cause a mismatch and invalidate a RLP (see the sketch after this list).

Two types of conditions \u2013 routeSelectors and when conditions: although they have different meanings (evaluated in the gateway vs. evaluated in Limitador) and are meant for expressing different types of rules (HTTPRouteRule selectors vs. \"soft\" conditions based on attributes not related to the HTTP request), users might still perceive these as two ways of expressing conditions and find it difficult to understand at first that \"soft\" conditions do not accept expressions related to attributes of the HTTP request.
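
Illustrating the consistency-issues drawback above, a hypothetical routeSelector whose path value no longer matches what the HTTPRoute declares:

routeSelectors:\n- matches:\n- path:\ntype: PathPrefix\nvalue: \"/toy\" # hypothetical typo \u2013 the HTTPRoute declares \"/toys\", so no HTTPRouteRule is selected and the limit definition goes \"stale\"\nmethod: GET\n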

"},{"location":"architecture/rfcs/0001-rlp-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"architecture/rfcs/0001-rlp-v2/#targeting-full-httprouterules","title":"Targeting full HTTPRouteRules","text":"

Requiring users to specify full HTTPRouteRule matches in the RLP (as opposed to any subset of HTTPRouteMatches of targeted HTTPRouteRules \u2013 the current proposal) shares some of the same drawbacks as this proposal, such as HTTPRoute editing occasionally being required and proneness to consistency issues. If, on one hand, it eliminates the risk of over-targeting, on the other hand, it does so at the cost of requiring excessively verbose policies, to the point of sometimes requiring the user to specify trigger matching rules that are significantly more than what is originally and strictly intended.

E.g.:

On a HTTPRoute that contains the following HTTPRouteRules (simplified representation):

{ header: x-canary=true } \u2192 backend-canary\n{ * } \u2192 backend-rest\n

Suppose the user wants to define a RLP that targets { method: POST }. First, the user needs to edit the HTTPRoute and duplicate the HTTPRouteRules:

{ header: x-canary=true, method: POST } \u2192 backend-canary\n{ header: x-canary=true } \u2192 backend-canary\n{ method: POST } \u2192 backend-rest\n{ * } \u2192 backend-rest\n

Then, the user needs to include the following trigger in the RLP so only full HTTPRouteRules are specified:

{ header: x-canary=true, method: POST }\n{ method: POST }\n

The first matching rule of the trigger (i.e. { header: x-canary=true, method: POST }) is beyond the original user intent of targeting simply { method: POST }.

This issue can be even more concerning in the case of targeting gateways with multiple child HTTPRoutes: all the HTTPRoutes would have to be fixed, and HTTPRouteRules covering all the cases in all HTTPRoutes would have to be listed in the policy targeting the gateway.

"},{"location":"architecture/rfcs/0001-rlp-v2/#all-limit-definitions-apply-vs-limit-shadowing","title":"All limit definitions apply vs. Limit \"shadowing\"","text":"

The proposed binding between limit definitions and the HTTPRouteRules that trigger the limits was designed so that multiple limit definitions can be bound to the same HTTPRouteRule that triggers those limits in Limitador. That means no limit definition will \"shadow\" another at the level of the RLP controller, i.e. the RLP controller will honour the intended binding according to the selectors specified in the policy.

Nonetheless, due to how Limitador works as of today (i.e. the rule of the most restrictive limit wins), and because all limit definitions are triggered by a given shared HTTPRouteRule, it might be the case that, across multiple limits triggered, one limit ends up \"shadowing\" other limits. However, that is by implementation of Limitador and therefore beyond the scope of the API.

An alternative to allowing all limit definitions to be bound to the same selected HTTPRouteRules would be enforcing that, amongst multiple limit definitions targeting the same HTTPRouteRule, only the first of those limit definitions is bound to the HTTPRouteRule. This alternative approach would effectively cause the first limit to \"shadow\" any other on that particular HTTPRouteRule, by implementation of the RLP controller (i.e. at the API level).

While the first approach causes an artificial Limitador condition of the form <policy-ns>/<policy-name>/<limit-name> == \"1\", the alternative approach (\"limit shadowing\") could be implemented by generating a descriptor of the following form instead: ratelimit.binding == \"<policy-ns>/<policy-name>/<limit-name>\".
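
For illustration, a minimal sketch of the descriptor action the \"limit shadowing\" alternative could generate instead, reusing the limit name from Example 6 (not the adopted design):

- generic_key:\ndescriptor_key: \"ratelimit.binding\"\ndescriptor_value: \"toystore/toystore-per-endpoint/readToys\"\n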

The downside of allowing multiple bindings to the same HTTPRouteRule is that all limits apply in Limitador, thus frequently making status reporting harder. The most restrictive rate limit strategy implemented by Limitador might not be obvious to users who set multiple limit definitions and will require additional information reported back to the user about the actual status of the limit definitions stated in a RLP. On the other hand, it enables use cases where different limit definitions \u2013 varying on the counter qualifiers, additional \"soft\" conditions, or actual rate limit values \u2013 are triggered by the same HTTPRouteRule.

"},{"location":"architecture/rfcs/0001-rlp-v2/#writing-soft-when-conditions-based-on-attributes-of-the-http-request","title":"Writing \"soft\" when conditions based on attributes of the HTTP request","text":"

As a first step, users will not be able to write \"soft\" when conditions to selectively apply rate limit definitions based on attributes of the HTTP request that otherwise could be specified using the routeSelectors field of the RLP instead.

On one hand, using when conditions for route filtering would make it easy to define limits when the HTTPRoute cannot be modified to include the special rule. On the other hand, users would miss information in the status. An HTTPRouteRule for GET|POST /toys*, for example, that is targeted with an additional \"soft\" when condition specifying that the method must be equal to GET and the path exactly equal to /toys/special (see Example 3) would be reported as rate limited, with extra details needed to state that this is in fact only for GET /toys/special. For small deployments, this might be considered acceptable; however, it would easily explode to an unmanageable number of cases for deployments with more than just a few limit definitions and HTTPRouteRules.

Moreover, by not specifying a stricter HTTPRouteRule for GET /toys/special, the RLP controller would bind the limit definition to other rules, causing the rate limit filter to invoke the rate limit service (Limitador) for cases other than strictly GET /toys/special. Even though the rate limits would still be enforced in Limitador only for GET /toys/special (due to the presence of a hypothetical \"soft\" when condition), an extra no-op hop to the rate limit service would happen. This is avoided with the currently imposed limitation.

Example of \"soft\" when conditions for rate limit based on attributes of the HTTP request (NOT SUPPORTED):

apiVersion: kuadrant.io/v2beta1\nkind: RateLimitPolicy\nmetadata:\nname: toystore-special-toys\nnamespace: toystore\nspec:\ntargetRef:\ngroup: gateway.networking.k8s.io\nkind: HTTPRoute\nname: toystore\nlimits:\nspecialToys:\nrates:\n- limit: 150\nunit: second\nrouteSelectors:\n- matches: # matches the original HTTPRouteRule GET|POST /toys*\n- path:\ntype: PathPrefix\nvalue: \"/toys\"\nmethod: GET\nwhen:\n- selector: context.request.http.method # cannot omit this selector or POST /toys/special would also be rate limited\noperator: eq\nvalue: GET\n- selector: context.request.http.path\noperator: eq\nvalue: /toys/special\n
How would this RLP be implemented under the hood if supported?
gateway_actions:\n- rules:\n- paths: [\"/toys*\"]\nmethods: [\"GET\"]\nhosts: [\"*.toystore.acme.com\"]\n- paths: [\"/toys*\"]\nmethods: [\"POST\"]\nhosts: [\"*.toystore.acme.com\"]\nconfigurations:\n- generic_key:\ndescriptor_key: \"toystore/toystore-special-toys/specialToys\"\ndescriptor_value: \"1\"\n- request_headers:\ndescriptor_key: \"context.request.http.method\"\nheader_name: \":method\"\n- request_headers:\ndescriptor_key: \"context.request.http.path\"\nheader_name: \":path\"\n
limits:\n- conditions:\n- toystore/toystore-special-toys/specialToys == \"1\"\n- context.request.http.method == \"GET\"\n- context.request.http.path == \"/toys/special\"\nmax_value: 150\nseconds: 1\nnamespace: kuadrant\n
"},{"location":"architecture/rfcs/0001-rlp-v2/#possible-variations-for-the-selectors-conditions-and-counter-qualifiers","title":"Possible variations for the selectors (conditions and counter qualifiers)","text":"

The main drivers behind the proposed design for the selectors (conditions and counter qualifiers), based on (i) structured condition expressions composed of fields selector, operator, and value, and (ii) when conditions and counters separated in two distinct fields (variation \"C\" below), are: 1. consistency with the Authorino AuthConfig API, which also specifies when conditions expressed in selector, operator, and value fields; 2. explicit user intent, without subtle distinction of meaning based on presence of optional fields.

Nonetheless here are a few alternative variations to consider:

Single field, structured condition expressions (A):
\nselectors:\n  - selector: context.request.http.method\n    operator: eq\n    value: GET\n  - selector: auth.identity.username
Single field, parsed condition expressions (B):
\nselectors:\n  - context.request.http.method == \"GET\"\n  - auth.identity.username
Distinct fields, structured condition expressions (C) \u2b50\ufe0f:
\nwhen:\n  - selector: context.request.http.method\n    operator: eq\n    value: GET\ncounters:\n  - auth.identity.username
Distinct fields, parsed condition expressions (D):
\nwhen:\n  - context.request.http.method == \"GET\"\ncounters:\n  - auth.identity.username

\u2b50\ufe0f Variation adopted for the examples and (so far) final design proposal.

"},{"location":"architecture/rfcs/0001-rlp-v2/#prior-art","title":"Prior art","text":"

Most implementations currently orbiting around Gateway API (e.g. Istio, Envoy Gateway, etc.) for added RL functionality seem to have been leaning more toward the direct route extension pattern than toward Policy Attachment. That might be an option particularly suitable for gateway implementations (gateway providers) and for those aiming to avoid dealing with defaults and overrides.

"},{"location":"architecture/rfcs/0001-rlp-v2/#unresolved-questions","title":"Unresolved questions","text":"
  1. In case a limit definition lists route selectors such that some can be bound to HTTPRouteRules and some cannot (see Example 6), do we bind the valid route selectors and ignore the invalid ones, or is the limit definition invalid altogether and bound to no HTTPRouteRule at all? A: By allowing multiple limit definitions to target the same HTTPRouteRule, the issue stated here will occur less often. For the other cases where a limit definition still fails to select an HTTPRouteRule (e.g. due to mismatching trigger matches), the limit definition is not considered invalid. Possibly the limit definition is considered \"stale\" (or \"orphan\"), i.e. not bound to any HTTPRouteRule.
  2. What should we fill domain/namespace with, if no longer with the hostname? This can be useful for multi-tenancy. A: For now, the domain/namespace field of the RL configuration (Envoy and Limitador ends) will be filled with a fixed (configurable) string (e.g. \"kuadrant\"). This can change in future to better support multi-tenancy and/or other use cases where a total sharding of the limit definitions within a same instance of Kuadrant is desired.
  3. How do we support lists of hostnames in Limitador conditions (single counter)? Should we open an issue for a new in operator? A: Not needed. The hostnames must exist in the targeted object explicitly, just like any other routing rules intended to be targeted by a limit definition. By setting the explicit hostname in the targeted network object (Gateway or HTTPRoute), the hostname also becomes a route rule available for \"hard\" trigger configuration.
  4. What \"soft\" condition operators do we need to support (e.g. eq, neq, exists, nexists, matches)?
  5. Do we need a special field to define shared counters across clusters/Limitador instances, or is that to be solved at another layer (Limitador, Kuadrant CRDs, MCTC)?
"},{"location":"architecture/rfcs/0001-rlp-v2/#future-possibilities","title":"Future possibilities","text":"
  • Port routeSelectors and the semantics around it to the AuthPolicy API (aka \"KAP v2\").
  • Defaults and overrides, either along the lines of architecture#4 or architecture#10.
"},{"location":"architecture/rfcs/0002-well-known-attributes/","title":"Well-known Attributes","text":"
  • Feature Name: well-known-attributes
  • Start Date: 2023-06-13
  • RFC PR: Kuadrant/architecture#17
"},{"location":"architecture/rfcs/0002-well-known-attributes/#summary","title":"Summary","text":"

Define a well-known structure for users to declare request data selectors in their RateLimitPolicies and AuthPolicies. This structure is referred to as the Kuadrant Well-known Attributes.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#motivation","title":"Motivation","text":"

The well-known attributes let users write policy rules \u2013 conditions and, in general, dynamic values that refer to attributes in the data plane \u2013 in a concise and seamless way.

Decoupled from the policy CRDs, the well-known attributes: 1. define a common language for referring to values of the data plane in the Kuadrant policies; 2. allow dynamically evolving the policy APIs regarding how they admit references to data plane attributes; 3. encompass all common and component-specific selectors for data plane attributes; 4. have a single and unified specification, although this specification may occasionally link to additional, component-specific, external docs.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#guide-level-explanation","title":"Guide-level explanation","text":"

One who writes a Kuadrant policy and wants to build policy constructs such as conditions, qualifiers, variables, etc., based on dynamic values of the data plane, must refer to the attributes that carry those values, using the declarative language of Kuadrant's Well-known Attributes.

A dynamic data plane value is typically a value of an attribute of the request or an Envoy Dynamic Metadata entry. It can be a value of the outer request being handled by the API gateway or proxy that is managed by Kuadrant (\"context request\") or an attribute of the direct request to the Kuadrant component that delivers the functionality in the data plane (rate-limiting or external auth).

A Well-known Selector is a construct of a policy API whose value contains a direct reference to a well-known attribute. The language of the well-known attributes and therefore what one would declare within a well-known selector resembles a JSON path for navigating a possibly complex JSON object.

Example 1. Well-known selector used in a condition

apiGroup: examples.kuadrant.io\nkind: PaintPolicy\nspec:\nrules:\n- when:\n- selector: auth.identity.group\noperator: eq\nvalue: admin\ncolor: red\n

In the example, auth.identity.group is a well-known selector of an attribute group, known to be injected by the external authorization service (auth) to describe the group the user (identity) belongs to. In the data plane, whenever this value is equal to admin, the abstract PaintPolicy policy states that the traffic must be painted red.

Example 2. Well-known selector used in a variable

apiGroup: examples.kuadrant.io\nkind: PaintPolicy\nspec:\nrules:\n- color: red\nalpha:\ndynamic: request.headers.x-color-alpha\n

In the example, request.headers.x-color-alpha is a selector of a well-known attribute request.headers that gives access to the headers of the context HTTP request. The selector retrieves the value of the x-color-alpha request header to dynamically fill the alpha property of the abstract PaintPolicy policy at each request.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#reference-level-explanation","title":"Reference-level explanation","text":"

The Well-known Attributes are a compilation inspired by some of the Envoy attributes and Authorino's Authorization JSON and its related JSON paths.

From the Envoy attributes, only attributes that are available before establishing connection with the upstream server qualify as a Kuadrant well-known attribute. This excludes attributes such as the response attributes and the upstream attributes.

As for the attributes inherited from Authorino, these are either based on Envoy's AttributeContext type of the external auth request API or on internal types defined by Authorino to fulfill the Auth Pipeline.

These two subsets of attributes are unified into a single set of well-known attributes. For each attribute that exists in both subsets, the name of the attribute as specified in the Envoy attributes subset prevails. An example of such is request.id (to refer to the ID of the request) superseding context.request.http.id (how the same attribute is referred to in an Authorino AuthConfig).
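
For instance, a minimal sketch of a \"soft\" when condition written with the unified attribute name (the equivalent Authorino-style path is noted in the comment):

when:\n- selector: request.id # unified name; an Authorino AuthConfig refers to the same attribute as context.request.http.id\noperator: neq\nvalue: \"\"\n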

The next sections specify the well-known attributes organized in the following groups: - Request attributes - Connection attributes - Metadata and filter state attributes - Auth attributes - Rate-limit attributes

"},{"location":"architecture/rfcs/0002-well-known-attributes/#request-attributes","title":"Request attributes","text":"

The following attributes are related to the context HTTP request that is handled by the API gateway or proxy managed by Kuadrant.

Attribute | Type | Description | Auth | RL
request.id | String | Request ID corresponding to x-request-id header value | \u2713 | \u2713
request.time | Timestamp | Time of the first byte received | \u2713 | \u2713
request.protocol | String | Request protocol (\u201cHTTP/1.0\u201d, \u201cHTTP/1.1\u201d, \u201cHTTP/2\u201d, or \u201cHTTP/3\u201d) | \u2713 | \u2713
request.scheme | String | The scheme portion of the URL, e.g. \u201chttp\u201d | \u2713 | \u2713
request.host | String | The host portion of the URL | \u2713 | \u2713
request.method | String | Request method, e.g. \u201cGET\u201d | \u2713 | \u2713
request.path | String | The path portion of the URL | \u2713 | \u2713
request.url_path | String | The path portion of the URL without the query string | | \u2713
request.query | String | The query portion of the URL in the format of \u201cname1=value1&name2=value2\u201d | \u2713 | \u2713
request.headers | Map<String, String> | All request headers indexed by the lower-cased header name | \u2713 | \u2713
request.referer | String | Referer request header | | \u2713
request.useragent | String | User agent request header | | \u2713
request.size | Number | The HTTP request size in bytes; if unknown, it must be -1 | | \u2713
request.body | String | The HTTP request body (disabled by default; requires additional proxy configuration to enable it) | \u2713 |
request.raw_body | Array<Number> | The HTTP request body in bytes; sometimes used instead of body, depending on the proxy configuration | \u2713 |
request.context_extensions | Map<String, String> | Analogous to request.headers, but these contents are not sent to the upstream server; provides an extension mechanism for sending additional information to the auth service without modifying the proto definition; maps to the internal opaque context in the proxy filter chain (requires additional configuration in the proxy) | \u2713 |

"},{"location":"architecture/rfcs/0002-well-known-attributes/#connection-attributes","title":"Connection attributes","text":"

The following attributes are available once the downstream connection with the API gateway or proxy managed by Kuadrant is established. They apply to HTTP requests (L7) as well, but also to proxied connections limited at L3/L4.

Attribute | Type | Description | Auth | RL
source.address | String | Downstream connection remote address | \u2713 | \u2713
source.port | Number | Downstream connection remote port | \u2713 | \u2713
source.service | String | The canonical service name of the peer | \u2713 |
source.labels | Map<String, String> | The labels associated with the peer; these could be pod labels for Kubernetes or tags for VMs; the source of the labels could be an X.509 certificate or other configuration | \u2713 |
source.principal | String | The authenticated identity of this peer; if an X.509 certificate is used to assert the identity in the proxy, this field is sourced from \u201cURI Subject Alternative Names\u201d, \u201cDNS Subject Alternate Names\u201d or \u201cSubject\u201d, in that order; the format is issuer specific \u2013 e.g. SPIFFE format is spiffe://trust-domain/path, Google account format is https://accounts.google.com/{userid} | \u2713 |
source.certificate | String | The X.509 certificate used to authenticate the identity of this peer; when present, the certificate contents are encoded in URL and PEM format | \u2713 |
destination.address | String | Downstream connection local address | \u2713 | \u2713
destination.port | Number | Downstream connection local port | \u2713 | \u2713
destination.service | String | The canonical service name of the peer | \u2713 |
destination.labels | Map<String, String> | The labels associated with the peer; these could be pod labels for Kubernetes or tags for VMs; the source of the labels could be an X.509 certificate or other configuration | \u2713 |
destination.principal | String | The authenticated identity of this peer; if an X.509 certificate is used to assert the identity in the proxy, this field is sourced from \u201cURI Subject Alternative Names\u201d, \u201cDNS Subject Alternate Names\u201d or \u201cSubject\u201d, in that order; the format is issuer specific \u2013 e.g. SPIFFE format is spiffe://trust-domain/path, Google account format is https://accounts.google.com/{userid} | \u2713 |
destination.certificate | String | The X.509 certificate used to authenticate the identity of this peer; when present, the certificate contents are encoded in URL and PEM format | \u2713 |
connection.id | Number | Downstream connection ID | | \u2713
connection.mtls | Boolean | Indicates whether TLS is applied to the downstream connection and the peer certificate is presented | | \u2713
connection.requested_server_name | String | Requested server name in the downstream TLS connection | | \u2713
connection.tls_session.sni | String | SNI used for the TLS session | \u2713 |
connection.tls_version | String | TLS version of the downstream TLS connection | | \u2713
connection.subject_local_certificate | String | The subject field of the local certificate in the downstream TLS connection | | \u2713
connection.subject_peer_certificate | String | The subject field of the peer certificate in the downstream TLS connection | | \u2713
connection.dns_san_local_certificate | String | The first DNS entry in the SAN field of the local certificate in the downstream TLS connection | | \u2713
connection.dns_san_peer_certificate | String | The first DNS entry in the SAN field of the peer certificate in the downstream TLS connection | | \u2713
connection.uri_san_local_certificate | String | The first URI entry in the SAN field of the local certificate in the downstream TLS connection | | \u2713
connection.uri_san_peer_certificate | String | The first URI entry in the SAN field of the peer certificate in the downstream TLS connection | | \u2713
connection.sha256_peer_certificate_digest | String | SHA256 digest of the peer certificate in the downstream TLS connection, if present | | \u2713

"},{"location":"architecture/rfcs/0002-well-known-attributes/#metadata-and-filter-state-attributes","title":"Metadata and filter state attributes","text":"

The following attributes are related to the Envoy proxy filter chain. They include metadata exported by the proxy throughout the filters and information about the states of the filters themselves.

Attribute | Type | Description | Auth | RL
metadata | Metadata | Dynamic request metadata | \u2713 | \u2713
filter_state | Map<String, String> | Mapping from a filter state name to its serialized string value | | \u2713

"},{"location":"architecture/rfcs/0002-well-known-attributes/#auth-attributes","title":"Auth attributes","text":"

The following attributes are exclusive of the external auth service (Authorino).

Attribute | Type | Description | Auth | RL
auth.identity | Any | Single resolved identity object, post-identity verification | \u2713 |
auth.metadata | Map<String, Any> | External metadata fetched | \u2713 |
auth.authorization | Map<String, Any> | Authorization results resolved by each authorization rule, access granted only | \u2713 |
auth.response | Map<String, Any> | Response objects exported by the auth service post-access granted | \u2713 |
auth.callbacks | Map<String, Any> | Response objects returned by the callback requests issued by the auth service | \u2713 |

The auth service also supports modifying selected values by chaining modifiers in the path.
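
For illustration, a hypothetical selector assuming Authorino's string modifiers (e.g. @extract, @replace, @case, @base64) chained in the path:

when:\n- selector: auth.identity.email.@extract:{\"sep\":\"@\",\"pos\":1} # hypothetical \u2013 assuming Authorino's @extract modifier, this selects the domain part of the email\noperator: eq\nvalue: acme.com\n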

"},{"location":"architecture/rfcs/0002-well-known-attributes/#rate-limit-attributes","title":"Rate-limit attributes","text":"

The following attributes are exclusive of the rate-limiting service (Limitador).

Attribute | Type | Description | Auth | RL
ratelimit.domain | String | The rate limit domain; this enables the configuration to be namespaced per application (multi-tenancy) | | \u2713
ratelimit.hits_addend | Number | Specifies the number of hits a request adds to the matched limit; fixed value: `1`; reserved for future usage | | \u2713

"},{"location":"architecture/rfcs/0002-well-known-attributes/#drawbacks","title":"Drawbacks","text":"

The decoupling of the well-known attributes and the language of well-known attributes and selectors from the individual policy CRDs is what makes it somewhat flexible and common across the components (rate-limiting and auth). However, it's less structured and it introduces another syntax for users to get familiar with.

This additional language competes with the language of the route selectors (RFC 0001), based on Gateway API's HTTPRouteMatch type.

Being \"soft-coded\" in the policy specs (as opposed to a hard-coded sub-structure inside of each policy type) does not mean it's completely decoupled from implementation in the control plane and/or intermediary data plane components. Although many attributes can be supported almost as a pass-through, from being used in a selector in a policy, to a corresponding value requested by the wasm-shim to its host, that is not always the case. Some translation may be required for components not integrated via wasm-shim (e.g. Authorino), as well as for components integrated via wasm-shim (e.g. Limitador) in special cases of composite or abstraction well-known attributes (i.e. attributes not available as-is via ABI, e.g. auth.identity in a RLP). Either way, some validation of the values introduced by users in the selectors may be needed at some point in the control plane, thus requiring arguably a level of awaresness and coupling between the well-known selectors specification and the control plane (policy controllers) or intermediary data plane (wasm-shim) components.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

As an alternative to JSON path-like selectors based on a well-known structure that induces the proposed language of well-known attributes, these same attributes could be defined as sub-types of each policy CRD. The Golang packages defining the common attributes across CRDs could be shared by the policy type definitions to reduce repetition. However, that approach would possibly involve a staggering number of new type definitions to cover all the cases for all the groups of attributes to be supported. These are constructs that not only need to be understood by the policy controllers, but also known by the user who writes a policy.

Additionally, all attributes, including new attributes occasionally introduced by Envoy and made available to the wasm-shim via ABI, would always require translation from the user-level abstraction, as represented in a policy, to the actual form used in the wasm-shim configuration and Authorino AuthConfigs.

Not implementing this proposal and keeping the current state of things means little consistency between these common constructs for rules and conditions and how they are represented in each type of policy. This lack of consistency has a direct impact on the overhead faced by users learning how to interact with Kuadrant and write different kinds of policies, as well as on the maintainers' tasks of coding for policy validation and reconciliation of data plane configurations.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#prior-art","title":"Prior art","text":"

Authorino's dynamic JSON paths, related to Authorino's Authorization JSON and used in when conditions and inside of multiple other constructs of the AuthConfig, are an example of a feature with a very similar approach to the one proposed here.

Arguably, Authorino's perceived flexibility would not have been possible without the Authorization JSON selectors. Users can write quite sophisticated policy rules (conditions, variable references, etc.) by leveraging those dynamic selectors. Because they are backed by JSON-based machinery in the code, Authorino's selectors show very little \u2013 in some cases, none at all \u2013 variation compared to Open Policy Agent's Rego policy language, which is often used side by side in the same AuthConfigs.

Authorino's Authorization JSON selectors are, on the one hand, more restricted to the structure of the CheckRequest payload (context.* attributes). At the same time, they are very open in the part associated with the internal attributes built along the Auth Pipeline (i.e. auth.* attributes). That makes Authorino's Authorization JSON selectors more limited compared to the Envoy attributes made available to the wasm-shim via ABI, but also harder to validate. In some cases, such as deep references into objects fetched from external sources of metadata, resolved OPA objects, JWT claims, etc., it is impossible to validate for correct references.

Another lesson learned from Authorino's Authorization JSON selectors is that they depend substantially on the so-called \"modifiers\". Many use cases involving parsing and breaking down attributes that are originally available in a more complex form would not be possible without the modifiers. Examples of such cases are: extracting portions of the path and/or query string parameters (e.g. collection and resource identifiers), translating HTTP verbs into corresponding operations, base64-decoding values from the context HTTP request, amongst several others.

"},{"location":"architecture/rfcs/0002-well-known-attributes/#unresolved-questions","title":"Unresolved questions","text":"
  1. How to deal with the differences regarding the availability and data types of the attributes across clients/hosts?

  2. Can we make more attributes that are currently available to only one of the components common to both?

  3. Will we need some kind of global support for modifiers (functions) in the well-known selectors or those can continue to be an Authorino-only feature?

  4. Does Authorino, which is more strict regarding the data structure that induces the selectors, need to implement this specification or could/should it keep its current selectors and a translation be performed by the AuthPolicy controller?

"},{"location":"architecture/rfcs/0002-well-known-attributes/#future-possibilities","title":"Future possibilities","text":"
  1. Extend with more well-known attributes that abstract common patterns and/or serve rather opinionated use cases. Examples:
  2. auth.* attributes supported in the rate limit service
  3. request.authenticated
  4. request.operation.(read|write)
  5. request.param.my-param
  6. connection.secure

  7. Other Envoy attributes

Wasm attributes

Attribute | Type | Description | Auth | RL
wasm.plugin_name | String | Plugin name | | \u2713
wasm.plugin_root_id | String | Plugin root ID | | \u2713
wasm.plugin_vm_id | String | Plugin VM ID | | \u2713
wasm.node | Node | Local node description | | \u2713
wasm.cluster_name | String | Upstream cluster name | | \u2713
wasm.cluster_metadata | Metadata | Upstream cluster metadata | | \u2713
wasm.listener_direction | Number | Enumeration value of the listener traffic direction | | \u2713
wasm.listener_metadata | Metadata | Listener metadata | | \u2713
wasm.route_name | String | Route name | | \u2713
wasm.route_metadata | Metadata | Route metadata | | \u2713
wasm.upstream_host_metadata | Metadata | Upstream host metadata | | \u2713

Proxy configuration attributes

Attribute | Type | Description | Auth | RL
xds.cluster_name | String | Upstream cluster name | | \u2713
xds.cluster_metadata | Metadata | Upstream cluster metadata | | \u2713
xds.route_name | String | Route name | | \u2713
xds.route_metadata | Metadata | Route metadata | | \u2713
xds.upstream_host_metadata | Metadata | Upstream host metadata | | \u2713
xds.filter_chain_name | String | Listener filter chain name | | \u2713

  1. Add some support for value modifiers (functions), along the lines of Authorino's JSON path modifiers and/or Envoy attributes' path expressions.
"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..d35cad1b --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,503 @@ + + + + https://docs.kuadrant.io/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/development/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/logging/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/rate-limiting/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/ratelimitpolicy-reference/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/proposals/authpolicy-crd/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/proposals/rlp-target-gateway-resource/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/user-guides/authenticated-rl-for-app-developers/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/user-guides/authenticated-rl-with-jwt-and-k8s-authnz/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/user-guides/gateway-rl-for-cluster-operators/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/kuadrant-operator/doc/user-guides/simple-rl-for-app-developers/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/architecture/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/code_of_conduct/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/contributing/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/features/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/getting-started/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/terminology/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/anonymous-access/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/api-key-authentication/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/authenticated-rate-limiting-envoy-dynamic-metadata/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/authzed/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/caching/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/deny-with-redirect-to-login/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/edge-authentication-architecture-festival-wristbands/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/envoy-jwt-authn-and-authorino/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/external-metadata/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/hello-world/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/host-override/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/http-basic-authentication/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/injecting-data/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/json-pattern-matching-authorization/ + 2023-09-11 + daily + + + 
https://docs.kuadrant.io/authorino/docs/user-guides/keycloak-authorization-services/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/kubernetes-subjectaccessreview/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/kubernetes-tokenreview/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/mtls-authentication/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/oauth2-token-introspection/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/observability/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/oidc-jwt-authentication/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/oidc-rbac/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/oidc-user-info/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/opa-authorization/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/passing-credentials/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/resource-level-authorization-uma/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/sharding/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/token-normalization/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino/docs/user-guides/validating-webhook/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/authorino-operator/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/doc/how-it-works/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/doc/topologies/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/doc/migrations/conditions/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/doc/server/configuration/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/limitador/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/limitador-server/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador/limitador-server/docs/sandbox/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/doc/development/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/doc/logging/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/doc/rate-limit-headers/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/doc/resource-requirements/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/limitador-operator/doc/storage/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/getting-started/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/contribution/contributing/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/contribution/vscode-debugging/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/demos/dns-policy/dnspolicy-demo/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/dns-health-checks/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/dns-policy/ + 2023-09-11 + daily + + + 
https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/dns-provider/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/dnspolicy-quickstart/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/dnspolicy/managed-zone/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/experimental/skupper-poc-2-gateways-resiliency-walkthrough/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/experimental/submariner-poc-2-gateways-resiliency-walkthrough/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/experimental/submariner-poc-hub-gateway-walkthrough/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/gateways/define-and-place-a-gateway/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/gateways/gateway-deletion/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/how-to/metrics-federation/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/how-to/metrics-walkthrough/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/how-to/multicluster-gateways-walkthrough/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/how-to/template/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/installation/control-plane-installation/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/installation/service-protection-installation/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/DNSPolicy/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/DNSRecordStructure/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/multiple-dns-provider-support/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/provider-agnostic-dns-health-checks/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/status-aggregation/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/template/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/aws/aws/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/azure/azure/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/proposals/assets/multiple-dns-provider-support/google/google/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/tlspolicy/tls-policy/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/multicluster-gateway-controller/docs/versioning/olm/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/architecture/rfcs/0001-rlp-v2/ + 2023-09-11 + daily + + + https://docs.kuadrant.io/architecture/rfcs/0002-well-known-attributes/ + 2023-09-11 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..8740b8da 
Binary files /dev/null and b/sitemap.xml.gz differ