diff --git a/.circleci/config.yml b/.circleci/config.yml index 3d50d6ca98..cd4cf85b6c 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -21,8 +21,8 @@ executors: amd_linux_test: &amd_linux_test_executor docker: - image: cimg/base:stable - - image: cimg/redis:7.2.0 - - image: jaegertracing/all-in-one:1.48.0 + - image: cimg/redis:7.2.3 + - image: jaegertracing/all-in-one:1.49.0 resource_class: xlarge environment: CARGO_BUILD_JOBS: 4 diff --git a/CHANGELOG.md b/CHANGELOG.md index cd7ee83c93..fde22ecc4a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,13 +4,122 @@ All notable changes to Router will be documented in this file. This project adheres to [Semantic Versioning v2.0.0](https://semver.org/spec/v2.0.0.html). +# [1.34.1] - 2023-11-21 + +## 🐛 Fixes + +### Authorization: Filtered fragments remove corresponding fragment spreads ([Issue #4060](https://github.com/apollographql/router/issues/4060)) + +When fragments have been removed because they do not meet the authorization requirements to be queried, or because their conditions cannot be fulfilled, any related fragment spreads that remain will now be removed from the operation before execution. Additionally, fragment error paths are now applied at the point where the fragment is used. + +By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/pull/4155 + +### Authorization: Maintain a special case for `__typename` ([PR #3821](https://github.com/apollographql/router/pull/3821)) + +When evaluating authorization directives on fields returning interfaces, the special GraphQL `__typename` field will be maintained as an exception since it must work for _all_ implementors. + +By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/pull/3821 + +### Enforce JWT expiration for subscriptions ([Issue #3947](https://github.com/apollographql/router/issues/3947)) + +If a JWT expires whilst a subscription is executing, the subscription is now terminated.
This also applies to deferred responses. + +By [@garypen](https://github.com/garypen) in https://github.com/apollographql/router/pull/4166 + +### Improved channel bounding via conversion of `futures` channels into `tokio` channels ([Issue #4103](https://github.com/apollographql/router/issues/4103), [Issue #4109](https://github.com/apollographql/router/issues/4109), [Issue #4110](https://github.com/apollographql/router/issues/4110), [Issue #4117](https://github.com/apollographql/router/issues/4117)) + +The use of `futures` channels has been converted to `tokio` channels, which should ensure that channel bounds are observed correctly. We hope this brings some additional stability and predictability to the memory footprint. + +By [@garypen](https://github.com/garypen) in https://github.com/apollographql/router/pull/4111, https://github.com/apollographql/router/pull/4118, https://github.com/apollographql/router/pull/4138 + +### Reduce recursion in GraphQL parsing via `apollo-parser` improvements ([Issue #4142](https://github.com/apollographql/router/issues/4142)) + +This release brings in improvements to `apollo-parser` that remove unnecessary recursion when parsing repeated syntax elements, such as enum values and union members, in type definitions. Some documents that used to hit the parser’s recursion limit will now successfully parse. + +By [@lrlna](https://github.com/lrlna) in https://github.com/apollographql/router/pull/4167 + +### Maintain query ordering within a batch ([Issue #4143](https://github.com/apollographql/router/issues/4143)) + +A bug in batch manipulation meant that the last element in a batch was treated as the first element. Ordering is now maintained, and an updated unit test guards against regressions.
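As a hypothetical illustration of the invariant this fix restores (not the router's internal code), responses must be reassembled in the order the operations were submitted, even when they complete out of order:

```python
# Hypothetical sketch of the batch-ordering invariant: operations in a
# batch may complete in any order, but the client must receive responses
# in the order the operations were submitted.
def reassemble(completed):
    # `completed` holds (original_index, response) pairs, possibly out of order.
    return [response for _, response in sorted(completed)]

# Operations finish out of order; the reassembled batch is still in order.
completed = [(2, "op2-response"), (0, "op0-response"), (1, "op1-response")]
assert reassemble(completed) == ["op0-response", "op1-response", "op2-response"]
```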
+ +By [@garypen](https://github.com/garypen) in https://github.com/apollographql/router/pull/4144 + +### Port `apollo-compiler` usage to `1.0-beta` ([PR #4038](https://github.com/apollographql/router/pull/4038)) + +Version 1.0 of `apollo-compiler` is a near-complete rewrite, and introducing it in the Router unblocks a lot of upcoming work, including our _Rust-ification_ of the query planner. + +As an immediate benefit, some serialization-related bugs — including [Issue #3541](https://github.com/apollographql/router/issues/3541) — are fixed. Additionally, the representation of GraphQL documents within `apollo-compiler` is now mutable. This means that when modifying a query (such as to remove `@authenticated` fields from an unauthenticated request) the Router no longer needs to construct a new data structure (with `apollo-encoder`), serialize it, and reparse it. + +By [@SimonSapin](https://github.com/SimonSapin) in https://github.com/apollographql/router/pull/4038 + +### Propagate multi-value headers to subgraphs ([Issue #4153](https://github.com/apollographql/router/issues/4153)) + +The router now uses `HeaderMap::append` instead of `insert` to avoid erasing previous values when multiple headers share the same name. + +By [@nmoutschen](https://github.com/nmoutschen) in https://github.com/apollographql/router/pull/4154 + +## 📃 Configuration + +### Authentication: Allow customizing a `poll_interval` for the JWKS endpoint configuration ([Issue #4185](https://github.com/apollographql/router/issues/4185)) + +To compensate for variances in rate-limiting requirements for JWKS endpoints, a new `poll_interval` configuration option adjusts the polling interval for each JWKS URL. When not specified for a URL, the polling interval defaults to 60 seconds. + +The configuration option accepts a human-readable duration (e.g., `60s` or `1minute 30s`).
For example, the following configuration snippet sets the polling interval for a single JWKS URL to every 30 seconds: + +```yml +authentication: + router: + jwt: + jwks: + - url: https://dev-zzp5enui.us.auth0.com/.well-known/jwks.json + poll_interval: 30s +``` + +By [@lleadbet](https://github.com/lleadbet) in https://github.com/apollographql/router/pull/4212 + +### Allow customization of the health check endpoint path ([Issue #2938](https://github.com/apollographql/router/issues/2938)) + +Adds a configuration option for custom health check endpoints, `health_check.path`, with `/health` as the default value. + +By [@aaronArinder](https://github.com/aaronArinder) in https://github.com/apollographql/router/pull/4145 + +## 📚 Documentation + +### Coprocessors: Clarify capabilities of `RouterRequest` and `RouterResponse`'s `control` responses ([PR #4189](https://github.com/apollographql/router/pull/4189)) + +The coprocessor `RouterRequest` and `RouterResponse` stages already fully support `control: { break: 500 }`, but the response body *must* be a string. The documentation has been improved to provide examples in the [Terminating a client request](https://www.apollographql.com/docs/router/customizations/coprocessor#terminating-a-client-request) section. + +By [@lennyburdette](https://github.com/lennyburdette) in https://github.com/apollographql/router/pull/4189 + +## 🧪 Experimental + +### Support time-to-live (TTL) expiration for distributed cache entries ([Issue #4163](https://github.com/apollographql/router/issues/4163)) + +It is now possible to use configuration to set an expiration (time-to-live or TTL) for distributed caching (i.e., Redis) entries, both for APQ and query planning caches (using either `apq` or `query_planning`, respectively). By default, entries have no expiration. + +For example, to define the TTL for cached query plans stored in Redis to be 24 hours, the following configuration snippet specifies `ttl: 24h`:
+ +```yaml title="router.yaml" +supergraph: + query_planning: + experimental_cache: + redis: + urls: ["redis://..."] + timeout: 5ms # Optional, by default: 2ms + ttl: 24h # Optional, by default no expiration +``` + +Similarly, it is possible to set a TTL for cached APQ entries. For details, see the [Distributed APQ caching](https://www.apollographql.com/docs/router/configuration/distributed-caching#distributed-apq-caching) documentation. + +By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/pull/4164 + # [1.34.0] - 2023-11-15 ## 🚀 Features ### Authorization: dry run option ([Issue #3843](https://github.com/apollographql/router/issues/3843)) -The `authorization.dry_run` option allows you to execute authorization directives without modifying a query while still returning the list of affected paths as top-level errors in a response. Use it to test authorization without breaking existing traffic. +The `authorization.dry_run` option allows you to execute authorization directives without modifying a query while still returning the list of affected paths as top-level errors in a response. Use it to test authorization without breaking existing traffic. For details, see the documentation for [`authorization.dry_run`](https://www.apollographql.com/docs/router/configuration/authorization#dry_run). @@ -18,7 +127,7 @@ By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/p ### Rhai: support alternative base64 alphabets ([Issue #3783](https://github.com/apollographql/router/issues/3783)) -When encoding or decoding strings, your Rhai customization scripts can now use alternative base64 alphabets in addition to the default `STANDARD`. +When encoding or decoding strings, your Rhai customization scripts can now use alternative base64 alphabets in addition to the default `STANDARD`.
The available base64 alphabets: @@ -46,7 +155,7 @@ By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/p ### GraphOS authorization directives: `@policy` directive ([PR #3751](https://github.com/apollographql/router/pull/3751)) > ⚠️ This is an Enterprise feature of the Apollo Router. It requires an organization with a GraphOS Enterprise plan. -> +> > If your organization doesn't currently have an Enterprise plan, you can test out this functionality by signing up for a free Enterprise trial. > The `@policy` directive requires using a federation version not yet available at the time of router release `1.34.0`. @@ -131,7 +240,7 @@ By [@bnjjj](https://github.com/bnjjj) in https://github.com/apollographql/router The Apollo metrics exporter has been improved to not overconsume memory under high load. -Previously, the router appeared to leak memory when under load. The root cause was a bounded `futures` channel that did not enforce expected bounds on channel capacity and could overconsume memory. +Previously, the router appeared to leak memory when under load. The root cause was a bounded `futures` channel that did not enforce expected bounds on channel capacity and could overconsume memory. We have fixed the issue by: @@ -153,12 +262,12 @@ By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/p Configuration between tracing and metrics was inconsistent and did not align with the terminology defined in the OpenTelemetry (OTel) specification. 
To correct this, the following changes have been made to the router's YAML configuration, `router.yaml`: `telemetry.tracing.trace_config` has been renamed to `common` - + ```diff telemetry tracing: - trace_config: -+ common: ++ common: ``` `telemetry.tracing.common.attributes` has been renamed to `resource` @@ -167,7 +276,7 @@ telemetry tracing: common: - attributes: -+ resource: ++ resource: ``` `telemetry.metrics.common.resources` has been renamed to `resource` @@ -176,7 +285,7 @@ telemetry metrics: common: - resources: -+ resource: ++ resource: ``` `telemetry.tracing.propagation.awsxray` has been renamed to `aws_xray` ```diff @@ -187,7 +296,7 @@ telemetry + aws_xray: true ``` -Although the router will upgrade any existing configuration on startup, you should update your configuration to use the new format as soon as possible. +Although the router will upgrade any existing configuration on startup, you should update your configuration to use the new format as soon as possible. By [@BrynCooke](https://github.com/BrynCooke) in https://github.com/apollographql/router/pull/4044, https://github.com/apollographql/router/pull/4050 and https://github.com/apollographql/router/pull/4051 @@ -209,7 +318,7 @@ By [@garypen](https://github.com/garypen) in https://github.com/apollographql/ro ### Clarify and fix docs about supported WebSocket subprotocols ([PR #4063](https://github.com/apollographql/router/pull/4063)) -The documentation about setting up and configuring WebSocket protocols for router-to-subgraph communication has been improved, including clarifying how to set the subgraph path that exposes WebSocket capabilities. +The documentation about setting up and configuring WebSocket protocols for router-to-subgraph communication has been improved, including clarifying how to set the subgraph path that exposes WebSocket capabilities. 
For details, see the [updated documentation](https://www.apollographql.com/docs/router/executing-operations/subscription-support/#websocket-setup) @@ -323,7 +432,7 @@ Telemetry configuration now supports `enabled` on all exporters. This allows exp ```diff telemetry: - tracing: + tracing: datadog: + enabled: true jaeger: @@ -334,7 +443,7 @@ telemetry: + enabled: true ``` -Existing configurations will be migrated to the new format automatically on startup. However, you should update your configuration to use the new format as soon as possible. +Existing configurations will be migrated to the new format automatically on startup. However, you should update your configuration to use the new format as soon as possible. By [@BrynCooke](https://github.com/BrynCooke) in https://github.com/apollographql/router/pull/3952 @@ -368,7 +477,7 @@ By [@garypen](https://github.com/garypen) in https://github.com/apollographql/ro ### Move persisted queries to general availability ([PR #3914](https://github.com/apollographql/router/pull/3914)) -[Persisted Queries](https://www.apollographql.com/docs/graphos/operations/persisted-queries/) (a GraphOS Enterprise feature) is now moving to General Availability, from Preview where it has been since Apollo Router 1.25. In addition to Safelisting, persisted queries can now also be used to [pre-warm the query plan cache](https://github.com/apollographql/router/releases/tag/v1.31.0) to speed up schema updates. +[Persisted Queries](https://www.apollographql.com/docs/graphos/operations/persisted-queries/) (a GraphOS Enterprise feature) is now moving to General Availability, from Preview where it has been since Apollo Router 1.25. In addition to Safelisting, persisted queries can now also be used to [pre-warm the query plan cache](https://github.com/apollographql/router/releases/tag/v1.31.0) to speed up schema updates. 
The feature is now configured with a `persisted_queries` top-level key in the YAML configuration instead of with `preview_persisted_queries`. Existing configuration files will keep working as before, but with a warning that can be resolved by renaming the configuration section from `preview_persisted_queries` to `persisted_queries`: @@ -454,7 +563,7 @@ An experimental implementation of query batching has been added to support clien If you’re using Apollo Client, you can leverage its built-in support for batching to reduce the number of individual requests sent to the Apollo Router. -Once [configured](https://www.apollographql.com/docs/react/api/link/apollo-link-batch-http/), Apollo Client automatically combines multiple operations into a single HTTP request. The number of operations within a batch is client configurable, including the maximum number of operations in a batch and the maximum duration to wait for operations to accumulate before sending the batch request. +Once [configured](https://www.apollographql.com/docs/react/api/link/apollo-link-batch-http/), Apollo Client automatically combines multiple operations into a single HTTP request. The number of operations within a batch is client configurable, including the maximum number of operations in a batch and the maximum duration to wait for operations to accumulate before sending the batch request. The Apollo Router must be configured to receive batch requests, otherwise it rejects them. When processing a batch request, the router deserializes and processes each operation of a batch independently, and it responds to the client only after all operations of the batch have been completed. 
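As a sketch of the wire format involved (assuming Apollo Client's `BatchHttpLink` convention of sending a JSON array of ordinary GraphQL request objects; the queries and response values below are placeholders), a batch request body looks like:

```python
import json

# A batch is a JSON array of standard GraphQL request objects; this is the
# wire format Apollo Client's BatchHttpLink sends (queries are placeholders).
batch = [
    {"query": "query OpA { me { id } }"},
    {"query": "query OpB { topProducts { name } }"},
]
body = json.dumps(batch)

# The router replies with a JSON array containing one GraphQL response per
# operation, in the same order as the request batch (illustrative values).
responses = json.loads('[{"data": {"me": null}}, {"data": {"topProducts": []}}]')
assert len(responses) == len(batch)
```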
@@ -487,15 +596,15 @@ tls: subgraph: all: client_authentication: - certificate_chain: - key: + certificate_chain: + key: # if configuring for a specific subgraph: subgraphs: # subgraph name products: client_authentication: - certificate_chain: - key: + certificate_chain: + key: ``` Details on TLS client authentication can be found in the [documentation](https://www.apollographql.com/docs/router/configuration/overview#tls-client-authentication-for-subgraph-requests) @@ -1163,7 +1272,7 @@ By [@garypen](https://github.com/garypen) in https://github.com/apollographql/ro ### Spelling of `content_negociation` corrected to `content_negotiation` ([Issue #3204](https://github.com/apollographql/router/issues/3204)) -We had a bit of a French twist on one of our internal module names. We won't promise it won't happen again, but `content_negociation` is spelled as `content_negotiation` now. 😄 +We had a bit of a French twist on one of our internal module names. We won't promise it won't happen again, but `content_negociation` is spelled as `content_negotiation` now. 😄 Thank you for this contribution! 
diff --git a/Cargo.lock b/Cargo.lock index 694079d2f5..c61057667d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -206,7 +206,7 @@ checksum = "bb2436eb22464134efc641e508b33229361b27a3d5b6f03242b66b170ab8786c" dependencies = [ "apollo-parser 0.6.3", "ariadne", - "indexmap 2.0.2", + "indexmap 2.1.0", "ordered-float 4.1.0", "rowan", "salsa", @@ -217,13 +217,13 @@ dependencies = [ [[package]] name = "apollo-compiler" -version = "1.0.0-beta.4" +version = "1.0.0-beta.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a5ff474a85ea7b22c944aba74bab8863366adde503d00864d01da2284c670aa8" +checksum = "1ae76a91725ab7ecd35d552db3fb3d3b3f534a4520330920a58a39892174b7ae" dependencies = [ - "apollo-parser 0.7.1", + "apollo-parser 0.7.3", "ariadne", - "indexmap 2.0.2", + "indexmap 2.1.0", "rowan", "salsa", "serde", @@ -243,6 +243,22 @@ dependencies = [ "thiserror", ] +[[package]] +name = "apollo-federation" +version = "0.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6deb778a0fbd1448bffb6ebf8d33d8719ffda3f785482ddd7c26d0c72fc243d2" +dependencies = [ + "apollo-compiler 1.0.0-beta.5", + "indexmap 2.1.0", + "lazy_static", + "salsa", + "strum", + "strum_macros", + "thiserror", + "url", +] + [[package]] name = "apollo-parser" version = "0.6.3" @@ -256,9 +272,9 @@ dependencies = [ [[package]] name = "apollo-parser" -version = "0.7.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c382f531987366a6954ac687305652b1428b049ca1294b4502b0a9dbbf0d37b8" +checksum = "7bb81e4793effa1744cc96f49542aec487696e595e0faeabbd9f8a83e5c83036" dependencies = [ "memchr", "rowan", @@ -267,11 +283,12 @@ dependencies = [ [[package]] name = "apollo-router" -version = "1.34.0" +version = "1.34.1" dependencies = [ "access-json", "anyhow", - "apollo-compiler 1.0.0-beta.4", + "apollo-compiler 1.0.0-beta.5", + "apollo-federation", "arc-swap", "askama", "async-compression", @@ -281,13 +298,13 @@ dependencies = [ 
"aws-sigv4", "aws-types", "axum", - "base64 0.21.4", + "base64 0.21.5", "bloomfilter", "brotli", "buildstructor", "bytes", "ci_info", - "clap 4.4.6", + "clap 4.4.8", "console-subscriber", "dashmap", "derivative", @@ -312,7 +329,7 @@ dependencies = [ "humantime-serde", "hyper", "hyper-rustls", - "indexmap 2.0.2", + "indexmap 2.1.0", "insta", "itertools 0.11.0", "jsonpath-rust", @@ -329,7 +346,7 @@ dependencies = [ "mime", "mockall", "multer", - "multimap 0.9.0", + "multimap 0.9.1", "notify", "nu-ansi-term 0.49.0", "num-traits", @@ -355,7 +372,6 @@ dependencies = [ "proteus", "rand 0.8.5", "rand_core 0.6.4", - "redis", "regex", "reqwest", "rhai", @@ -371,10 +387,9 @@ dependencies = [ "serde_json_bytes", "serde_urlencoded", "serde_yaml", - "sha1 0.10.6", + "sha1", "sha2", "shellexpand", - "similar-asserts", "static_assertions", "strum_macros", "sys-info", @@ -416,7 +431,7 @@ dependencies = [ [[package]] name = "apollo-router-benchmarks" -version = "1.34.0" +version = "1.34.1" dependencies = [ "apollo-parser 0.6.3", "apollo-router", @@ -432,11 +447,11 @@ dependencies = [ [[package]] name = "apollo-router-scaffold" -version = "1.34.0" +version = "1.34.1" dependencies = [ "anyhow", "cargo-scaffold", - "clap 4.4.6", + "clap 4.4.8", "copy_dir", "regex", "str_inflector", @@ -459,9 +474,9 @@ dependencies = [ [[package]] name = "arbitrary" -version = "1.3.1" +version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a2e1373abdaa212b704512ec2bd8b26bd0b7d5c3f70117411a5d9a451383c859" +checksum = "7d5a26814d8dcb93b0e5a0ff3c6d80a8843bafb21b39e8e18a6f05471870e110" dependencies = [ "derive_arbitrary", ] @@ -569,9 +584,9 @@ dependencies = [ [[package]] name = "async-compression" -version = "0.4.4" +version = "0.4.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f658e2baef915ba0f26f1f7c42bfb8e12f532a01f449a090ded75ae7a07e9ba2" +checksum = "bc2d0cfb2a7388d34f590e76686704c494ed7aaceed62ee1ba35cbf363abc2a5" 
dependencies = [ "brotli", "flate2", @@ -652,7 +667,7 @@ dependencies = [ "cfg-if", "event-listener 3.0.0", "futures-lite", - "rustix 0.38.8", + "rustix 0.38.21", "windows-sys 0.48.0", ] @@ -797,7 +812,7 @@ dependencies = [ "hex", "http", "hyper", - "ring", + "ring 0.16.20", "time", "tokio", "tower", @@ -1101,7 +1116,7 @@ checksum = "3b829e4e32b91e643de6eafe82b1d90675f5874230191a4ffbc1b336dec4d6bf" dependencies = [ "async-trait", "axum-core", - "base64 0.21.4", + "base64 0.21.5", "bitflags 1.3.2", "bytes", "futures-util", @@ -1120,7 +1135,7 @@ dependencies = [ "serde_json", "serde_path_to_error", "serde_urlencoded", - "sha1 0.10.6", + "sha1", "sync_wrapper", "tokio", "tokio-tungstenite", @@ -1175,9 +1190,9 @@ checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8" [[package]] name = "base64" -version = "0.21.4" +version = "0.21.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9ba43ea6f343b788c8764558649e08df62f86c6ef251fdaeb1ffd010a9ae50a2" +checksum = "35636a1494ede3b646cc98f74f8e62c773a38a659ebc777a2cf26b9b74171df9" [[package]] name = "base64-simd" @@ -1297,17 +1312,6 @@ dependencies = [ "alloc-stdlib", ] -[[package]] -name = "bstr" -version = "0.2.17" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba3569f383e8f1598449f1a423e72e99569137b47740b1da11ef19af3d5c3223" -dependencies = [ - "lazy_static", - "memchr", - "regex-automata 0.1.10", -] - [[package]] name = "bstr" version = "1.6.0" @@ -1381,9 +1385,9 @@ dependencies = [ [[package]] name = "cargo-scaffold" -version = "0.8.12" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf9db1ea8a6020a23eba0a0bbb64fa1e85bdf6b792d88de589078445362b0314" +checksum = "a445d09579569a365ec97d081faf5f464f1c8909d3be8a1f08e23dba014c2d10" dependencies = [ "anyhow", "auth-git2", @@ -1499,9 +1503,9 @@ dependencies = [ [[package]] name = "clap" -version = "4.4.6" +version = "4.4.8" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "d04704f56c2cde07f43e8e2c154b43f216dc5c92fc98ada720177362f953b956" +checksum = "2275f18819641850fa26c89acc84d465c1bf91ce57bc2748b28c420473352f64" dependencies = [ "clap_builder", "clap_derive", @@ -1509,9 +1513,9 @@ dependencies = [ [[package]] name = "clap_builder" -version = "4.4.6" +version = "4.4.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0e231faeaca65ebd1ea3c737966bf858971cd38c3849107aa3ea7de90a804e45" +checksum = "07cdf1b148b25c1e1f7a42225e30a0d99a615cd4637eae7365548dd4529b95bc" dependencies = [ "anstream", "anstyle", @@ -1521,9 +1525,9 @@ dependencies = [ [[package]] name = "clap_derive" -version = "4.4.2" +version = "4.4.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0862016ff20d69b84ef8247369fabf5c008a7417002411897d40ee1f4532b873" +checksum = "cf9804afaaf59a91e75b022a30fb7229a7901f60c755489cc61c9b423b836442" dependencies = [ "heck 0.4.1", "proc-macro2 1.0.66", @@ -1533,9 +1537,9 @@ dependencies = [ [[package]] name = "clap_lex" -version = "0.5.0" +version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2da6da31387c7e4ef160ffab6d5e7f00c42626fe39aea70a7b0f1773f7dd6c1b" +checksum = "702fc72eb24e5a1e48ce58027a675bc24edd52096d5397d4aea7c6dd9eca0bd1" [[package]] name = "cmake" @@ -1565,20 +1569,6 @@ dependencies = [ "unreachable", ] -[[package]] -name = "combine" -version = "4.6.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "35ed6e9d84f0b51a7f52daf1c7d71dd136fd7a3f41a8462b8cdb8c78d920fad4" -dependencies = [ - "bytes", - "futures-core", - "memchr", - "pin-project-lite", - "tokio", - "tokio-util", -] - [[package]] name = "concolor" version = "0.1.1" @@ -1819,7 +1809,7 @@ dependencies = [ "anes", "cast", "ciborium", - "clap 4.4.6", + "clap 4.4.8", "criterion-plot", "futures", "is-terminal", @@ -2118,12 +2108,12 @@ dependencies = [ "p256 0.11.1", "p384", 
"rand 0.8.5", - "ring", + "ring 0.16.20", "rsa", "sec1", "serde", "serde_bytes", - "sha1 0.10.6", + "sha1", "sha2", "signature 1.6.4", "spki", @@ -2220,9 +2210,9 @@ dependencies = [ [[package]] name = "derive_arbitrary" -version = "1.3.1" +version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "53e0efad4403bfc52dc201159c4b842a246a14b98c64b55dfd0f2d89729dfeb8" +checksum = "67e77553c4162a157adbf834ebae5b415acbecbeafc7a74b0e886657506a7611" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -2463,9 +2453,9 @@ dependencies = [ [[package]] name = "env_logger" -version = "0.10.0" +version = "0.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "85cdab6a89accf66733ad5a1693a4dcced6aeff64602b634530dd73c1f3ee9f0" +checksum = "95b3f3e67048839cb0d0781f445682a35113da7121f7c949db0e2be96a4fbece" dependencies = [ "humantime", "is-terminal", @@ -2547,7 +2537,7 @@ dependencies = [ "futures", "http", "hyper", - "multimap 0.9.0", + "multimap 0.9.1", "schemars", "serde", "serde_json", @@ -2771,9 +2761,9 @@ dependencies = [ [[package]] name = "futures" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "23342abe12aba583913b2e62f22225ff9c950774065e4bfb61a19cd9770fec40" +checksum = "da0290714b38af9b4a7b094b8a37086d1b4e61f2df9122c3cad2577669145335" dependencies = [ "futures-channel", "futures-core", @@ -2786,9 +2776,9 @@ dependencies = [ [[package]] name = "futures-channel" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "955518d47e09b25bbebc7a18df10b81f0c766eaf4c4f1cccef2fca5f2a4fb5f2" +checksum = "ff4dd66668b557604244583e3e1e1eada8c5c2e96a6d0d6653ede395b78bbacb" dependencies = [ "futures-core", "futures-sink", @@ -2796,15 +2786,15 @@ dependencies = [ [[package]] name = "futures-core" -version = "0.3.28" +version = "0.3.29" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "4bca583b7e26f571124fe5b7561d49cb2868d79116cfa0eefce955557c6fee8c" +checksum = "eb1d22c66e66d9d72e1758f0bd7d4fd0bee04cad842ee34587d68c07e45d088c" [[package]] name = "futures-executor" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ccecee823288125bd88b4d7f565c9e58e41858e47ab72e8ea2d64e93624386e0" +checksum = "0f4fb8693db0cf099eadcca0efe2a5a22e4550f98ed16aba6c48700da29597bc" dependencies = [ "futures-core", "futures-task", @@ -2814,9 +2804,9 @@ dependencies = [ [[package]] name = "futures-io" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4fff74096e71ed47f8e023204cfd0aa1289cd54ae5430a9523be060cdb849964" +checksum = "8bf34a163b5c4c52d0478a4d757da8fb65cabef42ba90515efee0f6f9fa45aaa" [[package]] name = "futures-lite" @@ -2835,9 +2825,9 @@ dependencies = [ [[package]] name = "futures-macro" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "89ca545a94061b6365f2c7355b4b32bd20df3ff95f02da9329b34ccc3bd6ee72" +checksum = "53b153fd91e4b0147f4aced87be237c98248656bb01050b96bf3ee89220a8ddb" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -2846,21 +2836,21 @@ dependencies = [ [[package]] name = "futures-sink" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f43be4fe21a13b9781a69afa4985b0f6ee0e1afab2c6f454a8cf30e2b2237b6e" +checksum = "e36d3378ee38c2a36ad710c5d30c2911d752cb941c00c72dbabfb786a7970817" [[package]] name = "futures-task" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "76d3d132be6c0e6aa1534069c705a74a5997a356c0dc2f86a47765e5617c5b65" +checksum = "efd193069b0ddadc69c46389b740bbccdd97203899b48d09c5f7969591d6bae2" [[package]] name = "futures-test" 
-version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84af27744870a4a325fa342ce65a940dfba08957b260b790ec278c1d81490349" +checksum = "73ad78d6c79a3c76f8bc7496240d0586e069ed6797824fdd8c41d7c42b145b8d" dependencies = [ "futures-core", "futures-executor", @@ -2881,9 +2871,9 @@ checksum = "e64b03909df88034c26dc1547e8970b91f98bdb65165d6a4e9110d94263dbb2c" [[package]] name = "futures-util" -version = "0.3.28" +version = "0.3.29" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "26b01e40b772d54cf6c6d721c1d1abd0647a0106a12ecaa1c186273392a69533" +checksum = "a19526d624e703a3179b3d322efec918b6246ea0fa51d41124525f00f1cc8104" dependencies = [ "futures-channel", "futures-core", @@ -2989,7 +2979,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "759c97c1e17c55525b57192c06a267cda0ac5210b222d6b82189a2338fa1c13d" dependencies = [ "aho-corasick", - "bstr 1.6.0", + "bstr", "fnv", "log", "regex", @@ -3022,7 +3012,7 @@ version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d2ebc8013b4426d5b81a4364c419a95ed0b404af2b82e2457de52d9348f0e474" dependencies = [ - "combine 3.8.1", + "combine", "thiserror", ] @@ -3157,7 +3147,7 @@ dependencies = [ "http", "httpdate", "mime", - "sha1 0.10.6", + "sha1", ] [[package]] @@ -3251,9 +3241,9 @@ dependencies = [ [[package]] name = "http" -version = "0.2.9" +version = "0.2.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bd6effc99afb63425aff9b05836f029929e345a6148a14b7ecd5ab67af944482" +checksum = "8947b1a6fad4393052c7ba1f4cd97bed3e953a95c79c92ad9b051a04611d9fbb" dependencies = [ "bytes", "fnv", @@ -3371,9 +3361,9 @@ dependencies = [ [[package]] name = "hyper-rustls" -version = "0.24.1" +version = "0.24.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8d78e1e73ec14cf7375674f74d7dde185c8206fd9dea6fb6295e8a98098aaa97" +checksum = 
"ec3efd23720e2049821a693cbc7e65ea87c72f1c58ff2f9522ff332b1491e590" dependencies = [ "futures-util", "http", @@ -3425,9 +3415,9 @@ dependencies = [ [[package]] name = "indexmap" -version = "2.0.2" +version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8adf3ddd720272c6ea8bf59463c04e0f93d0bbf7c5439b691bca2987e0270897" +checksum = "d530e1a18b1cb4c484e6e34556a0d948706958449fca0cab753d649f2bce3d1f" dependencies = [ "equivalent", "hashbrown 0.14.1", @@ -3540,7 +3530,7 @@ version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f" dependencies = [ - "socket2 0.5.3", + "socket2 0.5.5", "widestring", "windows-sys 0.48.0", "winreg", @@ -3559,7 +3549,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cb0889898416213fab133e1d33a0e5858a48177452750691bde3666d0fdbaf8b" dependencies = [ "hermit-abi 0.3.2", - "rustix 0.38.8", + "rustix 0.38.21", "windows-sys 0.48.0", ] @@ -3616,9 +3606,9 @@ dependencies = [ [[package]] name = "jsonpath-rust" -version = "0.3.2" +version = "0.3.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4a51ce4ed9d2d91361df55efafdd6115ecc0147ccfdb8b0e8b6d696d068a01b5" +checksum = "7b49f8b2d0028f609aa69b71aa87812eb86b2c4adfffc477b270ccaa59fa9487" dependencies = [ "pest", "pest_derive", @@ -3645,7 +3635,7 @@ checksum = "2a071f4f7efc9a9118dfb627a0a94ef247986e1ab8606a4c806ae2b3aa3b6978" dependencies = [ "ahash", "anyhow", - "base64 0.21.4", + "base64 0.21.5", "bytecount", "fancy-regex", "fraction", @@ -3671,9 +3661,9 @@ version = "8.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6971da4d9c3aa03c3d8f3ff0f4155b534aad021292003895a469716b2a230378" dependencies = [ - "base64 0.21.4", + "base64 0.21.5", "pem", - "ring", + "ring 0.16.20", "serde", "serde_json", "simple_asn1", @@ -3754,9 +3744,9 @@ dependencies = [ [[package]] name = 
"libc" -version = "0.2.149" +version = "0.2.150" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a08173bc88b7955d1b3145aa561539096c421ac8debde8cbc3612ec635fee29b" +checksum = "89d92a4743f9a61002fae18374ed11e7973f530cb3a3255fb354818118b2203c" [[package]] name = "libfuzzer-sys" @@ -3862,9 +3852,9 @@ checksum = "ef53942eb7bf7ff43a617b3e2c1c4a5ecf5944a7c1bc12d7ee39bbb15e5c1519" [[package]] name = "linux-raw-sys" -version = "0.4.5" +version = "0.4.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57bcfdad1b858c2db7c38303a6d2ad4dfaf5eb53dfeb0910128b2c26d6158503" +checksum = "969488b55f8ac402214f3f5fd243ebb7206cf82de60d3172994707a4bcc2b829" [[package]] name = "lock_api" @@ -4019,9 +4009,9 @@ dependencies = [ [[package]] name = "mio" -version = "0.8.8" +version = "0.8.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "927a765cd3fc26206e66b296465fa9d3e5ab003e651c1b3c060e7956d96b19d2" +checksum = "3dce281c5e46beae905d4de1870d8b1509a9142b62eedf18b443b011ca8343d0" dependencies = [ "libc", "log", @@ -4082,9 +4072,9 @@ checksum = "e5ce46fe64a9d73be07dcbe690a38ce1b293be448fd8ce1e6c1b8062c9f72c6a" [[package]] name = "multimap" -version = "0.9.0" +version = "0.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "70db9248a93dc36a36d9a47898caa007a32755c7ad140ec64eeeb50d5a730631" +checksum = "e1a5d38b9b352dbd913288736af36af41c48d61b1a8cd34bcecd727561b7d511" dependencies = [ "serde", ] @@ -4740,7 +4730,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e1d3afd2628e69da2be385eb6f2fd57c8ac7977ceeff6dc166ff1657b0e386a9" dependencies = [ "fixedbitset", - "indexmap 2.0.2", + "indexmap 2.1.0", "serde", "serde_derive", ] @@ -5223,26 +5213,6 @@ dependencies = [ "num_cpus", ] -[[package]] -name = "redis" -version = "0.21.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"152f3863635cbb76b73bc247845781098302c6c9ad2060e1a9a7de56840346b6" -dependencies = [ - "async-trait", - "bytes", - "combine 4.6.6", - "futures-util", - "itoa", - "percent-encoding", - "pin-project-lite", - "ryu", - "sha1 0.6.1", - "tokio", - "tokio-util", - "url", -] - [[package]] name = "redis-protocol" version = "4.1.0" @@ -5275,6 +5245,15 @@ dependencies = [ "bitflags 1.3.2", ] +[[package]] +name = "redox_syscall" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4722d768eff46b75989dd134e5c353f0d6296e5aaa3132e776cbdb56be7731aa" +dependencies = [ + "bitflags 1.3.2", +] + [[package]] name = "redox_users" version = "0.4.3" @@ -5336,7 +5315,7 @@ version = "0.11.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "046cd98826c46c2ac8ddecae268eb5c2e58628688a5fc7a2643704a73faba95b" dependencies = [ - "base64 0.21.4", + "base64 0.21.5", "bytes", "encoding_rs", "futures-core", @@ -5402,9 +5381,9 @@ dependencies = [ [[package]] name = "rhai" -version = "1.16.2" +version = "1.16.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "206cee941730eaf90a22c84235b25193df661393688162e15551164f92f09eca" +checksum = "e3625f343d89990133d013e39c46e350915178cf94f1bec9f49b0cbef98a3e3c" dependencies = [ "ahash", "bitflags 2.4.0", @@ -5498,11 +5477,25 @@ dependencies = [ "libc", "once_cell", "spin 0.5.2", - "untrusted", + "untrusted 0.7.1", "web-sys", "winapi", ] +[[package]] +name = "ring" +version = "0.17.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fb0205304757e5d899b9c2e448b867ffd03ae7f988002e47cd24954391394d0b" +dependencies = [ + "cc", + "getrandom 0.2.10", + "libc", + "spin 0.9.8", + "untrusted 0.9.0", + "windows-sys 0.48.0", +] + [[package]] name = "rmp" version = "0.8.12" @@ -5516,9 +5509,9 @@ dependencies = [ [[package]] name = "router-bridge" -version = "0.5.6+v2.5.5" +version = "0.5.8+v2.5.7" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "4ecc9bd4dd82d62426e56f03850fe23b614601d5854b1c225933096c8383f67e" +checksum = "5ea5f8ca050d3d2651f57b26b00c5ff100ddf538d60e99869b2bcd40d39ae694" dependencies = [ "anyhow", "async-channel", @@ -5679,25 +5672,25 @@ dependencies = [ [[package]] name = "rustix" -version = "0.38.8" +version = "0.38.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "19ed4fa021d81c8392ce04db050a3da9a60299050b7ae1cf482d862b54a7218f" +checksum = "2b426b0506e5d50a7d8dafcf2e81471400deb602392c7dd110815afb4eaf02a3" dependencies = [ "bitflags 2.4.0", "errno", "libc", - "linux-raw-sys 0.4.5", + "linux-raw-sys 0.4.11", "windows-sys 0.48.0", ] [[package]] name = "rustls" -version = "0.21.7" +version = "0.21.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cd8d6c9f025a446bc4d18ad9632e69aec8f287aa84499ee335599fabd20c3fd8" +checksum = "629648aced5775d558af50b2b4c7b02983a04b312126d45eeead26e7caa498b9" dependencies = [ "log", - "ring", + "ring 0.17.5", "rustls-webpki", "sct", ] @@ -5716,21 +5709,21 @@ dependencies = [ [[package]] name = "rustls-pemfile" -version = "1.0.3" +version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2d3987094b1d07b653b7dfdc3f70ce9a1da9c51ac18c1b06b662e4f9a0e9f4b2" +checksum = "1c74cae0a4cf6ccbbf5f359f08efdf8ee7e1dc532573bf0db71968cb56b1448c" dependencies = [ - "base64 0.21.4", + "base64 0.21.5", ] [[package]] name = "rustls-webpki" -version = "0.101.4" +version = "0.101.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7d93931baf2d282fff8d3a532bbfd7653f734643161b87e3e01e59a04439bf0d" +checksum = "8b6275d1ee7a1cd780b64aca7726599a1dbc893b1e64144529e55c3c2f745765" dependencies = [ - "ring", - "untrusted", + "ring 0.17.5", + "untrusted 0.9.0", ] [[package]] @@ -5794,9 +5787,9 @@ dependencies = [ [[package]] name = "schemars" -version = "0.8.15" +version = "0.8.16" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "1f7b0ce13155372a76ee2e1c5ffba1fe61ede73fbea5630d61eee6fac4929c0c" +checksum = "45a28f4c49489add4ce10783f7911893516f15afe45d015608d41faca6bc4d29" dependencies = [ "dyn-clone", "schemars_derive", @@ -5807,9 +5800,9 @@ dependencies = [ [[package]] name = "schemars_derive" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e85e2a16b12bdb763244c69ab79363d71db2b4b918a2def53f80b02e0574b13c" +checksum = "c767fd6fa65d9ccf9cf026122c1b555f2ef9a4f0cea69da4d7dbc3e258d30967" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -5829,8 +5822,8 @@ version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d53dcdb7c9f8158937a7981b48accfd39a43af418591a5d008c7b22b5e1b7ca4" dependencies = [ - "ring", - "untrusted", + "ring 0.16.20", + "untrusted 0.7.1", ] [[package]] @@ -5893,9 +5886,9 @@ checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3" [[package]] name = "serde" -version = "1.0.189" +version = "1.0.192" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8e422a44e74ad4001bdc8eede9a4570ab52f71190e9c076d14369f38b9200537" +checksum = "bca2a08484b285dcb282d0f67b26cadc0df8b19f8c12502c13d966bf9482f001" dependencies = [ "serde_derive", ] @@ -5911,9 +5904,9 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.189" +version = "1.0.192" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e48d1f918009ce3145511378cf68d613e3b3d9137d67272562080d68a2b32d5" +checksum = "d6c7207fbec9faa48073f3e3074cbe553af6ea512d7c21ba46e434e70ea9fbc1" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -5945,11 +5938,11 @@ dependencies = [ [[package]] name = "serde_json" -version = "1.0.107" +version = "1.0.108" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"6b420ce6e3d8bd882e9b243c6eed35dbc9a6110c9769e74b584e0d68d1f20c65" +checksum = "3d1c7e3eac408d115102c4c24ad393e0821bb3a5df4d506a80f85f7a742a526b" dependencies = [ - "indexmap 2.0.2", + "indexmap 2.1.0", "itoa", "ryu", "serde", @@ -5962,7 +5955,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "feb260b2939374fad6f939f803662d4971d03395fcd03752b674bdba06565779" dependencies = [ "bytes", - "indexmap 2.0.2", + "indexmap 2.1.0", "serde", "serde_json", ] @@ -6048,15 +6041,6 @@ dependencies = [ "digest 0.10.7", ] -[[package]] -name = "sha1" -version = "0.6.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c1da05c97445caa12d05e848c4a4fcbbea29e748ac28f7e80e9b010392063770" -dependencies = [ - "sha1_smol", -] - [[package]] name = "sha1" version = "0.10.6" @@ -6068,12 +6052,6 @@ dependencies = [ "digest 0.10.7", ] -[[package]] -name = "sha1_smol" -version = "1.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae1a47186c03a32177042e55dbc5fd5aee900b8e0069a8d70fba96a9375cd012" - [[package]] name = "sha2" version = "0.10.8" @@ -6143,20 +6121,6 @@ name = "similar" version = "2.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "420acb44afdae038210c99e69aae24109f32f15500aa708e81d46c9f29d55fcf" -dependencies = [ - "bstr 0.2.17", - "unicode-segmentation", -] - -[[package]] -name = "similar-asserts" -version = "1.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e041bb827d1bfca18f213411d51b665309f1afb37a04a5d1464530e13779fc0f" -dependencies = [ - "console 0.15.7", - "similar", -] [[package]] name = "simple_asn1" @@ -6218,9 +6182,9 @@ dependencies = [ [[package]] name = "socket2" -version = "0.5.3" +version = "0.5.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2538b18701741680e0322a2302176d3253a35388e2e62f172f64f4f16605f877" +checksum = 
"7b5fac59a5cb5dd637972e5fca70daf0523c9067fcdc4842f053dae04a18f8e9" dependencies = [ "libc", "windows-sys 0.48.0", @@ -6355,7 +6319,7 @@ name = "supergraph_sdl" version = "0.1.0" dependencies = [ "anyhow", - "apollo-compiler 1.0.0-beta.4", + "apollo-compiler 1.0.0-beta.5", "apollo-router", "async-trait", "tower", @@ -6434,14 +6398,14 @@ dependencies = [ [[package]] name = "tempfile" -version = "3.8.0" +version = "3.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cb94d2f3cc536af71caac6b6fcebf65860b347e7ce0cc9ebe8f70d3e521054ef" +checksum = "7ef1adac450ad7f4b3c28589471ade84f25f731a7a0fe30d71dfa9f60fd808e5" dependencies = [ "cfg-if", "fastrand 2.0.0", - "redox_syscall 0.3.5", - "rustix 0.38.8", + "redox_syscall 0.4.1", + "rustix 0.38.21", "windows-sys 0.48.0", ] @@ -6549,18 +6513,18 @@ dependencies = [ [[package]] name = "thiserror" -version = "1.0.49" +version = "1.0.50" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1177e8c6d7ede7afde3585fd2513e611227efd6481bd78d2e82ba1ce16557ed4" +checksum = "f9a7210f5c9a7156bb50aa36aed4c95afb51df0df00713949448cf9e97d382d2" dependencies = [ "thiserror-impl", ] [[package]] name = "thiserror-impl" -version = "1.0.49" +version = "1.0.50" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "10712f02019e9288794769fba95cd6847df9874d49d871d062172f9dd41bc4cc" +checksum = "266b2e40bc00e5a6c09c3584011e08b06f123c00362c92b975ba9843aaaa14b8" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -6702,9 +6666,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" [[package]] name = "tokio" -version = "1.33.0" +version = "1.34.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4f38200e3ef7995e5ef13baec2f432a6da0aa9ac495b2c0e8f3b7eec2c92d653" +checksum = "d0c014766411e834f7af5b8f4cf46257aab4036ca95e9d2c144a10f59ad6f5b9" dependencies = [ "backtrace", "bytes", @@ -6714,7 +6678,7 @@ dependencies = [ 
"parking_lot 0.12.1", "pin-project-lite", "signal-hook-registry", - "socket2 0.5.3", + "socket2 0.5.5", "tokio-macros", "tracing", "windows-sys 0.48.0", @@ -6732,9 +6696,9 @@ dependencies = [ [[package]] name = "tokio-macros" -version = "2.1.0" +version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "630bdcf245f78637c13ec01ffae6187cca34625e8c63150d424b59e55af2675e" +checksum = "5b8a1e28f2deaa14e508979454cb3a223b10b938b45af148bc0986de36f1923b" dependencies = [ "proc-macro2 1.0.66", "quote 1.0.33", @@ -6793,9 +6757,9 @@ dependencies = [ [[package]] name = "tokio-util" -version = "0.7.9" +version = "0.7.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d68074620f57a0b21594d9735eb2e98ab38b17f80d3fcb189fca266771ca60d" +checksum = "5419f34732d9eb6ee4c3578b7989078579b7f039cbbb9ca2c4da015749371e15" dependencies = [ "bytes", "futures-core", @@ -6842,7 +6806,7 @@ version = "0.19.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f8123f27e969974a3dfba720fdb560be359f57b44302d280ba72e76a74480e8a" dependencies = [ - "indexmap 2.0.2", + "indexmap 2.1.0", "serde", "serde_spanned", "toml_datetime", @@ -6858,7 +6822,7 @@ dependencies = [ "async-stream", "async-trait", "axum", - "base64 0.21.4", + "base64 0.21.5", "bytes", "flate2", "futures-core", @@ -7187,7 +7151,7 @@ dependencies = [ "log", "rand 0.8.5", "rustls", - "sha1 0.10.6", + "sha1", "thiserror", "url", "utf-8", @@ -7369,6 +7333,12 @@ version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a" +[[package]] +name = "untrusted" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" + [[package]] name = "url" version = "2.4.1" @@ -7414,9 +7384,9 @@ checksum = 
"711b9620af191e0cdc7468a8d14e709c3dcdb115b36f838e601583af800a370a" [[package]] name = "uuid" -version = "1.4.1" +version = "1.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "79daa5ed5740825c40b389c5e50312b9c86df53fccd33f281df655642b43869d" +checksum = "c58fe91d841bc04822c9801002db4ea904b9e4b8e6bbad25127b46eff8dc516b" dependencies = [ "getrandom 0.2.10", "serde", @@ -7810,13 +7780,13 @@ dependencies = [ [[package]] name = "wiremock" -version = "0.5.19" +version = "0.5.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c6f71803d3a1c80377a06221e0530be02035d5b3e854af56c6ece7ac20ac441d" +checksum = "079aee011e8a8e625d16df9e785de30a6b77f80a6126092d76a57375f96448da" dependencies = [ "assert-json-diff", "async-trait", - "base64 0.21.4", + "base64 0.21.5", "deadpool", "futures", "futures-timer", diff --git a/DEVELOPMENT.md b/DEVELOPMENT.md index 4f6c80a468..2e2a013996 100644 --- a/DEVELOPMENT.md +++ b/DEVELOPMENT.md @@ -18,7 +18,7 @@ The **Apollo Router** is a configurable, high-performance **graph router** for a ## Development -You will need a recent version of rust (`1.65` works well as of writing). +You will need a recent version of rust (`1.72` works well as of writing). Installing rust [using rustup](https://www.rust-lang.org/tools/install) is the recommended way to do it as it will install rustup, rustfmt and other goodies that are not always included by default in other rust distribution channels: @@ -27,7 +27,7 @@ goodies that are not always included by default in other rust distribution chann curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` -In addition, you will need to [install protoc](https://grpc.io/docs/protoc-installation/). +In addition, you will need to [install protoc](https://grpc.io/docs/protoc-installation/) and [cmake](https://cmake.org/). 
Set up your git hooks: @@ -76,6 +76,9 @@ The CI checks require `cargo-deny` and `cargo-about` which can both be installed - `cargo install cargo-deny` - `cargo install cargo-about` +Updating the snapshots used during testing requires installing `cargo-insta`: +- `cargo install cargo-insta` + They also need you to have the federation-demo project up and running, as explained in the Getting started section above. diff --git a/RELEASE_CHECKLIST.md b/RELEASE_CHECKLIST.md index 98e4e0cfa2..9ae088e975 100644 --- a/RELEASE_CHECKLIST.md +++ b/RELEASE_CHECKLIST.md @@ -56,7 +56,11 @@ The examples below will use [the GitHub CLI (`gh`)](https://cli.github.com/) to A release can be cut from any branch, but we assume you'll be doing it from `dev`. If you're just doing a release candidate, you can skip merging it back into `main`. -1. Make sure you have `cargo` installed on your machine and in your `PATH`. +1. Make sure you have `cargo` installed on your machine and in your `PATH`. You also need: + - `helm-docs`: see + - `cargo-about`: `cargo install --locked cargo-about` + - `cargo-deny`: `cargo install --locked cargo-deny` + - `set-version` from `cargo-edit`: `cargo install --locked cargo-edit` 2. Pick the version number you are going to release. This project uses [Semantic Versioning 2.0.0](https://semver.org/), so analyze the existing changes in the `.changesets/` directory to pick the right next version. (e.g., If there are `feat_` changes, it must be a minor version bump. If there are `breaking_` changes, it must be a _major_ version bump). **Do not release a major version without explicit agreement from core team members**. 3. Checkout the branch you want to cut from. Typically, this is `dev`, but you could do this from another branch as well. 
diff --git a/apollo-router-benchmarks/Cargo.toml b/apollo-router-benchmarks/Cargo.toml index f257a617d1..0267dc8428 100644 --- a/apollo-router-benchmarks/Cargo.toml +++ b/apollo-router-benchmarks/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "apollo-router-benchmarks" -version = "1.34.0" +version = "1.34.1" authors = ["Apollo Graph, Inc. "] edition = "2021" license = "Elastic-2.0" @@ -20,7 +20,7 @@ tower = "0.4" [build-dependencies] apollo-smith = { version = "0.4.0", features = ["parser-impl"] } apollo-parser = "0.6.2" -arbitrary = "1.3.1" +arbitrary = "1.3.2" [[bench]] name = "basic_composition" diff --git a/apollo-router-scaffold/Cargo.toml b/apollo-router-scaffold/Cargo.toml index a166d99a35..8e23cdbc0e 100644 --- a/apollo-router-scaffold/Cargo.toml +++ b/apollo-router-scaffold/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "apollo-router-scaffold" -version = "1.34.0" +version = "1.34.1" authors = ["Apollo Graph, Inc. "] edition = "2021" license = "Elastic-2.0" @@ -8,11 +8,11 @@ publish = false [dependencies] anyhow = "1.0.75" -clap = { version = "4.4.6", features = ["derive"] } -cargo-scaffold = { version = "0.8.12", default-features = false } +clap = { version = "4.4.8", features = ["derive"] } +cargo-scaffold = { version = "0.8.14", default-features = false } regex = "1" str_inflector = "0.12.0" toml = "0.5.11" [dev-dependencies] -tempfile = "3.8.0" +tempfile = "3.8.1" copy_dir = "0.1.3" diff --git a/apollo-router-scaffold/templates/base/Cargo.toml b/apollo-router-scaffold/templates/base/Cargo.toml index e767831cd3..cf7308c057 100644 --- a/apollo-router-scaffold/templates/base/Cargo.toml +++ b/apollo-router-scaffold/templates/base/Cargo.toml @@ -22,7 +22,7 @@ apollo-router = { path ="{{integration_test}}apollo-router" } apollo-router = { git="https://github.com/apollographql/router.git", branch="{{branch}}" } {{else}} # Note if you update these dependencies then also update xtask/Cargo.toml -apollo-router = "1.34.0" +apollo-router = "1.34.1" {{/if}} {{/if}} 
async-trait = "0.1.52" diff --git a/apollo-router-scaffold/templates/base/xtask/Cargo.toml b/apollo-router-scaffold/templates/base/xtask/Cargo.toml index c0bcafec4d..02df09e4ed 100644 --- a/apollo-router-scaffold/templates/base/xtask/Cargo.toml +++ b/apollo-router-scaffold/templates/base/xtask/Cargo.toml @@ -13,7 +13,7 @@ apollo-router-scaffold = { path ="{{integration_test}}apollo-router-scaffold" } {{#if branch}} apollo-router-scaffold = { git="https://github.com/apollographql/router.git", branch="{{branch}}" } {{else}} -apollo-router-scaffold = { git = "https://github.com/apollographql/router.git", tag = "v1.34.0" } +apollo-router-scaffold = { git = "https://github.com/apollographql/router.git", tag = "v1.34.1" } {{/if}} {{/if}} anyhow = "1.0.58" diff --git a/apollo-router/Cargo.toml b/apollo-router/Cargo.toml index ef630b62e9..8c9f87c9c0 100644 --- a/apollo-router/Cargo.toml +++ b/apollo-router/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "apollo-router" -version = "1.34.0" +version = "1.34.1" authors = ["Apollo Graph, Inc. 
"] repository = "https://github.com/apollographql/router/" documentation = "https://docs.rs/apollo-router" @@ -57,9 +57,10 @@ features = ["docs_rs"] askama = "0.12.1" access-json = "0.1.0" anyhow = "1.0.75" -apollo-compiler = "=1.0.0-beta.4" +apollo-compiler = "=1.0.0-beta.5" +apollo-federation = "0.0.3" arc-swap = "1.6.0" -async-compression = { version = "0.4.4", features = [ +async-compression = { version = "0.4.5", features = [ "tokio", "brotli", "gzip", @@ -67,11 +68,11 @@ async-compression = { version = "0.4.4", features = [ ] } async-trait = "0.1.74" axum = { version = "0.6.20", features = ["headers", "json", "original-uri"] } -base64 = "0.21.4" +base64 = "0.21.5" bloomfilter = "1.0.12" buildstructor = "0.5.4" bytes = "1.5.0" -clap = { version = "4.4.6", default-features = false, features = [ +clap = { version = "4.4.8", default-features = false, features = [ "env", "derive", "std", @@ -91,24 +92,24 @@ directories = "5.0.1" displaydoc = "0.2" flate2 = "1.0.28" fred = { version = "6.3.2", features = ["enable-rustls", "no-client-setname"] } -futures = { version = "0.3.28", features = ["thread-pool"] } +futures = { version = "0.3.29", features = ["thread-pool"] } graphql_client = "0.13.0" hex = "0.4.3" -http = "0.2.9" +http = "0.2.11" http-body = "0.4.5" heck = "0.4.1" humantime = "2.1.0" humantime-serde = "1.1.1" hyper = { version = "0.14.27", features = ["server", "client"] } -hyper-rustls = { version = "0.24.1", features = ["http1", "http2"] } -indexmap = { version = "2.0.2", features = ["serde"] } +hyper-rustls = { version = "0.24.2", features = ["http1", "http2"] } +indexmap = { version = "2.1.0", features = ["serde"] } itertools = "0.11.0" jsonpath_lib = "0.3.0" -jsonpath-rust = "0.3.2" +jsonpath-rust = "0.3.4" jsonschema = { version = "0.17.1", default-features = false } jsonwebtoken = "8.3.0" lazy_static = "1.4.0" -libc = "0.2.149" +libc = "0.2.150" linkme = "0.3.17" lru = "0.11.1" maplit = "1.0.2" @@ -116,7 +117,7 @@ mediatype = "0.19.15" mockall = 
"0.11.4" mime = "0.3.17" multer = "2.1.0" -multimap = "0.9.0" +multimap = "0.9.1" # To avoid tokio issues notify = { version = "6.1.1", default-features = false, features = [ "macos_kqueue", @@ -171,8 +172,7 @@ prost = "0.11.9" prost-types = "0.11.9" proteus = "0.5.0" rand = "0.8.5" -rand_core = "0.6.4" -rhai = { version = "1.16.2", features = ["sync", "serde", "internals"] } +rhai = { version = "1.16.3", features = ["sync", "serde", "internals"] } regex = "1.10.2" reqwest = { version = "0.11.22", default-features = false, features = [ "rustls-tls", @@ -181,17 +181,17 @@ reqwest = { version = "0.11.22", default-features = false, features = [ "stream", ] } # note: this dependency should _always_ be pinned, prefix the version with an `=` -router-bridge = "=0.5.6+v2.5.5" +router-bridge = "=0.5.8+v2.5.7" rust-embed = "6.8.1" -rustls = "0.21.7" -rustls-pemfile = "1.0.3" -schemars = { version = "0.8.15", features = ["url"] } +rustls = "0.21.9" +rustls-pemfile = "1.0.4" +schemars = { version = "0.8.16", features = ["url"] } shellexpand = "3.1.0" sha2 = "0.10.8" -serde = { version = "1.0.189", features = ["derive", "rc"] } +serde = { version = "1.0.192", features = ["derive", "rc"] } serde_derive_default = "0.1" serde_json_bytes = { version = "0.2.2", features = ["preserve_order"] } -serde_json = { version = "1.0.107", features = [ +serde_json = { version = "1.0.108", features = [ "preserve_order", "float_roundtrip", ] } @@ -200,10 +200,10 @@ serde_yaml = "0.8.26" static_assertions = "1.1.0" strum_macros = "0.25.3" sys-info = "0.9.1" -thiserror = "1.0.49" -tokio = { version = "1.33.0", features = ["full"] } +thiserror = "1.0.50" +tokio = { version = "1.34.0", features = ["full"] } tokio-stream = { version = "0.1.14", features = ["sync", "net"] } -tokio-util = { version = "0.7.9", features = ["net", "codec", "time"] } +tokio-util = { version = "0.7.10", features = ["net", "codec", "time"] } tonic = { version = "0.9.2", features = [ "transport", "tls", @@ -232,9 +232,9 @@ 
tracing-subscriber = { version = "0.3.17", features = ["env-filter", "json"] } trust-dns-resolver = "0.23.2" url = { version = "2.4.1", features = ["serde"] } urlencoding = "2.1.3" -uuid = { version = "1.4.1", features = ["serde", "v4"] } +uuid = { version = "1.6.0", features = ["serde", "v4"] } yaml-rust = "0.4.5" -wiremock = "0.5.19" +wiremock = "0.5.21" wsl = "0.1.0" tokio-tungstenite = { version = "0.20.1", features = [ "rustls-tls-native-roots", @@ -272,7 +272,7 @@ axum = { version = "0.6.20", features = [ ] } ecdsa = { version = "0.15.1", features = ["signing", "pem", "pkcs8"] } fred = { version = "6.3.2", features = ["enable-rustls", "no-client-setname"] } -futures-test = "0.3.28" +futures-test = "0.3.29" insta = { version = "1.34.0", features = ["json", "redactions", "yaml"] } maplit = "1.0.2" memchr = { version = "2.6.4", default-features = false } @@ -282,19 +282,17 @@ once_cell = "1.18.0" opentelemetry-stdout = { version = "0.1.0", features = ["trace"] } p256 = "0.12.0" rand_core = "0.6.4" -redis = { version = "0.21.7", features = ["tokio-comp"] } reqwest = { version = "0.11.22", default-features = false, features = [ "json", "stream", ] } -rhai = { version = "1.16.2", features = [ +rhai = { version = "1.16.3", features = [ "sync", "serde", "internals", "testing-environ", ] } -similar-asserts = "1.5.0" -tempfile = "3.8.0" +tempfile = "3.8.1" test-log = { version = "0.2.13", default-features = false, features = [ "trace", ] } @@ -310,7 +308,7 @@ tracing-subscriber = { version = "0.3", default-features = false, features = [ ] } tracing-test = "0.2.4" walkdir = "2.4.0" -wiremock = "0.5.19" +wiremock = "0.5.21" [target.'cfg(target_os = "linux")'.dev-dependencies] rstack = { version = "0.3.3", features = ["dw"], default-features = false } diff --git a/apollo-router/src/axum_factory/axum_http_server_factory.rs b/apollo-router/src/axum_factory/axum_http_server_factory.rs index 1373586dec..9c791f7c37 100644 --- 
a/apollo-router/src/axum_factory/axum_http_server_factory.rs +++ b/apollo-router/src/axum_factory/axum_http_server_factory.rs @@ -103,13 +103,14 @@ where if configuration.health_check.enabled { tracing::info!( - "Health check endpoint exposed at {}/health", - configuration.health_check.listen + "Health check exposed at {}/{}", + configuration.health_check.listen, + configuration.health_check.path ); endpoints.insert( configuration.health_check.listen.clone(), Endpoint::from_router_service( - "/health".to_string(), + configuration.health_check.path.clone(), service_fn(move |req: router::Request| { let mut status_code = StatusCode::OK; let health = if let Some(query) = req.router_request.uri().query() { diff --git a/apollo-router/src/axum_factory/listeners.rs b/apollo-router/src/axum_factory/listeners.rs index cdd6a49b91..954ebcefa8 100644 --- a/apollo-router/src/axum_factory/listeners.rs +++ b/apollo-router/src/axum_factory/listeners.rs @@ -481,14 +481,13 @@ mod tests { use crate::configuration::Sandbox; use crate::configuration::Supergraph; use crate::services::router; - use crate::services::router_service; #[tokio::test] async fn it_makes_sure_same_listenaddrs_are_accepted() { let configuration = Configuration::fake_builder().build().unwrap(); init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(configuration), MultiMap::new(), ) @@ -525,7 +524,7 @@ mod tests { ); let error = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(configuration), web_endpoints, ) @@ -563,7 +562,7 @@ mod tests { Endpoint::from_router_service("/".to_string(), endpoint), ); - let error = init_with_config(router_service::empty().await, Arc::new(configuration), mm) + let error = init_with_config(router::service::empty().await, Arc::new(configuration), mm) .await .unwrap_err(); diff --git a/apollo-router/src/axum_factory/tests.rs b/apollo-router/src/axum_factory/tests.rs index a918a8387b..b62b3806a0 100644 
--- a/apollo-router/src/axum_factory/tests.rs +++ b/apollo-router/src/axum_factory/tests.rs @@ -74,8 +74,7 @@ use crate::services::layers::static_page::home_page_content; use crate::services::layers::static_page::sandbox_page_content; use crate::services::new_service::ServiceFactory; use crate::services::router; -use crate::services::router_service; -use crate::services::router_service::RouterCreator; +use crate::services::router::service::RouterCreator; use crate::services::supergraph; use crate::services::HasSchema; use crate::services::PluggableSupergraphServiceBuilder; @@ -358,7 +357,7 @@ async fn it_displays_sandbox() { .unwrap(), ); - let router_service = router_service::from_supergraph_mock_callback_and_configuration( + let router_service = router::service::from_supergraph_mock_callback_and_configuration( move |_| { panic!("this should never be called"); }, @@ -405,7 +404,7 @@ async fn it_displays_sandbox_with_different_supergraph_path() { .unwrap(), ); - let router_service = router_service::from_supergraph_mock_callback_and_configuration( + let router_service = router::service::from_supergraph_mock_callback_and_configuration( move |_| { panic!("this should never be called"); }, @@ -441,7 +440,7 @@ async fn it_compress_response_body() -> Result<(), ApolloRouterError> { .data(json!({"response": "yayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"})) // Body must be bigger than 32 to be compressed .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( @@ -524,7 +523,7 @@ async fn it_decompress_request_body() -> Result<(), ApolloRouterError> { .data(json!({"response": "yayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"})) // Body must be bigger than 32 to be compressed .build(); let example_response = 
expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); assert_eq!(req.supergraph_request.into_body().query.unwrap(), "query"); Ok(SupergraphResponse::new_from_graphql_response( @@ -558,7 +557,7 @@ async fn it_decompress_request_body() -> Result<(), ApolloRouterError> { #[tokio::test] async fn malformed_request() -> Result<(), ApolloRouterError> { - let (server, client) = init(router_service::empty().await).await; + let (server, client) = init(router::service::empty().await).await; let response = client .post(format!( @@ -597,7 +596,7 @@ async fn response() -> Result<(), ApolloRouterError> { .data(json!({"response": "yay"})) .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( @@ -650,7 +649,7 @@ async fn response() -> Result<(), ApolloRouterError> { #[tokio::test] async fn bad_response() -> Result<(), ApolloRouterError> { - let (server, client) = init(router_service::empty().await).await; + let (server, client) = init(router::service::empty().await).await; let url = format!("{}/test", server.graphql_listen_address().as_ref().unwrap()); // Post query @@ -690,7 +689,7 @@ async fn response_with_root_wildcard() -> Result<(), ApolloRouterError> { .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( example_response, @@ -776,7 +775,7 @@ async fn 
response_with_custom_endpoint() -> Result<(), ApolloRouterError> { .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( example_response, @@ -840,7 +839,7 @@ async fn response_with_custom_prefix_endpoint() -> Result<(), ApolloRouterError> .data(json!({"response": "yay"})) .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( example_response, @@ -905,7 +904,7 @@ async fn response_with_custom_endpoint_wildcard() -> Result<(), ApolloRouterErro .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( example_response, @@ -971,7 +970,7 @@ async fn response_with_custom_endpoint_wildcard() -> Result<(), ApolloRouterErro #[tokio::test] async fn response_failure() -> Result<(), ApolloRouterError> { - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = crate::error::FetchError::SubrequestHttpError { status_code: Some(200), service: "Mock service".to_string(), @@ -1030,7 +1029,7 @@ async fn cors_preflight() -> Result<(), ApolloRouterError> { .build() .unwrap(); let (server, client) = init_with_config( - router_service::empty().await, + 
router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -1082,7 +1081,7 @@ async fn cors_preflight() -> Result<(), ApolloRouterError> { #[tokio::test] async fn test_previous_health_check_returns_four_oh_four() { - let (server, client) = init(router_service::empty().await).await; + let (server, client) = init(router::service::empty().await).await; let url = format!( "{}/.well-known/apollo/server-health", server.graphql_listen_address().as_ref().unwrap() @@ -1097,7 +1096,7 @@ async fn it_errors_on_bad_content_type_header() -> Result<(), ApolloRouterError> let query = "query"; let operation_name = "operationName"; - let router_service = router_service::from_supergraph_mock_callback(|req| { + let router_service = router::service::from_supergraph_mock_callback(|req| { Ok(SupergraphResponse::new_from_graphql_response( graphql::Response::builder() .data(json!({"response": "hey"})) @@ -1135,7 +1134,7 @@ async fn it_errors_on_bad_accept_header() -> Result<(), ApolloRouterError> { let query = "query"; let operation_name = "operationName"; - let router_service = router_service::from_supergraph_mock_callback(|req| { + let router_service = router::service::from_supergraph_mock_callback(|req| { Ok(SupergraphResponse::new_from_graphql_response( graphql::Response::builder() .data(json!({"response": "hey"})) @@ -1173,7 +1172,7 @@ async fn it_errors_on_bad_accept_header() -> Result<(), ApolloRouterError> { async fn it_displays_homepage() { let conf = Arc::new(Configuration::fake_builder().build().unwrap()); - let router_service = router_service::from_supergraph_mock_callback_and_configuration( + let router_service = router::service::from_supergraph_mock_callback_and_configuration( |req| { Ok(SupergraphResponse::new_from_graphql_response( graphql::Response::builder() @@ -1220,7 +1219,7 @@ async fn it_doesnt_display_disabled_homepage() { .unwrap(), ); - let router_service = router_service::from_supergraph_mock_callback_and_configuration( + let router_service = 
router::service::from_supergraph_mock_callback_and_configuration( |req| { Ok(SupergraphResponse::new_from_graphql_response( graphql::Response::builder() @@ -1287,8 +1286,12 @@ async fn it_answers_to_custom_endpoint() -> Result<(), ApolloRouterError> { ); let conf = Configuration::fake_builder().build().unwrap(); - let (server, client) = - init_with_config(router_service::empty().await, Arc::new(conf), web_endpoints).await?; + let (server, client) = init_with_config( + router::service::empty().await, + Arc::new(conf), + web_endpoints, + ) + .await?; for path in &["/a-custom-path", "/an-other-custom-path"] { let response = client @@ -1394,9 +1397,13 @@ async fn it_refuses_to_bind_two_extra_endpoints_on_the_same_path() { ); let conf = Configuration::fake_builder().build().unwrap(); - let error = init_with_config(router_service::empty().await, Arc::new(conf), web_endpoints) - .await - .unwrap_err(); + let error = init_with_config( + router::service::empty().await, + Arc::new(conf), + web_endpoints, + ) + .await + .unwrap_err(); assert_eq!( "tried to register two endpoints on `127.0.0.1:0/a-custom-path`", @@ -1406,7 +1413,7 @@ async fn it_refuses_to_bind_two_extra_endpoints_on_the_same_path() { #[tokio::test] async fn cors_origin_default() -> Result<(), ApolloRouterError> { - let (server, client) = init(router_service::empty().await).await; + let (server, client) = init(router::service::empty().await).await; let url = format!("{}/", server.graphql_listen_address().as_ref().unwrap()); let response = @@ -1426,7 +1433,7 @@ async fn cors_max_age() -> Result<(), ApolloRouterError> { .build() .unwrap(); let (server, client) = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -1446,7 +1453,7 @@ async fn cors_allow_any_origin() -> Result<(), ApolloRouterError> { .build() .unwrap(); let (server, client) = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), 
MultiMap::new(), ) @@ -1472,7 +1479,7 @@ async fn cors_origin_list() -> Result<(), ApolloRouterError> { .build() .unwrap(); let (server, client) = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -1503,7 +1510,7 @@ async fn cors_origin_regex() -> Result<(), ApolloRouterError> { .build() .unwrap(); let (server, client) = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -1578,7 +1585,7 @@ fn origin_valid(headers: &HeaderMap, origin: &str) -> bool { #[test(tokio::test)] async fn response_shape() -> Result<(), ApolloRouterError> { - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { Ok(SupergraphResponse::new_from_graphql_response( graphql::Response::builder() .data(json!({ @@ -1624,7 +1631,7 @@ async fn response_shape() -> Result<(), ApolloRouterError> { #[test(tokio::test)] async fn deferred_response_shape() -> Result<(), ApolloRouterError> { - let router_service = router_service::from_supergraph_mock_callback(|req| { + let router_service = router::service::from_supergraph_mock_callback(|req| { let body = stream::iter(vec![ graphql::Response::builder() .data(json!({ @@ -1696,7 +1703,7 @@ async fn deferred_response_shape() -> Result<(), ApolloRouterError> { #[test(tokio::test)] async fn multipart_response_shape_with_one_chunk() -> Result<(), ApolloRouterError> { - let router_service = router_service::from_supergraph_mock_callback(move |req| { + let router_service = router::service::from_supergraph_mock_callback(move |req| { let body = stream::iter(vec![graphql::Response::builder() .data(json!({ "test": "hello", @@ -2049,7 +2056,7 @@ async fn listening_to_unix_socket() { .build(); let example_response = expected_response.clone(); - let router_service = router_service::from_supergraph_mock_callback(move |req| { + 
let router_service = router::service::from_supergraph_mock_callback(move |req| { let example_response = example_response.clone(); Ok(SupergraphResponse::new_from_graphql_response( example_response, @@ -2153,7 +2160,7 @@ Accept: application/json\r #[tokio::test] async fn test_health_check() { - let router_service = router_service::from_supergraph_mock_callback(|_| { + let router_service = router::service::from_supergraph_mock_callback(|_| { Ok(supergraph::Response::builder() .data(json!({ "__typename": "Query"})) .context(Context::new()) @@ -2190,7 +2197,7 @@ async fn test_health_check_custom_listener() { // keep the server handle around otherwise it will immediately shutdown let (_server, client) = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -2219,7 +2226,7 @@ async fn test_sneaky_supergraph_and_health_check_configuration() { .build() .unwrap(); let error = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -2245,7 +2252,7 @@ async fn test_sneaky_supergraph_and_disabled_health_check_configuration() { .build() .unwrap(); let _ = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) @@ -2270,7 +2277,7 @@ async fn test_supergraph_and_health_check_same_port_different_listener() { .build() .unwrap(); let error = init_with_config( - router_service::empty().await, + router::service::empty().await, Arc::new(conf), MultiMap::new(), ) diff --git a/apollo-router/src/cache/mod.rs b/apollo-router/src/cache/mod.rs index a577e847a7..8aaf10510a 100644 --- a/apollo-router/src/cache/mod.rs +++ b/apollo-router/src/cache/mod.rs @@ -1,7 +1,6 @@ use std::collections::HashMap; use std::num::NonZeroUsize; use std::sync::Arc; -use std::time::Duration; use tokio::sync::broadcast; use tokio::sync::oneshot; @@ -10,6 +9,7 @@ use tokio::sync::Mutex; use self::storage::CacheStorage; 
use self::storage::KeyType; use self::storage::ValueType; +use crate::configuration::RedisCache; pub(crate) mod redis; pub(crate) mod storage; @@ -34,13 +34,12 @@ where { pub(crate) async fn with_capacity( capacity: NonZeroUsize, - redis_urls: Option>, - timeout: Option, + redis: Option, caller: &str, ) -> Self { Self { wait_map: Arc::new(Mutex::new(HashMap::new())), - storage: CacheStorage::new(capacity, redis_urls, timeout, caller).await, + storage: CacheStorage::new(capacity, redis, caller).await, } } @@ -48,13 +47,7 @@ where config: &crate::configuration::Cache, caller: &str, ) -> Self { - Self::with_capacity( - config.in_memory.limit, - config.redis.as_ref().map(|c| c.urls.clone()), - config.redis.as_ref().and_then(|r| r.timeout), - caller, - ) - .await + Self::with_capacity(config.in_memory.limit, config.redis.clone(), caller).await } pub(crate) async fn get(&self, key: &K) -> Entry { @@ -214,8 +207,7 @@ mod tests { async fn example_cache_usage() { let k = "key".to_string(); let cache = - DeduplicatingCache::with_capacity(NonZeroUsize::new(1).unwrap(), None, None, "test") - .await; + DeduplicatingCache::with_capacity(NonZeroUsize::new(1).unwrap(), None, "test").await; let entry = cache.get(&k).await; @@ -232,8 +224,7 @@ mod tests { #[test(tokio::test)] async fn it_should_enforce_cache_limits() { let cache: DeduplicatingCache = - DeduplicatingCache::with_capacity(NonZeroUsize::new(13).unwrap(), None, None, "test") - .await; + DeduplicatingCache::with_capacity(NonZeroUsize::new(13).unwrap(), None, "test").await; for i in 0..14 { let entry = cache.get(&i).await; @@ -256,8 +247,7 @@ mod tests { mock.expect_retrieve().times(1).return_const(1usize); let cache: DeduplicatingCache = - DeduplicatingCache::with_capacity(NonZeroUsize::new(10).unwrap(), None, None, "test") - .await; + DeduplicatingCache::with_capacity(NonZeroUsize::new(10).unwrap(), None, "test").await; // Let's trigger 100 concurrent gets of the same value and ensure only // one delegated retrieve is 
made diff --git a/apollo-router/src/cache/redis.rs b/apollo-router/src/cache/redis.rs index 81cc335310..1c77ce5f50 100644 --- a/apollo-router/src/cache/redis.rs +++ b/apollo-router/src/cache/redis.rs @@ -17,6 +17,7 @@ use url::Url; use super::KeyType; use super::ValueType; +use crate::configuration::RedisCache; #[derive(Clone, Debug, Eq, Hash, PartialEq)] pub(crate) struct RedisKey(pub(crate) K) @@ -115,18 +116,17 @@ where } impl RedisCacheStorage { - pub(crate) async fn new( - urls: Vec, - ttl: Option, - timeout: Option, - ) -> Result { - let url = Self::preprocess_urls(urls)?; - let config = RedisConfig::from_url(url.as_str())?; + pub(crate) async fn new(config: RedisCache) -> Result { + let url = Self::preprocess_urls(config.urls)?; + let client_config = RedisConfig::from_url(url.as_str())?; let client = RedisClient::new( - config, + client_config, Some(PerformanceConfig { - default_command_timeout_ms: timeout.map(|t| t.as_millis() as u64).unwrap_or(2), + default_command_timeout_ms: config + .timeout + .map(|t| t.as_millis() as u64) + .unwrap_or(2), ..Default::default() }), Some(ReconnectPolicy::new_exponential(0, 1, 2000, 5)), @@ -158,7 +158,7 @@ impl RedisCacheStorage { tracing::trace!("redis connection established"); Ok(Self { inner: Arc::new(client), - ttl, + ttl: config.ttl, }) } diff --git a/apollo-router/src/cache/storage.rs b/apollo-router/src/cache/storage.rs index 8106b5a864..8acd5a06fb 100644 --- a/apollo-router/src/cache/storage.rs +++ b/apollo-router/src/cache/storage.rs @@ -3,7 +3,6 @@ use std::fmt::{self}; use std::hash::Hash; use std::num::NonZeroUsize; use std::sync::Arc; -use std::time::Duration; use lru::LruCache; use serde::de::DeserializeOwned; @@ -12,6 +11,7 @@ use tokio::sync::Mutex; use tokio::time::Instant; use super::redis::*; +use crate::configuration::RedisCache; pub(crate) trait KeyType: Clone + fmt::Debug + fmt::Display + Hash + Eq + Send + Sync @@ -58,15 +58,14 @@ where { pub(crate) async fn new( max_capacity: NonZeroUsize, - 
redis_urls: Option>, - timeout: Option, + config: Option, caller: &str, ) -> Self { Self { caller: caller.to_string(), inner: Arc::new(Mutex::new(LruCache::new(max_capacity))), - redis: if let Some(urls) = redis_urls { - match RedisCacheStorage::new(urls, None, timeout).await { + redis: if let Some(config) = config { + match RedisCacheStorage::new(config).await { Err(e) => { tracing::error!( "could not open connection to Redis for {} caching: {:?}", diff --git a/apollo-router/src/configuration/mod.rs b/apollo-router/src/configuration/mod.rs index e003b92c3d..09b7da6677 100644 --- a/apollo-router/src/configuration/mod.rs +++ b/apollo-router/src/configuration/mod.rs @@ -168,6 +168,10 @@ pub struct Configuration { #[serde(default)] pub(crate) experimental_graphql_validation_mode: GraphQLValidationMode, + /// Set the API schema generation implementation to use. + #[serde(default)] + pub(crate) experimental_api_schema_generation_mode: ApiSchemaMode, + /// Plugin configuration #[serde(default)] pub(crate) plugins: UserPlugins, @@ -210,6 +214,21 @@ pub(crate) enum GraphQLValidationMode { Both, } +/// GraphQL validation modes. +#[derive(Clone, PartialEq, Eq, Default, Derivative, Serialize, Deserialize, JsonSchema)] +#[derivative(Debug)] +#[serde(rename_all = "lowercase")] +pub(crate) enum ApiSchemaMode { + /// Use the new Rust-based implementation. + New, + /// Use the old JavaScript-based implementation. + #[default] + Legacy, + /// Use Rust-based and Javascript-based implementations side by side, logging warnings if the + /// implementations disagree. 
+ Both, +} + impl<'de> serde::Deserialize<'de> for Configuration { fn deserialize(deserializer: D) -> Result where @@ -292,6 +311,7 @@ impl Configuration { chaos: Option, uplink: Option, graphql_validation_mode: Option, + experimental_api_schema_generation_mode: Option, experimental_batching: Option, ) -> Result { #[cfg(not(test))] @@ -319,6 +339,7 @@ impl Configuration { limits: operation_limits.unwrap_or_default(), experimental_chaos: chaos.unwrap_or_default(), experimental_graphql_validation_mode: graphql_validation_mode.unwrap_or_default(), + experimental_api_schema_generation_mode: experimental_api_schema_generation_mode.unwrap_or_default(), plugins: UserPlugins { plugins: Some(plugins), }, @@ -367,6 +388,7 @@ impl Configuration { uplink: Option, graphql_validation_mode: Option, experimental_batching: Option, + experimental_api_schema_generation_mode: Option, ) -> Result { let configuration = Self { validated_yaml: Default::default(), @@ -378,6 +400,8 @@ impl Configuration { limits: operation_limits.unwrap_or_default(), experimental_chaos: chaos.unwrap_or_default(), experimental_graphql_validation_mode: graphql_validation_mode.unwrap_or_default(), + experimental_api_schema_generation_mode: experimental_api_schema_generation_mode + .unwrap_or_default(), plugins: UserPlugins { plugins: Some(plugins), }, @@ -870,6 +894,11 @@ pub(crate) struct RedisCache { #[schemars(with = "Option", default)] /// Redis request timeout (default: 2ms) pub(crate) timeout: Option, + + #[serde(deserialize_with = "humantime_serde::deserialize", default)] + #[schemars(with = "Option", default)] + /// TTL for entries + pub(crate) ttl: Option, } /// TLS related configuration options. 
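The configuration hunks above consolidate the Redis cache settings (`urls`, `timeout`, and the new `ttl`) into a single `RedisCache` struct that is passed to the cache constructors, replacing three separate parameters. A minimal std-only sketch of that shape, with illustrative names standing in for the router's actual types, showing how the per-command timeout falls back to the documented 2 ms default:

```rust
use std::time::Duration;

// Hypothetical stand-in for the router's `RedisCache` configuration struct:
// it carries the URL list, the command timeout, and the new entry TTL
// together, so constructors like `RedisCacheStorage::new` take one value
// instead of separate `urls`/`ttl`/`timeout` parameters.
#[derive(Clone, Debug)]
struct RedisCacheConfig {
    urls: Vec<String>,
    // Per-command timeout; the router documents a 2 ms default.
    timeout: Option<Duration>,
    // New in this change: optional expiry for cached entries.
    ttl: Option<Duration>,
}

impl RedisCacheConfig {
    // Mirrors how the patched `RedisCacheStorage::new` derives the client
    // timeout: use the configured value, or fall back to 2 ms.
    fn command_timeout_ms(&self) -> u64 {
        self.timeout.map(|t| t.as_millis() as u64).unwrap_or(2)
    }
}

fn main() {
    let configured = RedisCacheConfig {
        urls: vec!["redis://127.0.0.1:6379".to_string()],
        timeout: Some(Duration::from_millis(50)),
        ttl: Some(Duration::from_secs(60)),
    };
    let defaults = RedisCacheConfig {
        urls: Vec::new(),
        timeout: None,
        ttl: None,
    };
    println!("{} {}", configured.command_timeout_ms(), defaults.command_timeout_ms());
}
```

Grouping related settings into one struct like this means adding a field (here, `ttl`) no longer changes every constructor signature along the call chain.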
@@ -1137,25 +1166,43 @@ pub(crate) struct HealthCheck { /// Defaults to 127.0.0.1:8088 pub(crate) listen: ListenAddr, - /// Set to false to disable the health check endpoint + /// Set to false to disable the health check pub(crate) enabled: bool, + + /// Optionally set a custom healthcheck path + /// Defaults to /health + pub(crate) path: String, } fn default_health_check_listen() -> ListenAddr { SocketAddr::from_str("127.0.0.1:8088").unwrap().into() } -fn default_health_check() -> bool { +fn default_health_check_enabled() -> bool { true } +fn default_health_check_path() -> String { + "/health".to_string() +} + #[buildstructor::buildstructor] impl HealthCheck { #[builder] - pub(crate) fn new(listen: Option, enabled: Option) -> Self { + pub(crate) fn new( + listen: Option, + enabled: Option, + path: Option, + ) -> Self { + let mut path = path.unwrap_or_else(default_health_check_path); + if !path.starts_with('/') { + path = format!("/{path}").to_string(); + } + Self { listen: listen.unwrap_or_else(default_health_check_listen), - enabled: enabled.unwrap_or_else(default_health_check), + enabled: enabled.unwrap_or_else(default_health_check_enabled), + path, } } } @@ -1164,10 +1211,20 @@ impl HealthCheck { #[buildstructor::buildstructor] impl HealthCheck { #[builder] - pub(crate) fn fake_new(listen: Option, enabled: Option) -> Self { + pub(crate) fn fake_new( + listen: Option, + enabled: Option, + path: Option, + ) -> Self { + let mut path = path.unwrap_or_else(default_health_check_path); + if !path.starts_with('/') { + path = format!("/{path}"); + } + Self { listen: listen.unwrap_or_else(test_listen), - enabled: enabled.unwrap_or_else(default_health_check), + enabled: enabled.unwrap_or_else(default_health_check_enabled), + path, } } } diff --git a/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap b/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap index 
d31f49c065..b85cd67010 100644 --- a/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap +++ b/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap @@ -89,6 +89,12 @@ expression: "&schema" "type": "string", "nullable": true }, + "ttl": { + "description": "TTL for entries", + "default": null, + "type": "string", + "nullable": true + }, "urls": { "description": "List of URLs to the Redis cluster", "type": "array", @@ -205,6 +211,14 @@ expression: "&schema" "type": "string", "nullable": true }, + "poll_interval": { + "description": "Polling interval for each JWKS endpoint in human-readable format; defaults to 60s", + "default": { + "secs": 60, + "nanos": 0 + }, + "type": "string" + }, "url": { "description": "Retrieve the JWK Set", "type": "string" @@ -1061,6 +1075,33 @@ expression: "&schema" }, "additionalProperties": false }, + "experimental_api_schema_generation_mode": { + "description": "Set the API schema generation implementation to use.", + "default": "legacy", + "oneOf": [ + { + "description": "Use the new Rust-based implementation.", + "type": "string", + "enum": [ + "new" + ] + }, + { + "description": "Use the old JavaScript-based implementation.", + "type": "string", + "enum": [ + "legacy" + ] + }, + { + "description": "Use Rust-based and Javascript-based implementations side by side, logging warnings if the implementations disagree.", + "type": "string", + "enum": [ + "both" + ] + } + ] + }, "experimental_batching": { "description": "Batching configuration.", "default": { @@ -1530,12 +1571,13 @@ expression: "&schema" "description": "Health check configuration", "default": { "listen": "127.0.0.1:8088", - "enabled": true + "enabled": true, + "path": "/health" }, "type": "object", "properties": { "enabled": { - "description": "Set to false to disable the health check endpoint", + "description": "Set to false to disable the health check", "default": true, "type": 
"boolean" }, @@ -1552,6 +1594,11 @@ expression: "&schema" "type": "string" } ] + }, + "path": { + "description": "Optionally set a custom healthcheck path Defaults to /health", + "default": "/health", + "type": "string" } }, "additionalProperties": false @@ -2086,6 +2133,12 @@ expression: "&schema" "type": "string", "nullable": true }, + "ttl": { + "description": "TTL for entries", + "default": null, + "type": "string", + "nullable": true + }, "urls": { "description": "List of URLs to the Redis cluster", "type": "array", @@ -5702,6 +5755,12 @@ expression: "&schema" "type": "string", "nullable": true }, + "ttl": { + "description": "TTL for entries", + "default": null, + "type": "string", + "nullable": true + }, "urls": { "description": "List of URLs to the Redis cluster", "type": "array", diff --git a/apollo-router/src/configuration/tests.rs b/apollo-router/src/configuration/tests.rs index 95ddccef4d..8ef806aefb 100644 --- a/apollo-router/src/configuration/tests.rs +++ b/apollo-router/src/configuration/tests.rs @@ -428,6 +428,10 @@ fn validate_project_config_files() { { continue; } + #[cfg(not(telemetry_next))] + if entry.path().to_string_lossy().contains("telemetry_next") { + continue; + } let name = entry.file_name().to_string_lossy(); if filename_matcher.is_match(&name) { @@ -679,6 +683,10 @@ fn visit_schema(path: &str, schema: &Value, errors: &mut Vec) { for (k, v) in o { if k.as_str() == "properties" { let properties = v.as_object().expect("properties must be an object"); + if properties.len() == 1 { + // This is probably an enum property + continue; + } for (k, v) in properties { let path = format!("{path}.{k}"); if v.as_object().and_then(|o| o.get("description")).is_none() { @@ -926,6 +934,39 @@ fn test_deserialize_derive_default() { } } +#[test] +fn it_defaults_health_check_configuration() { + let conf = Configuration::default(); + let addr: ListenAddr = SocketAddr::from_str("127.0.0.1:8088").unwrap().into(); + + assert_eq!(conf.health_check.listen, addr); + 
assert_eq!(&conf.health_check.path, "/health"); + + // Defaults to enabled: true + assert!(conf.health_check.enabled); +} + +#[test] +fn it_sets_custom_health_check_path() { + let conf = Configuration::builder() + .health_check(HealthCheck::new(None, None, Some("/healthz".to_string()))) + .build() + .unwrap(); + + assert_eq!(&conf.health_check.path, "/healthz"); +} + +#[test] +fn it_adds_slash_to_custom_health_check_path_if_missing() { + let conf = Configuration::builder() + // NB the missing `/` + .health_check(HealthCheck::new(None, None, Some("healthz".to_string()))) + .build() + .unwrap(); + + assert_eq!(&conf.health_check.path, "/healthz"); +} + fn has_field_level_serde_defaults(lines: &[&str], line_number: usize) -> bool { let serde_field_default = Regex::new( r#"^\s*#[\s\n]*\[serde\s*\((.*,)?\s*default\s*=\s*"[a-zA-Z0-9_:]+"\s*(,.*)?\)\s*\]\s*$"#, diff --git a/apollo-router/src/files.rs b/apollo-router/src/files.rs index 0974f0f4b2..452f02aaec 100644 --- a/apollo-router/src/files.rs +++ b/apollo-router/src/files.rs @@ -2,7 +2,6 @@ use std::path::Path; use std::path::PathBuf; use std::time::Duration; -use futures::channel::mpsc; use futures::prelude::*; use notify::event::DataChange; use notify::event::MetadataKind; @@ -12,6 +11,8 @@ use notify::EventKind; use notify::PollWatcher; use notify::RecursiveMode; use notify::Watcher; +use tokio::sync::mpsc; +use tokio::sync::mpsc::error::TrySendError; #[cfg(not(test))] const DEFAULT_WATCH_DURATION: Duration = Duration::from_secs(3); @@ -38,7 +39,8 @@ fn watch_with_duration(path: &Path, duration: Duration) -> impl Stream impl Stream impl Stream Result<(), ApolloRouterError> { + #[cfg(unix)] let listen_addresses = std::mem::take(&mut self.listen_addresses); let (_main_listener, _extra_listener) = self.wait_for_servers().await?; diff --git a/apollo-router/src/introspection.rs b/apollo-router/src/introspection.rs index 33dd3d0394..fad7d54823 100644 --- a/apollo-router/src/introspection.rs +++ 
b/apollo-router/src/introspection.rs @@ -25,7 +25,7 @@ impl Introspection { capacity: NonZeroUsize, ) -> Self { Self { - cache: CacheStorage::new(capacity, None, None, "introspection").await, + cache: CacheStorage::new(capacity, None, "introspection").await, planner, } } diff --git a/apollo-router/src/notification.rs b/apollo-router/src/notification.rs index 90652de818..52a0d7477d 100644 --- a/apollo-router/src/notification.rs +++ b/apollo-router/src/notification.rs @@ -10,20 +10,21 @@ use std::task::Poll; use std::time::Duration; use std::time::Instant; -use futures::channel::mpsc; -use futures::channel::mpsc::SendError; -use futures::channel::oneshot; -use futures::channel::oneshot::Canceled; use futures::Sink; -use futures::SinkExt; use futures::Stream; use futures::StreamExt; use pin_project_lite::pin_project; use thiserror::Error; use tokio::sync::broadcast; +use tokio::sync::mpsc; +use tokio::sync::mpsc::error::SendError; +use tokio::sync::mpsc::error::TrySendError; +use tokio::sync::oneshot; +use tokio::sync::oneshot::error::RecvError; use tokio_stream::wrappers::errors::BroadcastStreamRecvError; use tokio_stream::wrappers::BroadcastStream; use tokio_stream::wrappers::IntervalStream; +use tokio_stream::wrappers::ReceiverStream; use crate::graphql; use crate::spec::Schema; @@ -35,15 +36,42 @@ static DEFAULT_MSG_CHANNEL_SIZE: usize = 128; #[derive(Error, Debug)] pub(crate) enum NotifyError { #[error("cannot send data to pubsub")] - SendError(#[from] SendError), + SendError(#[from] SendError), #[error("cannot send data to response stream")] BroadcastSendError(#[from] broadcast::error::SendError), - #[error("cannot send data to pubsub because channel has been closed")] - Canceled(#[from] Canceled), #[error("this topic doesn't exist")] UnknownTopic, } +impl From>> for NotifyError +where + K: Send + Hash + Eq + Clone + 'static, + V: Send + Clone + 'static, +{ + fn from(error: SendError>) -> Self { + error.into() + } +} + +impl From for NotifyError +where + V: Send 
+ Clone + 'static, +{ + fn from(error: RecvError) -> Self { + error.into() + } +} + +impl From>> for NotifyError +where + K: Send + Hash + Eq + Clone + 'static, + V: Send + Clone + 'static, +{ + fn from(error: TrySendError>) -> Self { + error.into() + } +} + type ResponseSender = oneshot::Sender>, broadcast::Receiver>)>>; @@ -124,7 +152,8 @@ where router_broadcasts: Option>, ) -> Notify { let (sender, receiver) = mpsc::channel(NOTIFY_CHANNEL_SIZE); - tokio::task::spawn(task(receiver, ttl, heartbeat_error_message)); + let receiver_stream = ReceiverStream::new(receiver); + tokio::task::spawn(task(receiver_stream, ttl, heartbeat_error_message)); Notify { sender, queue_size, @@ -305,7 +334,7 @@ where // if disconnected, we don't care (the task was stopped) self.sender .try_send(Notification::TryDelete { topic }) - .map_err(|try_send_error| try_send_error.into_send_error().into()) + .map_err(|try_send_error| try_send_error.into()) } #[cfg(test)] @@ -542,10 +571,7 @@ where Poll::Ready(Ok(())) } - fn poll_close( - mut self: Pin<&mut Self>, - _cx: &mut Context<'_>, - ) -> Poll> { + fn poll_close(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll> { let topic = self.handle_guard.topic.clone(); let _ = self .handle_guard @@ -558,7 +584,7 @@ where impl Handle where K: Clone {} async fn task( - mut receiver: mpsc::Receiver>, + mut receiver: ReceiverStream>, ttl: Option, heartbeat_error_message: Option, ) where diff --git a/apollo-router/src/orbiter/mod.rs b/apollo-router/src/orbiter/mod.rs index 922e208730..ef4be4ac0c 100644 --- a/apollo-router/src/orbiter/mod.rs +++ b/apollo-router/src/orbiter/mod.rs @@ -23,7 +23,7 @@ use crate::executable::Opt; use crate::plugin::DynPlugin; use crate::router_factory::RouterSuperServiceFactory; use crate::router_factory::YamlRouterFactory; -use crate::services::router_service::RouterCreator; +use crate::services::router::service::RouterCreator; use crate::services::HasSchema; use crate::spec::Schema; use crate::Configuration; diff --git 
a/apollo-router/src/plugins/authentication/jwks.rs b/apollo-router/src/plugins/authentication/jwks.rs index e99677a7a8..e6f864d9b4 100644 --- a/apollo-router/src/plugins/authentication/jwks.rs +++ b/apollo-router/src/plugins/authentication/jwks.rs @@ -3,6 +3,7 @@ use std::collections::HashSet; use std::str::FromStr; use std::sync::Arc; use std::sync::RwLock; +use std::time::Duration; use futures::future::join_all; use futures::future::select; @@ -23,7 +24,6 @@ use url::Url; use super::CLIENT; use super::DEFAULT_AUTHENTICATION_NETWORK_TIMEOUT; -use crate::plugins::authentication::DEFAULT_AUTHENTICATION_DOWNLOAD_INTERVAL; #[derive(Clone)] pub(super) struct JwksManager { @@ -37,6 +37,7 @@ pub(super) struct JwksConfig { pub(super) url: Url, pub(super) issuer: Option, pub(super) algorithms: Option>, + pub(super) poll_interval: Duration, } #[derive(Clone)] @@ -102,7 +103,7 @@ async fn poll( let jwks_map = jwks_map.clone(); Box::pin( repeat((config, jwks_map)).then(|(config, jwks_map)| async move { - tokio::time::sleep(DEFAULT_AUTHENTICATION_DOWNLOAD_INTERVAL).await; + tokio::time::sleep(config.poll_interval).await; if let Some(jwks) = get_jwks(config.url.clone()).await { if let Ok(mut map) = jwks_map.write() { diff --git a/apollo-router/src/plugins/authentication/mod.rs b/apollo-router/src/plugins/authentication/mod.rs index 5874dd0284..6742c52de7 100644 --- a/apollo-router/src/plugins/authentication/mod.rs +++ b/apollo-router/src/plugins/authentication/mod.rs @@ -4,6 +4,8 @@ use std::collections::HashMap; use std::ops::ControlFlow; use std::str::FromStr; use std::time::Duration; +use std::time::SystemTime; +use std::time::UNIX_EPOCH; use displaydoc::Display; use http::StatusCode; @@ -23,6 +25,7 @@ use once_cell::sync::Lazy; use reqwest::Client; use schemars::JsonSchema; use serde::Deserialize; +use serde_json::Value; use thiserror::Error; use tower::BoxError; use tower::ServiceBuilder; @@ -128,6 +131,13 @@ struct JWTConf { struct JwksConf { /// Retrieve the JWK Set url: 
String, + /// Polling interval for each JWKS endpoint in human-readable format; defaults to 60s + #[serde( + deserialize_with = "humantime_serde::deserialize", + default = "default_poll_interval" + )] + #[schemars(with = "String", default = "default_poll_interval")] + poll_interval: Duration, /// Expected issuer for tokens verified by that JWKS issuer: Option, /// List of accepted algorithms. Possible values are `HS256`, `HS384`, `HS512`, `ES256`, `ES384`, `RS256`, `RS384`, `RS512`, `PS256`, `PS384`, `PS512`, `EdDSA` @@ -163,6 +173,10 @@ fn default_header_value_prefix() -> String { "Bearer".to_string() } +fn default_poll_interval() -> Duration { + DEFAULT_AUTHENTICATION_DOWNLOAD_INTERVAL +} + #[derive(Debug, Default)] struct JWTCriteria { alg: Algorithm, @@ -381,6 +395,7 @@ impl Plugin for AuthenticationPlugin { .algorithms .as_ref() .map(|algs| algs.iter().cloned().collect()), + poll_interval: jwks_conf.poll_interval, }); } @@ -698,6 +713,41 @@ fn decode_jwt( } } +pub(crate) fn jwt_expires_in(context: &Context) -> Duration { + let claims = context + .get(APOLLO_AUTHENTICATION_JWT_CLAIMS) + .map_err(|err| tracing::error!("could not read JWT claims: {err}")) + .ok() + .flatten(); + let ts_opt = claims.as_ref().and_then(|x: &Value| { + if !x.is_object() { + tracing::error!("JWT claims should be an object"); + return None; + } + let claims = x.as_object().expect("claims should be an object"); + let exp = claims.get("exp")?; + if !exp.is_number() { + tracing::error!("JWT 'exp' (expiry) claim should be a number"); + return None; + } + exp.as_i64() + }); + match ts_opt { + Some(ts) => { + let now = SystemTime::now() + .duration_since(UNIX_EPOCH) + .expect("we should not run before EPOCH") + .as_secs() as i64; + if now < ts { + Duration::from_secs((ts - now) as u64) + } else { + Duration::ZERO + } + } + None => Duration::MAX, + } +} + // This macro allows us to use it in our plugin registry! // register_plugin takes a group name, and a plugin name. 
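The new `jwt_expires_in` helper above reads the `exp` claim from the request context and converts it into a remaining-validity `Duration`, which the subscription code can use as a deadline. A std-only sketch of just the expiry arithmetic (the claim extraction from `Context` is omitted; `expires_in` is an illustrative name):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Given the JWT `exp` claim (seconds since the Unix epoch), return how long
// the token remains valid. `None` — no readable claim — maps to
// `Duration::MAX`, i.e. the connection is never cut short on the token's
// account, matching the fallback in `jwt_expires_in`.
fn expires_in(exp_claim: Option<i64>) -> Duration {
    match exp_claim {
        Some(ts) => {
            let now = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("we should not run before EPOCH")
                .as_secs() as i64;
            if now < ts {
                Duration::from_secs((ts - now) as u64)
            } else {
                // Already expired: terminate immediately.
                Duration::ZERO
            }
        }
        None => Duration::MAX,
    }
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs() as i64;
    println!("{:?}", expires_in(Some(now + 3600)));
    println!("{:?}", expires_in(Some(now - 10)));
    println!("{:?}", expires_in(None));
}
```

Note the saturating behavior: an `exp` in the past yields `Duration::ZERO` rather than a panic from subtracting into a negative value.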
// diff --git a/apollo-router/src/plugins/authentication/tests.rs b/apollo-router/src/plugins/authentication/tests.rs index 9a75703cc9..01c813423e 100644 --- a/apollo-router/src/plugins/authentication/tests.rs +++ b/apollo-router/src/plugins/authentication/tests.rs @@ -602,6 +602,7 @@ async fn build_jwks_search_components() -> JwksManager { url, issuer: None, algorithms: None, + poll_interval: Duration::from_secs(60), }); } @@ -711,6 +712,7 @@ fn make_manager(jwk: &Jwk, issuer: Option) -> JwksManager { url: url.clone(), issuer, algorithms: None, + poll_interval: Duration::from_secs(60), }]; let map = HashMap::from([(url, jwks); 1]); @@ -907,6 +909,7 @@ async fn it_rejects_key_with_restricted_algorithm() { url, issuer: None, algorithms: Some(HashSet::from([Algorithm::RS256])), + poll_interval: Duration::from_secs(60), }); } @@ -937,6 +940,7 @@ async fn it_rejects_and_accepts_keys_with_restricted_algorithms_and_unknown_jwks url, issuer: None, algorithms: Some(HashSet::from([Algorithm::RS256])), + poll_interval: Duration::from_secs(60), }); } diff --git a/apollo-router/src/plugins/authorization/authenticated.rs b/apollo-router/src/plugins/authorization/authenticated.rs index 3b3875ef7a..1f8298fbb3 100644 --- a/apollo-router/src/plugins/authorization/authenticated.rs +++ b/apollo-router/src/plugins/authorization/authenticated.rs @@ -13,6 +13,7 @@ use crate::json_ext::PathElement; use crate::spec::query::transform; use crate::spec::query::traverse; use crate::spec::Schema; +use crate::spec::TYPENAME; pub(crate) const AUTHENTICATED_DIRECTIVE_NAME: &str = "authenticated"; pub(crate) const AUTHENTICATED_SPEC_URL: &str = "https://specs.apollo.dev/authenticated/v0.1"; @@ -129,6 +130,9 @@ pub(crate) struct AuthenticatedVisitor<'a> { implementers_map: &'a HashMap>, pub(crate) query_requires_authentication: bool, pub(crate) unauthorized_paths: Vec, + // store the error paths from fragments so we can add them at + // the point of application + fragments_unauthorized_paths: 
HashMap<&'a ast::Name, Vec>, current_path: Path, authenticated_directive_name: String, dry_run: bool, @@ -148,6 +152,7 @@ impl<'a> AuthenticatedVisitor<'a> { dry_run, query_requires_authentication: false, unauthorized_paths: Vec::new(), + fragments_unauthorized_paths: HashMap::new(), current_path: Path::default(), authenticated_directive_name: Schema::directive_name( schema, @@ -175,13 +180,16 @@ impl<'a> AuthenticatedVisitor<'a> { field_def: &ast::FieldDefinition, node: &ast::Field, ) -> bool { - // if all selections under the interface field are fragments with type conditions + // we can request __typename outside of fragments even if the types have different + // authorization requirements + if node.name.as_str() == TYPENAME { + return false; + } + // if all selections under the interface field are __typename or fragments with type conditions // then we don't need to check that they have the same authorization requirements - if node.selection_set.iter().all(|sel| { - matches!( - sel, - ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) - ) + if node.selection_set.iter().all(|sel| match sel { + ast::Selection::Field(f) => f.name == TYPENAME, + ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) => true, }) { return false; } @@ -338,22 +346,50 @@ impl<'a> transform::Visitor for AuthenticatedVisitor<'a> { .get(&node.type_condition) .is_some_and(|type_definition| self.is_type_authenticated(type_definition)); - if !fragment_requires_authentication || self.dry_run { + let current_unauthorized_paths_index = self.unauthorized_paths.len(); + let res = if !fragment_requires_authentication || self.dry_run { transform::fragment_definition(self, node) } else { + self.unauthorized_paths.push(self.current_path.clone()); Ok(None) + }; + + if self.unauthorized_paths.len() > current_unauthorized_paths_index { + if let Some((name, _)) = self.fragments.get_key_value(&node.name) { + self.fragments_unauthorized_paths.insert( + name, + 
self.unauthorized_paths + .split_off(current_unauthorized_paths_index), + ); + } } + + if let Ok(None) = res { + self.fragments.remove(&node.name); + } + + res } fn fragment_spread( &mut self, node: &ast::FragmentSpread, ) -> Result, BoxError> { - let condition = &self - .fragments - .get(&node.fragment_name) - .ok_or("MissingFragment")? - .type_condition; + // record the fragment errors at the point of application + if let Some(paths) = self.fragments_unauthorized_paths.get(&node.fragment_name) { + for path in paths { + let path = self.current_path.join(path); + self.unauthorized_paths.push(path); + } + } + + let fragment = match self.fragments.get(&node.fragment_name) { + Some(fragment) => fragment, + None => return Ok(None), + }; + + let condition = &fragment.type_condition; + self.current_path .push(PathElement::Fragment(condition.as_str().into())); @@ -707,6 +743,32 @@ mod tests { }); } + #[test] + fn fragment_fields() { + static QUERY: &str = r#" + query { + topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + "#; + + let (doc, paths) = filter(BASIC_SCHEMA, QUERY); + + insta::assert_display_snapshot!(TestResult { + query: QUERY, + result: doc, + paths + }); + } + #[test] fn defer() { static QUERY: &str = r#" @@ -1173,6 +1235,103 @@ mod tests { let _ = filter(ALTERNATIVE_DIRECTIVE_SCHEMA, QUERY); } + #[test] + fn interface_typename() { + static SCHEMA: &str = r#" + schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) + @link(url: "https://specs.apollo.dev/authenticated/v0.1", for: SECURITY) + { + query: Query + } + directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @authenticated on OBJECT | FIELD_DEFINITION | INTERFACE | SCALAR | ENUM + directive @defer on INLINE_FRAGMENT | FRAGMENT_SPREAD + scalar link__Import + enum link__Purpose { + """ + `SECURITY` features provide metadata 
necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION + } + type Query { + post(id: ID!): Post + } + + interface Post { + id: ID! + author: String! + title: String! + content: String! + } + + type Stats { + views: Int + } + + type PublicBlog implements Post { + id: ID! + author: String! + title: String! + content: String! + stats: Stats @authenticated + } + + type PrivateBlog implements Post @authenticated { + id: ID! + author: String! + title: String! + content: String! + publishAt: String + } + "#; + + static QUERY: &str = r#" + query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } + } + "#; + + let (doc, paths) = filter(SCHEMA, QUERY); + + insta::assert_display_snapshot!(TestResult { + query: QUERY, + result: doc, + paths + }); + + static QUERY2: &str = r#" + query Anonymous { + post(id: "1") { + __typename + ... on PublicBlog { + __typename + title + } + } + } + "#; + + let (doc, paths) = filter(SCHEMA, QUERY2); + + insta::assert_display_snapshot!(TestResult { + query: QUERY2, + result: doc, + paths + }); + } + const SCHEMA: &str = r#"schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) diff --git a/apollo-router/src/plugins/authorization/policy.rs b/apollo-router/src/plugins/authorization/policy.rs index 5e23cd5392..a165ea1ed1 100644 --- a/apollo-router/src/plugins/authorization/policy.rs +++ b/apollo-router/src/plugins/authorization/policy.rs @@ -19,6 +19,7 @@ use crate::json_ext::PathElement; use crate::spec::query::transform; use crate::spec::query::traverse; use crate::spec::Schema; +use crate::spec::TYPENAME; pub(crate) struct PolicyExtractionVisitor<'a> { schema: &'a schema::Schema, @@ -149,6 +150,9 @@ pub(crate) struct PolicyFilteringVisitor<'a> { request_policies: HashSet, pub(crate) query_requires_policies: bool, pub(crate) unauthorized_paths: Vec, + 
// store the error paths from fragments so we can add them at + // the point of application + fragments_unauthorized_paths: HashMap<&'a ast::Name, Vec>, current_path: Path, policy_directive_name: String, } @@ -188,6 +192,7 @@ impl<'a> PolicyFilteringVisitor<'a> { request_policies: successful_policies, query_requires_policies: false, unauthorized_paths: vec![], + fragments_unauthorized_paths: HashMap::new(), current_path: Path::default(), policy_directive_name: Schema::directive_name( schema, @@ -246,13 +251,16 @@ impl<'a> PolicyFilteringVisitor<'a> { field_def: &ast::FieldDefinition, node: &ast::Field, ) -> bool { - // if all selections under the interface field are fragments with type conditions + // we can request __typename outside of fragments even if the types have different + // authorization requirements + if node.name.as_str() == TYPENAME { + return false; + } + // if all selections under the interface field are __typename or fragments with type conditions // then we don't need to check that they have the same authorization requirements - if node.selection_set.iter().all(|sel| { - matches!( - sel, - ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) - ) + if node.selection_set.iter().all(|sel| match sel { + ast::Selection::Field(f) => f.name == TYPENAME, + ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) => true, }) { return false; } @@ -459,22 +467,51 @@ impl<'a> transform::Visitor for PolicyFilteringVisitor<'a> { .get(&node.type_condition) .is_some_and(|ty| self.is_type_authorized(ty)); - if fragment_is_authorized || self.dry_run { + let current_unauthorized_paths_index = self.unauthorized_paths.len(); + + let res = if fragment_is_authorized || self.dry_run { transform::fragment_definition(self, node) } else { + self.unauthorized_paths.push(self.current_path.clone()); Ok(None) + }; + + if self.unauthorized_paths.len() > current_unauthorized_paths_index { + if let Some((name, _)) = 
self.fragments.get_key_value(&node.name) { + self.fragments_unauthorized_paths.insert( + name, + self.unauthorized_paths + .split_off(current_unauthorized_paths_index), + ); + } } + + if let Ok(None) = res { + self.fragments.remove(&node.name); + } + + res } fn fragment_spread( &mut self, node: &ast::FragmentSpread, ) -> Result, BoxError> { - let condition = &self - .fragments - .get(&node.fragment_name) - .ok_or("MissingFragment")? - .type_condition; + // record the fragment errors at the point of application + if let Some(paths) = self.fragments_unauthorized_paths.get(&node.fragment_name) { + for path in paths { + let path = self.current_path.join(path); + self.unauthorized_paths.push(path); + } + } + + let fragment = match self.fragments.get(&node.fragment_name) { + Some(fragment) => fragment, + None => return Ok(None), + }; + + let condition = &fragment.type_condition; + self.current_path .push(PathElement::Fragment(condition.as_str().into())); @@ -981,6 +1018,35 @@ mod tests { }); } + #[test] + fn fragment_fields() { + static QUERY: &str = r#" + query { + topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + "#; + + let extracted_policies = extract(BASIC_SCHEMA, QUERY); + let (doc, paths) = filter(BASIC_SCHEMA, QUERY, HashSet::new()); + + insta::assert_display_snapshot!(TestResult { + query: QUERY, + extracted_policies: &extracted_policies, + successful_policies: Vec::new(), + result: doc, + paths + }); + } + #[test] fn or_and() { static QUERY: &str = r#" @@ -1415,4 +1481,96 @@ mod tests { paths }); } + + #[test] + fn interface_typename() { + static SCHEMA: &str = r#" + schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) + @link(url: "https://specs.apollo.dev/policy/v0.1", for: SECURITY) + { + query: Query + } + directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @policy(policies: 
[String]) on OBJECT | FIELD_DEFINITION | INTERFACE | SCALAR | ENUM + directive @defer on INLINE_FRAGMENT | FRAGMENT_SPREAD + scalar link__Import + enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + + type Query { + post(id: ID!): Post + } + + interface Post { + id: ID! + author: String! + title: String! + content: String! + } + + type Stats { + views: Int + } + + type PublicBlog implements Post { + id: ID! + author: String! + title: String! + content: String! + stats: Stats @policy(policies: ["a"]) + } + + type PrivateBlog implements Post @policy(policies: ["b"]) { + id: ID! + author: String! + title: String! + content: String! + publishAt: String + } + "#; + + static QUERY: &str = r#" + query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } + } + "#; + + let (doc, paths) = filter(SCHEMA, QUERY, HashSet::new()); + + insta::assert_display_snapshot!(doc); + insta::assert_debug_snapshot!(paths); + + static QUERY2: &str = r#" + query Anonymous { + post(id: "1") { + __typename + ... 
on PublicBlog { + __typename + title + } + } + } + "#; + + let (doc, paths) = filter(SCHEMA, QUERY2, HashSet::new()); + + insta::assert_display_snapshot!(doc); + insta::assert_debug_snapshot!(paths); + } } diff --git a/apollo-router/src/plugins/authorization/scopes.rs b/apollo-router/src/plugins/authorization/scopes.rs index 6a9e726663..34f57ee4e8 100644 --- a/apollo-router/src/plugins/authorization/scopes.rs +++ b/apollo-router/src/plugins/authorization/scopes.rs @@ -19,6 +19,7 @@ use crate::json_ext::PathElement; use crate::spec::query::transform; use crate::spec::query::traverse; use crate::spec::Schema; +use crate::spec::TYPENAME; pub(crate) struct ScopeExtractionVisitor<'a> { schema: &'a schema::Schema, @@ -165,6 +166,9 @@ pub(crate) struct ScopeFilteringVisitor<'a> { request_scopes: HashSet, pub(crate) query_requires_scopes: bool, pub(crate) unauthorized_paths: Vec, + // store the error paths from fragments so we can add them at + // the point of application + fragments_unauthorized_paths: HashMap<&'a ast::Name, Vec>, current_path: Path, requires_scopes_directive_name: String, dry_run: bool, @@ -186,6 +190,7 @@ impl<'a> ScopeFilteringVisitor<'a> { dry_run, query_requires_scopes: false, unauthorized_paths: vec![], + fragments_unauthorized_paths: HashMap::new(), current_path: Path::default(), requires_scopes_directive_name: Schema::directive_name( schema, @@ -244,13 +249,17 @@ impl<'a> ScopeFilteringVisitor<'a> { field_def: &ast::FieldDefinition, node: &ast::Field, ) -> bool { - // if all selections under the interface field are fragments with type conditions + // we can request __typename outside of fragments even if the types have different + // authorization requirements + if node.name.as_str() == TYPENAME { + return false; + } + + // if all selections under the interface field are __typename or fragments with type conditions + // then
we don't need to check that they have the same authorization requirements - if node.selection_set.iter().all(|sel| { - matches!( - sel, - ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) - ) + if node.selection_set.iter().all(|sel| match sel { + ast::Selection::Field(f) => f.name == TYPENAME, + ast::Selection::FragmentSpread(_) | ast::Selection::InlineFragment(_) => true, }) { return false; } @@ -460,28 +474,51 @@ impl<'a> transform::Visitor for ScopeFilteringVisitor<'a> { .get(&node.type_condition) .is_some_and(|ty| self.is_type_authorized(ty)); - // FIXME: if a field was removed inside a fragment definition, then we should add an unauthorized path - // starting at the fragment spread, instead of starting at the definition. - // If we modified the transform visitor implementation to modify the fragment definitions before the - // operations, we would be able to store the list of unauthorized paths per fragment, and at the point - // of application, generate unauthorized paths starting at the operation root + let current_unauthorized_paths_index = self.unauthorized_paths.len(); - if fragment_is_authorized || self.dry_run { + let res = if fragment_is_authorized || self.dry_run { transform::fragment_definition(self, node) } else { + self.unauthorized_paths.push(self.current_path.clone()); Ok(None) + }; + + if self.unauthorized_paths.len() > current_unauthorized_paths_index { + if let Some((name, _)) = self.fragments.get_key_value(&node.name) { + self.fragments_unauthorized_paths.insert( + name, + self.unauthorized_paths + .split_off(current_unauthorized_paths_index), + ); + } + } + + if let Ok(None) = res { + self.fragments.remove(&node.name); } + + res } fn fragment_spread( &mut self, node: &ast::FragmentSpread, ) -> Result, BoxError> { - let condition = &self - .fragments - .get(&node.fragment_name) - .ok_or("MissingFragment")? 
- .type_condition; + // record the fragment errors at the point of application + if let Some(paths) = self.fragments_unauthorized_paths.get(&node.fragment_name) { + for path in paths { + let path = self.current_path.join(path); + self.unauthorized_paths.push(path); + } + } + + let fragment = match self.fragments.get(&node.fragment_name) { + Some(fragment) => fragment, + None => return Ok(None), + }; + + let condition = &fragment.type_condition; + self.current_path .push(PathElement::Fragment(condition.as_str().into())); @@ -1054,6 +1091,35 @@ mod tests { }); } + #[test] + fn fragment_fields() { + static QUERY: &str = r#" + query { + topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + "#; + + let extracted_scopes = extract(BASIC_SCHEMA, QUERY); + let (doc, paths) = filter(BASIC_SCHEMA, QUERY, HashSet::new()); + + insta::assert_display_snapshot!(TestResult { + query: QUERY, + extracted_scopes: &extracted_scopes, + scopes: Vec::new(), + result: doc, + paths + }); + } + static INTERFACE_SCHEMA: &str = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @@ -1442,4 +1508,109 @@ mod tests { paths }); } + + #[test] + fn interface_typename() { + static SCHEMA: &str = r#" + schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) + @link(url: "https://specs.apollo.dev/requiresScopes/v0.1", for: SECURITY) + { + query: Query + } + directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @requiresScopes(scopes: [[String!]!]!) on OBJECT | FIELD_DEFINITION | INTERFACE | SCALAR | ENUM + directive @defer on INLINE_FRAGMENT | FRAGMENT_SPREAD + scalar link__Import + enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. 
+ """ + EXECUTION + } + type Query { + post(id: ID!): Post + } + + interface Post { + id: ID! + author: String! + title: String! + content: String! + } + + type Stats { + views: Int + } + + type PublicBlog implements Post { + id: ID! + author: String! + title: String! + content: String! + stats: Stats @requiresScopes(scopes: [["a"]]) + } + + type PrivateBlog implements Post @requiresScopes(scopes: [["b"]]) { + id: ID! + author: String! + title: String! + content: String! + publishAt: String + } + "#; + + static QUERY: &str = r#" + query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } + } + "#; + + let extracted_scopes: BTreeSet = extract(SCHEMA, QUERY); + + let (doc, paths) = filter(SCHEMA, QUERY, HashSet::new()); + + insta::assert_display_snapshot!(TestResult { + query: QUERY, + extracted_scopes: &extracted_scopes, + scopes: Vec::new(), + result: doc, + paths + }); + + static QUERY2: &str = r#" + query Anonymous { + post(id: "1") { + __typename + ... on PublicBlog { + __typename + title + } + } + } + "#; + + let extracted_scopes: BTreeSet = extract(SCHEMA, QUERY2); + + let (doc, paths) = filter(SCHEMA, QUERY2, HashSet::new()); + + insta::assert_display_snapshot!(TestResult { + query: QUERY2, + extracted_scopes: &extracted_scopes, + scopes: Vec::new(), + result: doc, + paths + }); + } } diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__fragment_fields.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__fragment_fields.snap new file mode 100644 index 0000000000..8fa8aaf54d --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__fragment_fields.snap @@ -0,0 +1,27 @@ +--- +source: apollo-router/src/plugins/authorization/authenticated.rs +expression: "TestResult { query: QUERY, result: doc, paths }" +--- +query: + + query { + 
topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + +filtered: +{ + topProducts { + type + } +} + +paths: ["/topProducts/reviews/@"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_fragment.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_fragment.snap index b6102639fc..8b46c2a62c 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_fragment.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_fragment.snap @@ -28,4 +28,4 @@ filtered: } } -paths: ["/itf/... on User"] +paths: ["/itf"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename-2.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename-2.snap new file mode 100644 index 0000000000..123bc5ba87 --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename-2.snap @@ -0,0 +1,28 @@ +--- +source: apollo-router/src/plugins/authorization/authenticated.rs +expression: "TestResult { query: QUERY2, result: doc, paths }" +--- +query: + + query Anonymous { + post(id: "1") { + __typename + ... on PublicBlog { + __typename + title + } + } + } + +filtered: +query Anonymous { + post(id: "1") { + __typename + ... 
on PublicBlog { + __typename + title + } + } +} + +paths: [] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename.snap new file mode 100644 index 0000000000..a9949e1404 --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__interface_typename.snap @@ -0,0 +1,26 @@ +--- +source: apollo-router/src/plugins/authorization/authenticated.rs +expression: "TestResult { query: QUERY, result: doc, paths }" +--- +query: + + query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } + } + +filtered: +query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } +} + +paths: [] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__fragment_fields.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__fragment_fields.snap new file mode 100644 index 0000000000..f11b9fa59f --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__fragment_fields.snap @@ -0,0 +1,29 @@ +--- +source: apollo-router/src/plugins/authorization/policy.rs +expression: "TestResult {\n query: QUERY,\n extracted_policies: &extracted_policies,\n successful_policies: Vec::new(),\n result: doc,\n paths,\n}" +--- +query: + + query { + topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + +extracted_policies: {"review"} +successful policies: [] +filtered: +{ + topProducts { + type + } +} + +paths: ["/topProducts/reviews/@"] diff --git 
a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment-2.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment-2.snap index fd2f876120..0ae4015930 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment-2.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment-2.snap @@ -21,7 +21,11 @@ query: extracted_policies: {"read user", "read username"} successful policies: ["read user", "read username"] filtered: -{ +fragment F on User { + name +} + +query { topProducts { type } @@ -31,8 +35,4 @@ filtered: } } -fragment F on User { - name -} - paths: [] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment.snap index c75621f2a2..f47f0abf18 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_fragment.snap @@ -30,4 +30,4 @@ filtered: } } -paths: ["/itf/... 
on User"] +paths: ["/itf"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-2.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-2.snap new file mode 100644 index 0000000000..8dd6c4970f --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-2.snap @@ -0,0 +1,5 @@ +--- +source: apollo-router/src/plugins/authorization/policy.rs +expression: paths +--- +[] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-3.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-3.snap new file mode 100644 index 0000000000..1f9b12ad5c --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-3.snap @@ -0,0 +1,14 @@ +--- +source: apollo-router/src/plugins/authorization/policy.rs +expression: doc +--- +query Anonymous { + post(id: "1") { + __typename + ... 
on PublicBlog { + __typename + title + } + } +} + diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-4.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-4.snap new file mode 100644 index 0000000000..8dd6c4970f --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename-4.snap @@ -0,0 +1,5 @@ +--- +source: apollo-router/src/plugins/authorization/policy.rs +expression: paths +--- +[] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename.snap new file mode 100644 index 0000000000..b14d6c3ed8 --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__policy__tests__interface_typename.snap @@ -0,0 +1,13 @@ +--- +source: apollo-router/src/plugins/authorization/policy.rs +expression: doc +--- +query Anonymous { + post(id: "1") { + ... 
on PublicBlog { + __typename + title + } + } +} + diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__fragment_fields.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__fragment_fields.snap new file mode 100644 index 0000000000..7b62a09474 --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__fragment_fields.snap @@ -0,0 +1,29 @@ +--- +source: apollo-router/src/plugins/authorization/scopes.rs +expression: "TestResult {\n query: QUERY,\n extracted_scopes: &extracted_scopes,\n scopes: Vec::new(),\n result: doc,\n paths,\n}" +--- +query: + + query { + topProducts { + type + ...F + } + } + + fragment F on Product { + reviews { + body + } + } + +extracted_scopes: {"review"} +request scopes: [] +filtered: +{ + topProducts { + type + } +} + +paths: ["/topProducts/reviews/@"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-2.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-2.snap index 1d3ce16f04..1028c92ffc 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-2.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-2.snap @@ -22,7 +22,11 @@ query: extracted_scopes: {"read:user", "read:username"} request scopes: ["read:user"] filtered: -{ +fragment F on User { + id2: id +} + +query { topProducts { type } @@ -32,8 +36,4 @@ filtered: } } -fragment F on User { - id2: id -} - -paths: ["/name"] +paths: ["/itf/name"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-3.snap 
b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-3.snap index 410f0562cb..9496930257 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-3.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment-3.snap @@ -22,7 +22,12 @@ query: extracted_scopes: {"read:user", "read:username"} request scopes: ["read:user", "read:username"] filtered: -{ +fragment F on User { + id2: id + name +} + +query { topProducts { type } @@ -32,9 +37,4 @@ filtered: } } -fragment F on User { - id2: id - name -} - paths: [] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment.snap index 24f991e076..d41f868248 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_fragment.snap @@ -31,4 +31,4 @@ filtered: } } -paths: ["/itf/... 
on User"] +paths: ["/itf"] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename-2.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename-2.snap new file mode 100644 index 0000000000..e6bcfeafde --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename-2.snap @@ -0,0 +1,30 @@ +--- +source: apollo-router/src/plugins/authorization/scopes.rs +expression: "TestResult {\n query: QUERY2,\n extracted_scopes: &extracted_scopes,\n scopes: Vec::new(),\n result: doc,\n paths,\n}" +--- +query: + + query Anonymous { + post(id: "1") { + __typename + ... on PublicBlog { + __typename + title + } + } + } + +extracted_scopes: {} +request scopes: [] +filtered: +query Anonymous { + post(id: "1") { + __typename + ... on PublicBlog { + __typename + title + } + } +} + +paths: [] diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename.snap new file mode 100644 index 0000000000..a290337070 --- /dev/null +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__scopes__tests__interface_typename.snap @@ -0,0 +1,28 @@ +--- +source: apollo-router/src/plugins/authorization/scopes.rs +expression: "TestResult {\n query: QUERY,\n extracted_scopes: &extracted_scopes,\n scopes: Vec::new(),\n result: doc,\n paths,\n}" +--- +query: + + query Anonymous { + post(id: "1") { + ... on PublicBlog { + __typename + title + } + } + } + +extracted_scopes: {} +request scopes: [] +filtered: +query Anonymous { + post(id: "1") { + ... 
on PublicBlog { + __typename + title + } + } +} + +paths: [] diff --git a/apollo-router/src/plugins/coprocessor/test.rs b/apollo-router/src/plugins/coprocessor/test.rs index c5f7b68de0..ed7d551684 100644 --- a/apollo-router/src/plugins/coprocessor/test.rs +++ b/apollo-router/src/plugins/coprocessor/test.rs @@ -24,7 +24,6 @@ mod tests { use crate::services::external::Externalizable; use crate::services::external::PipelineStep; use crate::services::external::EXTERNALIZABLE_VERSION; - use crate::services::router_service; use crate::services::subgraph; use crate::services::supergraph; @@ -713,7 +712,7 @@ mod tests { response: Default::default(), }; - let mock_router_service = router_service::from_supergraph_mock_callback(move |req| { + let mock_router_service = router::service::from_supergraph_mock_callback(move |req| { // Let's assert that the router request has been transformed as it should have. assert_eq!( req.supergraph_request.headers().get("cookie").unwrap(), @@ -831,7 +830,7 @@ mod tests { response: Default::default(), }; - let mock_router_service = router_service::from_supergraph_mock_callback(move |req| { + let mock_router_service = router::service::from_supergraph_mock_callback(move |req| { // Let's assert that the router request has been transformed as it should have. 
assert_eq!( req.supergraph_request.headers().get("cookie").unwrap(), @@ -1125,7 +1124,7 @@ mod tests { request: Default::default(), }; - let mock_router_service = router_service::from_supergraph_mock_callback(move |req| { + let mock_router_service = router::service::from_supergraph_mock_callback(move |req| { Ok(supergraph::Response::builder() .data(json!("{ \"test\": 1234_u32 }")) .context(req.context) diff --git a/apollo-router/src/plugins/expose_query_plan.rs b/apollo-router/src/plugins/expose_query_plan.rs index d76fb2abef..78d771554c 100644 --- a/apollo-router/src/plugins/expose_query_plan.rs +++ b/apollo-router/src/plugins/expose_query_plan.rs @@ -121,7 +121,6 @@ register_plugin!("experimental", "expose_query_plan", ExposeQueryPlan); #[cfg(test)] mod tests { - use once_cell::sync::Lazy; use serde_json_bytes::ByteString; use serde_json_bytes::Value; use tower::Service; @@ -132,19 +131,6 @@ mod tests { use crate::plugin::test::MockSubgraph; use crate::MockedSubgraphs; - static EXPECTED_RESPONSE_WITH_QUERY_PLAN: Lazy = Lazy::new(|| { - serde_json::from_str(include_str!( - "../../tests/fixtures/expected_response_with_queryplan.json" - )) - .unwrap() - }); - static EXPECTED_RESPONSE_WITHOUT_QUERY_PLAN: Lazy = Lazy::new(|| { - serde_json::from_str(include_str!( - "../../tests/fixtures/expected_response_without_queryplan.json" - )) - .unwrap() - }); - static VALID_QUERY: &str = r#"query TopProducts($first: Int) { topProducts(first: $first) { upc name reviews { id product { name } author { id name } } } }"#; async fn build_mock_supergraph(config: serde_json::Value) -> supergraph::BoxCloneService { @@ -204,9 +190,8 @@ mod tests { async fn execute_supergraph_test( query: &str, - body: &Response, mut supergraph_service: supergraph::BoxCloneService, - ) { + ) -> Response { let request = supergraph::Request::fake_builder() .query(query.to_string()) .variable("first", 2usize) @@ -214,7 +199,7 @@ mod tests { .build() .expect("expecting valid request"); - let response = 
supergraph_service + supergraph_service .ready() .await .unwrap() @@ -223,19 +208,13 @@ mod tests { .unwrap() .next_response() .await - .unwrap(); - - assert_eq!( - serde_json::to_string(&response).unwrap(), - serde_json::to_string(body).unwrap() - ); + .unwrap() } #[tokio::test] async fn it_expose_query_plan() { - execute_supergraph_test( + let response = execute_supergraph_test( VALID_QUERY, - &EXPECTED_RESPONSE_WITH_QUERY_PLAN, build_mock_supergraph(serde_json::json! {{ "plugins": { "experimental.expose_query_plan": true @@ -244,10 +223,11 @@ mod tests { .await, ) .await; + insta::assert_json_snapshot!(serde_json::to_value(response).unwrap()); + // let's try that again - execute_supergraph_test( + let response = execute_supergraph_test( VALID_QUERY, - &EXPECTED_RESPONSE_WITH_QUERY_PLAN, build_mock_supergraph(serde_json::json! {{ "plugins": { "experimental.expose_query_plan": true @@ -256,6 +236,8 @@ mod tests { .await, ) .await; + + insta::assert_json_snapshot!(serde_json::to_value(response).unwrap()); } #[tokio::test] @@ -266,11 +248,8 @@ mod tests { } }}) .await; - execute_supergraph_test( - VALID_QUERY, - &EXPECTED_RESPONSE_WITHOUT_QUERY_PLAN, - supergraph, - ) - .await; + let response = execute_supergraph_test(VALID_QUERY, supergraph).await; + + insta::assert_json_snapshot!(serde_json::to_value(response).unwrap()); } } diff --git a/apollo-router/src/plugins/headers.rs b/apollo-router/src/plugins/headers.rs index 074cafd5e8..575ad221ac 100644 --- a/apollo-router/src/plugins/headers.rs +++ b/apollo-router/src/plugins/headers.rs @@ -373,9 +373,15 @@ where default, }) => { let headers = req.subgraph_request.headers_mut(); - let value = req.supergraph_request.headers().get(named); - if let Some(value) = value.or(default.as_ref()) { - headers.insert(rename.as_ref().unwrap_or(named), value.clone()); + let values = req.supergraph_request.headers().get_all(named); + if values.iter().count() == 0 { + if let Some(default) = default { + 
headers.append(rename.as_ref().unwrap_or(named), default.clone()); + } + } else { + for value in values { + headers.append(rename.as_ref().unwrap_or(named), value.clone()); + } } } Operation::Propagate(Propagate::Matching { matching }) => { @@ -387,7 +393,7 @@ where !RESERVED_HEADERS.contains(name) && matching.is_match(name.as_str()) }) .for_each(|(name, value)| { - headers.insert(name, value.clone()); + headers.append(name, value.clone()); }); } } @@ -650,6 +656,7 @@ mod test { ("ac", "vac"), ("da", "vda"), ("db", "vdb"), + ("db", "vdb2"), ]) }) .returning(example_response); @@ -762,6 +769,7 @@ mod test { .header("da", "vda") .header("db", "vdb") .header("db", "vdb") + .header("db", "vdb2") .header(HOST, "host") .header(CONTENT_LENGTH, "2") .header(CONTENT_TYPE, "graphql") diff --git a/apollo-router/src/plugins/include_subgraph_errors.rs b/apollo-router/src/plugins/include_subgraph_errors.rs index 8f66888243..72abeac89e 100644 --- a/apollo-router/src/plugins/include_subgraph_errors.rs +++ b/apollo-router/src/plugins/include_subgraph_errors.rs @@ -95,7 +95,7 @@ mod test { use crate::services::layers::persisted_queries::PersistedQueryLayer; use crate::services::layers::query_analysis::QueryAnalysisLayer; use crate::services::router; - use crate::services::router_service::RouterCreator; + use crate::services::router::service::RouterCreator; use crate::services::HasSchema; use crate::services::PluggableSupergraphServiceBuilder; use crate::services::SupergraphRequest; diff --git a/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_doesnt_expose_query_plan.snap b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_doesnt_expose_query_plan.snap new file mode 100644 index 0000000000..19bfb8dd04 --- /dev/null +++ b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_doesnt_expose_query_plan.snap @@ -0,0 +1,52 @@ +--- +source: 
apollo-router/src/plugins/expose_query_plan.rs +expression: "serde_json::to_value(response).unwrap()" +--- +{ + "data": { + "topProducts": [ + { + "upc": "1", + "name": "Table", + "reviews": [ + { + "id": "1", + "product": { + "name": "Table" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + }, + { + "id": "4", + "product": { + "name": "Table" + }, + "author": { + "id": "2", + "name": "Alan Turing" + } + } + ] + }, + { + "upc": "2", + "name": "Couch", + "reviews": [ + { + "id": "2", + "product": { + "name": "Couch" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + } + ] + } + ] + } +} diff --git a/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan-2.snap b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan-2.snap new file mode 100644 index 0000000000..8aec49ee28 --- /dev/null +++ b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan-2.snap @@ -0,0 +1,191 @@ +--- +source: apollo-router/src/plugins/expose_query_plan.rs +expression: "serde_json::to_value(response).unwrap()" +--- +{ + "data": { + "topProducts": [ + { + "upc": "1", + "name": "Table", + "reviews": [ + { + "id": "1", + "product": { + "name": "Table" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + }, + { + "id": "4", + "product": { + "name": "Table" + }, + "author": { + "id": "2", + "name": "Alan Turing" + } + } + ] + }, + { + "upc": "2", + "name": "Couch", + "reviews": [ + { + "id": "2", + "product": { + "name": "Couch" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + } + ] + } + ] + }, + "extensions": { + "apolloQueryPlan": { + "object": { + "kind": "QueryPlan", + "node": { + "kind": "Sequence", + "nodes": [ + { + "kind": "Fetch", + "serviceName": "products", + "variableUsages": [ + "first" + ], + "operation": "query 
TopProducts__products__0($first:Int){topProducts(first:$first){__typename upc name}}", + "operationName": "TopProducts__products__0", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + }, + { + "kind": "Flatten", + "path": [ + "topProducts", + "@" + ], + "node": { + "kind": "Fetch", + "serviceName": "reviews", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "Product", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "upc" + } + ] + } + ], + "variableUsages": [], + "operation": "query TopProducts__reviews__1($representations:[_Any!]!){_entities(representations:$representations){...on Product{reviews{id product{__typename upc}author{__typename id}}}}}", + "operationName": "TopProducts__reviews__1", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + }, + { + "kind": "Parallel", + "nodes": [ + { + "kind": "Flatten", + "path": [ + "topProducts", + "@", + "reviews", + "@", + "product" + ], + "node": { + "kind": "Fetch", + "serviceName": "products", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "Product", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "upc" + } + ] + } + ], + "variableUsages": [], + "operation": "query TopProducts__products__2($representations:[_Any!]!){_entities(representations:$representations){...on Product{name}}}", + "operationName": "TopProducts__products__2", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + }, + { + "kind": "Flatten", + "path": [ + "topProducts", + "@", + "reviews", + "@", + "author" + ], + "node": { + "kind": "Fetch", + "serviceName": "accounts", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "User", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "id" + } + ] + } + ], + 
"variableUsages": [], + "operation": "query TopProducts__accounts__3($representations:[_Any!]!){_entities(representations:$representations){...on User{name}}}", + "operationName": "TopProducts__accounts__3", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + } + ] + } + ] + } + }, + "text": "QueryPlan {\n Sequence {\n Fetch(service: \"products\") {\n {\n topProducts(first: $first) {\n __typename\n upc\n name\n }\n }\n },\n Flatten(path: \"topProducts.@\") {\n Fetch(service: \"reviews\") {\n {\n ... on Product {\n __typename\n upc\n }\n } =>\n {\n ... on Product {\n reviews {\n id\n product {\n __typename\n upc\n }\n author {\n __typename\n id\n }\n }\n }\n }\n },\n },\n Parallel {\n Flatten(path: \"topProducts.@.reviews.@.product\") {\n Fetch(service: \"products\") {\n {\n ... on Product {\n __typename\n upc\n }\n } =>\n {\n ... on Product {\n name\n }\n }\n },\n },\n Flatten(path: \"topProducts.@.reviews.@.author\") {\n Fetch(service: \"accounts\") {\n {\n ... on User {\n __typename\n id\n }\n } =>\n {\n ... 
on User {\n name\n }\n }\n },\n },\n },\n },\n}" + } + } +} diff --git a/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan.snap b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan.snap new file mode 100644 index 0000000000..8aec49ee28 --- /dev/null +++ b/apollo-router/src/plugins/snapshots/apollo_router__plugins__expose_query_plan__tests__it_expose_query_plan.snap @@ -0,0 +1,191 @@ +--- +source: apollo-router/src/plugins/expose_query_plan.rs +expression: "serde_json::to_value(response).unwrap()" +--- +{ + "data": { + "topProducts": [ + { + "upc": "1", + "name": "Table", + "reviews": [ + { + "id": "1", + "product": { + "name": "Table" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + }, + { + "id": "4", + "product": { + "name": "Table" + }, + "author": { + "id": "2", + "name": "Alan Turing" + } + } + ] + }, + { + "upc": "2", + "name": "Couch", + "reviews": [ + { + "id": "2", + "product": { + "name": "Couch" + }, + "author": { + "id": "1", + "name": "Ada Lovelace" + } + } + ] + } + ] + }, + "extensions": { + "apolloQueryPlan": { + "object": { + "kind": "QueryPlan", + "node": { + "kind": "Sequence", + "nodes": [ + { + "kind": "Fetch", + "serviceName": "products", + "variableUsages": [ + "first" + ], + "operation": "query TopProducts__products__0($first:Int){topProducts(first:$first){__typename upc name}}", + "operationName": "TopProducts__products__0", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + }, + { + "kind": "Flatten", + "path": [ + "topProducts", + "@" + ], + "node": { + "kind": "Fetch", + "serviceName": "reviews", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "Product", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "upc" + } + ] + } + ], + "variableUsages": [], + "operation": "query 
TopProducts__reviews__1($representations:[_Any!]!){_entities(representations:$representations){...on Product{reviews{id product{__typename upc}author{__typename id}}}}}", + "operationName": "TopProducts__reviews__1", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + }, + { + "kind": "Parallel", + "nodes": [ + { + "kind": "Flatten", + "path": [ + "topProducts", + "@", + "reviews", + "@", + "product" + ], + "node": { + "kind": "Fetch", + "serviceName": "products", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "Product", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "upc" + } + ] + } + ], + "variableUsages": [], + "operation": "query TopProducts__products__2($representations:[_Any!]!){_entities(representations:$representations){...on Product{name}}}", + "operationName": "TopProducts__products__2", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + }, + { + "kind": "Flatten", + "path": [ + "topProducts", + "@", + "reviews", + "@", + "author" + ], + "node": { + "kind": "Fetch", + "serviceName": "accounts", + "requires": [ + { + "kind": "InlineFragment", + "typeCondition": "User", + "selections": [ + { + "kind": "Field", + "name": "__typename" + }, + { + "kind": "Field", + "name": "id" + } + ] + } + ], + "variableUsages": [], + "operation": "query TopProducts__accounts__3($representations:[_Any!]!){_entities(representations:$representations){...on User{name}}}", + "operationName": "TopProducts__accounts__3", + "operationKind": "query", + "id": null, + "inputRewrites": null, + "outputRewrites": null + } + } + ] + } + ] + } + }, + "text": "QueryPlan {\n Sequence {\n Fetch(service: \"products\") {\n {\n topProducts(first: $first) {\n __typename\n upc\n name\n }\n }\n },\n Flatten(path: \"topProducts.@\") {\n Fetch(service: \"reviews\") {\n {\n ... on Product {\n __typename\n upc\n }\n } =>\n {\n ... 
on Product {\n reviews {\n id\n product {\n __typename\n upc\n }\n author {\n __typename\n id\n }\n }\n }\n }\n },\n },\n Parallel {\n Flatten(path: \"topProducts.@.reviews.@.product\") {\n Fetch(service: \"products\") {\n {\n ... on Product {\n __typename\n upc\n }\n } =>\n {\n ... on Product {\n name\n }\n }\n },\n },\n Flatten(path: \"topProducts.@.reviews.@.author\") {\n Fetch(service: \"accounts\") {\n {\n ... on User {\n __typename\n id\n }\n } =>\n {\n ... on User {\n name\n }\n }\n },\n },\n },\n },\n}" + } + } +} diff --git a/apollo-router/src/plugins/telemetry/config_new/attributes.rs b/apollo-router/src/plugins/telemetry/config_new/attributes.rs index 0db2b06fbe..662504ba03 100644 --- a/apollo-router/src/plugins/telemetry/config_new/attributes.rs +++ b/apollo-router/src/plugins/telemetry/config_new/attributes.rs @@ -148,13 +148,12 @@ pub(crate) enum RouterEvent { #[derive(Deserialize, JsonSchema, Clone, Debug, Default)] #[serde(deny_unknown_fields, rename_all = "snake_case")] pub(crate) enum DefaultAttributeRequirementLevel { + /// Attributes that are marked as required or recommended in otel semantic conventions and apollo documentation will be included + Recommended, + /// Attributes that are marked as required in otel semantic conventions and apollo documentation will be included (default) #[default] Required, - /// Attributes that are marked as required or recommended in otel semantic conventions and apollo documentation will be included - Recommended, - /// Attributes that are marked as required, recommended or opt-in in otel semantic conventions and apollo documentation will be included - OptIn, } #[allow(dead_code)] @@ -415,6 +414,14 @@ pub(crate) enum SubgraphCustomAttribute { /// The supergraph query operation kind (query|mutation|subscription). supergraph_operation_kind: OperationKind, }, + SupergraphQuery { + /// The supergraph query to the subgraph. + supergraph_query: Query, + /// Optional redaction pattern. 
+ redact: Option, + /// Optional default value. + default: Option, + }, SupergraphQueryVariable { /// The supergraph query variable name. supergraph_query_variable: String, @@ -528,28 +535,28 @@ pub(crate) struct SubgraphAttributes { /// Examples: /// * products /// Requirement level: Required - #[serde(rename = "graphql.federation.subgraph.name")] - graphql_federation_subgraph_name: Option, + #[serde(rename = "subgraph.name")] + subgraph_name: Option, /// The GraphQL document being executed. /// Examples: /// * query findBookById { bookById(id: ?) { name } } /// Requirement level: Recommended - #[serde(rename = "graphql.document")] - graphql_document: Option, + #[serde(rename = "subgraph.graphql.document")] + subgraph_graphql_document: Option, /// The name of the operation being executed. /// Examples: /// * findBookById /// Requirement level: Recommended - #[serde(rename = "graphql.operation.name")] - graphql_operation_name: Option, + #[serde(rename = "subgraph.graphql.operation.name")] + subgraph_graphql_operation_name: Option, /// The type of the operation being executed. /// Examples: /// * query /// * subscription /// * mutation /// Requirement level: Recommended - #[serde(rename = "graphql.operation.type")] - graphql_operation_type: Option, + #[serde(rename = "subgraph.graphql.operation.type")] + subgraph_graphql_operation_type: Option, } /// Common attributes for http server and client. 
@@ -737,7 +744,7 @@ pub(crate) struct HttpServerAttributes { url_scheme: Option<String>, } -/// Attrubtes for HTTP clients +/// Attributes for HTTP clients /// https://opentelemetry.io/docs/specs/semconv/http/http-spans/#http-client #[allow(dead_code)] #[derive(Deserialize, JsonSchema, Clone, Default, Debug)] diff --git a/apollo-router/src/plugins/telemetry/config_new/conditions.rs b/apollo-router/src/plugins/telemetry/config_new/conditions.rs new file mode 100644 index 0000000000..b4a11fc53e --- /dev/null +++ b/apollo-router/src/plugins/telemetry/config_new/conditions.rs @@ -0,0 +1,34 @@ +use schemars::JsonSchema; +use serde::Deserialize; + +use crate::plugins::telemetry::config::AttributeValue; + +#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case")] +pub(crate) enum Condition<T> { + /// A condition to check a selection against a value. + Eq([SelectorOrValue<T>; 2]), + /// All sub-conditions must be true. + All(Vec<Condition<T>>), + /// At least one sub-condition must be true. + Any(Vec<Condition<T>>), + /// The sub-condition must not be true. + Not(Box<Condition<T>>), +} + +impl Condition<()> { + pub(crate) fn empty<T>() -> Condition<T> { + Condition::Any(vec![]) + } +} + +#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case", untagged)] +pub(crate) enum SelectorOrValue<T> { + /// A constant value. + Value(AttributeValue), + /// Selector to extract a value from the pipeline. 
+ Selector(T), +} diff --git a/apollo-router/src/plugins/telemetry/config_new/events.rs b/apollo-router/src/plugins/telemetry/config_new/events.rs index 82a19b1cff..a92810cd36 100644 --- a/apollo-router/src/plugins/telemetry/config_new/events.rs +++ b/apollo-router/src/plugins/telemetry/config_new/events.rs @@ -10,6 +10,7 @@ use crate::plugins::telemetry::config_new::attributes::SubgraphAttributes; use crate::plugins::telemetry::config_new::attributes::SubgraphCustomAttribute; use crate::plugins::telemetry::config_new::attributes::SupergraphAttributes; use crate::plugins::telemetry::config_new::attributes::SupergraphCustomAttribute; +use crate::plugins::telemetry::config_new::conditions::Condition; /// Events are #[allow(dead_code)] @@ -30,11 +31,11 @@ pub(crate) struct Events { #[serde(deny_unknown_fields, default)] struct RouterEvents { /// Log the router request - request: bool, + request: EventLevel, /// Log the router response - response: bool, + response: EventLevel, /// Log the router error - error: bool, + error: EventLevel, } #[allow(dead_code)] @@ -84,9 +85,31 @@ where { /// The log level of the event. level: EventLevel, + /// The event message. message: String, + + /// When to trigger the event. + on: EventOn, + /// The event attributes. #[serde(default = "Extendable::empty::")] attributes: Extendable, + + /// The event conditions. + #[serde(default = "Condition::empty::")] + condition: Condition, +} + +/// When to trigger the event. 
+#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] +#[serde(rename_all = "snake_case")] +pub(crate) enum EventOn { + /// Log the event on request + Request, + /// Log the event on response + Response, + /// Log the event on error + Error, } diff --git a/apollo-router/src/plugins/telemetry/config_new/instruments.rs b/apollo-router/src/plugins/telemetry/config_new/instruments.rs index 292658a1aa..17122251ea 100644 --- a/apollo-router/src/plugins/telemetry/config_new/instruments.rs +++ b/apollo-router/src/plugins/telemetry/config_new/instruments.rs @@ -11,6 +11,7 @@ use crate::plugins::telemetry::config_new::attributes::SubgraphAttributes; use crate::plugins::telemetry::config_new::attributes::SubgraphCustomAttribute; use crate::plugins::telemetry::config_new::attributes::SupergraphAttributes; use crate::plugins::telemetry::config_new::attributes::SupergraphCustomAttribute; +use crate::plugins::telemetry::config_new::conditions::Condition; #[allow(dead_code)] #[derive(Clone, Deserialize, JsonSchema, Debug, Default)] @@ -37,19 +38,37 @@ pub(crate) struct Instruments { struct RouterInstruments { /// Histogram of server request duration #[serde(rename = "http.server.request.duration")] - http_server_request_duration: bool, + http_server_request_duration: + DefaultedStandardInstrument>, /// Gauge of active requests #[serde(rename = "http.server.active_requests")] - http_server_active_requests: bool, + http_server_active_requests: + DefaultedStandardInstrument>, /// Histogram of server request body size #[serde(rename = "http.server.request.body.size")] - http_server_request_body_size: bool, + http_server_request_body_size: + DefaultedStandardInstrument>, /// Histogram of server response body size #[serde(rename = "http.server.response.body.size")] - http_server_response_body_size: bool, + http_server_response_body_size: + DefaultedStandardInstrument>, +} + +#[allow(dead_code)] +#[derive(Clone, Deserialize, JsonSchema, Debug)] 
+#[serde(deny_unknown_fields, untagged)] +enum DefaultedStandardInstrument<T> { + Bool(bool), + Extendable { attributes: T }, +} + +impl<T> Default for DefaultedStandardInstrument<T> { + fn default() -> Self { + DefaultedStandardInstrument::Bool(true) + } } #[allow(dead_code)] @@ -86,7 +105,7 @@ where ty: InstrumentType, /// The value of the instrument. - value: InstrumentValue, + value: InstrumentValue, /// The description of the instrument. description: String, @@ -97,6 +116,10 @@ where /// Attributes to include on the instrument. #[serde(default = "Extendable::empty::")] attributes: Extendable, + + /// The instrument conditions. + #[serde(default = "Condition::empty::")] + condition: Condition, } #[allow(dead_code)] @@ -116,10 +139,18 @@ pub(crate) enum InstrumentType { Gauge, } +#[allow(dead_code)] +#[derive(Clone, Deserialize, JsonSchema, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case", untagged)] +pub(crate) enum InstrumentValue<T> { + Standard(Standard), + Custom(T), +} + #[allow(dead_code)] #[derive(Clone, Deserialize, JsonSchema, Debug)] #[serde(deny_unknown_fields, rename_all = "snake_case")] -pub(crate) enum InstrumentValue { +pub(crate) enum Standard { Duration, Unit, Active, diff --git a/apollo-router/src/plugins/telemetry/config_new/logging.rs b/apollo-router/src/plugins/telemetry/config_new/logging.rs index f47567b2d9..b887f107c0 100644 --- a/apollo-router/src/plugins/telemetry/config_new/logging.rs +++ b/apollo-router/src/plugins/telemetry/config_new/logging.rs @@ -1,4 +1,5 @@ use std::collections::BTreeMap; +use std::io::IsTerminal; use schemars::JsonSchema; use serde::Deserialize; @@ -56,7 +57,7 @@ pub(crate) struct File { /// The format for logging. 
#[allow(dead_code)] -#[derive(Deserialize, JsonSchema, Clone, Default, Debug)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] #[serde(deny_unknown_fields, rename_all = "snake_case")] pub(crate) enum Format { /// https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html @@ -72,11 +73,125 @@ pub(crate) enum Format { OpenTelemetry, /// https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/struct.Json.html - Json, + #[serde(rename = "json")] + JsonDefault, + /// https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/struct.Json.html + Json(JsonFormat), /// https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/struct.Full.html + #[serde(rename = "text")] + TextDefault, + /// https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/struct.Full.html + Text(TextFormat), +} + +impl Default for Format { + fn default() -> Self { + if std::io::stdout().is_terminal() { + Format::TextDefault + } else { + Format::JsonDefault + } + } +} + +#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case", default)] +pub(crate) struct JsonFormat { + /// Move all span attributes to the top level json object. + flatten_event: bool, + /// Use ansi escape codes. + ansi: bool, + /// Include the timestamp with the log event. + display_timestamp: bool, + /// Include the target with the log event. + display_target: bool, + /// Include the level with the log event. + display_level: bool, + /// Include the thread_id with the log event. + display_thread_id: bool, + /// Include the thread_name with the log event. + display_thread_name: bool, + /// Include the filename with the log event. + display_filename: bool, + /// Include the line number with the log event. + display_line_number: bool, + /// Include the current span in this log event. 
+ display_current_span: bool, + /// Include all of the containing span information with the log event. + display_span_list: bool, +} + +impl Default for JsonFormat { + fn default() -> Self { + JsonFormat { + flatten_event: false, + ansi: false, + display_timestamp: true, + display_target: true, + display_level: true, + display_thread_id: false, + display_thread_name: false, + display_filename: false, + display_line_number: false, + display_current_span: false, + display_span_list: true, + } + } +} + +#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case", default)] +pub(crate) struct TextFormat { + /// The type of text output, one of `default`, `compact`, or `full`. + flavor: TextFlavor, + /// Use ansi escape codes. + ansi: bool, + /// Include the timestamp with the log event. + display_timestamp: bool, + /// Include the target with the log event. + display_target: bool, + /// Include the level with the log event. + display_level: bool, + /// Include the thread_id with the log event. + display_thread_id: bool, + /// Include the thread_name with the log event. + display_thread_name: bool, + /// Include the filename with the log event. + display_filename: bool, + /// Include the line number with the log event. + display_line_number: bool, + /// Include the location with the log event. 
+ display_location: bool, +} + +impl Default for TextFormat { + fn default() -> Self { + TextFormat { + flavor: TextFlavor::Default, + ansi: false, + display_timestamp: true, + display_target: false, + display_level: true, + display_thread_id: false, + display_thread_name: false, + display_filename: false, + display_line_number: false, + display_location: false, + } + } +} + +#[allow(dead_code)] +#[derive(Deserialize, JsonSchema, Clone, Default, Debug)] +#[serde(deny_unknown_fields, rename_all = "snake_case")] +pub(crate) enum TextFlavor { #[default] - Text, + Default, + Compact, + Full, } /// The period to rollover the log file. @@ -84,13 +199,11 @@ pub(crate) enum Format { #[derive(Deserialize, JsonSchema, Clone, Default, Debug)] #[serde(deny_unknown_fields, rename_all = "snake_case")] pub(crate) enum Rollover { - /// Roll over every minute. - Minutely, /// Roll over every hour. Hourly, /// Roll over every day. - #[default] Daily, + #[default] /// Never roll over. Never, } diff --git a/apollo-router/src/plugins/telemetry/config_new/mod.rs b/apollo-router/src/plugins/telemetry/config_new/mod.rs index d06a466de3..5c6f362241 100644 --- a/apollo-router/src/plugins/telemetry/config_new/mod.rs +++ b/apollo-router/src/plugins/telemetry/config_new/mod.rs @@ -1,5 +1,7 @@ +/// These modules contain a new config structure for telemetry that will progressively move to pub(crate) mod attributes; -/// These modules contain a new config structure for telemetry that will progressively mo +pub(crate) mod conditions; + pub(crate) mod events; pub(crate) mod instruments; pub(crate) mod logging; diff --git a/apollo-router/src/plugins/traffic_shaping/mod.rs b/apollo-router/src/plugins/traffic_shaping/mod.rs index 566534b82d..00834574e1 100644 --- a/apollo-router/src/plugins/traffic_shaping/mod.rs +++ b/apollo-router/src/plugins/traffic_shaping/mod.rs @@ -286,23 +286,8 @@ impl Plugin for TrafficShaping { .transpose()?; { - let storage = if let Some(urls) = init - .config - 
.experimental_cache - .as_ref() - .map(|cache| cache.urls.clone()) - { - Some( - RedisCacheStorage::new( - urls, - None, - init.config - .experimental_cache - .as_ref() - .and_then(|c| c.timeout), - ) - .await?, - ) + let storage = if let Some(config) = init.config.experimental_cache.as_ref() { + Some(RedisCacheStorage::new(config.clone()).await?) } else { None }; @@ -503,7 +488,7 @@ mod test { use crate::services::layers::persisted_queries::PersistedQueryLayer; use crate::services::layers::query_analysis::QueryAnalysisLayer; use crate::services::router; - use crate::services::router_service::RouterCreator; + use crate::services::router::service::RouterCreator; use crate::services::HasSchema; use crate::services::PluggableSupergraphServiceBuilder; use crate::services::SupergraphRequest; diff --git a/apollo-router/src/query_planner/bridge_query_planner.rs b/apollo-router/src/query_planner/bridge_query_planner.rs index 8eefd309af..d2adcaa70b 100644 --- a/apollo-router/src/query_planner/bridge_query_planner.rs +++ b/apollo-router/src/query_planner/bridge_query_planner.rs @@ -2,6 +2,7 @@ use std::collections::HashMap; use std::fmt::Debug; +use std::fmt::Write; use std::sync::Arc; use std::time::Instant; @@ -122,8 +123,62 @@ impl BridgeQueryPlanner { let planner = Arc::new(planner); - let api_schema = planner.api_schema().await?; - let api_schema = Schema::parse(&api_schema.schema, &configuration)?; + let api_schema_string = match configuration.experimental_api_schema_generation_mode { + crate::configuration::ApiSchemaMode::Legacy => { + let api_schema = planner.api_schema().await?; + api_schema.schema + } + crate::configuration::ApiSchemaMode::New => schema.create_api_schema(), + + crate::configuration::ApiSchemaMode::Both => { + let api_schema = planner.api_schema().await?; + let new_api_schema = schema.create_api_schema(); + + if api_schema.schema != new_api_schema { + tracing::warn!( + monotonic_counter.apollo.router.api_schema = 1u64, + generation.result = 
"failed",
+                        "API schema generation mismatch: apollo-federation and router-bridge write different schema"
+                    );
+
+                    let differences = diff::lines(&api_schema.schema, &new_api_schema);
+                    let mut output = String::new();
+                    for diff_line in differences {
+                        match diff_line {
+                            diff::Result::Left(l) => {
+                                let trimmed = l.trim();
+                                if !trimmed.starts_with('#') && !trimmed.is_empty() {
+                                    writeln!(&mut output, "-{l}").expect("write will never fail");
+                                } else {
+                                    writeln!(&mut output, " {l}").expect("write will never fail");
+                                }
+                            }
+                            diff::Result::Both(l, _) => {
+                                writeln!(&mut output, " {l}").expect("write will never fail");
+                            }
+                            diff::Result::Right(r) => {
+                                let trimmed = r.trim();
+                                if trimmed != "---" && !trimmed.is_empty() {
+                                    writeln!(&mut output, "+{r}").expect("write will never fail");
+                                }
+                            }
+                        }
+                    }
+                    tracing::debug!(
+                        "different API schema between apollo-federation and router-bridge:\n{}",
+                        output
+                    );
+                } else {
+                    tracing::warn!(
+                        monotonic_counter.apollo.router.api_schema = 1u64,
+                        generation.result = VALIDATION_MATCH,
+                    );
+                }
+                api_schema.schema
+            }
+        };
+        let api_schema = Schema::parse(&api_schema_string, &configuration)?;
+
         let schema = Arc::new(schema.with_api_schema(api_schema));
         let introspection = if configuration.supergraph.introspection {
             Some(Arc::new(Introspection::new(planner.clone()).await))
diff --git a/apollo-router/src/query_planner/execution.rs b/apollo-router/src/query_planner/execution.rs
index 35945844cc..bd2a600f75 100644
--- a/apollo-router/src/query_planner/execution.rs
+++ b/apollo-router/src/query_planner/execution.rs
@@ -3,7 +3,8 @@ use std::sync::Arc;

 use futures::future::join_all;
 use futures::prelude::*;
-use tokio::sync::broadcast::Sender;
+use tokio::sync::broadcast;
+use tokio::sync::mpsc;
 use tokio_stream::wrappers::BroadcastStream;
 use tracing::Instrument;

@@ -46,7 +47,7 @@ impl QueryPlan {
         service_factory: &'a Arc<SubgraphServiceFactory>,
         supergraph_request: &'a Arc<http::Request<Request>>,
         schema: &'a Arc<Schema>,
-        sender: futures::channel::mpsc::Sender<Response>,
+        sender: mpsc::Sender<Response>,
         subscription_handle: Option<SubscriptionHandle>,
         subscription_config: &'a Option<SubscriptionConfig>,
         initial_value: Option<Value>,
@@ -97,7 +98,7 @@ pub(crate) struct ExecutionParameters<'a> {
     pub(crate) service_factory: &'a Arc<SubgraphServiceFactory>,
     pub(crate) schema: &'a Arc<Schema>,
     pub(crate) supergraph_request: &'a Arc<http::Request<Request>>,
-    pub(crate) deferred_fetches: &'a HashMap<String, Sender<(Value, Vec<Error>)>>,
+    pub(crate) deferred_fetches: &'a HashMap<String, broadcast::Sender<(Value, Vec<Error>)>>,
     pub(crate) query: &'a Arc<Query>,
     pub(crate) root_node: &'a PlanNode,
     pub(crate) subscription_handle: &'a Option<SubscriptionHandle>,
@@ -110,7 +111,7 @@ impl PlanNode {
         parameters: &'a ExecutionParameters<'a>,
         current_dir: &'a Path,
         parent_value: &'a Value,
-        sender: futures::channel::mpsc::Sender<Response>,
+        sender: mpsc::Sender<Response>,
     ) -> future::BoxFuture<'a, (Value, Vec<Error>)> {
         Box::pin(async move {
             tracing::trace!("executing plan:\n{:#?}", self);
@@ -250,8 +251,10 @@ impl PlanNode {
                     value = parent_value.clone();
                     errors = Vec::new();
                     async {
-                        let mut deferred_fetches: HashMap<String, Sender<(Value, Vec<Error>)>> =
-                            HashMap::new();
+                        let mut deferred_fetches: HashMap<
+                            String,
+                            broadcast::Sender<(Value, Vec<Error>)>,
+                        > = HashMap::new();
                         let mut futures = Vec::new();

                         let (primary_sender, _) =
@@ -388,9 +391,9 @@ impl DeferredNode {
         &self,
         parameters: &'a ExecutionParameters<'a>,
         parent_value: &Value,
-        sender: futures::channel::mpsc::Sender<Response>,
-        primary_sender: &Sender<(Value, Vec<Error>)>,
-        deferred_fetches: &mut HashMap<String, Sender<(Value, Vec<Error>)>>,
+        sender: mpsc::Sender<Response>,
+        primary_sender: &broadcast::Sender<(Value, Vec<Error>)>,
+        deferred_fetches: &mut HashMap<String, broadcast::Sender<(Value, Vec<Error>)>>,
     ) -> impl Future<Output = ()> {
         let mut deferred_receivers = Vec::new();
@@ -420,7 +423,7 @@ impl DeferredNode {
         let deferred_inner = self.node.clone();
         let deferred_path = self.query_path.clone();
         let label = self.label.clone();
-        let mut tx = sender;
+        let tx = sender;
         let sc = parameters.schema.clone();
         let orig = parameters.supergraph_request.clone();
         let sf = parameters.service_factory.clone();
@@ -506,7 +509,7 @@ impl DeferredNode {
                         e
                     );
                 };
-                tx.disconnect();
+                drop(tx);
             } else {
                 let (primary_value, primary_errors) =
                     primary_receiver.recv().await.unwrap_or_default();
@@
-530,7 +533,7 @@ impl DeferredNode {
                         e
                     );
                 }
-                tx.disconnect();
+                drop(tx);
             };
         }
     }
diff --git a/apollo-router/src/query_planner/subscription.rs b/apollo-router/src/query_planner/subscription.rs
index d409c5b136..1814964251 100644
--- a/apollo-router/src/query_planner/subscription.rs
+++ b/apollo-router/src/query_planner/subscription.rs
@@ -1,12 +1,12 @@
 use std::sync::atomic::AtomicUsize;
 use std::sync::atomic::Ordering;

-use futures::channel::mpsc;
 use futures::future;
 use serde::Deserialize;
 use serde::Serialize;
 use serde_json_bytes::Value;
 use tokio::sync::broadcast;
+use tokio::sync::mpsc;
 use tower::ServiceExt;
 use tracing_futures::Instrument;

@@ -84,7 +84,7 @@ impl SubscriptionNode {
         parameters: &'a ExecutionParameters<'a>,
         current_dir: &'a Path,
         parent_value: &'a Value,
-        sender: futures::channel::mpsc::Sender<Response>,
+        sender: tokio::sync::mpsc::Sender<Response>,
     ) -> future::BoxFuture<'a, Vec<Error>> {
         if parameters.subscription_handle.is_none() {
             tracing::error!("No subscription handle provided for a subscription");
@@ -151,7 +151,7 @@ impl SubscriptionNode {
             client_sender: sender,
             subscription_handle,
             subscription_config,
-            stream_rx: rx_handle,
+            stream_rx: rx_handle.into(),
             service_name: self.service_name.clone(),
         };
diff --git a/apollo-router/src/query_planner/tests.rs b/apollo-router/src/query_planner/tests.rs
index 6fac2bccec..50eda0a159 100644
--- a/apollo-router/src/query_planner/tests.rs
+++ b/apollo-router/src/query_planner/tests.rs
@@ -7,6 +7,7 @@ use futures::StreamExt;
 use http::Method;
 use router_bridge::planner::UsageReporting;
 use serde_json_bytes::json;
+use tokio_stream::wrappers::ReceiverStream;
 use tower::ServiceExt;

 use super::DeferredNode;
@@ -96,7 +97,7 @@ async fn mock_subgraph_service_withf_panics_should_be_reported_as_service_closed
         mock_products_service
     });

-    let (sender, _) = futures::channel::mpsc::channel(10);
+    let (sender, _) = tokio::sync::mpsc::channel(10);
     let sf = Arc::new(SubgraphServiceFactory {
         services: Arc::new(HashMap::from([(
             "product".into(),
@@ -155,7 +156,7 @@ async fn fetch_includes_operation_name() { mock_products_service }); - let (sender, _) = futures::channel::mpsc::channel(10); + let (sender, _) = tokio::sync::mpsc::channel(10); let sf = Arc::new(SubgraphServiceFactory { services: Arc::new(HashMap::from([( @@ -212,7 +213,7 @@ async fn fetch_makes_post_requests() { mock_products_service }); - let (sender, _) = futures::channel::mpsc::channel(10); + let (sender, _) = tokio::sync::mpsc::channel(10); let sf = Arc::new(SubgraphServiceFactory { services: Arc::new(HashMap::from([( @@ -350,7 +351,7 @@ async fn defer() { mock_y_service }); - let (sender, mut receiver) = futures::channel::mpsc::channel(10); + let (sender, receiver) = tokio::sync::mpsc::channel(10); let schema = include_str!("testdata/defer_schema.graphql"); let schema = Arc::new(Schema::parse_test(schema, &Default::default()).unwrap()); @@ -387,7 +388,7 @@ async fn defer() { serde_json::json! {{"data":{"t":{"id":1234,"__typename":"T","x":"X"}}}} ); - let response = receiver.next().await.unwrap(); + let response = ReceiverStream::new(receiver).next().await.unwrap(); // deferred response assert_eq!( @@ -450,7 +451,8 @@ async fn defer_if_condition() { ) .build(); - let (sender, mut receiver) = futures::channel::mpsc::channel(10); + let (sender, receiver) = tokio::sync::mpsc::channel(10); + let mut receiver_stream = ReceiverStream::new(receiver); let service_factory = Arc::new(SubgraphServiceFactory { services: Arc::new(HashMap::from([( @@ -482,12 +484,13 @@ async fn defer_if_condition() { // shouldDefer: true insta::assert_json_snapshot!(defer_primary_response); - let deferred_response = receiver.next().await.unwrap(); + let deferred_response = receiver_stream.next().await.unwrap(); insta::assert_json_snapshot!(deferred_response); - assert!(receiver.next().await.is_none()); + assert!(receiver_stream.next().await.is_none()); // shouldDefer: not provided, should default to true - let (default_sender, mut default_receiver) = 
futures::channel::mpsc::channel(10); + let (default_sender, default_receiver) = tokio::sync::mpsc::channel(10); + let mut default_receiver_stream = ReceiverStream::new(default_receiver); let default_primary_response = query_plan .execute( &Context::new(), @@ -502,11 +505,15 @@ async fn defer_if_condition() { .await; assert_eq!(defer_primary_response, default_primary_response); - assert_eq!(deferred_response, default_receiver.next().await.unwrap()); - assert!(default_receiver.next().await.is_none()); + assert_eq!( + deferred_response, + default_receiver_stream.next().await.unwrap() + ); + assert!(default_receiver_stream.next().await.is_none()); // shouldDefer: false, only 1 response - let (sender, mut no_defer_receiver) = futures::channel::mpsc::channel(10); + let (sender, no_defer_receiver) = tokio::sync::mpsc::channel(10); + let mut no_defer_receiver_stream = ReceiverStream::new(no_defer_receiver); let defer_disabled = query_plan .execute( &Context::new(), @@ -528,7 +535,7 @@ async fn defer_if_condition() { ) .await; insta::assert_json_snapshot!(defer_disabled); - assert!(no_defer_receiver.next().await.is_none()); + assert!(no_defer_receiver_stream.next().await.is_none()); } #[tokio::test] @@ -634,7 +641,7 @@ async fn dependent_mutations() { plugins: Default::default(), }); - let (sender, _) = futures::channel::mpsc::channel(10); + let (sender, _) = tokio::sync::mpsc::channel(10); let _response = query_plan .execute( &Context::new(), diff --git a/apollo-router/src/router_factory.rs b/apollo-router/src/router_factory.rs index 124488096b..a29a396ff7 100644 --- a/apollo-router/src/router_factory.rs +++ b/apollo-router/src/router_factory.rs @@ -40,7 +40,7 @@ use crate::services::layers::persisted_queries::PersistedQueryLayer; use crate::services::layers::query_analysis::QueryAnalysisLayer; use crate::services::new_service::ServiceFactory; use crate::services::router; -use crate::services::router_service::RouterCreator; +use 
crate::services::router::service::RouterCreator;
 use crate::services::subgraph;
 use crate::services::transport;
 use crate::services::HasConfig;
diff --git a/apollo-router/src/services/execution.rs b/apollo-router/src/services/execution.rs
index f27a70c809..570faf90ea 100644
--- a/apollo-router/src/services/execution.rs
+++ b/apollo-router/src/services/execution.rs
@@ -10,6 +10,8 @@ use tower::BoxError;
 use crate::graphql;
 use crate::Context;

+pub(crate) mod service;
+
 pub type BoxService = tower::util::BoxService<Request, Response, BoxError>;
 pub type BoxCloneService = tower::util::BoxCloneService<Request, Response, BoxError>;
 pub type ServiceResult = Result<Response, BoxError>;
diff --git a/apollo-router/src/services/execution_service.rs b/apollo-router/src/services/execution/service.rs
similarity index 83%
rename from apollo-router/src/services/execution_service.rs
rename to apollo-router/src/services/execution/service.rs
index 9d5480508b..0f7603479e 100644
--- a/apollo-router/src/services/execution_service.rs
+++ b/apollo-router/src/services/execution/service.rs
@@ -5,18 +5,21 @@ use std::pin::Pin;
 use std::sync::Arc;
 use std::task::Context;
 use std::task::Poll;
+use std::time::SystemTime;
+use std::time::UNIX_EPOCH;

-use futures::channel::mpsc;
-use futures::channel::mpsc::Receiver;
-use futures::channel::mpsc::SendError;
-use futures::channel::mpsc::Sender;
 use futures::future::BoxFuture;
 use futures::stream::once;
-use futures::SinkExt;
 use futures::Stream;
 use futures::StreamExt;
 use serde_json_bytes::Value;
 use tokio::sync::broadcast;
+use tokio::sync::mpsc;
+use tokio::sync::mpsc::error::SendError;
+use tokio::sync::mpsc::error::TryRecvError;
+use tokio::sync::mpsc::Receiver;
+use tokio::sync::mpsc::Sender;
+use tokio_stream::wrappers::ReceiverStream;
 use tower::BoxError;
 use tower::ServiceBuilder;
 use tower::ServiceExt;
@@ -26,9 +29,6 @@ use tracing::Instrument;
 use tracing::Span;
 use tracing_core::Level;

-use super::new_service::ServiceFactory;
-use super::Plugins;
-use super::SubgraphServiceFactory;
 use crate::graphql::Error;
 use 
crate::graphql::IncrementalResponse;
 use crate::graphql::Response;
@@ -36,13 +36,17 @@ use crate::json_ext::Object;
 use crate::json_ext::Path;
 use crate::json_ext::PathElement;
 use crate::json_ext::ValueExt;
+use crate::plugins::authentication::APOLLO_AUTHENTICATION_JWT_CLAIMS;
 use crate::plugins::subscription::Subscription;
 use crate::plugins::subscription::SubscriptionConfig;
 use crate::plugins::subscription::APOLLO_SUBSCRIPTION_PLUGIN;
 use crate::query_planner::subscription::SubscriptionHandle;
 use crate::services::execution;
+use crate::services::new_service::ServiceFactory;
 use crate::services::ExecutionRequest;
 use crate::services::ExecutionResponse;
+use crate::services::Plugins;
+use crate::services::SubgraphServiceFactory;
 use crate::spec::query::subselections::BooleanValues;
 use crate::spec::Query;
 use crate::spec::Schema;
@@ -58,7 +62,7 @@ pub(crate) struct ExecutionService {

 type CloseSignal = broadcast::Sender<()>;
 // Used to detect when the stream is dropped and then when the client closed the connection
-pub(crate) struct StreamWrapper(pub(crate) Receiver<Response>, Option<CloseSignal>);
+pub(crate) struct StreamWrapper(pub(crate) ReceiverStream<Response>, Option<CloseSignal>);

 impl Stream for StreamWrapper {
     type Item = Response;
@@ -117,6 +121,10 @@ impl ExecutionService {
             .query_plan
             .is_deferred(operation_name.as_deref(), &variables);
         let is_subscription = req.query_plan.is_subscription(operation_name.as_deref());
+        let mut claims = None;
+        if is_deferred {
+            claims = context.get(APOLLO_AUTHENTICATION_JWT_CLAIMS)?
+        }
         let (tx_close_signal, subscription_handle) = if is_subscription {
             let (tx_close_signal, rx_close_signal) = broadcast::channel(1);
             (
@@ -159,7 +167,9 @@ impl ExecutionService {
                 // If it's a subscription event
                 once(ready(first)).boxed()
             } else {
-                once(ready(first)).chain(receiver).boxed()
+                once(ready(first))
+                    .chain(ReceiverStream::new(receiver))
+                    .boxed()
             };

             if has_initial_data {
@@ -175,6 +185,45 @@ impl ExecutionService {
             let execution_span = Span::current();

             let stream = stream
+                .map(move |mut response: Response| {
+                    // Enforce JWT expiry for deferred responses
+                    if is_deferred {
+                        let ts_opt = claims.as_ref().and_then(|x: &Value| {
+                            if !x.is_object() {
+                                tracing::error!("JWT claims should be an object");
+                                return None;
+                            }
+                            let claims = x.as_object().expect("claims should be an object");
+                            let exp = claims.get("exp")?;
+                            if !exp.is_number() {
+                                tracing::error!("JWT 'exp' (expiry) claim should be a number");
+                                return None;
+                            }
+                            exp.as_i64()
+                        });
+                        if let Some(ts) = ts_opt {
+                            let now = SystemTime::now()
+                                .duration_since(UNIX_EPOCH)
+                                .expect("we should not run before EPOCH")
+                                .as_secs() as i64;
+                            if ts < now {
+                                tracing::debug!("token has expired, shut down the subscription");
+                                response = Response::builder()
+                                    .has_next(false)
+                                    .error(
+                                        Error::builder()
+                                            .message(
+                                                "deferred response closed because the JWT has expired",
+                                            )
+                                            .extension_code("DEFERRED_RESPONSE_JWT_EXPIRED")
+                                            .build(),
+                                    )
+                                    .build()
+                            }
+                        }
+                    }
+                    response
+                })
                 .filter_map(move |response: Response| {
                     ready(execution_span.in_scope(|| {
                         Self::process_graphql_response(
@@ -478,14 +527,14 @@ fn filter_stream(
     first: Response,
     mut stream: Receiver<Response>,
     stream_mode: StreamMode,
-) -> Receiver<Response> {
+) -> ReceiverStream<Response> {
     let (mut sender, receiver) = mpsc::channel(10);

     tokio::task::spawn(async move {
         let mut seen_last_message =
             consume_responses(first, &mut stream, &mut sender, stream_mode).await?;

-        while let Some(current_response) = stream.next().await {
+        while let Some(current_response)
= stream.recv().await {
             seen_last_message =
                 consume_responses(current_response, &mut stream, &mut sender, stream_mode).await?;
         }
@@ -500,10 +549,10 @@ fn filter_stream(
             sender.send(res).await?;
         }

-        Ok::<_, SendError>(())
+        Ok::<_, SendError<Response>>(())
     });

-    receiver
+    receiver.into()
 }

 // returns Ok(true) when we saw the last message
@@ -512,34 +561,37 @@ async fn consume_responses(
     stream: &mut Receiver<Response>,
     sender: &mut Sender<Response>,
     stream_mode: StreamMode,
-) -> Result<bool, SendError> {
+) -> Result<bool, SendError<Response>> {
     loop {
-        match stream.try_next() {
-            // no messages available, but the channel is not closed
-            // this means more deferred responses can come
-            Err(_) => {
-                sender.send(current_response).await?;
-                return Ok(false);
-            }
+        match stream.try_recv() {
+            Err(err) => {
+                match err {
+                    // no messages available, but the channel is not closed
+                    // this means more deferred responses can come
+                    TryRecvError::Empty => {
+                        sender.send(current_response).await?;
+                        return Ok(false);
+                    }
+                    // the channel is closed
+                    // there will be no other deferred responses after that,
+                    // so we set `has_next` to `false`
+                    TryRecvError::Disconnected => {
+                        match stream_mode {
+                            StreamMode::Defer => current_response.has_next = Some(false),
+                            StreamMode::Subscription => current_response.subscribed = Some(false),
+                        }
+                        sender.send(current_response).await?;
+                        return Ok(true);
+                    }
+                }
+            }
             // there might be other deferred responses after this one,
             // so we should call `try_next` again
-            Ok(Some(response)) => {
+            Ok(response) => {
                 sender.send(current_response).await?;
                 current_response = response;
             }
-            // the channel is closed
-            // there will be no other deferred responses after that,
-            // so we set `has_next` to `false`
-            Ok(None) => {
-                match stream_mode {
-                    StreamMode::Defer => current_response.has_next = Some(false),
-                    StreamMode::Subscription => current_response.subscribed = Some(false),
-                }
-
-                sender.send(current_response).await?;
-                return Ok(true);
-            }
         }
     }
 }
@@ -565,7 +617,7 @@ impl ServiceFactory<ExecutionRequest> for ExecutionServiceFactory
{ ServiceBuilder::new() .service( self.plugins.iter().rev().fold( - crate::services::execution_service::ExecutionService { + crate::services::execution::service::ExecutionService { schema: self.schema.clone(), subgraph_service_factory: self.subgraph_service_factory.clone(), subscription_config: subscription_plugin_conf, diff --git a/apollo-router/src/services/layers/apq.rs b/apollo-router/src/services/layers/apq.rs index 296896199f..180ba05000 100644 --- a/apollo-router/src/services/layers/apq.rs +++ b/apollo-router/src/services/layers/apq.rs @@ -206,9 +206,9 @@ mod apq_tests { use super::*; use crate::error::Error; use crate::graphql::Response; + use crate::services::router::service::from_supergraph_mock_callback; + use crate::services::router::service::from_supergraph_mock_callback_and_configuration; use crate::services::router::ClientRequestAccepts; - use crate::services::router_service::from_supergraph_mock_callback; - use crate::services::router_service::from_supergraph_mock_callback_and_configuration; use crate::Configuration; use crate::Context; diff --git a/apollo-router/src/services/layers/content_negotiation.rs b/apollo-router/src/services/layers/content_negotiation.rs index 42d9a30c40..df02f317d5 100644 --- a/apollo-router/src/services/layers/content_negotiation.rs +++ b/apollo-router/src/services/layers/content_negotiation.rs @@ -22,9 +22,9 @@ use crate::graphql; use crate::layers::sync_checkpoint::CheckpointService; use crate::layers::ServiceExt as _; use crate::services::router; +use crate::services::router::service::MULTIPART_DEFER_HEADER_VALUE; +use crate::services::router::service::MULTIPART_SUBSCRIPTION_HEADER_VALUE; use crate::services::router::ClientRequestAccepts; -use crate::services::router_service::MULTIPART_DEFER_HEADER_VALUE; -use crate::services::router_service::MULTIPART_SUBSCRIPTION_HEADER_VALUE; use crate::services::supergraph; use crate::services::APPLICATION_JSON_HEADER_VALUE; use crate::services::MULTIPART_DEFER_CONTENT_TYPE; diff 
--git a/apollo-router/src/services/mod.rs b/apollo-router/src/services/mod.rs index fa9235a129..33d2b3ac89 100644 --- a/apollo-router/src/services/mod.rs +++ b/apollo-router/src/services/mod.rs @@ -2,10 +2,10 @@ use std::sync::Arc; -pub(crate) use self::execution_service::*; +pub(crate) use self::execution::service::*; pub(crate) use self::query_planner::*; pub(crate) use self::subgraph_service::*; -pub(crate) use self::supergraph_service::*; +pub(crate) use self::supergraph::service::*; use crate::graphql::Request; use crate::http_ext; pub use crate::http_ext::TryIntoHeaderName; @@ -18,22 +18,19 @@ pub(crate) use crate::services::router::Request as RouterRequest; pub(crate) use crate::services::router::Response as RouterResponse; pub(crate) use crate::services::subgraph::Request as SubgraphRequest; pub(crate) use crate::services::subgraph::Response as SubgraphResponse; +pub(crate) use crate::services::supergraph::service::SupergraphCreator; pub(crate) use crate::services::supergraph::Request as SupergraphRequest; pub(crate) use crate::services::supergraph::Response as SupergraphResponse; -pub(crate) use crate::services::supergraph_service::SupergraphCreator; pub mod execution; -mod execution_service; pub(crate) mod external; pub(crate) mod layers; pub(crate) mod new_service; pub(crate) mod query_planner; pub mod router; -pub(crate) mod router_service; pub mod subgraph; pub(crate) mod subgraph_service; pub mod supergraph; -mod supergraph_service; pub mod transport; pub(crate) mod trust_dns_connector; diff --git a/apollo-router/src/services/query_batching/testdata/expected_good_response.json b/apollo-router/src/services/query_batching/testdata/expected_good_response.json index 9edfc22538..96ccfaaba0 100644 --- a/apollo-router/src/services/query_batching/testdata/expected_good_response.json +++ b/apollo-router/src/services/query_batching/testdata/expected_good_response.json @@ -3,7 +3,6 @@ "data": { "topProducts": [ { - "upc": "1", "name": "Table", "reviews": [ { @@ 
-29,7 +28,6 @@ ] }, { - "upc": "2", "name": "Couch", "reviews": [ { @@ -94,5 +92,50 @@ } ] } + }, + { + "data": { + "topProducts": [ + { + "upc": "1", + "name": "Table", + "reviews": [ + { + "id": "1", + "product": { + "name": "Table" + }, + "author": { + "name": "Ada Lovelace" + } + }, + { + "id": "4", + "product": { + "name": "Table" + }, + "author": { + "name": "Alan Turing" + } + } + ] + }, + { + "upc": "2", + "name": "Couch", + "reviews": [ + { + "id": "2", + "product": { + "name": "Couch" + }, + "author": { + "name": "Ada Lovelace" + } + } + ] + } + ] + } } ] diff --git a/apollo-router/src/services/router.rs b/apollo-router/src/services/router.rs index d259a45799..367ba70429 100644 --- a/apollo-router/src/services/router.rs +++ b/apollo-router/src/services/router.rs @@ -17,8 +17,8 @@ use serde_json_bytes::Value; use static_assertions::assert_impl_all; use tower::BoxError; -use super::router_service::MULTIPART_DEFER_HEADER_VALUE; -use super::router_service::MULTIPART_SUBSCRIPTION_HEADER_VALUE; +use self::service::MULTIPART_DEFER_HEADER_VALUE; +use self::service::MULTIPART_SUBSCRIPTION_HEADER_VALUE; use super::supergraph; use crate::graphql; use crate::http_ext::header_map; @@ -33,6 +33,10 @@ pub type ServiceResult = Result; pub type Body = hyper::Body; pub type Error = hyper::Error; +pub(crate) mod service; +#[cfg(test)] +mod tests; + assert_impl_all!(Request: Send); /// Represents the router processing step of the processing pipeline. 
/// diff --git a/apollo-router/src/services/router_service.rs b/apollo-router/src/services/router/service.rs similarity index 59% rename from apollo-router/src/services/router_service.rs rename to apollo-router/src/services/router/service.rs index 26601e163b..8f6ce00016 100644 --- a/apollo-router/src/services/router_service.rs +++ b/apollo-router/src/services/router/service.rs @@ -34,22 +34,7 @@ use tower::ServiceExt; use tower_service::Service; use tracing::Instrument; -use super::layers::apq::APQLayer; -use super::layers::content_negotiation; -use super::layers::query_analysis::QueryAnalysisLayer; -use super::layers::static_page::StaticPageLayer; -use super::new_service::ServiceFactory; -use super::router; -use super::router::ClientRequestAccepts; -#[cfg(test)] -use super::supergraph; -use super::HasPlugins; -#[cfg(test)] -use super::HasSchema; -use super::SupergraphCreator; -use super::APPLICATION_JSON_HEADER_VALUE; -use super::MULTIPART_DEFER_CONTENT_TYPE; -use super::MULTIPART_SUBSCRIPTION_CONTENT_TYPE; +use super::ClientRequestAccepts; use crate::cache::DeduplicatingCache; use crate::configuration::Batching; use crate::configuration::BatchingMode; @@ -62,12 +47,27 @@ use crate::protocols::multipart::ProtocolMode; use crate::query_planner::QueryPlanResult; use crate::query_planner::WarmUpCachingQueryKey; use crate::router_factory::RouterFactory; +use crate::services::layers::apq::APQLayer; +use crate::services::layers::content_negotiation; use crate::services::layers::content_negotiation::GRAPHQL_JSON_RESPONSE_HEADER_VALUE; use crate::services::layers::persisted_queries::PersistedQueryLayer; +use crate::services::layers::query_analysis::QueryAnalysisLayer; +use crate::services::layers::static_page::StaticPageLayer; +use crate::services::new_service::ServiceFactory; +use crate::services::router; +#[cfg(test)] +use crate::services::supergraph; +use crate::services::HasPlugins; +#[cfg(test)] +use crate::services::HasSchema; use crate::services::RouterRequest; use 
crate::services::RouterResponse; +use crate::services::SupergraphCreator; use crate::services::SupergraphRequest; use crate::services::SupergraphResponse; +use crate::services::APPLICATION_JSON_HEADER_VALUE; +use crate::services::MULTIPART_DEFER_CONTENT_TYPE; +use crate::services::MULTIPART_SUBSCRIPTION_CONTENT_TYPE; use crate::Configuration; use crate::Context; use crate::Endpoint; @@ -402,13 +402,16 @@ impl RouterService { if results.len() == 1 { Ok(results.pop().expect("we should have at least one response")) } else { - let first = results.pop().expect("we should have at least one response"); + let mut results_it = results.into_iter(); + let first = results_it + .next() + .expect("we should have at least one response"); let (parts, body) = first.response.into_parts(); let context = first.context; let mut bytes = BytesMut::new(); bytes.put_u8(b'['); bytes.extend_from_slice(&hyper::body::to_bytes(body).await?); - for result in results { + for result in results_it { bytes.put(&b", "[..]); bytes.extend_from_slice(&hyper::body::to_bytes(result.response.into_body()).await?); } @@ -599,19 +602,22 @@ impl RouterService { } }; - let mut ok_results = graphql_requests?; + let ok_results = graphql_requests?; let mut results = Vec::with_capacity(ok_results.len()); - let first = ok_results - .pop() - .expect("We must have at least one response"); - let sg = http::Request::from_parts(parts, first); - if !ok_results.is_empty() { + if ok_results.len() > 1 { context .private_entries .lock() .insert(self.experimental_batching.clone()); } + + let mut ok_results_it = ok_results.into_iter(); + let first = ok_results_it + .next() + .expect("we should have at least one request"); + let sg = http::Request::from_parts(parts, first); + // Building up the batch of supergraph requests is tricky. // Firstly note that any http extensions are only propagated for the first request sent // through the pipeline. 
This is because there is simply no way to clone http @@ -626,7 +632,7 @@ impl RouterService { // would mean all the requests in a batch shared the same set of private entries and review // comments expressed the sentiment that this may be a bad thing...) // - for graphql_request in ok_results { + for graphql_request in ok_results_it { // XXX Lose http extensions, is that ok? let mut new = http_ext::clone_http_request(&sg); *new.body_mut() = graphql_request; @@ -673,7 +679,7 @@ struct TranslateError<'a> { } // Process the headers to make sure that `VARY` is set correctly -fn process_vary_header(headers: &mut HeaderMap) { +pub(crate) fn process_vary_header(headers: &mut HeaderMap) { if headers.get(VARY).is_none() { // We don't have a VARY header, add one with value "origin" headers.insert(VARY, ORIGIN_HEADER_VALUE.clone()); @@ -784,511 +790,3 @@ impl RouterCreator { self.supergraph_creator.planner() } } - -#[cfg(test)] -mod tests { - use http::Uri; - use mime::APPLICATION_JSON; - use serde_json_bytes::json; - - use super::*; - use crate::services::supergraph; - use crate::Context; - - // Test Vary processing - - #[test] - fn it_adds_default_with_value_origin_if_no_vary_header() { - let mut default_headers = HeaderMap::new(); - process_vary_header(&mut default_headers); - let vary_opt = default_headers.get(VARY); - assert!(vary_opt.is_some()); - let vary = vary_opt.expect("has a value"); - assert_eq!(vary, "origin"); - } - - #[test] - fn it_leaves_vary_alone_if_set() { - let mut default_headers = HeaderMap::new(); - default_headers.insert(VARY, HeaderValue::from_static("*")); - process_vary_header(&mut default_headers); - let vary_opt = default_headers.get(VARY); - assert!(vary_opt.is_some()); - let vary = vary_opt.expect("has a value"); - assert_eq!(vary, "*"); - } - - #[test] - fn it_leaves_varys_alone_if_there_are_more_than_one() { - let mut default_headers = HeaderMap::new(); - default_headers.insert(VARY, HeaderValue::from_static("one")); - 
default_headers.append(VARY, HeaderValue::from_static("two")); - process_vary_header(&mut default_headers); - let vary = default_headers.get_all(VARY); - assert_eq!(vary.iter().count(), 2); - for value in vary { - assert!(value == "one" || value == "two"); - } - } - - #[tokio::test] - async fn it_extracts_query_and_operation_name() { - let query = "query"; - let expected_query = query; - let operation_name = "operationName"; - let expected_operation_name = operation_name; - - let expected_response = graphql::Response::builder() - .data(json!({"response": "yay"})) - .build(); - - let mut router_service = super::from_supergraph_mock_callback(move |req| { - let example_response = expected_response.clone(); - - assert_eq!( - req.supergraph_request.body().query.as_deref().unwrap(), - expected_query - ); - assert_eq!( - req.supergraph_request - .body() - .operation_name - .as_deref() - .unwrap(), - expected_operation_name - ); - - Ok(SupergraphResponse::new_from_graphql_response( - example_response, - req.context, - )) - }) - .await; - - // get request - let get_request = supergraph::Request::builder() - .query(query) - .operation_name(operation_name) - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(Uri::from_static("/")) - .method(Method::GET) - .context(Context::new()) - .build() - .unwrap() - .try_into() - .unwrap(); - - router_service.call(get_request).await.unwrap(); - - // post request - let post_request = supergraph::Request::builder() - .query(query) - .operation_name(operation_name) - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(Uri::from_static("/")) - .method(Method::POST) - .context(Context::new()) - .build() - .unwrap(); - - router_service - .call(post_request.try_into().unwrap()) - .await - .unwrap(); - } - - #[tokio::test] - async fn it_fails_on_empty_query() { - let expected_error = "Must provide query string."; - - let router_service = from_supergraph_mock_callback(move |_req| unreachable!()).await; - - let request = 
SupergraphRequest::fake_builder() - .query("".to_string()) - .build() - .expect("expecting valid request") - .try_into() - .unwrap(); - - let response = router_service - .oneshot(request) - .await - .unwrap() - .into_graphql_response_stream() - .await - .next() - .await - .unwrap() - .unwrap(); - let actual_error = response.errors[0].message.clone(); - - assert_eq!(expected_error, actual_error); - assert!(response.errors[0].extensions.contains_key("code")); - } - - #[tokio::test] - async fn it_fails_on_no_query() { - let expected_error = "Must provide query string."; - - let router_service = from_supergraph_mock_callback(move |_req| unreachable!()).await; - - let request = SupergraphRequest::fake_builder() - .build() - .expect("expecting valid request") - .try_into() - .unwrap(); - - let response = router_service - .oneshot(request) - .await - .unwrap() - .into_graphql_response_stream() - .await - .next() - .await - .unwrap() - .unwrap(); - let actual_error = response.errors[0].message.clone(); - assert_eq!(expected_error, actual_error); - assert!(response.errors[0].extensions.contains_key("code")); - } - - #[tokio::test] - async fn test_experimental_http_max_request_bytes() { - /// Size of the JSON serialization of the request created by `fn canned_new` - /// in `apollo-router/src/services/supergraph.rs` - const CANNED_REQUEST_LEN: usize = 391; - - async fn with_config(experimental_http_max_request_bytes: usize) -> router::Response { - let http_request = supergraph::Request::canned_builder() - .build() - .unwrap() - .supergraph_request - .map(|body| { - let json_bytes = serde_json::to_vec(&body).unwrap(); - assert_eq!( - json_bytes.len(), - CANNED_REQUEST_LEN, - "The request generated by `fn canned_new` \ - in `apollo-router/src/services/supergraph.rs` has changed. \ - Please update `CANNED_REQUEST_LEN` accordingly." 
- ); - hyper::Body::from(json_bytes) - }); - let config = serde_json::json!({ - "limits": { - "experimental_http_max_request_bytes": experimental_http_max_request_bytes - } - }); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request just at (under) the limit - let response = with_config(CANNED_REQUEST_LEN).await.response; - assert_eq!(response.status(), http::StatusCode::OK); - - // Send a request just over the limit - let response = with_config(CANNED_REQUEST_LEN - 1).await.response; - assert_eq!(response.status(), http::StatusCode::PAYLOAD_TOO_LARGE); - } - - // Test query batching - - #[tokio::test] - async fn it_only_accepts_batch_http_link_mode_for_query_batch() { - let expected_response: serde_json::Value = serde_json::from_str(include_str!( - "query_batching/testdata/batching_not_enabled_response.json" - )) - .unwrap(); - - async fn with_config() -> router::Response { - let http_request = supergraph::Request::canned_builder() - .build() - .unwrap() - .supergraph_request - .map(|req: crate::request::Request| { - // Modify the request so that it is a valid array of requests. 
- let mut json_bytes = serde_json::to_vec(&req).unwrap(); - let mut result = vec![b'[']; - result.append(&mut json_bytes.clone()); - result.push(b','); - result.append(&mut json_bytes); - result.push(b']'); - hyper::Body::from(result) - }); - let config = serde_json::json!({}); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request - let response = with_config().await.response; - assert_eq!(response.status(), http::StatusCode::BAD_REQUEST); - let data: serde_json::Value = - serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) - .unwrap(); - assert_eq!(expected_response, data); - } - - #[tokio::test] - async fn it_processes_a_valid_query_batch() { - let expected_response: serde_json::Value = serde_json::from_str(include_str!( - "query_batching/testdata/expected_good_response.json" - )) - .unwrap(); - - async fn with_config() -> router::Response { - let http_request = supergraph::Request::canned_builder() - .build() - .unwrap() - .supergraph_request - .map(|req: crate::request::Request| { - // Modify the request so that it is a valid array of requests. 
- let mut json_bytes = serde_json::to_vec(&req).unwrap(); - let mut result = vec![b'[']; - result.append(&mut json_bytes.clone()); - result.push(b','); - result.append(&mut json_bytes); - result.push(b']'); - hyper::Body::from(result) - }); - let config = serde_json::json!({ - "experimental_batching": { - "enabled": true, - "mode" : "batch_http_link" - } - }); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request - let response = with_config().await.response; - assert_eq!(response.status(), http::StatusCode::OK); - let data: serde_json::Value = - serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) - .unwrap(); - assert_eq!(expected_response, data); - } - - #[tokio::test] - async fn it_will_not_process_a_query_batch_without_enablement() { - let expected_response: serde_json::Value = serde_json::from_str(include_str!( - "query_batching/testdata/batching_not_enabled_response.json" - )) - .unwrap(); - - async fn with_config() -> router::Response { - let http_request = supergraph::Request::canned_builder() - .build() - .unwrap() - .supergraph_request - .map(|req: crate::request::Request| { - // Modify the request so that it is a valid array of requests. 
- let mut json_bytes = serde_json::to_vec(&req).unwrap(); - let mut result = vec![b'[']; - result.append(&mut json_bytes.clone()); - result.push(b','); - result.append(&mut json_bytes); - result.push(b']'); - hyper::Body::from(result) - }); - let config = serde_json::json!({}); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request - let response = with_config().await.response; - assert_eq!(response.status(), http::StatusCode::BAD_REQUEST); - let data: serde_json::Value = - serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) - .unwrap(); - assert_eq!(expected_response, data); - } - - #[tokio::test] - async fn it_will_not_process_a_poorly_formatted_query_batch() { - let expected_response: serde_json::Value = serde_json::from_str(include_str!( - "query_batching/testdata/badly_formatted_batch_response.json" - )) - .unwrap(); - - async fn with_config() -> router::Response { - let http_request = supergraph::Request::canned_builder() - .build() - .unwrap() - .supergraph_request - .map(|req: crate::request::Request| { - // Modify the request so that it is a valid array of requests. 
- let mut json_bytes = serde_json::to_vec(&req).unwrap(); - let mut result = vec![b'[']; - result.append(&mut json_bytes.clone()); - result.push(b','); - result.append(&mut json_bytes); - // Deliberately omit the required trailing ] - hyper::Body::from(result) - }); - let config = serde_json::json!({ - "experimental_batching": { - "enabled": true, - "mode" : "batch_http_link" - } - }); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request - let response = with_config().await.response; - assert_eq!(response.status(), http::StatusCode::BAD_REQUEST); - let data: serde_json::Value = - serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) - .unwrap(); - assert_eq!(expected_response, data); - } - - #[tokio::test] - async fn it_will_process_a_non_batched_defered_query() { - let expected_response = "\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"data\":{\"topProducts\":[{\"upc\":\"1\",\"name\":\"Table\",\"reviews\":[{\"product\":{\"name\":\"Table\"},\"author\":{\"id\":\"1\",\"name\":\"Ada Lovelace\"}},{\"product\":{\"name\":\"Table\"},\"author\":{\"id\":\"2\",\"name\":\"Alan Turing\"}}]},{\"upc\":\"2\",\"name\":\"Couch\",\"reviews\":[{\"product\":{\"name\":\"Couch\"},\"author\":{\"id\":\"1\",\"name\":\"Ada Lovelace\"}}]}]},\"hasNext\":true}\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"hasNext\":false,\"incremental\":[{\"data\":{\"id\":\"1\"},\"path\":[\"topProducts\",0,\"reviews\",0]},{\"data\":{\"id\":\"4\"},\"path\":[\"topProducts\",0,\"reviews\",1]},{\"data\":{\"id\":\"2\"},\"path\":[\"topProducts\",1,\"reviews\",0]}]}\r\n--graphql--\r\n"; - async fn with_config() -> router::Response { - let query = " - query TopProducts($first: Int) { - topProducts(first: $first) { - upc - name - reviews { - ... 
@defer { - id - } - product { name } - author { id name } - } - } - } - "; - let http_request = supergraph::Request::canned_builder() - .header(http::header::ACCEPT, MULTIPART_DEFER_CONTENT_TYPE) - .query(query) - .build() - .unwrap() - .supergraph_request - .map(|req: crate::request::Request| { - let bytes = serde_json::to_vec(&req).unwrap(); - hyper::Body::from(bytes) - }); - let config = serde_json::json!({ - "experimental_batching": { - "enabled": true, - "mode" : "batch_http_link" - } - }); - crate::TestHarness::builder() - .configuration_json(config) - .unwrap() - .build_router() - .await - .unwrap() - .oneshot(router::Request::from(http_request)) - .await - .unwrap() - } - // Send a request - let response = with_config().await.response; - assert_eq!(response.status(), http::StatusCode::OK); - let bytes = hyper::body::to_bytes(response.into_body()).await.unwrap(); - let data = String::from_utf8_lossy(&bytes); - assert_eq!(expected_response, data); - } - - #[tokio::test] - async fn it_will_not_process_a_batched_deferred_query() { - let expected_response = "[\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"errors\":[{\"message\":\"Deferred responses and subscriptions aren't supported in batches\",\"extensions\":{\"code\":\"BATCHING_DEFER_UNSUPPORTED\"}}]}\r\n--graphql--\r\n, \r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"errors\":[{\"message\":\"Deferred responses and subscriptions aren't supported in batches\",\"extensions\":{\"code\":\"BATCHING_DEFER_UNSUPPORTED\"}}]}\r\n--graphql--\r\n]"; - - async fn with_config() -> router::Response { - let query = " - query TopProducts($first: Int) { - topProducts(first: $first) { - upc - name - reviews { - ... 
@defer {
-                                id
-                            }
-                            product { name }
-                            author { id name }
-                        }
-                    }
-                }
-                ";
-            let http_request = supergraph::Request::canned_builder()
-                .header(http::header::ACCEPT, MULTIPART_DEFER_CONTENT_TYPE)
-                .query(query)
-                .build()
-                .unwrap()
-                .supergraph_request
-                .map(|req: crate::request::Request| {
-                    // Modify the request so that it is a valid array of requests.
-                    let mut json_bytes = serde_json::to_vec(&req).unwrap();
-                    let mut result = vec![b'['];
-                    result.append(&mut json_bytes.clone());
-                    result.push(b',');
-                    result.append(&mut json_bytes);
-                    result.push(b']');
-                    hyper::Body::from(result)
-                });
-            let config = serde_json::json!({
-                "experimental_batching": {
-                    "enabled": true,
-                    "mode" : "batch_http_link"
-                }
-            });
-            crate::TestHarness::builder()
-                .configuration_json(config)
-                .unwrap()
-                .build_router()
-                .await
-                .unwrap()
-                .oneshot(router::Request::from(http_request))
-                .await
-                .unwrap()
-        }
-        // Send a request
-        let response = with_config().await.response;
-        assert_eq!(response.status(), http::StatusCode::NOT_ACCEPTABLE);
-        let bytes = hyper::body::to_bytes(response.into_body()).await.unwrap();
-        let data = String::from_utf8_lossy(&bytes);
-        assert_eq!(expected_response, data);
-    }
-}
diff --git a/apollo-router/src/services/router/snapshots/apollo_router__services__router__tests__escaped_quotes_in_string_literal.snap b/apollo-router/src/services/router/snapshots/apollo_router__services__router__tests__escaped_quotes_in_string_literal.snap
new file mode 100644
index 0000000000..9fcfad90a4
--- /dev/null
+++ b/apollo-router/src/services/router/snapshots/apollo_router__services__router__tests__escaped_quotes_in_string_literal.snap
@@ -0,0 +1,48 @@
+---
+source: apollo-router/src/services/router/tests.rs
+expression: "(graphql_response, &subgraph_query_log)"
+---
+(
+    Response {
+        label: None,
+        data: Some(
+            Object({
+                "topProducts": Array([
+                    Object({
+                        "name": String(
+                            "Table",
+                        ),
+                        "reviewsForAuthor": Null,
+                    }),
+                    Object({
+                        "name": String(
+                            "Couch",
+                        ),
+                        "reviewsForAuthor": Null,
+                    }),
+                ]),
+            }),
+        ),
+        path: None,
+        errors: [],
+        extensions: {},
+        has_next: None,
+        subscribed: None,
+        created_at: None,
+        incremental: [],
+    },
+    [
+        (
+            "products",
+            Some(
+                "query TopProducts__products__0($first:Int){topProducts(first:$first){__typename upc name}}",
+            ),
+        ),
+        (
+            "reviews",
+            Some(
+                "query TopProducts__reviews__1($representations:[_Any!]!){_entities(representations:$representations){...on Product{reviewsForAuthor(authorID:\"\\\"1\\\"\"){body}}}}",
+            ),
+        ),
+    ],
+)
diff --git a/apollo-router/src/services/router/tests.rs b/apollo-router/src/services/router/tests.rs
new file mode 100644
index 0000000000..8a44206af1
--- /dev/null
+++ b/apollo-router/src/services/router/tests.rs
@@ -0,0 +1,598 @@
+use std::sync::Arc;
+use std::sync::Mutex;
+
+use futures::stream::StreamExt;
+use http::header::CONTENT_TYPE;
+use http::header::VARY;
+use http::HeaderMap;
+use http::HeaderValue;
+use http::Method;
+use http::Uri;
+use mime::APPLICATION_JSON;
+use serde_json_bytes::json;
+use tower::ServiceExt;
+use tower_service::Service;
+
+use crate::graphql;
+use crate::services::router;
+use crate::services::router::service::from_supergraph_mock_callback;
+use crate::services::router::service::process_vary_header;
+use crate::services::subgraph;
+use crate::services::supergraph;
+use crate::services::SupergraphRequest;
+use crate::services::SupergraphResponse;
+use crate::services::MULTIPART_DEFER_CONTENT_TYPE;
+use crate::Context;
+
+// Test Vary processing
+
+#[test]
+fn it_adds_default_with_value_origin_if_no_vary_header() {
+    let mut default_headers = HeaderMap::new();
+    process_vary_header(&mut default_headers);
+    let vary_opt = default_headers.get(VARY);
+    assert!(vary_opt.is_some());
+    let vary = vary_opt.expect("has a value");
+    assert_eq!(vary, "origin");
+}
+
+#[test]
+fn it_leaves_vary_alone_if_set() {
+    let mut default_headers = HeaderMap::new();
+    default_headers.insert(VARY, HeaderValue::from_static("*"));
+
process_vary_header(&mut default_headers); + let vary_opt = default_headers.get(VARY); + assert!(vary_opt.is_some()); + let vary = vary_opt.expect("has a value"); + assert_eq!(vary, "*"); +} + +#[test] +fn it_leaves_varys_alone_if_there_are_more_than_one() { + let mut default_headers = HeaderMap::new(); + default_headers.insert(VARY, HeaderValue::from_static("one")); + default_headers.append(VARY, HeaderValue::from_static("two")); + process_vary_header(&mut default_headers); + let vary = default_headers.get_all(VARY); + assert_eq!(vary.iter().count(), 2); + for value in vary { + assert!(value == "one" || value == "two"); + } +} + +#[tokio::test] +async fn it_extracts_query_and_operation_name() { + let query = "query"; + let expected_query = query; + let operation_name = "operationName"; + let expected_operation_name = operation_name; + + let expected_response = graphql::Response::builder() + .data(json!({"response": "yay"})) + .build(); + + let mut router_service = from_supergraph_mock_callback(move |req| { + let example_response = expected_response.clone(); + + assert_eq!( + req.supergraph_request.body().query.as_deref().unwrap(), + expected_query + ); + assert_eq!( + req.supergraph_request + .body() + .operation_name + .as_deref() + .unwrap(), + expected_operation_name + ); + + Ok(SupergraphResponse::new_from_graphql_response( + example_response, + req.context, + )) + }) + .await; + + // get request + let get_request = supergraph::Request::builder() + .query(query) + .operation_name(operation_name) + .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) + .uri(Uri::from_static("/")) + .method(Method::GET) + .context(Context::new()) + .build() + .unwrap() + .try_into() + .unwrap(); + + router_service.call(get_request).await.unwrap(); + + // post request + let post_request = supergraph::Request::builder() + .query(query) + .operation_name(operation_name) + .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) + .uri(Uri::from_static("/")) + .method(Method::POST) + 
.context(Context::new()) + .build() + .unwrap(); + + router_service + .call(post_request.try_into().unwrap()) + .await + .unwrap(); +} + +#[tokio::test] +async fn it_fails_on_empty_query() { + let expected_error = "Must provide query string."; + + let router_service = from_supergraph_mock_callback(move |_req| unreachable!()).await; + + let request = SupergraphRequest::fake_builder() + .query("".to_string()) + .build() + .expect("expecting valid request") + .try_into() + .unwrap(); + + let response = router_service + .oneshot(request) + .await + .unwrap() + .into_graphql_response_stream() + .await + .next() + .await + .unwrap() + .unwrap(); + let actual_error = response.errors[0].message.clone(); + + assert_eq!(expected_error, actual_error); + assert!(response.errors[0].extensions.contains_key("code")); +} + +#[tokio::test] +async fn it_fails_on_no_query() { + let expected_error = "Must provide query string."; + + let router_service = from_supergraph_mock_callback(move |_req| unreachable!()).await; + + let request = SupergraphRequest::fake_builder() + .build() + .expect("expecting valid request") + .try_into() + .unwrap(); + + let response = router_service + .oneshot(request) + .await + .unwrap() + .into_graphql_response_stream() + .await + .next() + .await + .unwrap() + .unwrap(); + let actual_error = response.errors[0].message.clone(); + assert_eq!(expected_error, actual_error); + assert!(response.errors[0].extensions.contains_key("code")); +} + +#[tokio::test] +async fn test_experimental_http_max_request_bytes() { + /// Size of the JSON serialization of the request created by `fn canned_new` + /// in `apollo-router/src/services/supergraph.rs` + const CANNED_REQUEST_LEN: usize = 391; + + async fn with_config(experimental_http_max_request_bytes: usize) -> router::Response { + let http_request = supergraph::Request::canned_builder() + .build() + .unwrap() + .supergraph_request + .map(|body| { + let json_bytes = serde_json::to_vec(&body).unwrap(); + assert_eq!( + 
json_bytes.len(), + CANNED_REQUEST_LEN, + "The request generated by `fn canned_new` \ + in `apollo-router/src/services/supergraph.rs` has changed. \ + Please update `CANNED_REQUEST_LEN` accordingly." + ); + hyper::Body::from(json_bytes) + }); + let config = serde_json::json!({ + "limits": { + "experimental_http_max_request_bytes": experimental_http_max_request_bytes + } + }); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request just at (under) the limit + let response = with_config(CANNED_REQUEST_LEN).await.response; + assert_eq!(response.status(), http::StatusCode::OK); + + // Send a request just over the limit + let response = with_config(CANNED_REQUEST_LEN - 1).await.response; + assert_eq!(response.status(), http::StatusCode::PAYLOAD_TOO_LARGE); +} + +// Test query batching + +#[tokio::test] +async fn it_only_accepts_batch_http_link_mode_for_query_batch() { + let expected_response: serde_json::Value = serde_json::from_str(include_str!( + "../query_batching/testdata/batching_not_enabled_response.json" + )) + .unwrap(); + + async fn with_config() -> router::Response { + let http_request = supergraph::Request::canned_builder() + .build() + .unwrap() + .supergraph_request + .map(|req: crate::request::Request| { + // Modify the request so that it is a valid array of requests. 
+ let mut json_bytes = serde_json::to_vec(&req).unwrap(); + let mut result = vec![b'[']; + result.append(&mut json_bytes.clone()); + result.push(b','); + result.append(&mut json_bytes); + result.push(b']'); + hyper::Body::from(result) + }); + let config = serde_json::json!({}); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request + let response = with_config().await.response; + assert_eq!(response.status(), http::StatusCode::BAD_REQUEST); + let data: serde_json::Value = + serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) + .unwrap(); + assert_eq!(expected_response, data); +} + +#[tokio::test] +async fn it_processes_a_valid_query_batch() { + let expected_response: serde_json::Value = serde_json::from_str(include_str!( + "../query_batching/testdata/expected_good_response.json" + )) + .unwrap(); + + async fn with_config() -> router::Response { + let http_request = supergraph::Request::canned_builder() + .build() + .unwrap() + .supergraph_request + .map(|req_2: crate::request::Request| { + // Create clones of our standard query and update it to have 3 unique queries + let mut req_1 = req_2.clone(); + let mut req_3 = req_2.clone(); + req_1.query = req_2.query.clone().map(|x| x.replace("upc\n", "")); + req_3.query = req_2.query.clone().map(|x| x.replace("id name", "name")); + + // Modify the request so that it is a valid array of 3 requests. 
+ let mut json_bytes_1 = serde_json::to_vec(&req_1).unwrap(); + let mut json_bytes_2 = serde_json::to_vec(&req_2).unwrap(); + let mut json_bytes_3 = serde_json::to_vec(&req_3).unwrap(); + let mut result = vec![b'[']; + result.append(&mut json_bytes_1); + result.push(b','); + result.append(&mut json_bytes_2); + result.push(b','); + result.append(&mut json_bytes_3); + result.push(b']'); + hyper::Body::from(result) + }); + let config = serde_json::json!({ + "experimental_batching": { + "enabled": true, + "mode" : "batch_http_link" + } + }); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request + let response = with_config().await.response; + assert_eq!(response.status(), http::StatusCode::OK); + let data: serde_json::Value = + serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) + .unwrap(); + assert_eq!(expected_response, data); +} + +#[tokio::test] +async fn it_will_not_process_a_query_batch_without_enablement() { + let expected_response: serde_json::Value = serde_json::from_str(include_str!( + "../query_batching/testdata/batching_not_enabled_response.json" + )) + .unwrap(); + + async fn with_config() -> router::Response { + let http_request = supergraph::Request::canned_builder() + .build() + .unwrap() + .supergraph_request + .map(|req: crate::request::Request| { + // Modify the request so that it is a valid array of requests. 
+ let mut json_bytes = serde_json::to_vec(&req).unwrap(); + let mut result = vec![b'[']; + result.append(&mut json_bytes.clone()); + result.push(b','); + result.append(&mut json_bytes); + result.push(b']'); + hyper::Body::from(result) + }); + let config = serde_json::json!({}); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request + let response = with_config().await.response; + assert_eq!(response.status(), http::StatusCode::BAD_REQUEST); + let data: serde_json::Value = + serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap()) + .unwrap(); + assert_eq!(expected_response, data); +} + +#[tokio::test] +async fn it_will_not_process_a_poorly_formatted_query_batch() { + let expected_response: serde_json::Value = serde_json::from_str(include_str!( + "../query_batching/testdata/badly_formatted_batch_response.json" + )) + .unwrap(); + + async fn with_config() -> router::Response { + let http_request = supergraph::Request::canned_builder() + .build() + .unwrap() + .supergraph_request + .map(|req: crate::request::Request| { + // Modify the request so that it is a valid array of requests. 
+                let mut json_bytes = serde_json::to_vec(&req).unwrap();
+                let mut result = vec![b'['];
+                result.append(&mut json_bytes.clone());
+                result.push(b',');
+                result.append(&mut json_bytes);
+                // Deliberately omit the required trailing ]
+                hyper::Body::from(result)
+            });
+        let config = serde_json::json!({
+            "experimental_batching": {
+                "enabled": true,
+                "mode" : "batch_http_link"
+            }
+        });
+        crate::TestHarness::builder()
+            .configuration_json(config)
+            .unwrap()
+            .build_router()
+            .await
+            .unwrap()
+            .oneshot(router::Request::from(http_request))
+            .await
+            .unwrap()
+    }
+    // Send a request
+    let response = with_config().await.response;
+    assert_eq!(response.status(), http::StatusCode::BAD_REQUEST);
+    let data: serde_json::Value =
+        serde_json::from_slice(&hyper::body::to_bytes(response.into_body()).await.unwrap())
+            .unwrap();
+    assert_eq!(expected_response, data);
+}
+
+#[tokio::test]
+async fn it_will_process_a_non_batched_deferred_query() {
+    let expected_response = "\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"data\":{\"topProducts\":[{\"upc\":\"1\",\"name\":\"Table\",\"reviews\":[{\"product\":{\"name\":\"Table\"},\"author\":{\"id\":\"1\",\"name\":\"Ada Lovelace\"}},{\"product\":{\"name\":\"Table\"},\"author\":{\"id\":\"2\",\"name\":\"Alan Turing\"}}]},{\"upc\":\"2\",\"name\":\"Couch\",\"reviews\":[{\"product\":{\"name\":\"Couch\"},\"author\":{\"id\":\"1\",\"name\":\"Ada Lovelace\"}}]}]},\"hasNext\":true}\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"hasNext\":false,\"incremental\":[{\"data\":{\"id\":\"1\"},\"path\":[\"topProducts\",0,\"reviews\",0]},{\"data\":{\"id\":\"4\"},\"path\":[\"topProducts\",0,\"reviews\",1]},{\"data\":{\"id\":\"2\"},\"path\":[\"topProducts\",1,\"reviews\",0]}]}\r\n--graphql--\r\n";
+    async fn with_config() -> router::Response {
+        let query = "
+            query TopProducts($first: Int) {
+                topProducts(first: $first) {
+                    upc
+                    name
+                    reviews {
+                        ...
@defer { + id + } + product { name } + author { id name } + } + } + } + "; + let http_request = supergraph::Request::canned_builder() + .header(http::header::ACCEPT, MULTIPART_DEFER_CONTENT_TYPE) + .query(query) + .build() + .unwrap() + .supergraph_request + .map(|req: crate::request::Request| { + let bytes = serde_json::to_vec(&req).unwrap(); + hyper::Body::from(bytes) + }); + let config = serde_json::json!({ + "experimental_batching": { + "enabled": true, + "mode" : "batch_http_link" + } + }); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request + let response = with_config().await.response; + assert_eq!(response.status(), http::StatusCode::OK); + let bytes = hyper::body::to_bytes(response.into_body()).await.unwrap(); + let data = String::from_utf8_lossy(&bytes); + assert_eq!(expected_response, data); +} + +#[tokio::test] +async fn it_will_not_process_a_batched_deferred_query() { + let expected_response = "[\r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"errors\":[{\"message\":\"Deferred responses and subscriptions aren't supported in batches\",\"extensions\":{\"code\":\"BATCHING_DEFER_UNSUPPORTED\"}}]}\r\n--graphql--\r\n, \r\n--graphql\r\ncontent-type: application/json\r\n\r\n{\"errors\":[{\"message\":\"Deferred responses and subscriptions aren't supported in batches\",\"extensions\":{\"code\":\"BATCHING_DEFER_UNSUPPORTED\"}}]}\r\n--graphql--\r\n]"; + + async fn with_config() -> router::Response { + let query = " + query TopProducts($first: Int) { + topProducts(first: $first) { + upc + name + reviews { + ... 
@defer { + id + } + product { name } + author { id name } + } + } + } + "; + let http_request = supergraph::Request::canned_builder() + .header(http::header::ACCEPT, MULTIPART_DEFER_CONTENT_TYPE) + .query(query) + .build() + .unwrap() + .supergraph_request + .map(|req: crate::request::Request| { + // Modify the request so that it is a valid array of requests. + let mut json_bytes = serde_json::to_vec(&req).unwrap(); + let mut result = vec![b'[']; + result.append(&mut json_bytes.clone()); + result.push(b','); + result.append(&mut json_bytes); + result.push(b']'); + hyper::Body::from(result) + }); + let config = serde_json::json!({ + "experimental_batching": { + "enabled": true, + "mode" : "batch_http_link" + } + }); + crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .build_router() + .await + .unwrap() + .oneshot(router::Request::from(http_request)) + .await + .unwrap() + } + // Send a request + let response = with_config().await.response; + assert_eq!(response.status(), http::StatusCode::NOT_ACCEPTABLE); + let bytes = hyper::body::to_bytes(response.into_body()).await.unwrap(); + let data = String::from_utf8_lossy(&bytes); + assert_eq!(expected_response, data); +} + +/// +#[tokio::test] +async fn escaped_quotes_in_string_literal() { + let query = r#" + query TopProducts($first: Int) { + topProducts(first: $first) { + name + reviewsForAuthor(authorID: "\"1\"") { + body + } + } + } + "#; + let request = supergraph::Request::fake_builder() + .query(query) + .variable("first", 2) + .build() + .unwrap(); + let config = serde_json::json!({ + "include_subgraph_errors": {"all": true}, + }); + let subgraph_query_log = Arc::new(Mutex::new(Vec::new())); + let subgraph_query_log_2 = subgraph_query_log.clone(); + let mut response = crate::TestHarness::builder() + .configuration_json(config) + .unwrap() + .subgraph_hook(move |subgraph_name, service| { + let is_reviews = subgraph_name == "reviews"; + let subgraph_name = subgraph_name.to_owned(); + let 
subgraph_query_log_3 = subgraph_query_log_2.clone();
+            service
+                .map_request(move |request: subgraph::Request| {
+                    subgraph_query_log_3.lock().unwrap().push((
+                        subgraph_name.clone(),
+                        request.subgraph_request.body().query.clone(),
+                    ));
+                    request
+                })
+                .map_response(move |mut response| {
+                    if is_reviews {
+                        // Replace "couldn't find mock for query" error with empty data
+                        let graphql_response = response.response.body_mut();
+                        graphql_response.errors.clear();
+                        graphql_response.data = Some(serde_json_bytes::json!({
+                            "_entities": {"reviews": []},
+                        }));
+                    }
+                    response
+                })
+                .boxed()
+        })
+        .build_supergraph()
+        .await
+        .unwrap()
+        .oneshot(request)
+        .await
+        .unwrap();
+    let graphql_response = response.next_response().await.unwrap();
+    let subgraph_query_log = subgraph_query_log.lock().unwrap();
+    insta::assert_debug_snapshot!((graphql_response, &subgraph_query_log));
+    let subgraph_query = subgraph_query_log[1].1.as_ref().unwrap();
+
+    // The string literal made it through unchanged:
+    assert!(subgraph_query.contains(r#"reviewsForAuthor(authorID:"\"1\"")"#));
+}
diff --git a/apollo-router/src/services/subgraph.rs b/apollo-router/src/services/subgraph.rs
index e926bda188..3d375cf439 100644
--- a/apollo-router/src/services/subgraph.rs
+++ b/apollo-router/src/services/subgraph.rs
@@ -2,7 +2,6 @@ use std::sync::Arc;
 
-use futures::channel::mpsc;
 use http::StatusCode;
 use http::Version;
 use serde_json_bytes::ByteString;
@@ -12,6 +11,7 @@ use sha2::Digest;
 use sha2::Sha256;
 use static_assertions::assert_impl_all;
 use tokio::sync::broadcast;
+use tokio::sync::mpsc;
 use tower::BoxError;
 
 use crate::error::Error;
diff --git a/apollo-router/src/services/subgraph_service.rs b/apollo-router/src/services/subgraph_service.rs
index 5b2c390e24..2286481d02 100644
--- a/apollo-router/src/services/subgraph_service.rs
+++ b/apollo-router/src/services/subgraph_service.rs
@@ -360,13 +360,12 @@ impl tower::Service<SubgraphRequest> for SubgraphService {
             .await?;
 
             // If it existed before just send the
right stream (handle) and early return - let mut stream_tx = - request.subscription_stream.clone().ok_or_else(|| { - FetchError::SubrequestWsError { - service: service_name.clone(), - reason: "cannot get the callback stream".to_string(), - } - })?; + let stream_tx = request.subscription_stream.clone().ok_or_else(|| { + FetchError::SubrequestWsError { + service: service_name.clone(), + reason: "cannot get the callback stream".to_string(), + } + })?; stream_tx.send(handle.into_stream()).await?; tracing::info!( monotonic_counter.apollo.router.operations.subscriptions = 1u64, @@ -514,7 +513,7 @@ async fn call_websocket( subscription_stream, .. } = request; - let mut subscription_stream_tx = + let subscription_stream_tx = subscription_stream.ok_or_else(|| FetchError::SubrequestWsError { service: service_name.clone(), reason: "cannot get the websocket stream".to_string(), @@ -1175,7 +1174,6 @@ mod tests { use axum::Router; use axum::Server; use bytes::Buf; - use futures::channel::mpsc; use futures::StreamExt; use http::header::HOST; use http::StatusCode; @@ -1191,6 +1189,8 @@ mod tests { use rustls::ServerConfig; use serde_json_bytes::ByteString; use serde_json_bytes::Value; + use tokio::sync::mpsc; + use tokio_stream::wrappers::ReceiverStream; use tower::service_fn; use tower::ServiceExt; use url::Url; @@ -1847,6 +1847,25 @@ mod tests { } } + fn supergraph_request(query: &str) -> Arc> { + Arc::new( + http::Request::builder() + .header(HOST, "host") + .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) + .body(Request::builder().query(query).build()) + .expect("expecting valid request"), + ) + } + + fn subgraph_http_request(uri: Uri, query: &str) -> http::Request { + http::Request::builder() + .header(HOST, "rhost") + .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) + .uri(uri) + .body(Request::builder().query(query).build()) + .expect("expecting valid request") + } + #[tokio::test(flavor = "multi_thread")] async fn test_subgraph_service_callback() { let _ = 
SUBSCRIPTION_CALLBACK_HMAC_KEY.set(String::from("TESTEST")); @@ -1868,33 +1887,20 @@ mod tests { let (tx, _rx) = mpsc::channel(2); let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - operation_kind: OperationKind::Subscription, - context: Context::new(), - subscription_stream: Some(tx), - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request( + "subscription {\n userWasCreated {\n username\n }\n}", + )) + .subgraph_request(subgraph_http_request( + url, + "subscription {\n userWasCreated {\n username\n }\n}", + )) + .operation_kind(OperationKind::Subscription) + .subscription_stream(tx) + .context(Context::new()) + .build(), + ) .await .unwrap(); response.response.body().errors.iter().for_each(|e| { @@ -1924,25 +1930,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - 
.body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert!(response.response.body().errors.is_empty()); @@ -1968,25 +1963,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert!(response.response.body().errors.is_empty()); @@ -2012,25 +1996,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - 
subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2061,25 +2034,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2114,25 +2076,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) 
- .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2162,42 +2113,30 @@ mod tests { Notify::builder().build(), ) .expect("can create a SubgraphService"); - let (tx, mut rx) = mpsc::channel(2); + let (tx, rx) = mpsc::channel(2); + let mut rx_stream = ReceiverStream::new(rx); let url = Uri::from_str(&format!("ws://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - operation_kind: OperationKind::Subscription, - context: Context::new(), - subscription_stream: Some(tx), - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request( + "subscription {\n userWasCreated {\n username\n }\n}", + )) + .subgraph_request(subgraph_http_request( + url, + "subscription {\n userWasCreated {\n username\n }\n}", + 
)) + .operation_kind(OperationKind::Subscription) + .subscription_stream(tx) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert!(response.response.body().errors.is_empty()); - let mut gql_stream = rx.next().await.unwrap(); + let mut gql_stream = rx_stream.next().await.unwrap(); let message = gql_stream.next().await.unwrap(); assert_eq!( message, @@ -2230,33 +2169,20 @@ mod tests { let url = Uri::from_str(&format!("ws://{socket_addr}")).unwrap(); let err = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body( - Request::builder() - .query("subscription {\n userWasCreated {\n username\n }\n}") - .build(), - ) - .expect("expecting valid request"), - operation_kind: OperationKind::Subscription, - context: Context::new(), - subscription_stream: Some(tx), - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request( + "subscription {\n userWasCreated {\n username\n }\n}", + )) + .subgraph_request(subgraph_http_request( + url, + "subscription {\n userWasCreated {\n username\n }\n}", + )) + .operation_kind(OperationKind::Subscription) + .subscription_stream(tx) + .context(Context::new()) + .build(), + ) .await .unwrap_err(); assert_eq!( @@ -2285,25 +2211,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - 
.body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2337,25 +2252,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2385,13 +2289,7 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - 
.header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query".to_string()).build()) - .expect("expecting valid request"), - ), + supergraph_request: supergraph_request("query"), subgraph_request: http::Request::builder() .header(HOST, "rhost") .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) @@ -2435,25 +2333,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!( @@ -2485,25 +2372,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - 
operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2541,25 +2417,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2593,25 +2458,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - 
.expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2644,25 +2498,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2695,25 +2538,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - 
.body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2746,25 +2578,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let resp = subgraph_service .clone() - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2844,25 +2665,14 @@ mod tests { let url = Uri::from_str(&format!("https://localhost:{}", socket_addr.port())).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, 
APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); @@ -2900,25 +2710,14 @@ mod tests { let url = Uri::from_str(&format!("https://localhost:{}", socket_addr.port())).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!(response.response.body().data, Some(Value::Null)); @@ -3009,25 +2808,14 @@ mod tests { let url = Uri::from_str(&format!("https://localhost:{}", socket_addr.port())).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - 
.expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert_eq!(response.response.body().data, Some(Value::Null)); @@ -3079,25 +2867,14 @@ mod tests { let url = Uri::from_str(&format!("http://{socket_addr}")).unwrap(); let response = subgraph_service - .oneshot(SubgraphRequest { - supergraph_request: Arc::new( - http::Request::builder() - .header(HOST, "host") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - ), - subgraph_request: http::Request::builder() - .header(HOST, "rhost") - .header(CONTENT_TYPE, APPLICATION_JSON.essence_str()) - .uri(url) - .body(Request::builder().query("query").build()) - .expect("expecting valid request"), - operation_kind: OperationKind::Query, - context: Context::new(), - subscription_stream: None, - connection_closed_signal: None, - }) + .oneshot( + SubgraphRequest::builder() + .supergraph_request(supergraph_request("query")) + .subgraph_request(subgraph_http_request(url, "query")) + .operation_kind(OperationKind::Query) + .context(Context::new()) + .build(), + ) .await .unwrap(); assert!(response.response.body().errors.is_empty()); diff --git a/apollo-router/src/services/supergraph.rs b/apollo-router/src/services/supergraph.rs index 8647d0cdc3..04cd955f76 100644 --- a/apollo-router/src/services/supergraph.rs +++ 
b/apollo-router/src/services/supergraph.rs
@@ -24,6 +24,10 @@ use crate::http_ext::TryIntoHeaderValue;
 use crate::json_ext::Path;
 use crate::Context;
 
+pub(crate) mod service;
+#[cfg(test)]
+mod tests;
+
 pub type BoxService = tower::util::BoxService<Request, Response, BoxError>;
 pub type BoxCloneService = tower::util::BoxCloneService<Request, Response, BoxError>;
 pub type ServiceResult = Result<Response, BoxError>;
diff --git a/apollo-router/src/services/supergraph/service.rs b/apollo-router/src/services/supergraph/service.rs
new file mode 100644
index 0000000000..b52b33947a
--- /dev/null
+++ b/apollo-router/src/services/supergraph/service.rs
@@ -0,0 +1,845 @@
+//! Implements the router phase of the request lifecycle.
+
+use std::sync::atomic::Ordering;
+use std::sync::Arc;
+use std::task::Poll;
+use std::time::Instant;
+
+use futures::future::BoxFuture;
+use futures::stream::StreamExt;
+use futures::TryFutureExt;
+use http::StatusCode;
+use indexmap::IndexMap;
+use router_bridge::planner::Planner;
+use router_bridge::planner::UsageReporting;
+use tokio::sync::mpsc;
+use tokio::sync::mpsc::error::SendError;
+use tokio_stream::wrappers::ReceiverStream;
+use tower::BoxError;
+use tower::Layer;
+use tower::ServiceBuilder;
+use tower::ServiceExt;
+use tower_service::Service;
+use tracing::field;
+use tracing::Span;
+use tracing_futures::Instrument;
+
+use crate::configuration::Batching;
+use crate::context::OPERATION_NAME;
+use crate::error::CacheResolverError;
+use crate::graphql;
+use crate::graphql::IntoGraphQLErrors;
+use crate::graphql::Response;
+use crate::notification::HandleStream;
+use crate::plugin::DynPlugin;
+use crate::plugins::subscription::SubscriptionConfig;
+use crate::plugins::telemetry::tracing::apollo_telemetry::APOLLO_PRIVATE_DURATION_NS;
+use crate::plugins::telemetry::Telemetry;
+use crate::plugins::telemetry::LOGGING_DISPLAY_BODY;
+use crate::plugins::traffic_shaping::TrafficShaping;
+use crate::plugins::traffic_shaping::APOLLO_TRAFFIC_SHAPING;
+use crate::query_planner::subscription::SubscriptionHandle;
+use
crate::query_planner::subscription::OPENED_SUBSCRIPTIONS;
+use crate::query_planner::subscription::SUBSCRIPTION_EVENT_SPAN_NAME;
+use crate::query_planner::BridgeQueryPlanner;
+use crate::query_planner::CachingQueryPlanner;
+use crate::query_planner::QueryPlanResult;
+use crate::query_planner::WarmUpCachingQueryKey;
+use crate::router_factory::create_plugins;
+use crate::router_factory::create_subgraph_services;
+use crate::services::execution::QueryPlan;
+use crate::services::layers::allow_only_http_post_mutations::AllowOnlyHttpPostMutationsLayer;
+use crate::services::layers::content_negotiation;
+use crate::services::layers::persisted_queries::PersistedQueryLayer;
+use crate::services::layers::query_analysis::ParsedDocument;
+use crate::services::layers::query_analysis::QueryAnalysisLayer;
+use crate::services::new_service::ServiceFactory;
+use crate::services::query_planner;
+use crate::services::router::ClientRequestAccepts;
+use crate::services::subgraph_service::MakeSubgraphService;
+use crate::services::subgraph_service::SubgraphServiceFactory;
+use crate::services::supergraph;
+use crate::services::ExecutionRequest;
+use crate::services::ExecutionResponse;
+use crate::services::ExecutionServiceFactory;
+use crate::services::QueryPlannerContent;
+use crate::services::QueryPlannerResponse;
+use crate::services::SupergraphRequest;
+use crate::services::SupergraphResponse;
+use crate::spec::Query;
+use crate::spec::Schema;
+use crate::Configuration;
+use crate::Context;
+use crate::Notify;
+
+pub(crate) const QUERY_PLANNING_SPAN_NAME: &str = "query_planning";
+
+/// An [`IndexMap`] of available plugins.
+pub(crate) type Plugins = IndexMap<String, Box<dyn DynPlugin>>;
+
+/// The [`Service`] for the supergraph stage of the request lifecycle.
+#[derive(Clone)]
+pub(crate) struct SupergraphService {
+    execution_service_factory: ExecutionServiceFactory,
+    query_planner_service: CachingQueryPlanner<BridgeQueryPlanner>,
+    schema: Arc<Schema>,
+    notify: Notify<String, graphql::Response>,
+}
+
+#[buildstructor::buildstructor]
+impl SupergraphService {
+    #[builder]
+    pub(crate) fn new(
+        query_planner_service: CachingQueryPlanner<BridgeQueryPlanner>,
+        execution_service_factory: ExecutionServiceFactory,
+        schema: Arc<Schema>,
+        notify: Notify<String, graphql::Response>,
+    ) -> Self {
+        SupergraphService {
+            query_planner_service,
+            execution_service_factory,
+            schema,
+            notify,
+        }
+    }
+}
+
+impl Service<SupergraphRequest> for SupergraphService {
+    type Response = SupergraphResponse;
+    type Error = BoxError;
+    type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;
+
+    fn poll_ready(&mut self, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
+        self.query_planner_service
+            .poll_ready(cx)
+            .map_err(|err| err.into())
+    }
+
+    fn call(&mut self, req: SupergraphRequest) -> Self::Future {
+        // Consume our cloned services and allow ownership to be transferred to the async block.
+        let clone = self.query_planner_service.clone();
+
+        let planning = std::mem::replace(&mut self.query_planner_service, clone);
+
+        let schema = self.schema.clone();
+
+        let context_cloned = req.context.clone();
+        let fut = service_call(
+            planning,
+            self.execution_service_factory.clone(),
+            schema,
+            req,
+            self.notify.clone(),
+        )
+        .or_else(|error: BoxError| async move {
+            let errors = vec![crate::error::Error {
+                message: error.to_string(),
+                extensions: serde_json_bytes::json!({
+                    "code": "INTERNAL_SERVER_ERROR",
+                })
+                .as_object()
+                .unwrap()
+                .to_owned(),
+                ..Default::default()
+            }];
+
+            Ok(SupergraphResponse::builder()
+                .errors(errors)
+                .status_code(StatusCode::INTERNAL_SERVER_ERROR)
+                .context(context_cloned)
+                .build()
+                .expect("building a response like this should not fail"))
+        });
+
+        Box::pin(fut)
+    }
+}
+
+async fn service_call(
+    planning: CachingQueryPlanner<BridgeQueryPlanner>,
+    execution_service_factory: ExecutionServiceFactory,
+    schema: Arc<Schema>,
+    req: SupergraphRequest,
+    notify:
Notify<String, graphql::Response>,
+) -> Result<SupergraphResponse, BoxError> {
+    let context = req.context;
+    let body = req.supergraph_request.body();
+    let variables = body.variables.clone();
+
+    let QueryPlannerResponse {
+        content,
+        context,
+        errors,
+    } = match plan_query(
+        planning,
+        body.operation_name.clone(),
+        context.clone(),
+        schema.clone(),
+        req.supergraph_request
+            .body()
+            .query
+            .clone()
+            .expect("query presence was checked before"),
+    )
+    .await
+    {
+        Ok(resp) => resp,
+        Err(err) => match err.into_graphql_errors() {
+            Ok(gql_errors) => {
+                return Ok(SupergraphResponse::builder()
+                    .context(context)
+                    .errors(gql_errors)
+                    .status_code(StatusCode::BAD_REQUEST) // If it's a graphql error we return a status code 400
+                    .build()
+                    .expect("this response build must not fail"));
+            }
+            Err(err) => return Err(err.into()),
+        },
+    };
+
+    if !errors.is_empty() {
+        return Ok(SupergraphResponse::builder()
+            .context(context)
+            .errors(errors)
+            .status_code(StatusCode::BAD_REQUEST) // If it's a graphql error we return a status code 400
+            .build()
+            .expect("this response build must not fail"));
+    }
+
+    match content {
+        Some(QueryPlannerContent::Introspection { response }) => Ok(
+            SupergraphResponse::new_from_graphql_response(*response, context),
+        ),
+        Some(QueryPlannerContent::IntrospectionDisabled) => {
+            let mut response = SupergraphResponse::new_from_graphql_response(
+                graphql::Response::builder()
+                    .errors(vec![crate::error::Error::builder()
+                        .message(String::from("introspection has been disabled"))
+                        .extension_code("INTROSPECTION_DISABLED")
+                        .build()])
+                    .build(),
+                context,
+            );
+            *response.response.status_mut() = StatusCode::BAD_REQUEST;
+            Ok(response)
+        }
+
+        Some(QueryPlannerContent::Plan { plan }) => {
+            let operation_name = body.operation_name.clone();
+            let is_deferred = plan.is_deferred(operation_name.as_deref(), &variables);
+            let is_subscription = plan.is_subscription(operation_name.as_deref());
+
+            if let Some(batching) = context.private_entries.lock().get::<Batching>() {
+                if batching.enabled
&& (is_deferred || is_subscription) { + let message = if is_deferred { + "BATCHING_DEFER_UNSUPPORTED" + } else { + "BATCHING_SUBSCRIPTION_UNSUPPORTED" + }; + let mut response = SupergraphResponse::new_from_graphql_response( + graphql::Response::builder() + .errors(vec![crate::error::Error::builder() + .message(String::from( + "Deferred responses and subscriptions aren't supported in batches", + )) + .extension_code(message) + .build()]) + .build(), + context.clone(), + ); + *response.response.status_mut() = StatusCode::NOT_ACCEPTABLE; + return Ok(response); + } + } + + let ClientRequestAccepts { + multipart_defer: accepts_multipart_defer, + multipart_subscription: accepts_multipart_subscription, + .. + } = context + .private_entries + .lock() + .get() + .cloned() + .unwrap_or_default(); + let mut subscription_tx = None; + if (is_deferred && !accepts_multipart_defer) + || (is_subscription && !accepts_multipart_subscription) + { + let (error_message, error_code) = if is_deferred { + (String::from("the router received a query with the @defer directive but the client does not accept multipart/mixed HTTP responses. To enable @defer support, add the HTTP header 'Accept: multipart/mixed; deferSpec=20220824'"), "DEFER_BAD_HEADER") + } else { + (String::from("the router received a query with a subscription but the client does not accept multipart/mixed HTTP responses. 
To enable subscription support, add the HTTP header 'Accept: multipart/mixed; boundary=graphql; subscriptionSpec=1.0'"), "SUBSCRIPTION_BAD_HEADER") + }; + let mut response = SupergraphResponse::new_from_graphql_response( + graphql::Response::builder() + .errors(vec![crate::error::Error::builder() + .message(error_message) + .extension_code(error_code) + .build()]) + .build(), + context, + ); + *response.response.status_mut() = StatusCode::NOT_ACCEPTABLE; + Ok(response) + } else if let Some(err) = plan.query.validate_variables(body, &schema).err() { + let mut res = SupergraphResponse::new_from_graphql_response(err, context); + *res.response.status_mut() = StatusCode::BAD_REQUEST; + Ok(res) + } else { + if is_subscription { + let ctx = context.clone(); + let (subs_tx, subs_rx) = mpsc::channel(1); + let query_plan = plan.clone(); + let execution_service_factory_cloned = execution_service_factory.clone(); + let cloned_supergraph_req = + clone_supergraph_request(&req.supergraph_request, context.clone())?; + // Spawn task for subscription + tokio::spawn(async move { + subscription_task( + execution_service_factory_cloned, + ctx, + query_plan, + subs_rx, + notify, + cloned_supergraph_req, + ) + .await; + }); + subscription_tx = subs_tx.into(); + } + + let execution_response = execution_service_factory + .create() + .oneshot( + ExecutionRequest::internal_builder() + .supergraph_request(req.supergraph_request) + .query_plan(plan.clone()) + .context(context) + .and_subscription_tx(subscription_tx) + .build() + .await, + ) + .await?; + + let ExecutionResponse { response, context } = execution_response; + + let (parts, response_stream) = response.into_parts(); + + Ok(SupergraphResponse { + context, + response: http::Response::from_parts(parts, response_stream.boxed()), + }) + } + } + // This should never happen because if we have an empty query plan we should have error in errors vec + None => Err(BoxError::from("cannot compute a query plan")), + } +} + +pub struct 
SubscriptionTaskParams { + pub(crate) client_sender: tokio::sync::mpsc::Sender<Response>, + pub(crate) subscription_handle: SubscriptionHandle, + pub(crate) subscription_config: SubscriptionConfig, + pub(crate) stream_rx: ReceiverStream<BoxStream<'static, Response>>, + pub(crate) service_name: String, +} + +async fn subscription_task( + mut execution_service_factory: ExecutionServiceFactory, + context: Context, + query_plan: Arc<QueryPlan>, + mut rx: mpsc::Receiver<SubscriptionTaskParams>, + notify: Notify, + supergraph_req: SupergraphRequest, +) { + let sub_params = match rx.recv().await { + Some(sub_params) => sub_params, + None => { + return; + } + }; + let subscription_config = sub_params.subscription_config; + let subscription_handle = sub_params.subscription_handle; + let service_name = sub_params.service_name; + let mut receiver = sub_params.stream_rx; + let sender = sub_params.client_sender; + + let graphql_document = &query_plan.query.string; + // Get the rest of the query_plan to execute for subscription events + let query_plan = match &query_plan.root { + crate::query_planner::PlanNode::Subscription { rest, ..
} => rest.clone().map(|r| { + Arc::new(QueryPlan { + usage_reporting: query_plan.usage_reporting.clone(), + root: *r, + formatted_query_plan: query_plan.formatted_query_plan.clone(), + query: query_plan.query.clone(), + }) + }), + _ => { + let _ = sender + .send( + graphql::Response::builder() + .error( + graphql::Error::builder() + .message("cannot execute the subscription event") + .extension_code("SUBSCRIPTION_EXECUTION_ERROR") + .build(), + ) + .build(), + ) + .await; + return; + } + }; + + let limit_is_set = subscription_config.max_opened_subscriptions.is_some(); + let mut subscription_handle = subscription_handle.clone(); + let operation_signature = context + .private_entries + .lock() + .get::<UsageReporting>() + .map(|usage_reporting| usage_reporting.stats_report_key.clone()) + .unwrap_or_default(); + + let operation_name = context + .get::<_, String>(OPERATION_NAME) + .ok() + .flatten() + .unwrap_or_default(); + let display_body = context.contains_key(LOGGING_DISPLAY_BODY); + + let mut receiver = match receiver.next().await { + Some(receiver) => receiver, + None => { + tracing::trace!("receiver channel closed"); + return; + } + }; + + if limit_is_set { + OPENED_SUBSCRIPTIONS.fetch_add(1, Ordering::Relaxed); + } + + let mut configuration_updated_rx = notify.subscribe_configuration(); + let mut schema_updated_rx = notify.subscribe_schema(); + + let expires_in = crate::plugins::authentication::jwt_expires_in(&supergraph_req.context); + + let mut timeout = Box::pin(tokio::time::sleep(expires_in)); + + loop { + tokio::select!
{ + // We prefer to specify the order of checks within the select + biased; + _ = subscription_handle.closed_signal.recv() => { + break; + } + _ = &mut timeout => { + let response = Response::builder() + .subscribed(false) + .error( + crate::error::Error::builder() + .message("subscription closed because the JWT has expired") + .extension_code("SUBSCRIPTION_JWT_EXPIRED") + .build(), + ) + .build(); + let _ = sender.send(response).await; + break; + }, + message = receiver.next() => { + match message { + Some(mut val) => { + if display_body { + tracing::info!(http.request.body = ?val, apollo.subgraph.name = %service_name, "Subscription event body from subgraph {service_name:?}"); + } + val.created_at = Some(Instant::now()); + let res = dispatch_event(&supergraph_req, &execution_service_factory, query_plan.as_ref(), context.clone(), val, sender.clone()) + .instrument(tracing::info_span!(SUBSCRIPTION_EVENT_SPAN_NAME, + graphql.document = graphql_document, + graphql.operation.name = %operation_name, + otel.kind = "INTERNAL", + apollo_private.operation_signature = %operation_signature, + apollo_private.duration_ns = field::Empty,) + ).await; + if let Err(err) = res { + tracing::error!("cannot send the subscription to the client: {err:?}"); + break; + } + } + None => break, + } + } + Some(new_configuration) = configuration_updated_rx.next() => { + // If the configuration was dropped in the meantime, we ignore this update and will + // pick up the next one. 
+ if let Some(conf) = new_configuration.upgrade() { + let plugins = match create_plugins(&conf, &execution_service_factory.schema, None).await { + Ok(plugins) => plugins, + Err(err) => { + tracing::error!("cannot re-create plugins with the new configuration (closing existing subscription): {err:?}"); + break; + }, + }; + let subgraph_services = match create_subgraph_services(&plugins, &execution_service_factory.schema, &conf).await { + Ok(subgraph_services) => subgraph_services, + Err(err) => { + tracing::error!("cannot re-create subgraph service with the new configuration (closing existing subscription): {err:?}"); + break; + }, + }; + let plugins = Arc::new(IndexMap::from_iter(plugins)); + execution_service_factory = ExecutionServiceFactory { schema: execution_service_factory.schema.clone(), plugins: plugins.clone(), subgraph_service_factory: Arc::new(SubgraphServiceFactory::new(subgraph_services.into_iter().map(|(k, v)| (k, Arc::new(v) as Arc<dyn MakeSubgraphService>)).collect(), plugins.clone())) }; + } + } + Some(new_schema) = schema_updated_rx.next() => { + if new_schema.raw_sdl != execution_service_factory.schema.raw_sdl { + let _ = sender + .send( + Response::builder() + .subscribed(false) + .error(graphql::Error::builder().message("subscription has been closed due to a schema reload").extension_code("SUBSCRIPTION_SCHEMA_RELOAD").build()) + .build(), + ) + .await; + + break; + } + } + } + } + drop(sender); + tracing::trace!("Leaving the task for subscription"); + if limit_is_set { + OPENED_SUBSCRIPTIONS.fetch_sub(1, Ordering::Relaxed); + } +} + +async fn dispatch_event( + supergraph_req: &SupergraphRequest, + execution_service_factory: &ExecutionServiceFactory, + query_plan: Option<&Arc<QueryPlan>>, + context: Context, + mut val: graphql::Response, + sender: mpsc::Sender<Response>, +) -> Result<(), SendError<Response>> { + let start = Instant::now(); + let span = Span::current(); + let res = match query_plan { + Some(query_plan) => { + let cloned_supergraph_req = clone_supergraph_request(
&supergraph_req.supergraph_request, + supergraph_req.context.clone(), + ) + .expect("it's a clone of the original one; qed"); + let execution_request = ExecutionRequest::internal_builder() + .supergraph_request(cloned_supergraph_req.supergraph_request) + .query_plan(query_plan.clone()) + .context(context) + .source_stream_value(val.data.take().unwrap_or_default()) + .build() + .await; + + let execution_service = execution_service_factory.create(); + let execution_response = execution_service.oneshot(execution_request).await; + let next_response = match execution_response { + Ok(mut execution_response) => execution_response.next_response().await, + Err(err) => { + tracing::error!("cannot execute the subscription event: {err:?}"); + let _ = sender + .send( + graphql::Response::builder() + .error( + graphql::Error::builder() + .message("cannot execute the subscription event") + .extension_code("SUBSCRIPTION_EXECUTION_ERROR") + .build(), + ) + .build(), + ) + .await; + return Ok(()); + } + }; + + if let Some(mut next_response) = next_response { + next_response.created_at = val.created_at; + next_response.subscribed = val.subscribed; + val.errors.append(&mut next_response.errors); + next_response.errors = val.errors; + + sender.send(next_response).await + } else { + Ok(()) + } + } + None => sender.send(val).await, + }; + span.record( + APOLLO_PRIVATE_DURATION_NS, + start.elapsed().as_nanos() as i64, + ); + + res +} + +async fn plan_query( + mut planning: CachingQueryPlanner<BridgeQueryPlanner>, + operation_name: Option<String>, + context: Context, + schema: Arc<Schema>, + query_str: String, +) -> Result<QueryPlannerResponse, CacheResolverError> { + // FIXME: we have about 80 tests creating a supergraph service and crafting a supergraph request for it + // none of those tests create an executable document to put it in the context, and the document cannot be created + // from inside the supergraph request fake builder, because it needs a schema matching the query.
+ // So while we are updating the tests to create a document manually, this here will make sure current + // tests will pass + { + let mut entries = context.private_entries.lock(); + if !entries.contains_key::<ParsedDocument>() { + let doc = Query::parse_document(&query_str, &schema, &Configuration::default()); + Query::validate_query(&schema, &doc.executable) + .map_err(crate::error::QueryPlannerError::from)?; + entries.insert::<ParsedDocument>(doc); + } + drop(entries); + } + + planning + .call( + query_planner::CachingRequest::builder() + .query(query_str) + .and_operation_name(operation_name) + .context(context) + .build(), + ) + .instrument(tracing::info_span!( + QUERY_PLANNING_SPAN_NAME, + "otel.kind" = "INTERNAL" + )) + .await +} + +fn clone_supergraph_request( + req: &http::Request<graphql::Request>, + context: Context, +) -> Result<SupergraphRequest, BoxError> { + let mut cloned_supergraph_req = SupergraphRequest::builder() + .extensions(req.body().extensions.clone()) + .and_query(req.body().query.clone()) + .context(context) + .method(req.method().clone()) + .and_operation_name(req.body().operation_name.clone()) + .uri(req.uri().clone()) + .variables(req.body().variables.clone()); + + for (header_name, header_value) in req.headers().clone() { + if let Some(header_name) = header_name { + cloned_supergraph_req = cloned_supergraph_req.header(header_name, header_value); + } + } + + cloned_supergraph_req.build() +} + +/// Builder which generates a plugin pipeline. +/// +/// This is at the heart of the delegation of responsibility model for the router. A schema, +/// collection of plugins, collection of subgraph services are assembled to generate a +/// [`tower::util::BoxCloneService`] capable of processing a router request +/// through the entire stack to return a response.
+pub(crate) struct PluggableSupergraphServiceBuilder { + plugins: Plugins, + subgraph_services: Vec<(String, Arc<dyn MakeSubgraphService>)>, + configuration: Option<Arc<Configuration>>, + planner: BridgeQueryPlanner, +} + +impl PluggableSupergraphServiceBuilder { + pub(crate) fn new(planner: BridgeQueryPlanner) -> Self { + Self { + plugins: Default::default(), + subgraph_services: Default::default(), + configuration: None, + planner, + } + } + + pub(crate) fn with_dyn_plugin( + mut self, + plugin_name: String, + plugin: Box<dyn DynPlugin>, + ) -> PluggableSupergraphServiceBuilder { + self.plugins.insert(plugin_name, plugin); + self + } + + pub(crate) fn with_subgraph_service<S>( + mut self, + name: &str, + service_maker: S, + ) -> PluggableSupergraphServiceBuilder + where + S: MakeSubgraphService, + { + self.subgraph_services + .push((name.to_string(), Arc::new(service_maker))); + self + } + + pub(crate) fn with_configuration( + mut self, + configuration: Arc<Configuration>, + ) -> PluggableSupergraphServiceBuilder { + self.configuration = Some(configuration); + self + } + + pub(crate) async fn build(self) -> Result<SupergraphCreator, crate::error::ServiceBuildError> { + let configuration = self.configuration.unwrap_or_default(); + + let schema = self.planner.schema(); + let query_planner_service = CachingQueryPlanner::new( + self.planner, + schema.clone(), + &configuration, + IndexMap::new(), + ) + .await; + + let mut plugins = self.plugins; + // Activate the telemetry plugin. + // We must NOT fail to go live with the new router from this point as the telemetry plugin activate interacts with globals.
+ for (_, plugin) in plugins.iter_mut() { + if let Some(telemetry) = plugin.as_any_mut().downcast_mut::<Telemetry>() { + telemetry.activate(); + } + } + + let plugins = Arc::new(plugins); + + let subgraph_service_factory = Arc::new(SubgraphServiceFactory::new( + self.subgraph_services, + plugins.clone(), + )); + + Ok(SupergraphCreator { + query_planner_service, + subgraph_service_factory, + schema, + plugins, + config: configuration, + }) + } +} + +/// A collection of services and data which may be used to create a "router". +#[derive(Clone)] +pub(crate) struct SupergraphCreator { + query_planner_service: CachingQueryPlanner<BridgeQueryPlanner>, + subgraph_service_factory: Arc<SubgraphServiceFactory>, + schema: Arc<Schema>, + config: Arc<Configuration>, + plugins: Arc<Plugins>, +} + +pub(crate) trait HasPlugins { + fn plugins(&self) -> Arc<Plugins>; +} + +impl HasPlugins for SupergraphCreator { + fn plugins(&self) -> Arc<Plugins> { + self.plugins.clone() + } +} + +pub(crate) trait HasSchema { + fn schema(&self) -> Arc<Schema>; +} + +impl HasSchema for SupergraphCreator { + fn schema(&self) -> Arc<Schema> { + Arc::clone(&self.schema) + } +} + +pub(crate) trait HasConfig { + fn config(&self) -> Arc<Configuration>; +} + +impl HasConfig for SupergraphCreator { + fn config(&self) -> Arc<Configuration> { + Arc::clone(&self.config) + } +} + +impl ServiceFactory<supergraph::Request> for SupergraphCreator { + type Service = supergraph::BoxService; + fn create(&self) -> Self::Service { + self.make().boxed() + } +} + +impl SupergraphCreator { + pub(crate) fn make( + &self, + ) -> impl Service< + supergraph::Request, + Response = supergraph::Response, + Error = BoxError, + Future = BoxFuture<'static, supergraph::ServiceResult>, + > + Send { + let supergraph_service = SupergraphService::builder() + .query_planner_service(self.query_planner_service.clone()) + .execution_service_factory(ExecutionServiceFactory { + schema: self.schema.clone(), + plugins: self.plugins.clone(), + subgraph_service_factory: self.subgraph_service_factory.clone(), + }) + .schema(self.schema.clone()) + .notify(self.config.notify.clone()) + .build(); + + let shaping = self
+ .plugins + .iter() + .find(|i| i.0.as_str() == APOLLO_TRAFFIC_SHAPING) + .and_then(|plugin| plugin.1.as_any().downcast_ref::<TrafficShaping>()) + .expect("traffic shaping should always be part of the plugin list"); + + let supergraph_service = AllowOnlyHttpPostMutationsLayer::default() + .layer(shaping.supergraph_service_internal(supergraph_service)); + + ServiceBuilder::new() + .layer(content_negotiation::SupergraphLayer::default()) + .service( + self.plugins + .iter() + .rev() + .fold(supergraph_service.boxed(), |acc, (_, e)| { + e.supergraph_service(acc) + }), + ) + } + + pub(crate) async fn cache_keys(&self, count: Option<usize>) -> Vec<WarmUpCachingQueryKey> { + self.query_planner_service.cache_keys(count).await + } + + pub(crate) fn planner(&self) -> Arc<Planner<QueryPlanResult>> { + self.query_planner_service.planner() + } + + pub(crate) async fn warm_up_query_planner( + &mut self, + query_parser: &QueryAnalysisLayer, + persisted_query_layer: &PersistedQueryLayer, + cache_keys: Vec<WarmUpCachingQueryKey>, + ) { + self.query_planner_service + .warm_up(query_parser, persisted_query_layer, cache_keys) + .await + } +} diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__deferred_fragment_bounds_nullability-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__deferred_fragment_bounds_nullability-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__deferred_fragment_bounds_nullability-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__deferred_fragment_bounds_nullability-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__deferred_fragment_bounds_nullability.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__deferred_fragment_bounds_nullability.snap similarity index 100% rename from
apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__deferred_fragment_bounds_nullability.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__deferred_fragment_bounds_nullability.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_from_primary_on_deferred_responses-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_from_primary_on_deferred_responses-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_from_primary_on_deferred_responses-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_from_primary_on_deferred_responses-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_from_primary_on_deferred_responses.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_from_primary_on_deferred_responses.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_from_primary_on_deferred_responses.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_from_primary_on_deferred_responses.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_deferred_responses-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_deferred_responses-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_deferred_responses-2.snap rename to 
apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_deferred_responses-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_deferred_responses.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_deferred_responses.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_deferred_responses.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_deferred_responses.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_incremental_responses-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_incremental_responses-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_incremental_responses-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_incremental_responses-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_incremental_responses.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_incremental_responses.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_incremental_responses.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_incremental_responses.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_nullified_paths.snap 
b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_nullified_paths.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__errors_on_nullified_paths.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__errors_on_nullified_paths.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses-3.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses-3.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses-3.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses-3.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses.snap similarity index 100% rename from 
apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__filter_nullified_deferred_responses.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__filter_nullified_deferred_responses.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__missing_entities.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__missing_entities.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__missing_entities.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__missing_entities.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface-3.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface-3.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface-3.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface-3.snap diff --git 
a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__no_typename_on_interface.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__no_typename_on_interface.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__nullability_bubbling.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__nullability_bubbling.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__nullability_bubbling.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__nullability_bubbling.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__nullability_formatting.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__nullability_formatting.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__nullability_formatting.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__nullability_formatting.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__query_reconstruction.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__query_reconstruction.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__query_reconstruction.snap rename to 
apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__query_reconstruction.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__reconstruct_deferred_query_under_interface-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__reconstruct_deferred_query_under_interface-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__reconstruct_deferred_query_under_interface-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__reconstruct_deferred_query_under_interface-2.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__reconstruct_deferred_query_under_interface.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__reconstruct_deferred_query_under_interface.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__reconstruct_deferred_query_under_interface.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__reconstruct_deferred_query_under_interface.snap diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__root_typename_with_defer-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__root_typename_with_defer-2.snap similarity index 100% rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__root_typename_with_defer-2.snap rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__root_typename_with_defer-2.snap diff --git 
a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__root_typename_with_defer.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__root_typename_with_defer.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__root_typename_with_defer.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__root_typename_with_defer.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload-2.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload-2.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload-2.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload-3.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload-3.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload-3.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload-3.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_callback_schema_reload.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_callback_schema_reload.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback-2.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback-2.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback-2.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback-3.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback-3.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback-3.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback-3.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-2.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-2.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-2.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-2.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-3.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-3.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-3.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-3.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-4.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-4.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit-4.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit-4.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_with_callback_with_limit.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_with_callback_with_limit.snap
diff --git a/apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_without_header.snap b/apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_without_header.snap
similarity index 100%
rename from apollo-router/src/services/snapshots/apollo_router__services__supergraph_service__tests__subscription_without_header.snap
rename to apollo-router/src/services/supergraph/snapshots/apollo_router__services__supergraph__tests__subscription_without_header.snap
diff --git a/apollo-router/src/services/supergraph_service.rs b/apollo-router/src/services/supergraph/tests.rs
similarity index 51%
rename from apollo-router/src/services/supergraph_service.rs
rename to apollo-router/src/services/supergraph/tests.rs
index 63a2e6ddc3..11095f0a98 100644
--- a/apollo-router/src/services/supergraph_service.rs
+++ b/apollo-router/src/services/supergraph/tests.rs
@@ -1,852 +1,24 @@
-//! Implements the router phase of the request lifecycle.
-
-use std::sync::atomic::Ordering;
+use std::collections::HashMap;
 use std::sync::Arc;
-use std::task::Poll;
-use std::time::Instant;
-
-use futures::channel::mpsc::SendError;
-use futures::future::BoxFuture;
-use futures::stream::StreamExt;
-use futures::SinkExt;
-use futures::TryFutureExt;
-use http::StatusCode;
-use indexmap::IndexMap;
-use router_bridge::planner::Planner;
-use router_bridge::planner::UsageReporting;
-use tokio::sync::mpsc;
-use tower::BoxError;
-use tower::Layer;
-use tower::ServiceBuilder;
+use std::time::Duration;
+
+use http::HeaderValue;
 use tower::ServiceExt;
 use tower_service::Service;
-use tracing::field;
-use tracing::Span;
-use tracing_futures::Instrument;
-
-use super::execution::QueryPlan;
-use super::layers::allow_only_http_post_mutations::AllowOnlyHttpPostMutationsLayer;
-use super::layers::content_negotiation;
-use super::layers::persisted_queries::PersistedQueryLayer;
-use super::layers::query_analysis::ParsedDocument;
-use super::layers::query_analysis::QueryAnalysisLayer;
-use super::new_service::ServiceFactory;
-use super::router::ClientRequestAccepts;
-use super::subgraph_service::MakeSubgraphService;
-use super::subgraph_service::SubgraphServiceFactory;
-use super::ExecutionServiceFactory;
-use super::QueryPlannerContent;
-use crate::configuration::Batching;
-use crate::context::OPERATION_NAME;
-use crate::error::CacheResolverError;
+
 use crate::graphql;
-use crate::graphql::IntoGraphQLErrors;
-use crate::graphql::Response;
-use crate::notification::HandleStream;
-use crate::plugin::DynPlugin;
-use crate::plugins::subscription::SubscriptionConfig;
-use crate::plugins::telemetry::tracing::apollo_telemetry::APOLLO_PRIVATE_DURATION_NS;
-use crate::plugins::telemetry::Telemetry;
-use crate::plugins::telemetry::LOGGING_DISPLAY_BODY;
-use crate::plugins::traffic_shaping::TrafficShaping;
-use crate::plugins::traffic_shaping::APOLLO_TRAFFIC_SHAPING;
-use crate::query_planner::subscription::SubscriptionHandle;
-use crate::query_planner::subscription::OPENED_SUBSCRIPTIONS;
-use crate::query_planner::subscription::SUBSCRIPTION_EVENT_SPAN_NAME;
-use crate::query_planner::BridgeQueryPlanner;
-use crate::query_planner::CachingQueryPlanner;
-use crate::query_planner::QueryPlanResult;
-use crate::query_planner::WarmUpCachingQueryKey;
-use crate::router_factory::create_plugins;
-use crate::router_factory::create_subgraph_services;
-use crate::services::query_planner;
+use crate::plugin::test::MockSubgraph;
+use crate::services::router::ClientRequestAccepts;
+use crate::services::subgraph;
 use crate::services::supergraph;
-use crate::services::ExecutionRequest;
-use crate::services::ExecutionResponse;
-use crate::services::QueryPlannerResponse;
-use crate::services::SupergraphRequest;
-use crate::services::SupergraphResponse;
-use crate::spec::Query;
 use crate::spec::Schema;
+use crate::test_harness::MockedSubgraphs;
 use crate::Configuration;
 use crate::Context;
 use crate::Notify;
+use crate::TestHarness;
 
-pub(crate) const QUERY_PLANNING_SPAN_NAME: &str = "query_planning";
-
-/// An [`IndexMap`] of available plugins.
-pub(crate) type Plugins = IndexMap>;
-
-/// Containing [`Service`] in the request lifecyle.
-#[derive(Clone)] -pub(crate) struct SupergraphService { - execution_service_factory: ExecutionServiceFactory, - query_planner_service: CachingQueryPlanner, - schema: Arc, - notify: Notify, -} - -#[buildstructor::buildstructor] -impl SupergraphService { - #[builder] - pub(crate) fn new( - query_planner_service: CachingQueryPlanner, - execution_service_factory: ExecutionServiceFactory, - schema: Arc, - notify: Notify, - ) -> Self { - SupergraphService { - query_planner_service, - execution_service_factory, - schema, - notify, - } - } -} - -impl Service for SupergraphService { - type Response = SupergraphResponse; - type Error = BoxError; - type Future = BoxFuture<'static, Result>; - - fn poll_ready(&mut self, cx: &mut std::task::Context<'_>) -> Poll> { - self.query_planner_service - .poll_ready(cx) - .map_err(|err| err.into()) - } - - fn call(&mut self, req: SupergraphRequest) -> Self::Future { - // Consume our cloned services and allow ownership to be transferred to the async block. - let clone = self.query_planner_service.clone(); - - let planning = std::mem::replace(&mut self.query_planner_service, clone); - - let schema = self.schema.clone(); - - let context_cloned = req.context.clone(); - let fut = service_call( - planning, - self.execution_service_factory.clone(), - schema, - req, - self.notify.clone(), - ) - .or_else(|error: BoxError| async move { - let errors = vec![crate::error::Error { - message: error.to_string(), - extensions: serde_json_bytes::json!({ - "code": "INTERNAL_SERVER_ERROR", - }) - .as_object() - .unwrap() - .to_owned(), - ..Default::default() - }]; - - Ok(SupergraphResponse::builder() - .errors(errors) - .status_code(StatusCode::INTERNAL_SERVER_ERROR) - .context(context_cloned) - .build() - .expect("building a response like this should not fail")) - }); - - Box::pin(fut) - } -} - -async fn service_call( - planning: CachingQueryPlanner, - execution_service_factory: ExecutionServiceFactory, - schema: Arc, - req: SupergraphRequest, - notify: 
Notify, -) -> Result { - let context = req.context; - let body = req.supergraph_request.body(); - let variables = body.variables.clone(); - - let QueryPlannerResponse { - content, - context, - errors, - } = match plan_query( - planning, - body.operation_name.clone(), - context.clone(), - schema.clone(), - req.supergraph_request - .body() - .query - .clone() - .expect("query presence was checked before"), - ) - .await - { - Ok(resp) => resp, - Err(err) => match err.into_graphql_errors() { - Ok(gql_errors) => { - return Ok(SupergraphResponse::builder() - .context(context) - .errors(gql_errors) - .status_code(StatusCode::BAD_REQUEST) // If it's a graphql error we return a status code 400 - .build() - .expect("this response build must not fail")); - } - Err(err) => return Err(err.into()), - }, - }; - - if !errors.is_empty() { - return Ok(SupergraphResponse::builder() - .context(context) - .errors(errors) - .status_code(StatusCode::BAD_REQUEST) // If it's a graphql error we return a status code 400 - .build() - .expect("this response build must not fail")); - } - - match content { - Some(QueryPlannerContent::Introspection { response }) => Ok( - SupergraphResponse::new_from_graphql_response(*response, context), - ), - Some(QueryPlannerContent::IntrospectionDisabled) => { - let mut response = SupergraphResponse::new_from_graphql_response( - graphql::Response::builder() - .errors(vec![crate::error::Error::builder() - .message(String::from("introspection has been disabled")) - .extension_code("INTROSPECTION_DISABLED") - .build()]) - .build(), - context, - ); - *response.response.status_mut() = StatusCode::BAD_REQUEST; - Ok(response) - } - - Some(QueryPlannerContent::Plan { plan }) => { - let operation_name = body.operation_name.clone(); - let is_deferred = plan.is_deferred(operation_name.as_deref(), &variables); - let is_subscription = plan.is_subscription(operation_name.as_deref()); - - if let Some(batching) = context.private_entries.lock().get::() { - if batching.enabled 
&& (is_deferred || is_subscription) { - let message = if is_deferred { - "BATCHING_DEFER_UNSUPPORTED" - } else { - "BATCHING_SUBSCRIPTION_UNSUPPORTED" - }; - let mut response = SupergraphResponse::new_from_graphql_response( - graphql::Response::builder() - .errors(vec![crate::error::Error::builder() - .message(String::from( - "Deferred responses and subscriptions aren't supported in batches", - )) - .extension_code(message) - .build()]) - .build(), - context.clone(), - ); - *response.response.status_mut() = StatusCode::NOT_ACCEPTABLE; - return Ok(response); - } - } - - let ClientRequestAccepts { - multipart_defer: accepts_multipart_defer, - multipart_subscription: accepts_multipart_subscription, - .. - } = context - .private_entries - .lock() - .get() - .cloned() - .unwrap_or_default(); - let mut subscription_tx = None; - if (is_deferred && !accepts_multipart_defer) - || (is_subscription && !accepts_multipart_subscription) - { - let (error_message, error_code) = if is_deferred { - (String::from("the router received a query with the @defer directive but the client does not accept multipart/mixed HTTP responses. To enable @defer support, add the HTTP header 'Accept: multipart/mixed; deferSpec=20220824'"), "DEFER_BAD_HEADER") - } else { - (String::from("the router received a query with a subscription but the client does not accept multipart/mixed HTTP responses. 
To enable subscription support, add the HTTP header 'Accept: multipart/mixed; boundary=graphql; subscriptionSpec=1.0'"), "SUBSCRIPTION_BAD_HEADER") - }; - let mut response = SupergraphResponse::new_from_graphql_response( - graphql::Response::builder() - .errors(vec![crate::error::Error::builder() - .message(error_message) - .extension_code(error_code) - .build()]) - .build(), - context, - ); - *response.response.status_mut() = StatusCode::NOT_ACCEPTABLE; - Ok(response) - } else if let Some(err) = plan.query.validate_variables(body, &schema).err() { - let mut res = SupergraphResponse::new_from_graphql_response(err, context); - *res.response.status_mut() = StatusCode::BAD_REQUEST; - Ok(res) - } else { - if is_subscription { - let ctx = context.clone(); - let (subs_tx, subs_rx) = mpsc::channel(1); - let query_plan = plan.clone(); - let execution_service_factory_cloned = execution_service_factory.clone(); - let cloned_supergraph_req = - clone_supergraph_request(&req.supergraph_request, context.clone())?; - // Spawn task for subscription - tokio::spawn(async move { - subscription_task( - execution_service_factory_cloned, - ctx, - query_plan, - subs_rx, - notify, - cloned_supergraph_req, - ) - .await; - }); - subscription_tx = subs_tx.into(); - } - - let execution_response = execution_service_factory - .create() - .oneshot( - ExecutionRequest::internal_builder() - .supergraph_request(req.supergraph_request) - .query_plan(plan.clone()) - .context(context) - .and_subscription_tx(subscription_tx) - .build() - .await, - ) - .await?; - - let ExecutionResponse { response, context } = execution_response; - - let (parts, response_stream) = response.into_parts(); - - Ok(SupergraphResponse { - context, - response: http::Response::from_parts(parts, response_stream.boxed()), - }) - } - } - // This should never happen because if we have an empty query plan we should have error in errors vec - None => Err(BoxError::from("cannot compute a query plan")), - } -} - -pub struct 
SubscriptionTaskParams { - pub(crate) client_sender: futures::channel::mpsc::Sender, - pub(crate) subscription_handle: SubscriptionHandle, - pub(crate) subscription_config: SubscriptionConfig, - pub(crate) stream_rx: futures::channel::mpsc::Receiver>, - pub(crate) service_name: String, -} - -async fn subscription_task( - mut execution_service_factory: ExecutionServiceFactory, - context: Context, - query_plan: Arc, - mut rx: mpsc::Receiver, - notify: Notify, - supergraph_req: SupergraphRequest, -) { - let sub_params = match rx.recv().await { - Some(sub_params) => sub_params, - None => { - return; - } - }; - let subscription_config = sub_params.subscription_config; - let subscription_handle = sub_params.subscription_handle; - let service_name = sub_params.service_name; - let mut receiver = sub_params.stream_rx; - let mut sender = sub_params.client_sender; - - let graphql_document = &query_plan.query.string; - // Get the rest of the query_plan to execute for subscription events - let query_plan = match &query_plan.root { - crate::query_planner::PlanNode::Subscription { rest, .. 
} => rest.clone().map(|r| { - Arc::new(QueryPlan { - usage_reporting: query_plan.usage_reporting.clone(), - root: *r, - formatted_query_plan: query_plan.formatted_query_plan.clone(), - query: query_plan.query.clone(), - }) - }), - _ => { - let _ = sender - .send( - graphql::Response::builder() - .error( - graphql::Error::builder() - .message("cannot execute the subscription event") - .extension_code("SUBSCRIPTION_EXECUTION_ERROR") - .build(), - ) - .build(), - ) - .await; - return; - } - }; - - let limit_is_set = subscription_config.max_opened_subscriptions.is_some(); - let mut subscription_handle = subscription_handle.clone(); - let operation_signature = context - .private_entries - .lock() - .get::() - .map(|usage_reporting| usage_reporting.stats_report_key.clone()) - .unwrap_or_default(); - - let operation_name = context - .get::<_, String>(OPERATION_NAME) - .ok() - .flatten() - .unwrap_or_default(); - let display_body = context.contains_key(LOGGING_DISPLAY_BODY); - - let mut receiver = match receiver.next().await { - Some(receiver) => receiver, - None => { - tracing::trace!("receiver channel closed"); - return; - } - }; - - if limit_is_set { - OPENED_SUBSCRIPTIONS.fetch_add(1, Ordering::Relaxed); - } - - let mut configuration_updated_rx = notify.subscribe_configuration(); - let mut schema_updated_rx = notify.subscribe_schema(); - - loop { - tokio::select! 
{ - _ = subscription_handle.closed_signal.recv() => { - break; - } - message = receiver.next() => { - match message { - Some(mut val) => { - if display_body { - tracing::info!(http.request.body = ?val, apollo.subgraph.name = %service_name, "Subscription event body from subgraph {service_name:?}"); - } - val.created_at = Some(Instant::now()); - let res = dispatch_event(&supergraph_req, &execution_service_factory, query_plan.as_ref(), context.clone(), val, sender.clone()) - .instrument(tracing::info_span!(SUBSCRIPTION_EVENT_SPAN_NAME, - graphql.document = graphql_document, - graphql.operation.name = %operation_name, - otel.kind = "INTERNAL", - apollo_private.operation_signature = %operation_signature, - apollo_private.duration_ns = field::Empty,) - ).await; - if let Err(err) = res { - if !err.is_disconnected() { - tracing::error!("cannot send the subscription to the client: {err:?}"); - } - break; - } - } - None => break, - } - } - Some(new_configuration) = configuration_updated_rx.next() => { - // If the configuration was dropped in the meantime, we ignore this update and will - // pick up the next one. 
- if let Some(conf) = new_configuration.upgrade() { - let plugins = match create_plugins(&conf, &execution_service_factory.schema, None).await { - Ok(plugins) => plugins, - Err(err) => { - tracing::error!("cannot re-create plugins with the new configuration (closing existing subscription): {err:?}"); - break; - }, - }; - let subgraph_services = match create_subgraph_services(&plugins, &execution_service_factory.schema, &conf).await { - Ok(subgraph_services) => subgraph_services, - Err(err) => { - tracing::error!("cannot re-create subgraph service with the new configuration (closing existing subscription): {err:?}"); - break; - }, - }; - let plugins = Arc::new(IndexMap::from_iter(plugins)); - execution_service_factory = ExecutionServiceFactory { schema: execution_service_factory.schema.clone(), plugins: plugins.clone(), subgraph_service_factory: Arc::new(SubgraphServiceFactory::new(subgraph_services.into_iter().map(|(k, v)| (k, Arc::new(v) as Arc)).collect(), plugins.clone())) }; - } - } - Some(new_schema) = schema_updated_rx.next() => { - if new_schema.raw_sdl != execution_service_factory.schema.raw_sdl { - let _ = sender - .send( - Response::builder() - .subscribed(false) - .error(graphql::Error::builder().message("subscription has been closed due to a schema reload").extension_code("SUBSCRIPTION_SCHEMA_RELOAD").build()) - .build(), - ) - .await; - - break; - } - } - } - } - if let Err(err) = sender.close().await { - tracing::trace!("cannot close the sender {err:?}"); - } - - tracing::trace!("Leaving the task for subscription"); - if limit_is_set { - OPENED_SUBSCRIPTIONS.fetch_sub(1, Ordering::Relaxed); - } -} - -async fn dispatch_event( - supergraph_req: &SupergraphRequest, - execution_service_factory: &ExecutionServiceFactory, - query_plan: Option<&Arc>, - context: Context, - mut val: graphql::Response, - mut sender: futures::channel::mpsc::Sender, -) -> Result<(), SendError> { - let start = Instant::now(); - let span = Span::current(); - let res = match 
query_plan { - Some(query_plan) => { - let cloned_supergraph_req = clone_supergraph_request( - &supergraph_req.supergraph_request, - supergraph_req.context.clone(), - ) - .expect("it's a clone of the original one; qed"); - let execution_request = ExecutionRequest::internal_builder() - .supergraph_request(cloned_supergraph_req.supergraph_request) - .query_plan(query_plan.clone()) - .context(context) - .source_stream_value(val.data.take().unwrap_or_default()) - .build() - .await; - - let execution_service = execution_service_factory.create(); - let execution_response = execution_service.oneshot(execution_request).await; - let next_response = match execution_response { - Ok(mut execution_response) => execution_response.next_response().await, - Err(err) => { - tracing::error!("cannot execute the subscription event: {err:?}"); - let _ = sender - .send( - graphql::Response::builder() - .error( - graphql::Error::builder() - .message("cannot execute the subscription event") - .extension_code("SUBSCRIPTION_EXECUTION_ERROR") - .build(), - ) - .build(), - ) - .await; - return Ok(()); - } - }; - - if let Some(mut next_response) = next_response { - next_response.created_at = val.created_at; - next_response.subscribed = val.subscribed; - val.errors.append(&mut next_response.errors); - next_response.errors = val.errors; - - sender.send(next_response).await - } else { - Ok(()) - } - } - None => sender.send(val).await, - }; - span.record( - APOLLO_PRIVATE_DURATION_NS, - start.elapsed().as_nanos() as i64, - ); - - res -} - -async fn plan_query( - mut planning: CachingQueryPlanner, - operation_name: Option, - context: Context, - schema: Arc, - query_str: String, -) -> Result { - // FIXME: we have about 80 tests creating a supergraph service and crafting a supergraph request for it - // none of those tests create an executable document to put it in the context, and the document cannot be created - // from inside the supergraph request fake builder, because it needs a schema matching 
the query. - // So while we are updating the tests to create a document manually, this here will make sure current - // tests will pass - { - let mut entries = context.private_entries.lock(); - if !entries.contains_key::() { - let doc = Query::parse_document(&query_str, &schema, &Configuration::default()); - Query::validate_query(&schema, &doc.executable) - .map_err(crate::error::QueryPlannerError::from)?; - entries.insert::(doc); - } - drop(entries); - } - - planning - .call( - query_planner::CachingRequest::builder() - .query(query_str) - .and_operation_name(operation_name) - .context(context) - .build(), - ) - .instrument(tracing::info_span!( - QUERY_PLANNING_SPAN_NAME, - "otel.kind" = "INTERNAL" - )) - .await -} - -fn clone_supergraph_request( - req: &http::Request, - context: Context, -) -> Result { - let mut cloned_supergraph_req = SupergraphRequest::builder() - .extensions(req.body().extensions.clone()) - .and_query(req.body().query.clone()) - .context(context) - .method(req.method().clone()) - .and_operation_name(req.body().operation_name.clone()) - .uri(req.uri().clone()) - .variables(req.body().variables.clone()); - - for (header_name, header_value) in req.headers().clone() { - if let Some(header_name) = header_name { - cloned_supergraph_req = cloned_supergraph_req.header(header_name, header_value); - } - } - - cloned_supergraph_req.build() -} - -/// Builder which generates a plugin pipeline. -/// -/// This is at the heart of the delegation of responsibility model for the router. A schema, -/// collection of plugins, collection of subgraph services are assembled to generate a -/// [`tower::util::BoxCloneService`] capable of processing a router request -/// through the entire stack to return a response. 
-pub(crate) struct PluggableSupergraphServiceBuilder { - plugins: Plugins, - subgraph_services: Vec<(String, Arc)>, - configuration: Option>, - planner: BridgeQueryPlanner, -} - -impl PluggableSupergraphServiceBuilder { - pub(crate) fn new(planner: BridgeQueryPlanner) -> Self { - Self { - plugins: Default::default(), - subgraph_services: Default::default(), - configuration: None, - planner, - } - } - - pub(crate) fn with_dyn_plugin( - mut self, - plugin_name: String, - plugin: Box, - ) -> PluggableSupergraphServiceBuilder { - self.plugins.insert(plugin_name, plugin); - self - } - - pub(crate) fn with_subgraph_service( - mut self, - name: &str, - service_maker: S, - ) -> PluggableSupergraphServiceBuilder - where - S: MakeSubgraphService, - { - self.subgraph_services - .push((name.to_string(), Arc::new(service_maker))); - self - } - - pub(crate) fn with_configuration( - mut self, - configuration: Arc, - ) -> PluggableSupergraphServiceBuilder { - self.configuration = Some(configuration); - self - } - - pub(crate) async fn build(self) -> Result { - let configuration = self.configuration.unwrap_or_default(); - - let schema = self.planner.schema(); - let query_planner_service = CachingQueryPlanner::new( - self.planner, - schema.clone(), - &configuration, - IndexMap::new(), - ) - .await; - - let mut plugins = self.plugins; - // Activate the telemetry plugin. - // We must NOT fail to go live with the new router from this point as the telemetry plugin activate interacts with globals. 
- for (_, plugin) in plugins.iter_mut() { - if let Some(telemetry) = plugin.as_any_mut().downcast_mut::() { - telemetry.activate(); - } - } - - let plugins = Arc::new(plugins); - - let subgraph_service_factory = Arc::new(SubgraphServiceFactory::new( - self.subgraph_services, - plugins.clone(), - )); - - Ok(SupergraphCreator { - query_planner_service, - subgraph_service_factory, - schema, - plugins, - config: configuration, - }) - } -} - -/// A collection of services and data which may be used to create a "router". -#[derive(Clone)] -pub(crate) struct SupergraphCreator { - query_planner_service: CachingQueryPlanner, - subgraph_service_factory: Arc, - schema: Arc, - config: Arc, - plugins: Arc, -} - -pub(crate) trait HasPlugins { - fn plugins(&self) -> Arc; -} - -impl HasPlugins for SupergraphCreator { - fn plugins(&self) -> Arc { - self.plugins.clone() - } -} - -pub(crate) trait HasSchema { - fn schema(&self) -> Arc; -} - -impl HasSchema for SupergraphCreator { - fn schema(&self) -> Arc { - Arc::clone(&self.schema) - } -} - -pub(crate) trait HasConfig { - fn config(&self) -> Arc; -} - -impl HasConfig for SupergraphCreator { - fn config(&self) -> Arc { - Arc::clone(&self.config) - } -} - -impl ServiceFactory for SupergraphCreator { - type Service = supergraph::BoxService; - fn create(&self) -> Self::Service { - self.make().boxed() - } -} - -impl SupergraphCreator { - pub(crate) fn make( - &self, - ) -> impl Service< - supergraph::Request, - Response = supergraph::Response, - Error = BoxError, - Future = BoxFuture<'static, supergraph::ServiceResult>, - > + Send { - let supergraph_service = SupergraphService::builder() - .query_planner_service(self.query_planner_service.clone()) - .execution_service_factory(ExecutionServiceFactory { - schema: self.schema.clone(), - plugins: self.plugins.clone(), - subgraph_service_factory: self.subgraph_service_factory.clone(), - }) - .schema(self.schema.clone()) - .notify(self.config.notify.clone()) - .build(); - - let shaping = self 
- .plugins - .iter() - .find(|i| i.0.as_str() == APOLLO_TRAFFIC_SHAPING) - .and_then(|plugin| plugin.1.as_any().downcast_ref::()) - .expect("traffic shaping should always be part of the plugin list"); - - let supergraph_service = AllowOnlyHttpPostMutationsLayer::default() - .layer(shaping.supergraph_service_internal(supergraph_service)); - - ServiceBuilder::new() - .layer(content_negotiation::SupergraphLayer::default()) - .service( - self.plugins - .iter() - .rev() - .fold(supergraph_service.boxed(), |acc, (_, e)| { - e.supergraph_service(acc) - }), - ) - } - - pub(crate) async fn cache_keys(&self, count: Option) -> Vec { - self.query_planner_service.cache_keys(count).await - } - - pub(crate) fn planner(&self) -> Arc> { - self.query_planner_service.planner() - } - - pub(crate) async fn warm_up_query_planner( - &mut self, - query_parser: &QueryAnalysisLayer, - persisted_query_layer: &PersistedQueryLayer, - cache_keys: Vec, - ) { - self.query_planner_service - .warm_up(query_parser, persisted_query_layer, cache_keys) - .await - } -} - -#[cfg(test)] -mod tests { - use std::collections::HashMap; - use std::time::Duration; - - use http::HeaderValue; - use tower::ServiceExt; - - use super::*; - use crate::plugin::test::MockSubgraph; - use crate::services::subgraph; - use crate::services::supergraph; - use crate::test_harness::MockedSubgraphs; - use crate::Notify; - use crate::TestHarness; - - const SCHEMA: &str = r#"schema +const SCHEMA: &str = r#"schema @core(feature: "https://specs.apollo.dev/core/v0.1") @core(feature: "https://specs.apollo.dev/join/v0.1") @core(feature: "https://specs.apollo.dev/inaccessible/v0.1") @@ -892,9 +64,9 @@ mod tests { suborga: [Organization] }"#; - #[tokio::test] - async fn nullability_formatting() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn nullability_formatting() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename 
id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": null }}}} @@ -902,35 +74,35 @@ mod tests { ("orga", MockSubgraph::default()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query("query { currentUser { activeOrganization { id creatorUser { name } } } }") - .context(defer_context()) - // Request building here - .build() - .unwrap(); - let response = service - .oneshot(request) - .await - .unwrap() - .next_response() - .await - .unwrap(); + let request = supergraph::Request::fake_builder() + .query("query { currentUser { activeOrganization { id creatorUser { name } } } }") + .context(defer_context()) + // Request building here + .build() + .unwrap(); + let response = service + .oneshot(request) + .await + .unwrap() + .next_response() + .await + .unwrap(); - insta::assert_json_snapshot!(response); - } + insta::assert_json_snapshot!(response); +} - #[tokio::test] - async fn nullability_bubbling() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn nullability_bubbling() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": {} }}}} @@ -938,36 +110,34 @@ mod tests { ("orga", MockSubgraph::default()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) 
- .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - "query { currentUser { activeOrganization { nonNullId creatorUser { name } } } }", - ) - .build() - .unwrap(); - let response = service - .oneshot(request) - .await - .unwrap() - .next_response() - .await - .unwrap(); + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query("query { currentUser { activeOrganization { nonNullId creatorUser { name } } } }") + .build() + .unwrap(); + let response = service + .oneshot(request) + .await + .unwrap() + .next_response() + .await + .unwrap(); - insta::assert_json_snapshot!(response); - } + insta::assert_json_snapshot!(response); +} - #[tokio::test] - async fn errors_on_deferred_responses() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn errors_on_deferred_responses() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{__typename id}}"}}, serde_json::json!{{"data": {"currentUser": { "__typename": "User", "id": "0" }}}} @@ -996,31 +166,31 @@ mod tests { ("orga", MockSubgraph::default()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - 
.context(defer_context()) - .query("query { currentUser { id ...@defer { name } } }") - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query("query { currentUser { id ...@defer { name } } }") + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn errors_from_primary_on_deferred_responses() { - let schema = r#" +#[tokio::test] +async fn errors_from_primary_on_deferred_responses() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.2", for: EXECUTION) @@ -1061,7 +231,7 @@ mod tests { computer(id: ID!): Computer }"#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("computers", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{__typename id}}"}}, serde_json::json!{{"data": {"currentUser": { "__typename": "User", "id": "0" }}}} @@ -1092,19 +262,19 @@ mod tests { ).build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - r#"query { + let request = 
supergraph::Request::fake_builder() + .context(defer_context()) + .query( + r#"query { computer(id: "Computer1") { id ...ComputerErrorField @defer @@ -1113,20 +283,20 @@ mod tests { fragment ComputerErrorField on Computer { errorField }"#, - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn deferred_fragment_bounds_nullability() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn deferred_fragment_bounds_nullability() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1182,16 +352,16 @@ mod tests { ).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .context(defer_context()) .query( "query { currentUser { activeOrganization { id suborga { id ...@defer { nonNullId } } } } }", @@ -1199,16 +369,16 @@ mod tests { .build() .unwrap(); - let mut stream = 
service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn errors_on_incremental_responses() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn errors_on_incremental_responses() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1264,16 +434,16 @@ mod tests { ).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .context(defer_context()) .query( "query { currentUser { activeOrganization { id suborga { id ...@defer { name } } } } }", @@ -1281,16 +451,16 @@ mod tests { .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + 
insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn root_typename_with_defer() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn root_typename_with_defer() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1336,16 +506,16 @@ mod tests { ).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .context(defer_context()) .query( "query { __typename currentUser { activeOrganization { id suborga { id ...@defer { name } } } } }", @@ -1353,25 +523,25 @@ mod tests { .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - assert_eq!( - res.data.as_ref().unwrap().get("__typename"), - Some(&serde_json_bytes::Value::String("Query".into())) - ); - insta::assert_json_snapshot!(res); - - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } - - #[tokio::test] - async fn subscription_with_callback() { - let mut notify = Notify::builder().build(); - let (handle, _) = notify - .create_or_subscribe("TEST_TOPIC".to_string(), false) - .await - .unwrap(); - let subgraphs = MockedSubgraphs([ + let mut stream = service.oneshot(request).await.unwrap(); + let res = 
stream.next_response().await.unwrap(); + assert_eq!( + res.data.as_ref().unwrap().get("__typename"), + Some(&serde_json_bytes::Value::String("Query".into())) + ); + insta::assert_json_snapshot!(res); + + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} + +#[tokio::test] +async fn subscription_with_callback() { + let mut notify = Notify::builder().build(); + let (handle, _) = notify + .create_or_subscribe("TEST_TOPIC".to_string(), false) + .await + .unwrap(); + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"subscription{userWasCreated{name activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"userWasCreated": { "__typename": "User", "id": "1", "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1395,53 +565,53 @@ mod tests { ).build()) ].into_iter().collect()); - let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); - configuration.notify = notify.clone(); - let service = TestHarness::builder() - .configuration(Arc::new(configuration)) - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); + configuration.notify = notify.clone(); + let service = TestHarness::builder() + .configuration(Arc::new(configuration)) + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } 
} } }", ) .context(subscription_context()) .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - insta::assert_json_snapshot!(res); - notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - // error happened - notify - .broadcast( - graphql::Response::builder() - .error( - graphql::Error::builder() - .message("cannot fetch the name") - .extension_code("INVALID") - .build(), - ) - .build(), - ) - .await - .unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } - - #[tokio::test] - async fn subscription_callback_schema_reload() { - let mut notify = Notify::builder().build(); - let (handle, _) = notify - .create_or_subscribe("TEST_TOPIC".to_string(), false) - .await - .unwrap(); - let orga_subgraph = MockSubgraph::builder().with_json( + let mut stream = service.oneshot(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(res); + notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + // error happened + notify + .broadcast( + graphql::Response::builder() + .error( + graphql::Error::builder() + .message("cannot fetch the name") + .extension_code("INVALID") + .build(), + ) + .build(), + ) + .await + .unwrap(); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} + +#[tokio::test] +async fn subscription_callback_schema_reload() { + let mut notify = Notify::builder().build(); + let (handle, _) = notify + .create_or_subscribe("TEST_TOPIC".to_string(), 
false) + .await + .unwrap(); + let orga_subgraph = MockSubgraph::builder().with_json( serde_json::json!{{ "query":"query($representations:[_Any!]!){_entities(representations:$representations){...on Organization{suborga{id name}}}}", "variables": { @@ -1462,7 +632,7 @@ mod tests { assert_eq!(req.subgraph_request.headers().get("x-test").unwrap(), HeaderValue::from_static("test")); req }); - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"subscription{userWasCreated{name activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"userWasCreated": { "__typename": "User", "id": "1", "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1470,18 +640,18 @@ mod tests { ("orga", orga_subgraph) ].into_iter().collect()); - let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "headers": {"all": {"request": [{"propagate": {"named": "x-test"}}]}}, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); - configuration.notify = notify.clone(); - let configuration = Arc::new(configuration); - let service = TestHarness::builder() - .configuration(configuration.clone()) - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "headers": {"all": {"request": [{"propagate": {"named": "x-test"}}]}}, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); + configuration.notify = notify.clone(); + let configuration = Arc::new(configuration); + let service = TestHarness::builder() + .configuration(configuration.clone()) + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let 
request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } } } }", ) @@ -1489,33 +659,33 @@ mod tests { .context(subscription_context()) .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - insta::assert_json_snapshot!(res); - notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - - let new_schema = format!("{SCHEMA} "); - // reload schema - let schema = Schema::parse(&new_schema, &configuration).unwrap(); - notify.broadcast_schema(Arc::new(schema)); - insta::assert_json_snapshot!(tokio::time::timeout( - Duration::from_secs(1), - stream.next_response() - ) + let mut stream = service.oneshot(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(res); + notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + + let new_schema = format!("{SCHEMA} "); + // reload schema + let schema = Schema::parse(&new_schema, &configuration).unwrap(); + notify.broadcast_schema(Arc::new(schema)); + insta::assert_json_snapshot!(tokio::time::timeout( + Duration::from_secs(1), + stream.next_response() + ) + .await + .unwrap() + .unwrap()); +} + +#[tokio::test] +async fn subscription_with_callback_with_limit() { + let mut notify = Notify::builder().build(); + let (handle, _) = notify + .create_or_subscribe("TEST_TOPIC".to_string(), false) .await - .unwrap() - .unwrap()); - } - - 
#[tokio::test] - async fn subscription_with_callback_with_limit() { - let mut notify = Notify::builder().build(); - let (handle, _) = notify - .create_or_subscribe("TEST_TOPIC".to_string(), false) - .await - .unwrap(); - let subgraphs = MockedSubgraphs([ + .unwrap(); + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"subscription{userWasCreated{name activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"userWasCreated": { "__typename": "User", "id": "1", "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1539,97 +709,97 @@ mod tests { ).build()) ].into_iter().collect()); - let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "max_opened_subscriptions": 1, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); - configuration.notify = notify.clone(); - let mut service = TestHarness::builder() - .configuration(Arc::new(configuration)) - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let mut configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "max_opened_subscriptions": 1, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); + configuration.notify = notify.clone(); + let mut service = TestHarness::builder() + .configuration(Arc::new(configuration)) + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } } } }", ) .context(subscription_context()) .build() .unwrap(); - let mut stream = service.ready().await.unwrap().call(request).await.unwrap(); - let 
res = stream.next_response().await.unwrap(); - insta::assert_json_snapshot!(res); - notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - // error happened - notify - .broadcast( - graphql::Response::builder() - .error( - graphql::Error::builder() - .message("cannot fetch the name") - .extension_code("INVALID") - .build(), - ) - .build(), - ) - .await - .unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - let request = supergraph::Request::fake_builder() + let mut stream = service.ready().await.unwrap().call(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(res); + notify.broadcast(graphql::Response::builder().data(serde_json_bytes::json!({"userWasCreated": { "name": "test", "activeOrganization": { "__typename": "Organization", "id": "0" }}})).build()).await.unwrap(); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + // error happened + notify + .broadcast( + graphql::Response::builder() + .error( + graphql::Error::builder() + .message("cannot fetch the name") + .extension_code("INVALID") + .build(), + ) + .build(), + ) + .await + .unwrap(); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } } } }", ) .context(subscription_context()) .build() .unwrap(); - let mut stream_2 = service.ready().await.unwrap().call(request).await.unwrap(); - let res = stream_2.next_response().await.unwrap(); - assert!(!res.errors.is_empty()); - insta::assert_json_snapshot!(res); - drop(stream); - drop(stream_2); - let request = supergraph::Request::fake_builder() + let mut stream_2 = 
service.ready().await.unwrap().call(request).await.unwrap(); + let res = stream_2.next_response().await.unwrap(); + assert!(!res.errors.is_empty()); + insta::assert_json_snapshot!(res); + drop(stream); + drop(stream_2); + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } } } }", ) .context(subscription_context()) .build() .unwrap(); - // Wait a bit to ensure all the closed signals has been triggered - tokio::time::sleep(Duration::from_millis(100)).await; - let mut stream_2 = service.ready().await.unwrap().call(request).await.unwrap(); - let res = stream_2.next_response().await.unwrap(); - assert!(res.errors.is_empty()); - } - - #[tokio::test] - async fn subscription_without_header() { - let subgraphs = MockedSubgraphs(HashMap::new()); - let configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); - let service = TestHarness::builder() - .configuration(Arc::new(configuration)) - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + // Wait a bit to ensure all the closed signals have been triggered + tokio::time::sleep(Duration::from_millis(100)).await; + let mut stream_2 = service.ready().await.unwrap().call(request).await.unwrap(); + let res = stream_2.next_response().await.unwrap(); + assert!(res.errors.is_empty()); +} - let request = supergraph::Request::fake_builder() +#[tokio::test] +async fn subscription_without_header() { + let subgraphs = MockedSubgraphs(HashMap::new()); + let configuration: Configuration = serde_json::from_value(serde_json::json!({"include_subgraph_errors": { "all": true }, "subscription": { "enabled": true, "mode": {"preview_callback": {"public_url": "http://localhost:4545"}}}})).unwrap(); + let service = TestHarness::builder() + 
.configuration(Arc::new(configuration)) + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); + + let request = supergraph::Request::fake_builder() .query( "subscription { userWasCreated { name activeOrganization { id suborga { id name } } } }", ) .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - insta::assert_json_snapshot!(res); - } + let mut stream = service.oneshot(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(res); +} - #[tokio::test] - async fn root_typename_with_defer_and_empty_first_response() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn root_typename_with_defer_and_empty_first_response() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1671,20 +841,20 @@ mod tests { { "__typename": "Organization", "id": "3"}, ] } - }} - ).build()) - ].into_iter().collect()); - - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + }} + ).build()) + ].into_iter().collect()); + + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .context(defer_context()) .query( "query { __typename ... 
@defer { currentUser { activeOrganization { id suborga { id name } } } } }", @@ -1692,20 +862,20 @@ mod tests { .build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - assert_eq!( - res.data.as_ref().unwrap().get("__typename"), - Some(&serde_json_bytes::Value::String("Query".into())) - ); + let mut stream = service.oneshot(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + assert_eq!( + res.data.as_ref().unwrap().get("__typename"), + Some(&serde_json_bytes::Value::String("Query".into())) + ); - // Must have 2 chunks - let _ = stream.next_response().await.unwrap(); - } + // Must have 2 chunks + let _ = stream.next_response().await.unwrap(); +} - #[tokio::test] - async fn root_typename_with_defer_in_defer() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn root_typename_with_defer_in_defer() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} @@ -1729,16 +899,16 @@ mod tests { ).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() + let request = supergraph::Request::fake_builder() .context(defer_context()) .query( "query { ...@defer { __typename currentUser { activeOrganization { id suborga { id name } } } } }", @@ -1746,25 +916,25 @@ mod tests { 
.build() .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let res = stream.next_response().await.unwrap(); - assert_eq!(res.errors, []); - let res = stream.next_response().await.unwrap(); - assert_eq!( - res.incremental - .get(0) - .unwrap() - .data - .as_ref() - .unwrap() - .get("__typename"), - Some(&serde_json_bytes::Value::String("Query".into())) - ); - } - - #[tokio::test] - async fn query_reconstruction() { - let schema = r#"schema + let mut stream = service.oneshot(request).await.unwrap(); + let res = stream.next_response().await.unwrap(); + assert_eq!(res.errors, []); + let res = stream.next_response().await.unwrap(); + assert_eq!( + res.incremental + .get(0) + .unwrap() + .data + .as_ref() + .unwrap() + .get("__typename"), + Some(&serde_json_bytes::Value::String("Query".into())) + ); +} + +#[tokio::test] +async fn query_reconstruction() { + let schema = r#"schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.2", for: EXECUTION) @link(url: "https://specs.apollo.dev/tag/v0.2") @@ -1830,20 +1000,20 @@ mod tests { } "#; - // this test does not need to generate a valid response, it is only here to check - // that the router does not panic when reconstructing the query for the deferred part - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .build_supergraph() - .await - .unwrap(); + // this test does not need to generate a valid response, it is only here to check + // that the router does not panic when reconstructing the query for the deferred part + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - r#"mutation ($userId: ID!) 
{ + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query( + r#"mutation ($userId: ID!) { makePayment(userId: $userId) { id ... @defer { @@ -1853,20 +1023,20 @@ mod tests { } } }"#, - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - // if a deferred response falls under a path that was nullified in the primary response, - // the deferred response must not be sent - #[tokio::test] - async fn filter_nullified_deferred_responses() { - let subgraphs = MockedSubgraphs([ +// if a deferred response falls under a path that was nullified in the primary response, +// the deferred response must not be sent +#[tokio::test] +async fn filter_nullified_deferred_responses() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder() .with_json( serde_json::json!{{"query":"{currentUser{__typename name id}}"}}, @@ -1944,18 +1114,18 @@ mod tests { }}).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query( - r#"query { + let request = supergraph::Request::fake_builder() + .query( + r#"query { currentUser { name ... 
@defer { @@ -1971,27 +1141,27 @@ mod tests { } } }"#, - ) - .context(defer_context()) - .build() - .unwrap(); - let mut response = service.oneshot(request).await.unwrap(); + ) + .context(defer_context()) + .build() + .unwrap(); + let mut response = service.oneshot(request).await.unwrap(); - let primary = response.next_response().await.unwrap(); - insta::assert_json_snapshot!(primary); + let primary = response.next_response().await.unwrap(); + insta::assert_json_snapshot!(primary); - let deferred = response.next_response().await.unwrap(); - insta::assert_json_snapshot!(deferred); + let deferred = response.next_response().await.unwrap(); + insta::assert_json_snapshot!(deferred); - // the last deferred response was replace with an empty response, - // to still have one containing has_next = false - let last = response.next_response().await.unwrap(); - insta::assert_json_snapshot!(last); - } + // the last deferred response was replaced with an empty response, + // to still have one response containing has_next = false + let last = response.next_response().await.unwrap(); + insta::assert_json_snapshot!(last); +} - #[tokio::test] - async fn reconstruct_deferred_query_under_interface() { - let schema = r#"schema +#[tokio::test] +async fn reconstruct_deferred_query_under_interface() { + let schema = r#"schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.2", for: EXECUTION) @link(url: "https://specs.apollo.dev/tag/v0.2") @@ -2060,7 +1230,7 @@ mod tests { name: String! 
@join__field(graph: USER) }"#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{me{__typename ...on User{id fullName memberships{permission account{__typename id}}}}}"}}, serde_json::json!{{"data": {"me": { @@ -2095,19 +1265,19 @@ mod tests { }}).build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - r#"query { + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query( + r#"query { me { ... 
on User { id @@ -2123,39 +1293,39 @@ mod tests { } } }"#, - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - fn subscription_context() -> Context { - let context = Context::new(); - context.private_entries.lock().insert(ClientRequestAccepts { - multipart_subscription: true, - ..Default::default() - }); +fn subscription_context() -> Context { + let context = Context::new(); + context.private_entries.lock().insert(ClientRequestAccepts { + multipart_subscription: true, + ..Default::default() + }); - context - } + context +} - fn defer_context() -> Context { - let context = Context::new(); - context.private_entries.lock().insert(ClientRequestAccepts { - multipart_defer: true, - ..Default::default() - }); +fn defer_context() -> Context { + let context = Context::new(); + context.private_entries.lock().insert(ClientRequestAccepts { + multipart_defer: true, + ..Default::default() + }); - context - } + context +} - #[tokio::test] - async fn interface_object_typename_rewrites() { - let schema = r#" +#[tokio::test] +async fn interface_object_typename_rewrites() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) @@ -2223,7 +1393,7 @@ mod tests { } "#; - let query = r#" + let query = r#" { iFromS1 { ... on A { @@ -2233,7 +1403,7 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("S1", MockSubgraph::builder() .with_json( serde_json::json! 
{{ @@ -2260,32 +1430,32 @@ mod tests { .build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .query(query) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); - assert_eq!( - serde_json::to_value(&response.data).unwrap(), - serde_json::json!({ "iFromS1": { "y": 42 } }), - ); - } + assert_eq!( + serde_json::to_value(&response.data).unwrap(), + serde_json::json!({ "iFromS1": { "y": 42 } }), + ); +} - #[tokio::test] - async fn interface_object_response_processing() { - let schema = r#" +#[tokio::test] +async fn interface_object_response_processing() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) @@ -2365,7 +1535,7 @@ mod tests { } "#; - let query = r#" + let query = r#" { allReviewedProducts { id @@ -2374,7 +1544,7 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("products", MockSubgraph::builder() .with_json( serde_json::json! 
{{ @@ -2398,45 +1568,45 @@ mod tests { .build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .query(query) + .build() + .unwrap(); + + let mut stream = service.oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); + + assert_eq!( + serde_json::to_value(&response.data).unwrap(), + serde_json::json!({ "allReviewedProducts": [ {"id": "1", "price": 12.99}, {"id": "2", "price": 14.99} ]}), + ); +} - let mut stream = service.oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); - - assert_eq!( - serde_json::to_value(&response.data).unwrap(), - serde_json::json!({ "allReviewedProducts": [ {"id": "1", "price": 12.99}, {"id": "2", "price": 14.99} ]}), - ); - } - - #[tokio::test] - async fn only_query_interface_object_subgraph() { - // This test has 2 subgraphs, one with an interface and another with that interface - // declared as an @interfaceObject. It then sends a query that can be entirely - // fulfilled by the @interfaceObject subgraph (in particular, it doesn't request - // __typename; if it did, it would force a query on the other subgraph to obtain - // the actual implementation type). - // The specificity here is that the final in-memory result will not have a __typename - // _despite_ being the parent type of that result being an interface. 
Which is fine - // since __typename is not requested, and so there is no need to known the actual - // __typename, but this is something that never happen outside of @interfaceObject - // (usually, results whose parent type is an abstract type (say an interface) are always - // queried internally with their __typename). And so this test make sure that the - // post-processing done by the router on the result handle this correctly. - - let schema = r#" +#[tokio::test] +async fn only_query_interface_object_subgraph() { + // This test has 2 subgraphs, one with an interface and another with that interface + // declared as an @interfaceObject. It then sends a query that can be entirely + // fulfilled by the @interfaceObject subgraph (in particular, it doesn't request + // __typename; if it did, it would force a query on the other subgraph to obtain + // the actual implementation type). + // The specificity here is that the final in-memory result will not have a __typename + // _despite_ the parent type of that result being an interface. Which is fine + // since __typename is not requested, and so there is no need to know the actual + // __typename, but this is something that never happens outside of @interfaceObject + // (usually, results whose parent type is an abstract type (say an interface) are always + // queried internally with their __typename). And so this test makes sure that the + // post-processing done by the router on the result handles this correctly. + + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) @@ -2510,7 +1680,7 @@ mod tests { } "#; - let query = r#" + let query = r#" { iFromS2 { y @@ -2518,58 +1688,58 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs( - [ - ( - "S1", - MockSubgraph::builder() - // This test makes no queries to S1, only to S2 - .build(), - ), - ( - "S2", - MockSubgraph::builder() - .with_json( - serde_json::json!
{{ - "query": "{iFromS2{y}}", - }}, - serde_json::json! {{ - "data": {"iFromS2":{"y":20}} - }}, - ) - .build(), - ), - ] - .into_iter() - .collect(), - ); + let subgraphs = MockedSubgraphs( + [ + ( + "S1", + MockSubgraph::builder() + // This test makes no queries to S1, only to S2 + .build(), + ), + ( + "S2", + MockSubgraph::builder() + .with_json( + serde_json::json! {{ + "query": "{iFromS2{y}}", + }}, + serde_json::json! {{ + "data": {"iFromS2":{"y":20}} + }}, + ) + .build(), + ), + ] + .into_iter() + .collect(), + ); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .query(query) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); - assert_eq!( - serde_json::to_value(&response.data).unwrap(), - serde_json::json!({ "iFromS2": { "y": 20 } }), - ); - } + assert_eq!( + serde_json::to_value(&response.data).unwrap(), + serde_json::json!({ "iFromS2": { "y": 20 } }), + ); +} - #[tokio::test] - async fn aliased_subgraph_data_rewrites_on_root_fetch() { - let schema = r#" +#[tokio::test] +async fn aliased_subgraph_data_rewrites_on_root_fetch() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) @@ -2631,7 +1801,7 @@ mod tests { } "#; - let 
query = r#" + let query = r#" { us { f @@ -2639,7 +1809,7 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("S1", MockSubgraph::builder() .with_json( serde_json::json! {{ @@ -2667,32 +1837,32 @@ mod tests { .build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .query(query) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); - assert_eq!( - serde_json::to_value(&response.data).unwrap(), - serde_json::json!({"us": [{"f": "fA"}, {"f": "fB"}]}), - ); - } + assert_eq!( + serde_json::to_value(&response.data).unwrap(), + serde_json::json!({"us": [{"f": "fA"}, {"f": "fB"}]}), + ); +} - #[tokio::test] - async fn aliased_subgraph_data_rewrites_on_non_root_fetch() { - let schema = r#" +#[tokio::test] +async fn aliased_subgraph_data_rewrites_on_non_root_fetch() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) @@ -2761,7 +1931,7 @@ mod tests { } "#; - let query = r#" + let query = r#" { t { us { @@ -2771,7 +1941,7 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("S1", MockSubgraph::builder() .with_json( serde_json::json! 
{{ @@ -2807,32 +1977,32 @@ mod tests { .build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .query(query) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); - assert_eq!( - serde_json::to_value(&response.data).unwrap(), - serde_json::json!({"t": {"us": [{"f": "fA"}, {"f": "fB"}]}}), - ); - } + assert_eq!( + serde_json::to_value(&response.data).unwrap(), + serde_json::json!({"t": {"us": [{"f": "fA"}, {"f": "fB"}]}}), + ); +} - #[tokio::test] - async fn errors_on_nullified_paths() { - let schema = r#" +#[tokio::test] +async fn errors_on_nullified_paths() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.1", for: EXECUTION) @@ -2886,7 +2056,7 @@ mod tests { } "#; - let query = r#" + let query = r#" query Query { foo { id @@ -2898,7 +2068,7 @@ mod tests { } "#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ ("S1", MockSubgraph::builder().with_json( serde_json::json!{{"query":"query Query__S1__0{foo{id bar{__typename id}}}", "operationName": "Query__S1__0"}}, serde_json::json!{{"data": { @@ -2941,29 +2111,29 @@ mod tests { ).build()) ].into_iter().collect()); - let service = TestHarness::builder() - 
.configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query(query) - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query(query) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn missing_entities() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn missing_entities() { + let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( serde_json::json!{{"query":"{currentUser{id activeOrganization{__typename id}}}"}}, serde_json::json!{{"data": {"currentUser": { "__typename": "User", "id": "0", "activeOrganization": { "__typename": "Organization", "id": "1" } } } }} @@ -2972,29 +2142,29 @@ mod tests { serde_json::json!{{"data": {}, "errors":[{"message":"error"}]}}).build()) ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(SCHEMA) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(SCHEMA) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let request = 
supergraph::Request::fake_builder() - .context(defer_context()) - .query("query { currentUser { id activeOrganization{ id name } } }") - .build() - .unwrap(); + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query("query { currentUser { id activeOrganization{ id name } } }") + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - insta::assert_json_snapshot!(stream.next_response().await.unwrap()); - } + insta::assert_json_snapshot!(stream.next_response().await.unwrap()); +} - #[tokio::test] - async fn no_typename_on_interface() { - let subgraphs = MockedSubgraphs([ +#[tokio::test] +async fn no_typename_on_interface() { + let subgraphs = MockedSubgraphs([ ("animal", MockSubgraph::builder().with_json( serde_json::json!{{"query":"query dog__animal__0{dog{id name}}", "operationName": "dog__animal__0"}}, serde_json::json!{{"data":{"dog":{"id":"4321","name":"Spot"}}}} @@ -3007,7 +2177,7 @@ mod tests { ).build()), ].into_iter().collect()); - let service = TestHarness::builder() + let service = TestHarness::builder() .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) .unwrap() .schema( @@ -3061,10 +2231,10 @@ mod tests { .await .unwrap(); - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - "query dog { + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query( + "query dog { dog { ...on Animal { id @@ -3074,19 +2244,19 @@ mod tests { } } }", - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.clone().oneshot(request).await.unwrap(); + let mut stream = service.clone().oneshot(request).await.unwrap(); - let no_typename = stream.next_response().await.unwrap(); - insta::assert_json_snapshot!(no_typename); + let no_typename = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(no_typename); - let request 
= supergraph::Request::fake_builder() - .context(defer_context()) - .query( - "query dog { + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query( + "query dog { dog { ...on Animal { id @@ -3097,40 +2267,40 @@ mod tests { } } }", - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.clone().oneshot(request).await.unwrap(); + let mut stream = service.clone().oneshot(request).await.unwrap(); - let with_typename = stream.next_response().await.unwrap(); - assert_eq!( - with_typename - .data - .clone() - .unwrap() - .get("dog") - .unwrap() - .get("name") - .unwrap(), - no_typename - .data - .clone() - .unwrap() - .get("dog") - .unwrap() - .get("name") - .unwrap(), - "{:?}\n{:?}", - with_typename, - no_typename - ); - insta::assert_json_snapshot!(with_typename); - - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - "query dog { + let with_typename = stream.next_response().await.unwrap(); + assert_eq!( + with_typename + .data + .clone() + .unwrap() + .get("dog") + .unwrap() + .get("name") + .unwrap(), + no_typename + .data + .clone() + .unwrap() + .get("dog") + .unwrap() + .get("name") + .unwrap(), + "{:?}\n{:?}", + with_typename, + no_typename + ); + insta::assert_json_snapshot!(with_typename); + + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query( + "query dog { dog { ...on Dog { name @@ -3140,40 +2310,40 @@ mod tests { } } }", - ) - .build() - .unwrap(); + ) + .build() + .unwrap(); - let mut stream = service.oneshot(request).await.unwrap(); + let mut stream = service.oneshot(request).await.unwrap(); - let with_reversed_fragments = stream.next_response().await.unwrap(); - assert_eq!( - with_reversed_fragments - .data - .clone() - .unwrap() - .get("dog") - .unwrap() - .get("name") - .unwrap(), - no_typename - .data - .clone() - .unwrap() - .get("dog") - .unwrap() - .get("name") - .unwrap(), - "{:?}\n{:?}", - with_reversed_fragments, 
- no_typename - ); - insta::assert_json_snapshot!(with_reversed_fragments); - } - - #[tokio::test] - async fn multiple_interface_types() { - let schema = r#" + let with_reversed_fragments = stream.next_response().await.unwrap(); + assert_eq!( + with_reversed_fragments + .data + .clone() + .unwrap() + .get("dog") + .unwrap() + .get("name") + .unwrap(), + no_typename + .data + .clone() + .unwrap() + .get("dog") + .unwrap() + .get("name") + .unwrap(), + "{:?}\n{:?}", + with_reversed_fragments, + no_typename + ); + insta::assert_json_snapshot!(with_reversed_fragments); +} + +#[tokio::test] +async fn multiple_interface_types() { + let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) { @@ -3302,7 +2472,7 @@ mod tests { } "#; - let query = r#"fragment OperationItemFragment on OperationItem { + let query = r#"fragment OperationItemFragment on OperationItem { __typename ... on OperationItemStuff { __typename @@ -3339,7 +2509,7 @@ mod tests { } }"#; - let subgraphs = MockedSubgraphs([ + let subgraphs = MockedSubgraphs([ // The response isn't interesting to us, // we just need to make sure the query makes it through parsing and validation ("graph1", MockSubgraph::builder().with_json( @@ -3348,83 +2518,82 @@ mod tests { ).build()), ].into_iter().collect()); - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(schema) - .extra_plugin(subgraphs) - .build_supergraph() - .await - .unwrap(); - - let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query(query) - .variables( - serde_json_bytes::json! 
{{ "id": "1234", "a": 1, "b": 2}} - .as_object() - .unwrap() - .clone(), - ) - .build() - .unwrap(); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(schema) + .extra_plugin(subgraphs) + .build_supergraph() + .await + .unwrap(); - let mut stream = service.clone().oneshot(request).await.unwrap(); - let response = stream.next_response().await.unwrap(); - assert_eq!(serde_json_bytes::Value::Null, response.data.unwrap()); - } - - #[tokio::test] - async fn id_scalar_can_overflow_i32() { - // Hack to let the first subgraph fetch contain an ID variable: - // ``` - // type Query { - // user(id: ID!): User @join__field(graph: USER) - // } - // ``` - assert!(SCHEMA.contains("currentUser:")); - let schema = SCHEMA.replace("currentUser:", "user(id: ID!):"); - - let service = TestHarness::builder() - .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) - .unwrap() - .schema(&schema) - .subgraph_hook(|_subgraph_name, _service| { - tower::service_fn(|request: subgraph::Request| async move { - let id = &request.subgraph_request.body().variables["id"]; - Err(format!("$id = {id}").into()) - }) - .boxed() - }) - .build_supergraph() - .await - .unwrap(); + let request = supergraph::Request::fake_builder() + .context(defer_context()) + .query(query) + .variables( + serde_json_bytes::json! 
{{ "id": "1234", "a": 1, "b": 2}} + .as_object() + .unwrap() + .clone(), + ) + .build() + .unwrap(); - let large: i64 = 1 << 53; - let large_plus_one = large + 1; - // f64 rounds since it doesn’t have enough mantissa bits - assert!(large_plus_one as f64 as i64 == large); - // i64 of course doesn’t round - assert!(large_plus_one != large); + let mut stream = service.clone().oneshot(request).await.unwrap(); + let response = stream.next_response().await.unwrap(); + assert_eq!(serde_json_bytes::Value::Null, response.data.unwrap()); +} - let request = supergraph::Request::fake_builder() - .query("query($id: ID!) { user(id: $id) { name }}") - .variable("id", large_plus_one) - .build() - .unwrap(); - let response = service - .oneshot(request) - .await - .unwrap() - .next_response() - .await - .unwrap(); - // The router did not panic or respond with an early validation error. - // Instead it did a subgraph fetch, which recieved the correct ID variable without rounding: - assert_eq!( - response.errors[0].extensions["reason"].as_str().unwrap(), - "$id = 9007199254740993" - ); - assert_eq!(large_plus_one.to_string(), "9007199254740993"); - } +#[tokio::test] +async fn id_scalar_can_overflow_i32() { + // Hack to let the first subgraph fetch contain an ID variable: + // ``` + // type Query { + // user(id: ID!): User @join__field(graph: USER) + // } + // ``` + assert!(SCHEMA.contains("currentUser:")); + let schema = SCHEMA.replace("currentUser:", "user(id: ID!):"); + + let service = TestHarness::builder() + .configuration_json(serde_json::json!({"include_subgraph_errors": { "all": true } })) + .unwrap() + .schema(&schema) + .subgraph_hook(|_subgraph_name, _service| { + tower::service_fn(|request: subgraph::Request| async move { + let id = &request.subgraph_request.body().variables["id"]; + Err(format!("$id = {id}").into()) + }) + .boxed() + }) + .build_supergraph() + .await + .unwrap(); + + let large: i64 = 1 << 53; + let large_plus_one = large + 1; + // f64 rounds since it 
doesn’t have enough mantissa bits + assert!(large_plus_one as f64 as i64 == large); + // i64 of course doesn’t round + assert!(large_plus_one != large); + + let request = supergraph::Request::fake_builder() + .query("query($id: ID!) { user(id: $id) { name }}") + .variable("id", large_plus_one) + .build() + .unwrap(); + let response = service + .oneshot(request) + .await + .unwrap() + .next_response() + .await + .unwrap(); + // The router did not panic or respond with an early validation error. + // Instead it did a subgraph fetch, which received the correct ID variable without rounding: + assert_eq!( + response.errors[0].extensions["reason"].as_str().unwrap(), + "$id = 9007199254740993" + ); + assert_eq!(large_plus_one.to_string(), "9007199254740993"); } diff --git a/apollo-router/src/spec/query/transform.rs b/apollo-router/src/spec/query/transform.rs index 1710e2444b..1f1c0d0567 100644 --- a/apollo-router/src/spec/query/transform.rs +++ b/apollo-router/src/spec/query/transform.rs @@ -13,26 +13,29 @@ pub(crate) fn document( sources: document.sources.clone(), definitions: Vec::new(), }; + + // walk through the fragments first: if a fragment is entirely filtered, we want to + // remove the spread too for definition in &document.definitions { - match definition { - ast::Definition::OperationDefinition(def) => { - let root_type = visitor - .schema() - .root_operation(def.operation_type) - .ok_or("missing root operation definition")? - .clone(); - if let Some(new_def) = visitor.operation(&root_type, def)? { - new.definitions - .push(ast::Definition::OperationDefinition(new_def.into())) - } + if let ast::Definition::FragmentDefinition(def) = definition { + if let Some(new_def) = visitor.fragment_definition(def)? { + new.definitions + .push(ast::Definition::FragmentDefinition(new_def.into())) } - ast::Definition::FragmentDefinition(def) => { - if let Some(new_def) = visitor.fragment_definition(def)?
{ - new.definitions - .push(ast::Definition::FragmentDefinition(new_def.into())) - } + } + } + + for definition in &document.definitions { + if let ast::Definition::OperationDefinition(def) = definition { + let root_type = visitor + .schema() + .root_operation(def.operation_type) + .ok_or("missing root operation definition")? + .clone(); + if let Some(new_def) = visitor.operation(&root_type, def)? { + new.definitions + .push(ast::Definition::OperationDefinition(new_def.into())) } - _ => {} } } Ok(new) @@ -301,19 +304,19 @@ fn test_add_directive_to_fields() { let ast = apollo_compiler::ast::Document::parse(graphql, ""); let (schema, _doc) = ast.to_mixed(); let mut visitor = AddDirective { schema }; - let expected = "query($id: ID = null) { + let expected = "fragment F on Query { + next @added { + a @added + } +} + +query($id: ID = null) { a @added ... @defer { b @added } ...F } - -fragment F on Query { - next @added { - a @added - } -} "; assert_eq!(document(&mut visitor, &ast).unwrap().to_string(), expected) } diff --git a/apollo-router/src/spec/schema.rs b/apollo-router/src/spec/schema.rs index 5842989dba..ca170a9402 100644 --- a/apollo-router/src/spec/schema.rs +++ b/apollo-router/src/spec/schema.rs @@ -146,6 +146,12 @@ impl Schema { }) } + pub(crate) fn create_api_schema(&self) -> String { + apollo_federation::Supergraph::from(self.definitions.clone()) + .to_api_schema() + .to_string() + } + pub(crate) fn with_api_schema(mut self, api_schema: Schema) -> Self { self.api_schema = Some(Box::new(api_schema)); self diff --git a/apollo-router/src/test_harness.rs b/apollo-router/src/test_harness.rs index 189baf39f5..191fd4ec21 100644 --- a/apollo-router/src/test_harness.rs +++ b/apollo-router/src/test_harness.rs @@ -23,7 +23,7 @@ use crate::services::execution; use crate::services::layers::persisted_queries::PersistedQueryLayer; use crate::services::layers::query_analysis::QueryAnalysisLayer; use crate::services::router; -use 
crate::services::router_service::RouterCreator; +use crate::services::router::service::RouterCreator; use crate::services::subgraph; use crate::services::supergraph; use crate::services::HasSchema; diff --git a/apollo-router/src/uplink/license_stream.rs b/apollo-router/src/uplink/license_stream.rs index 79ec956d02..bf03a2ab8e 100644 --- a/apollo-router/src/uplink/license_stream.rs +++ b/apollo-router/src/uplink/license_stream.rs @@ -229,7 +229,6 @@ mod test { use std::time::Instant; use std::time::SystemTime; - use futures::SinkExt; use futures::StreamExt; use futures_test::stream::StreamTestExt; @@ -384,8 +383,9 @@ mod test { #[tokio::test(flavor = "multi_thread")] async fn license_expander_claim_pause_claim() { - let (mut tx, rx) = futures::channel::mpsc::channel(10); - let events_stream = rx.expand_licenses().map(SimpleEvent::from); + let (tx, rx) = tokio::sync::mpsc::channel(10); + let rx_stream = tokio_stream::wrappers::ReceiverStream::new(rx); + let events_stream = rx_stream.expand_licenses().map(SimpleEvent::from); tokio::task::spawn(async move { // This simulates a new claim coming in before in between the warning and halt diff --git a/apollo-router/templates/sandbox_index.html b/apollo-router/templates/sandbox_index.html index 2e97cc0dba..006fe13f51 100644 --- a/apollo-router/templates/sandbox_index.html +++ b/apollo-router/templates/sandbox_index.html @@ -51,7 +51,7 @@

Welcome to the Apollo Router

style="width: 100vw; height: 100vh; position: absolute; top: 0;" id="embeddableSandbox" > - +