Add limited UPDATE/DELETE (tables only) #2195

Merged 5 commits on Mar 26, 2022
Changes from 3 commits
11 changes: 10 additions & 1 deletion CHANGELOG.md
@@ -18,6 +18,10 @@ This project adheres to [Semantic Versioning](http://semver.org/).
+ #1689, Add the ability to run without `db-anon-role` disabling anonymous access. - @wolfgangwalther
- #1543, Allow access to fields of composite types in select=, order= and filters through JSON operators -> and ->>. - @wolfgangwalther
- #2075, Allow access to array items in ?select=, ?order= and filters through JSON operators -> and ->>. - @wolfgangwalther
- #2156, Allow applying `limit/offset` to UPDATE/DELETE to only affect a subset of rows - @steve-chavez
+ Uses the table's primary key, so it needs a SELECT privilege on the primary key columns
+ If no primary key is available, it will fall back to using the "ctid" system column (which also requires a SELECT privilege on it)
+ Will work on views if the PK (or "ctid") is present in their SELECT clause
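The fallback rule described in these notes can be sketched as follows. This is an illustrative Python sketch of the documented behavior, not PostgREST's actual Haskell code, and the function name is made up:

```python
# Hypothetical sketch: which columns a limited UPDATE/DELETE uses to
# identify the affected rows, per the changelog notes above.
# Falls back to the "ctid" system column when there is no primary key.
def mutation_key_columns(pk_cols):
    """Return the columns a limited UPDATE/DELETE will order/join on."""
    return pk_cols if pk_cols else ["ctid"]

print(mutation_key_columns(["id"]))  # table with a primary key
print(mutation_key_columns([]))      # no PK: fall back to ctid
```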

### Fixed

@@ -37,11 +41,16 @@ This project adheres to [Semantic Versioning](http://semver.org/).
- #2153, Fix --dump-schema running with a wrong PG version. - @wolfgangwalther
- #2042, Keep working when EMFILE (Too many open files) is reached. - @steve-chavez
- #2147, Ignore `Content-Type` headers for `GET` requests when calling RPCs. Previously, `GET` without parameters, but with `Content-Type: text/plain` or `Content-Type: application/octet-stream` would fail with `404 Not Found`, even if a function without arguments was available.
- #2155, Ignore `max-rows` on POST, PATCH, PUT and DELETE - @steve-chavez

### Changed

- #2001, Return 204 No Content without Content-Type for RPCs returning VOID - @wolfgangwalther
+ Previously, those RPCs would return "null" as a body with Content-Type: application/json.
- #2156, `limit/offset` now limits the affected rows on UPDATE/DELETE - @steve-chavez
+ Previously, `limit/offset` only limited the returned rows but not the actual updated rows
- #2155, `max-rows` is no longer applied on POST/PATCH/PUT/DELETE returned rows - @steve-chavez
+ This was misleading because `max-rows` never actually limited the affected rows; only the returned rows were limited

## [9.0.0] - 2021-11-25

@@ -62,7 +71,7 @@ This project adheres to [Semantic Versioning](http://semver.org/).
- #2031, Improve error message for ambiguous embedding and add a relevant hint that includes unambiguous embedding suggestions - @laurenceisla
- #1917, Add error codes with the `"PGRST"` prefix to the error response body to differentiate PostgREST errors from PostgreSQL errors - @laurenceisla
- #1917, Normalize the error response body by always having the `detail` and `hint` error fields with a `null` value if they are empty - @laurenceisla
- #2176, Errors raised with `SQLSTATE` now include the message and the code in the response body - @laurenceisla

### Fixed

72 changes: 54 additions & 18 deletions src/PostgREST/Query/QueryBuilder.hs
@@ -29,6 +29,7 @@ import PostgREST.DbStructure.Table (Table (..))
import PostgREST.Request.Preferences (PreferResolution (..))

import PostgREST.Query.SqlFragment
import PostgREST.RangeQuery (allRange)
import PostgREST.Request.Types

import Protolude
@@ -105,28 +106,63 @@ mutateRequestToQuery (Insert mainQi iCols body onConflct putConditions returning
])
where
cols = BS.intercalate ", " $ pgFmtIdent <$> S.toList iCols
mutateRequestToQuery (Update mainQi uCols body logicForest returnings) =
if S.null uCols

mutateRequestToQuery (Update mainQi uCols body logicForest (range, rangeId) returnings)
| S.null uCols =
-- if there are no columns we cannot do UPDATE table SET {empty}, it'd be invalid syntax
-- selecting an empty resultset from mainQi gives us the column names to prevent errors when using &select=
-- the select has to be based on "returnings" to make computed overloaded functions not throw
then SQL.sql ("SELECT " <> emptyBodyReturnedColumns <> " FROM " <> fromQi mainQi <> " WHERE false")
else
"WITH " <> normalizedBody body <> " " <>
"UPDATE " <> SQL.sql (fromQi mainQi) <> " SET " <> SQL.sql cols <> " " <>
"FROM (SELECT * FROM json_populate_recordset (null::" <> SQL.sql (fromQi mainQi) <> " , " <> SQL.sql selectBody <> " )) _ " <>
(if null logicForest then mempty else "WHERE " <> intercalateSnippet " AND " (pgFmtLogicTree mainQi <$> logicForest)) <> " " <>
SQL.sql (returningF mainQi returnings)
SQL.sql $ "SELECT " <> emptyBodyReturnedColumns <> " FROM " <> fromQi mainQi <> " WHERE false"

| range == allRange =
"WITH " <> normalizedBody body <> " " <>
"UPDATE " <> mainTbl <> " SET " <> SQL.sql nonRangeCols <> " " <>
"FROM (SELECT * FROM json_populate_recordset (null::" <> mainTbl <> " , " <> SQL.sql selectBody <> " )) _ " <>
whereLogic <> " " <>
Comment on lines +119 to +121

Member: Looks like this change always adds a WHERE clause to the UPDATE query - even if the request is completely unconditional. This breaks using pg-safeupdate. The request now goes through instead of failing because of a previously unconditional UPDATE query.

Member Author: Note that pg-safeupdate is not required anymore, because we're disallowing unfiltered PATCH/unfiltered DELETE unless a limit is applied.

> Looks like this change always adds a WHERE clause to the UPDATE query - even if the request is completely unconditional.

Yeah, because of the above, if no filter is added then WHERE false is done. One thing better with the pg-safeupdate safeguard is that it fails with an error, while the WHERE false will not do anything. That's only the case for unfiltered UPDATE, though; for unfiltered DELETE, we're going to fail with an error. (I need to go back to this one because it will be inconsistent otherwise.) Would we need to keep compatibility with pg-safeupdate now that we have this? Why? Not sure, but maybe we could do LIMIT 0 instead of WHERE false.

Member Author:

> Yeah, because of the above, if no filter is added then WHERE false is done.

Oh, hell. I thought this was a review on #2311.

Member Author: Addressing this in #2405.

Member:

> Looks like this change always adds a WHERE clause to the UPDATE query - even if the request is completely unconditional.

I think the change in this PR still allows the full table update without conditionals. The one that disallowed this was #2311, if I'm not mistaken.

Member Author: Compare #2418.
SQL.sql (returningF mainQi returnings)

| otherwise =
"WITH " <> normalizedBody body <> ", " <>
"pgrst_update_body AS (SELECT * FROM json_populate_recordset (null::" <> mainTbl <> " , " <> SQL.sql selectBody <> " ) LIMIT 1), " <>
"pgrst_affected_rows AS (" <>
"SELECT " <> SQL.sql rangeIdF <> " FROM " <> mainTbl <>
whereLogic <> " " <>
"ORDER BY " <> SQL.sql rangeIdF <> " " <> limitOffsetF range <>
Member: Do we need to order this by rangeIdF here? Shouldn't we order this by whatever was put in the query string as ?order=...? Otherwise the result of limit will be unpredictable.

Member Author (@steve-chavez, Mar 26, 2022):

> Do we need to order this by rangeIdF here?

Yes, as mentioned in the Crunchy Data blog post: "A word of warning: operating (particularly for a destructive operation) on some poorly-defined criteria is purposefully difficult. In general, the ORDER BY clause is strongly encouraged for any operation using LIMIT for a mutable query."

> Shouldn't we order this by whatever was put in the query string as ?order=...?

If the client doesn't specify an order, then that would be bad because of the above. I guess we could replace the implicit ORDER BY PK by the explicit ?order, though.

Member: I'd say we just use the ?order and add the ctid column at the end of that. If there is no ?order, it's just the ctid column.
") " <>
"UPDATE " <> mainTbl <> " SET " <> SQL.sql rangeCols <>
"FROM pgrst_affected_rows " <>
"WHERE " <> SQL.sql whereRangeIdF <> " " <>
SQL.sql (returningF mainQi returnings)

where
whereLogic = if null logicForest then mempty else " WHERE " <> intercalateSnippet " AND " (pgFmtLogicTree mainQi <$> logicForest)
mainTbl = SQL.sql (fromQi mainQi)
emptyBodyReturnedColumns = if null returnings then "NULL" else BS.intercalate ", " (pgFmtColumn (QualifiedIdentifier mempty $ qiName mainQi) <$> returnings)
nonRangeCols = BS.intercalate ", " (pgFmtIdent <> const " = _." <> pgFmtIdent <$> S.toList uCols)
rangeCols = BS.intercalate ", " ((\col -> pgFmtIdent col <> " = (SELECT " <> pgFmtIdent col <> " FROM pgrst_update_body) ") <$> S.toList uCols)
(whereRangeIdF, rangeIdF) = mutRangeF mainQi rangeId
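The `rangeCols` binding above rewrites each updated column as a scalar subselect against the single-row `pgrst_update_body` CTE. A rough Python transcription of that fragment builder (a hypothetical helper with simplified quoting, not the shipped Haskell):

```python
# Illustrative mirror of rangeCols: each column in the update body becomes
# `"col" = (SELECT "col" FROM pgrst_update_body)`. Columns are sorted here
# only to make the output deterministic for a set input.
def range_cols(u_cols):
    return ", ".join(
        f'"{c}" = (SELECT "{c}" FROM pgrst_update_body)' for c in sorted(u_cols)
    )

print(range_cols({"name"}))
```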

mutateRequestToQuery (Delete mainQi logicForest (range, rangeId) returnings)
| range == allRange =
"DELETE FROM " <> SQL.sql (fromQi mainQi) <> " " <>
whereLogic <> " " <>
SQL.sql (returningF mainQi returnings)

| otherwise =
"WITH " <>
"pgrst_affected_rows AS (" <>
"SELECT " <> SQL.sql rangeIdF <> " FROM " <> SQL.sql (fromQi mainQi) <>
whereLogic <> " " <>
"ORDER BY " <> SQL.sql rangeIdF <> " " <> limitOffsetF range <>
") " <>
"DELETE FROM " <> SQL.sql (fromQi mainQi) <> " " <>
"USING pgrst_affected_rows " <>
"WHERE " <> SQL.sql whereRangeIdF <> " " <>
SQL.sql (returningF mainQi returnings)

where
cols = BS.intercalate ", " (pgFmtIdent <> const " = _." <> pgFmtIdent <$> S.toList uCols)
emptyBodyReturnedColumns :: SqlFragment
emptyBodyReturnedColumns
| null returnings = "NULL"
| otherwise = BS.intercalate ", " (pgFmtColumn (QualifiedIdentifier mempty $ qiName mainQi) <$> returnings)
mutateRequestToQuery (Delete mainQi logicForest returnings) =
"DELETE FROM " <> SQL.sql (fromQi mainQi) <> " " <>
(if null logicForest then mempty else "WHERE " <> intercalateSnippet " AND " (map (pgFmtLogicTree mainQi) logicForest)) <> " " <>
SQL.sql (returningF mainQi returnings)
whereLogic = if null logicForest then mempty else " WHERE " <> intercalateSnippet " AND " (pgFmtLogicTree mainQi <$> logicForest)
(whereRangeIdF, rangeIdF) = mutRangeF mainQi rangeId
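The limited-DELETE branch above selects the affected keys into a `pgrst_affected_rows` CTE and then deletes `USING` it. A hedged Python mock-up of the SQL shape (identifiers and quoting simplified; the real code composes `SQL.Snippet`s and handles composite keys):

```python
# Sketch of the generated SQL for a limited delete with a single key column.
# `where_frag` is the optional pre-rendered WHERE clause (may be empty).
def limited_delete_sql(table, key, where_frag, limit, offset):
    key_qual = f"{table}.{key}"
    return (
        "WITH pgrst_affected_rows AS ("
        f"SELECT {key_qual} FROM {table}{where_frag} "
        f"ORDER BY {key_qual} LIMIT {limit} OFFSET {offset}) "
        f"DELETE FROM {table} "
        "USING pgrst_affected_rows "
        f"WHERE {key_qual} = pgrst_affected_rows.{key}"
    )

print(limited_delete_sql('"test"."items"', '"id"', "", 1, 1))
```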

requestToCallProcQuery :: CallRequest -> SQL.Snippet
requestToCallProcQuery (FunctionCall qi params args returnsScalar multipleCall returnings) =
10 changes: 10 additions & 0 deletions src/PostgREST/Query/SqlFragment.hs
@@ -17,6 +17,7 @@ module PostgREST.Query.SqlFragment
, fromQi
, limitOffsetF
, locationF
, mutRangeF
, normalizedBody
, pgFmtColumn
, pgFmtIdent
@@ -334,3 +335,12 @@ unknownLiteral = unknownEncoder . encodeUtf8
intercalateSnippet :: ByteString -> [SQL.Snippet] -> SQL.Snippet
intercalateSnippet _ [] = mempty
intercalateSnippet frag snippets = foldr1 (\a b -> a <> SQL.sql frag <> b) snippets

-- the "ctid" system column is always available to tables
mutRangeF :: QualifiedIdentifier -> [FieldName] -> (SqlFragment, SqlFragment)
mutRangeF mainQi rangeId = (
BS.intercalate " AND " $
(\col -> pgFmtColumn mainQi col <> " = " <> pgFmtColumn (QualifiedIdentifier mempty "pgrst_affected_rows") col) <$>
(if null rangeId then ["ctid"] else rangeId)
, if null rangeId then pgFmtColumn mainQi "ctid" else BS.intercalate ", " (pgFmtColumn mainQi <$> rangeId)
)
steve-chavez marked this conversation as resolved.
Comment on lines +338 to +346
Member: With the current implementation, you don't need pk columns at all. You can just do it with the ctid column only. This will make it less complex for now.

Member Author: Given what I mentioned below, it does make a lot of sense to order by ctid only and not by PK, because doing this:

INSERT INTO projects VALUES (1, 'Windows 7', 1);
INSERT INTO projects VALUES (2, 'Windows 10', 1);
INSERT INTO projects VALUES (5, 'Orphan', NULL);
INSERT INTO projects VALUES (4, 'OSX', 2);
INSERT INTO projects VALUES (3, 'IOS', 2);

gives, through HTTP:

GET /projects

[{"id":1,"name":"Windows 7","client_id":1},
 {"id":2,"name":"Windows 10","client_id":1},
 {"id":5,"name":"Orphan","client_id":null},
 {"id":4,"name":"OSX","client_id":2},
 {"id":3,"name":"IOS","client_id":2}]

which is the same as doing GET /projects?order=ctid but not the same as doing GET /projects?order=id.

Member Author: So basically, with ctid we can ensure that DELETE /projects?limit=2 deletes the same rows as the ones obtained through GET /projects?limit=2.

Member Author (@steve-chavez, Mar 26, 2022): The only bad thing about preferring ctid is that you have to give an explicit SELECT privilege to the column for it to work (or a full wide SELECT), and that is weird - giving the PK columns a SELECT privilege is a more usual operation.

Member Author: So because of the above - the need to do a special GRANT SELECT(ctid) for limit to work - I think ctid should in fact be left out and limit should depend on PKs. Later on, it would also be weird to tell users to expose ctid in their VIEWs for limit to work.

Member Author:

> When a user chooses to order on non-pk columns, you can't show this error, because you'd need to detect the uniqueness of those columns again - which is exactly what we have trouble with for views

Forgot about that, so I can't suggest ordering by a pk, because a view might have more rows than pk values, as in the example you showed here. Got it.

> Is this what you understood, too?

Ah, yes, I understand the mechanism; however, the "implicit" part got me thinking about a server-side default for row_count. That would basically solve the "PostgREST lets you delete all your rows by default" problem - sometimes users make the mistake of sending deletes without filters, and this won't stop even with limit, I believe. So I think enforcing a row_count server side that is overridable on the client would solve that.

Member: I see. A server-side default, which is overridable from the client side, would be completely unrelated to this issue here, though. The implicit row_count here would just be to avoid mistakes in the order clause - while the default you are proposing would be a protection against unlimited delete.

Member Author:

> However, if we instead run this query implicitly as the following:
> DELETE /v!row_count=lte.2?limit=2&order=static

@wolfgangwalther Now that I've realized it, this would mean that limit needs row_count, and that for this feature both might as well be the same. So how about this: I'll go ahead and apply the row_count logic for the limit but will not expose the actual row_count special filter. We can add that one later, when we settle on a syntax. (I do see row_count being useful in other ways, but for this feature we can apply it implicitly, as you mentioned.)

Member Author (@steve-chavez, Mar 30, 2022): And now that I think about it: if we enforce row_count implicitly on limit, then would it be safe to assume the PK cols for the order/join on views as well? So the above problem would be no more? I mean, because we'd already be fulfilling our part of the contract - the affected rows would not exceed the limit - and if the VIEW query reduces the amount of rows mapped to the PKs, this would not be an issue.

Member:

> So how about this. I'll go ahead and apply the row_count logic for the limit but will not expose the actual row_count special filter.

Ok.

> I mean because we'd be already fulfilling our part of the contract - the affected rows would not exceed the limit.

I think it would still have very odd interactions with offset. For me, it'd be much simpler to implement and explain to always require and use the columns in order - and hint in the docs and error messages at the requirements for that. We'd need that for views without inferred PK columns anyway, and this way we don't have a complicated mix of what to do in which situation.
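For reference, the `mutRangeF` helper under discussion can be approximated in Python as follows. This is an illustrative transcription with simplified identifier quoting, not the shipped code:

```python
# Mirror of mutRangeF: given a qualified table name and the key columns,
# build (a) the WHERE fragment joining the table to the pgrst_affected_rows
# CTE and (b) the fragment selecting those key columns. Falls back to the
# "ctid" system column when no key columns are available.
def mut_range_f(main_qi, range_id):
    cols = range_id if range_id else ["ctid"]
    where = " AND ".join(
        f'{main_qi}."{c}" = "pgrst_affected_rows"."{c}"' for c in cols
    )
    select = ", ".join(f'{main_qi}."{c}"' for c in cols)
    return where, select

print(mut_range_f('"test"."items"', ["id"]))
```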

2 changes: 1 addition & 1 deletion src/PostgREST/Request/ApiRequest.hs
@@ -151,7 +151,7 @@ targetToJsonRpcParams target params =
-}
data ApiRequest = ApiRequest {
iAction :: Action -- ^ Similar but not identical to HTTP verb, e.g. Create/Invoke both POST
, iRange :: M.HashMap Text NonnegRange -- ^ Requested range of rows within response
, iTopLevelRange :: NonnegRange -- ^ Requested range of rows from the top level
, iTarget :: Target -- ^ The target, be it calling a proc or accessing a table
, iPayload :: Maybe Payload -- ^ Data sent by client and used for mutation actions
15 changes: 9 additions & 6 deletions src/PostgREST/Request/DbRequestBuilder.hs
@@ -63,7 +63,7 @@ import Protolude hiding (from)
readRequest :: Schema -> TableName -> Maybe Integer -> [Relationship] -> ApiRequest -> Either Error ReadRequest
readRequest schema rootTableName maxRows allRels apiRequest =
mapLeft ApiRequestError $
treeRestrictRange maxRows =<<
treeRestrictRange maxRows (iAction apiRequest) =<<
augmentRequestWithJoin schema rootRels =<<
addLogicTrees apiRequest =<<
addRanges apiRequest =<<
@@ -115,8 +115,9 @@ initReadRequest rootQi =
fldForest:rForest

-- | Enforces the `max-rows` config on the result
treeRestrictRange :: Maybe Integer -> ReadRequest -> Either ApiRequestError ReadRequest
treeRestrictRange maxRows request = pure $ nodeRestrictRange maxRows <$> request
treeRestrictRange :: Maybe Integer -> Action -> ReadRequest -> Either ApiRequestError ReadRequest
treeRestrictRange _ (ActionMutate _) request = Right request
treeRestrictRange maxRows _ request = pure $ nodeRestrictRange maxRows <$> request
where
nodeRestrictRange :: Maybe Integer -> ReadNode -> ReadNode
nodeRestrictRange m (q@Select {range_=r}, i) = (q{range_=restrictRange m r }, i)
@@ -283,7 +284,9 @@ addOrders ApiRequest{..} rReq =

addRanges :: ApiRequest -> ReadRequest -> Either ApiRequestError ReadRequest
addRanges ApiRequest{..} rReq =
foldr addRangeToNode (Right rReq) =<< ranges
case iAction of
ActionMutate _ -> Right rReq
_ -> foldr addRangeToNode (Right rReq) =<< ranges
where
ranges :: Either ApiRequestError [(EmbedPath, NonnegRange)]
ranges = first QueryParamError $ QueryParams.pRequestRange `traverse` M.toList iRange
@@ -319,7 +322,7 @@ mutateRequest mutation schema tName ApiRequest{..} pkCols readReq = mapLeft ApiR
case mutation of
MutationCreate ->
Right $ Insert qi iColumns body ((,) <$> iPreferResolution <*> Just confCols) [] returnings
MutationUpdate -> Right $ Update qi iColumns body combinedLogic returnings
MutationUpdate -> Right $ Update qi iColumns body combinedLogic (iTopLevelRange, pkCols) returnings
MutationSingleUpsert ->
if null qsLogic &&
qsFilterFields == S.fromList pkCols &&
@@ -330,7 +333,7 @@ mutateRequest mutation schema tName ApiRequest{..} pkCols readReq = mapLeft ApiR
then Right $ Insert qi iColumns body (Just (MergeDuplicates, pkCols)) combinedLogic returnings
else
Left InvalidFilters
MutationDelete -> Right $ Delete qi combinedLogic returnings
MutationDelete -> Right $ Delete qi combinedLogic (iTopLevelRange, pkCols) returnings
where
confCols = fromMaybe pkCols qsOnConflict
QueryParams.QueryParams{..} = iQueryParams
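The request-builder changes in this file route ranges differently for mutations: `treeRestrictRange` and `addRanges` now skip mutation actions, and `iTopLevelRange` travels with the `Update`/`Delete` constructors instead. A minimal sketch of that routing decision, assuming a boolean flag standing in for the `ActionMutate` pattern match:

```python
# Illustrative sketch: how the read tree's row limit is decided after this
# change. For mutation actions the read range is left untouched (max-rows is
# ignored); the top-level range is applied to the mutation itself instead.
def restrict_read_range(max_rows, action_is_mutation, requested):
    if action_is_mutation or max_rows is None:
        return requested  # leave the read tree untouched
    return min(requested, max_rows)

print(restrict_read_range(100, True, 1000))   # mutation: bypasses max-rows
print(restrict_read_range(100, False, 1000))  # read: clamped by max-rows
```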
2 changes: 2 additions & 0 deletions src/PostgREST/Request/Types.hs
@@ -135,11 +135,13 @@ data MutateQuery
, updCols :: S.Set FieldName
, updBody :: Maybe LBS.ByteString
, where_ :: [LogicTree]
, mutRange :: (NonnegRange, [FieldName])
, returning :: [FieldName]
}
| Delete
{ in_ :: QualifiedIdentifier
, where_ :: [LogicTree]
, mutRange :: (NonnegRange, [FieldName])
, returning :: [FieldName]
}

161 changes: 161 additions & 0 deletions test/spec/Feature/Query/DeleteSpec.hs
@@ -115,3 +115,164 @@ spec =
{ matchStatus = 204
, matchHeaders = [matchHeaderAbsent hContentType]
}

context "limited delete" $ do
it "works with the limit and offset query params" $ do
get "/limited_delete_items"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 2, "name": "item-2" }
, { "id": 3, "name": "item-3" }
]|]

request methodDelete "/limited_delete_items?limit=1&offset=1"
[("Prefer", "tx=commit")]
mempty
`shouldRespondWith`
""
{ matchStatus = 204
, matchHeaders = [ matchHeaderAbsent hContentType
, "Preference-Applied" <:> "tx=commit" ]
}

get "/limited_delete_items?order=id"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 3, "name": "item-3" }
]|]

request methodPost "/rpc/reset_limited_items"
[("Prefer", "tx=commit")]
[json| {"tbl_name": "limited_delete_items"} |]
`shouldRespondWith` ""
{ matchStatus = 204 }

it "works with the limit query param plus a filter" $ do
get "/limited_delete_items"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 2, "name": "item-2" }
, { "id": 3, "name": "item-3" }
]|]

request methodDelete "/limited_delete_items?limit=1&id=gt.1"
[("Prefer", "tx=commit")]
mempty
`shouldRespondWith`
""
{ matchStatus = 204
, matchHeaders = [ matchHeaderAbsent hContentType
, "Preference-Applied" <:> "tx=commit" ]
}

get "/limited_delete_items?order=id"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 3, "name": "item-3" }
]|]

request methodPost "/rpc/reset_limited_items"
[("Prefer", "tx=commit")]
[json| {"tbl_name": "limited_delete_items"} |]
`shouldRespondWith` ""
{ matchStatus = 204 }

it "works on a table with a composite pk" $ do
get "/limited_delete_items_cpk"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 2, "name": "item-2" }
, { "id": 3, "name": "item-3" }
]|]

request methodDelete "/limited_delete_items_cpk?limit=1&offset=1"
[("Prefer", "tx=commit")]
mempty
`shouldRespondWith`
""
{ matchStatus = 204
, matchHeaders = [ matchHeaderAbsent hContentType
, "Preference-Applied" <:> "tx=commit" ]
}

get "/limited_delete_items_cpk"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 3, "name": "item-3" }
]|]

request methodPost "/rpc/reset_limited_items"
[("Prefer", "tx=commit")]
[json| {"tbl_name": "limited_delete_items_cpk"} |]
`shouldRespondWith` ""
{ matchStatus = 204 }

it "works with views with an inferred pk" $ do
get "/limited_delete_items_view"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 2, "name": "item-2" }
, { "id": 3, "name": "item-3" }
]|]

request methodDelete "/limited_delete_items_view?limit=1&offset=1"
[("Prefer", "tx=commit")]
mempty
`shouldRespondWith`
""
{ matchStatus = 204
, matchHeaders = [ matchHeaderAbsent hContentType
, "Preference-Applied" <:> "tx=commit" ]
}

get "/limited_delete_items_view"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 3, "name": "item-3" }
]|]

request methodPost "/rpc/reset_limited_items"
[("Prefer", "tx=commit")]
[json| {"tbl_name": "limited_delete_items_view"} |]
`shouldRespondWith` ""
{ matchStatus = 204 }

it "works on a table without a pk" $ do
get "/limited_delete_items_no_pk"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 2, "name": "item-2" }
, { "id": 3, "name": "item-3" }
]|]

request methodDelete "/limited_delete_items_no_pk?limit=1&offset=1"
[("Prefer", "tx=commit")]
mempty
`shouldRespondWith`
""
{ matchStatus = 204
, matchHeaders = [ matchHeaderAbsent hContentType
, "Preference-Applied" <:> "tx=commit" ]
}

get "/limited_delete_items_no_pk"
`shouldRespondWith`
[json|[
{ "id": 1, "name": "item-1" }
, { "id": 3, "name": "item-3" }
]|]

request methodPost "/rpc/reset_limited_items"
[("Prefer", "tx=commit")]
[json| {"tbl_name": "limited_delete_items_no_pk"} |]
`shouldRespondWith` ""
{ matchStatus = 204 }