From 99a8b0180915e49c7a6872ea32936b7b4c0fe14b Mon Sep 17 00:00:00 2001 From: Mark Paluch Date: Tue, 29 Aug 2023 09:01:15 +0200 Subject: [PATCH] Harmonize content. Refactor content to a natural flow, remove duplications, extract partials. See #1597 --- README.adoc | 134 +++++++++--- spring-data-jdbc-distribution/pom.xml | 5 + spring-data-jdbc/README.adoc | 3 - .../documentation/QueryByExampleTests.java | 12 +- src/main/antora/modules/ROOT/nav.adoc | 25 +-- .../modules/ROOT/pages/jdbc/auditing.adoc | 2 +- .../ROOT/pages/jdbc/configuration.adoc | 64 ------ .../ROOT/pages/jdbc/custom-conversions.adoc | 75 ------- .../ROOT/pages/jdbc/entity-persistence.adoc | 51 +++-- .../ROOT/pages/jdbc/examples-repo.adoc | 5 - .../ROOT/pages/jdbc/getting-started.adoc | 117 ++++++++++- .../ROOT/pages/jdbc/loading-aggregates.adoc | 28 --- .../modules/ROOT/pages/jdbc/locking.adoc | 28 --- .../modules/ROOT/pages/jdbc/logging.adoc | 8 - .../modules/ROOT/pages/jdbc/mapping.adoc | 189 ++++++----------- .../modules/ROOT/pages/jdbc/transactions.adoc | 32 ++- .../modules/ROOT/pages/query-by-example.adoc | 80 +++----- src/main/antora/modules/ROOT/pages/r2dbc.adoc | 11 + .../antora/modules/ROOT/pages/r2dbc/core.adoc | 15 -- ...{template.adoc => entity-persistence.adoc} | 65 +++++- .../ROOT/pages/r2dbc/getting-started.adoc | 76 +++++-- .../modules/ROOT/pages/r2dbc/kotlin.adoc | 2 +- .../modules/ROOT/pages/r2dbc/mapping.adoc | 74 ++----- .../ROOT/pages/r2dbc/query-by-example.adoc | 38 ---- .../ROOT/pages/r2dbc/repositories.adoc | 57 +----- .../pages/repositories/core-concepts.adoc | 2 + .../modules/ROOT/partials/id-generation.adoc | 16 ++ .../ROOT/partials/mapping-annotations.adoc | 24 +++ .../antora/modules/ROOT/partials/mapping.adoc | 190 ++++++++++++++++++ .../ROOT/partials/optimistic-locking.adoc | 12 ++ 30 files changed, 774 insertions(+), 666 deletions(-) delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/locking.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/jdbc/logging.adoc delete mode 100644 src/main/antora/modules/ROOT/pages/r2dbc/core.adoc rename src/main/antora/modules/ROOT/pages/r2dbc/{template.adoc => entity-persistence.adoc} (77%) delete mode 100644 src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc create mode 100644 src/main/antora/modules/ROOT/partials/id-generation.adoc create mode 100644 src/main/antora/modules/ROOT/partials/mapping-annotations.adoc create mode 100644 src/main/antora/modules/ROOT/partials/mapping.adoc create mode 100644 src/main/antora/modules/ROOT/partials/optimistic-locking.adoc diff --git a/README.adoc b/README.adoc index d6dffa121a..bcb393ca4d 100644 --- a/README.adoc +++ b/README.adoc @@ -1,15 +1,14 @@ -image:https://spring.io/badges/spring-data-jdbc/ga.svg["Spring Data JDBC", link="https://spring.io/projects/spring-data-jdbc#learn"] -image:https://spring.io/badges/spring-data-jdbc/snapshot.svg["Spring Data JDBC", link="https://spring.io/projects/spring-data-jdbc#learn"] - -= Spring Data JDBC image:https://jenkins.spring.io/buildStatus/icon?job=spring-data-jdbc%2Fmain&subject=Build[link=https://jenkins.spring.io/view/SpringData/job/spring-data-jdbc/] 
https://gitter.im/spring-projects/spring-data[image:https://badges.gitter.im/spring-projects/spring-data.svg[Gitter]]
+= Spring Data Relational image:https://jenkins.spring.io/buildStatus/icon?job=spring-data-jdbc%2Fmain&subject=Build[link=https://jenkins.spring.io/view/SpringData/job/spring-data-jdbc/] https://gitter.im/spring-projects/spring-data[image:https://badges.gitter.im/spring-projects/spring-data.svg[Gitter]]

The primary goal of the https://projects.spring.io/spring-data[Spring Data] project is to make it easier to build Spring-powered applications that use new data access technologies such as non-relational databases, map-reduce frameworks, and cloud based data services.

-Spring Data JDBC, part of the larger Spring Data family, makes it easy to implement JDBC based repositories. This module deals with enhanced support for JDBC based data access layers. It makes it easier to build Spring powered applications that use data access technologies.
+Spring Data Relational, part of the larger Spring Data family, makes it easy to implement repositories for SQL databases.
+This module deals with enhanced support for JDBC and R2DBC based data access layers.
+It makes it easier to build Spring powered applications that use data access technologies.

It aims at being conceptually easy.
In order to achieve this it does NOT offer caching, lazy loading, write behind or many other features of JPA.
-This makes Spring Data JDBC a simple, limited, opinionated ORM.
+This makes Spring Data JDBC and Spring Data R2DBC simple, limited, opinionated ORMs.

== Features

@@ -18,20 +17,20 @@ This makes Spring Data JDBC a simple, limited, opinionated ORM.

* Support for transparent auditing (created, last changed)
* Events for persistence events
* Possibility to integrate custom repository code
-* JavaConfig based repository configuration by introducing `EnableJdbcRepository`
-* Integration with MyBatis
+* JavaConfig based repository configuration through `@EnableJdbcRepositories` and `@EnableR2dbcRepositories`, respectively
+* JDBC-only: Integration with MyBatis

== Code of Conduct

This project is governed by the https://github.com/spring-projects/.github/blob/e3cc2ff230d8f1dca06535aa6b5a4a23815861d4/CODE_OF_CONDUCT.md[Spring Code of Conduct]. By participating, you are expected to uphold this code of conduct. Please report unacceptable behavior to spring-code-of-conduct@pivotal.io.
-== Getting Started
+== Getting Started with JDBC

-Here is a quick teaser of an application using Spring Data Repositories in Java:
+Here is a quick teaser of an application using Spring Data JDBC Repositories in Java:

[source,java]
----
-public interface PersonRepository extends CrudRepository<Person, Long> {
+interface PersonRepository extends CrudRepository<Person, Long> {

  @Query("SELECT * FROM person WHERE lastname = :lastname")
  List<Person> findByLastname(String lastname);

@@ -41,7 +40,7 @@ public interface PersonRepository extends CrudRepository<Person, Long> {
}

@Service
-public class MyService {
+class MyService {

  private final PersonRepository repository;

@@ -88,7 +87,7 @@ Add the Maven dependency:

<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-jdbc</artifactId>
-  <version>${version}.RELEASE</version>
+  <version>${version}</version>
</dependency>
----

@@ -99,7 +98,89 @@ If you'd rather like the latest snapshots of the upcoming major version, use our

<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-jdbc</artifactId>
-  <version>${version}.BUILD-SNAPSHOT</version>
+  <version>${version}-SNAPSHOT</version>
+</dependency>
+
+<repositories>
+  <repository>
+    <id>spring-libs-snapshot</id>
+    <name>Spring Snapshot Repository</name>
+    <url>https://repo.spring.io/snapshot</url>
+  </repository>
+</repositories>
+----
+
+== Getting Started with R2DBC
+
+Here is a quick teaser of an application using Spring Data R2DBC Repositories in Java:

[source,java]
----
interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

  @Query("SELECT * FROM person WHERE lastname = :lastname")
  Flux<Person> findByLastname(String lastname);

  @Query("SELECT * FROM person WHERE firstname LIKE :firstname")
  Flux<Person> findByFirstnameLike(String firstname);
}

@Service
class MyService {

  private final PersonRepository repository;

  public MyService(PersonRepository repository) {
    this.repository = repository;
  }

  public Flux<Person> doWork() {

    Person person = new Person();
    person.setFirstname("Jens");
    person.setLastname("Schauder");

    Mono<Void> deleteAll = repository.deleteAll();
    Mono<Person> save = repository.save(person);

    Flux<Person> lastNameResults = repository.findByLastname("Schauder");
    Flux<Person> firstNameResults = repository.findByFirstnameLike("Je%");

    return deleteAll.then(save).thenMany(lastNameResults.concatWith(firstNameResults));
  }
}

@Configuration
@EnableR2dbcRepositories
class ApplicationConfig extends AbstractR2dbcConfiguration {

  @Bean
  public ConnectionFactory connectionFactory() {
    return ConnectionFactories.get("r2dbc:<driver>://<host>:<port>/<database>");
  }

}
----

=== Maven configuration

Add the Maven dependency:

[source,xml]
----
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-r2dbc</artifactId>
  <version>${version}</version>
</dependency>
----

If you'd rather use the latest snapshots of the upcoming major version, use our Maven snapshot repository and declare the appropriate dependency version.

[source,xml]
----
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-r2dbc</artifactId>
  <version>${version}-SNAPSHOT</version>
</dependency>

@@ -111,7 +192,8 @@ If you'd rather like the latest snapshots of the upcoming major version, use our

== Getting Help

-Having trouble with Spring Data? We’d love to help!
+Having trouble with Spring Data?
+We’d love to help!

* If you are new to Spring Data JDBC read the following two articles https://spring.io/blog/2018/09/17/introducing-spring-data-jdbc["Introducing Spring Data JDBC"] and https://spring.io/blog/2018/09/24/spring-data-jdbc-references-and-aggregates["Spring Data JDBC, References, and Aggregates"].
* Check the
https://docs.spring.io/spring-data/jdbc/docs/current/reference/html/[reference documentation].
* Learn the Spring basics – Spring Data builds on Spring Framework, check the https://spring.io[spring.io] web-site for a wealth of reference documentation.
If you are just starting out with Spring, try one of the https://spring.io/guides[guides].
* If you are upgrading, check out the https://docs.spring.io/spring-data/jdbc/docs/current/changelog.txt[changelog] for "`new and noteworthy`" features.
-* Ask a question - we monitor https://stackoverflow.com[stackoverflow.com] for questions tagged with https://stackoverflow.com/tags/spring-data[`spring-data-jdbc`].
+* Ask a question - we monitor https://stackoverflow.com[stackoverflow.com] for questions tagged with https://stackoverflow.com/tags/spring-data[`spring-data`].
You can also chat with the community on https://gitter.im/spring-projects/spring-data[Gitter].

== Reporting Issues

Spring Data uses GitHub as its issue tracking system to record bugs and feature requests. If you want to raise an issue, please follow the recommendations below:

-* Before you log a bug, please search the
-Spring Data JDBCs https://github.com/spring-projects/spring-data-jdbc/issues[issue tracker] to see if someone has already reported the problem.
-* If the issue doesn’t already exist, https://github.com/spring-projects/spring-data-jdbc/issues/new[create a new issue].
-* Please provide as much information as possible with the issue report, we like to know the version of Spring Data that you are using and JVM version. Please include full stack traces when applicable.
+* Before you log a bug, please search Spring Data Relational's https://github.com/spring-projects/spring-data-relational/issues[issue tracker] to see if someone has already reported the problem.
+* If the issue doesn’t already exist, https://github.com/spring-projects/spring-data-relational/issues/new[create a new issue].
+* Please provide as much information as possible with the issue report; we would like to know the version of Spring Data that you are using and your JVM version.
+Please include full stack traces when applicable.
* If you need to paste code or include a stack trace, use triple backticks before and after your text.
-* If possible try to create a test-case or project that replicates the issue. Attach a link to your code or a compressed file containing your code. Use an in-memory database when possible. If you need a different database include the setup using https://github.com/testcontainers[Testcontainers] in your test.
+* If possible, try to create a test case or project that replicates the issue.
+Attach a link to your code or a compressed file containing your code.
+Use an in-memory database when possible.
+If you need a different database, include the setup using https://github.com/testcontainers[Testcontainers] in your test.

== Building from Source

You don’t need to build from source to use Spring Data (binaries in https://repo.spring.io[repo.spring.io]), but if you want to try out the latest and greatest, Spring Data can be easily built with the https://github.com/takari/maven-wrapper[maven wrapper].
You also need JDK 17.

[source,bash]
----

@@ -195,6 +280,7 @@ There are a number of modules in this project, here is a quick overview:

* Spring Data Relational: Common infrastructure abstracting general aspects of relational database access.
* link:spring-data-jdbc[Spring Data JDBC]: Repository support for JDBC-based datasources.
+* link:spring-data-r2dbc[Spring Data R2DBC]: Repository support for R2DBC-based datasources.
== Examples

@@ -202,4 +288,4 @@

== License

-Spring Data JDBC is Open Source software released under the https://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license].
+Spring Data Relational is Open Source software released under the https://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license].
diff --git a/spring-data-jdbc-distribution/pom.xml b/spring-data-jdbc-distribution/pom.xml
index 93b1b74b74..271486f02a 100644
--- a/spring-data-jdbc-distribution/pom.xml
+++ b/spring-data-jdbc-distribution/pom.xml
@@ -44,6 +44,11 @@

+<plugin>
+  <groupId>org.apache.maven.plugins</groupId>
+  <artifactId>maven-assembly-plugin</artifactId>
+</plugin>
+
 <plugin>
   <groupId>io.spring.maven.antora</groupId>
   <artifactId>antora-maven-plugin</artifactId>
diff --git a/spring-data-jdbc/README.adoc b/spring-data-jdbc/README.adoc
index 67ca8dc919..e3b900372a 100644
--- a/spring-data-jdbc/README.adoc
+++ b/spring-data-jdbc/README.adoc
@@ -1,6 +1,3 @@
-image:https://spring.io/badges/spring-data-jdbc/ga.svg["Spring Data JDBC", link="https://spring.io/projects/spring-data-jdbc#learn"]
-image:https://spring.io/badges/spring-data-jdbc/snapshot.svg["Spring Data JDBC", link="https://spring.io/projects/spring-data-jdbc#learn"]
-
 = Spring Data JDBC
 The primary goal of the https://projects.spring.io/spring-data[Spring Data] project is to make it easier to build Spring-powered applications that use data access technologies. *Spring Data JDBC* offers the popular Repository abstraction based on JDBC.
diff --git a/spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java b/spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java
index ea3037f724..eafdd444c3 100644
--- a/spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java
+++ b/spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java
@@ -55,12 +55,12 @@ void queryByExampleSimple() {

 Example<Employee> example = Example.of(employee); // <2>

- Flux<Employee> employees = repository.findAll(example); // <3>
+ repository.findAll(example); // <3>

- // do whatever with the flux
+ // do whatever with the result

 // end::example[]

- employees //
+ repository.findAll(example) //
 .as(StepVerifier::create) //
 .expectNext(new Employee(1, "Frodo", "ring bearer")) //
 .verifyComplete();
@@ -87,12 +87,12 @@ void queryByExampleCustomMatcher() {
 .withIgnorePaths("role"); // <4>
 Example<Employee> example = Example.of(employee, matcher); // <5>

- Flux<Employee> employees = repository.findAll(example);
+ repository.findAll(example);

- // do whatever with the flux
+ // do whatever with the result

 // end::example-2[]

- employees //
+ repository.findAll(example) //
 .as(StepVerifier::create) //
 .expectNext(new Employee(1, "Frodo Baggins", "ring bearer")) //
 .expectNext(new Employee(1, "Bilbo Baggins", "burglar")) //
diff --git a/src/main/antora/modules/ROOT/nav.adoc b/src/main/antora/modules/ROOT/nav.adoc
index cb843519ea..e0f9382ca4 100644
--- a/src/main/antora/modules/ROOT/nav.adoc
+++ b/src/main/antora/modules/ROOT/nav.adoc
@@ -1,5 +1,6 @@
 * xref:index.adoc[Overview]
 ** xref:commons/upgrade.adoc[]
+
 * xref:repositories/introduction.adoc[]
 ** xref:repositories/core-concepts.adoc[]
 ** xref:repositories/definition.adoc[]
@@ -9,47 +10,43 @@
 ** xref:object-mapping.adoc[]
 ** xref:commons/custom-conversions.adoc[]
 ** xref:repositories/custom-implementations.adoc[]
+** xref:repositories/core-extensions.adoc[]
+** xref:query-by-example.adoc[]
 ** xref:repositories/core-domain-events.adoc[]
 ** 
xref:commons/entity-callbacks.adoc[] -** xref:repositories/core-extensions.adoc[] ** xref:repositories/null-handling.adoc[] ** xref:repositories/query-keywords-reference.adoc[] ** xref:repositories/query-return-types-reference.adoc[] + * xref:jdbc.adoc[] ** xref:jdbc/why.adoc[] ** xref:jdbc/domain-driven-design.adoc[] ** xref:jdbc/getting-started.adoc[] -** xref:jdbc/examples-repo.adoc[] -** xref:jdbc/configuration.adoc[] ** xref:jdbc/entity-persistence.adoc[] -** xref:jdbc/loading-aggregates.adoc[] +** xref:jdbc/mapping.adoc[] ** xref:jdbc/query-methods.adoc[] ** xref:jdbc/mybatis.adoc[] ** xref:jdbc/events.adoc[] -** xref:jdbc/logging.adoc[] -** xref:jdbc/transactions.adoc[] ** xref:jdbc/auditing.adoc[] -** xref:jdbc/mapping.adoc[] -** xref:jdbc/custom-conversions.adoc[] -** xref:jdbc/locking.adoc[] -** xref:query-by-example.adoc[] +** xref:jdbc/transactions.adoc[] ** xref:jdbc/schema-support.adoc[] + * xref:r2dbc.adoc[] ** xref:r2dbc/getting-started.adoc[] -** xref:r2dbc/core.adoc[] -** xref:r2dbc/template.adoc[] +** xref:r2dbc/entity-persistence.adoc[] +** xref:r2dbc/mapping.adoc[] ** xref:r2dbc/repositories.adoc[] ** xref:r2dbc/query-methods.adoc[] ** xref:r2dbc/entity-callbacks.adoc[] ** xref:r2dbc/auditing.adoc[] -** xref:r2dbc/mapping.adoc[] -** xref:r2dbc/query-by-example.adoc[] ** xref:r2dbc/kotlin.adoc[] ** xref:r2dbc/migration-guide.adoc[] + * xref:kotlin.adoc[] ** xref:kotlin/requirements.adoc[] ** xref:kotlin/null-safety.adoc[] ** xref:kotlin/object-mapping.adoc[] ** xref:kotlin/extensions.adoc[] ** xref:kotlin/coroutines.adoc[] + * https://github.com/spring-projects/spring-data-commons/wiki[Wiki] diff --git a/src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc b/src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc index d144264c1f..fa761b5d56 100644 --- a/src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc +++ b/src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc @@ -1,5 +1,5 @@ [[jdbc.auditing]] -= JDBC Auditing += Auditing :page-section-summary-toc: 1 In order to activate auditing, add `@EnableJdbcAuditing` to your configuration, as the following example shows: diff --git a/src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc b/src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc deleted file mode 100644 index 529b93a6a2..0000000000 --- a/src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc +++ /dev/null @@ -1,64 +0,0 @@ -[[jdbc.java-config]] -= Configuration - -The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows: - -.Spring Data JDBC repositories using Java configuration -[source,java] ----- -@Configuration -@EnableJdbcRepositories // <1> -class ApplicationConfig extends AbstractJdbcConfiguration { // <2> - - @Bean - DataSource dataSource() { // <3> - - EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder(); - return builder.setType(EmbeddedDatabaseType.HSQL).build(); - } - - @Bean - NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4> - return new NamedParameterJdbcTemplate(dataSource); - } - - @Bean - TransactionManager transactionManager(DataSource dataSource) { // <5> - return new DataSourceTransactionManager(dataSource); - } -} ----- -<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository` -<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC -<3> Creates a `DataSource` connecting to a database. 
-This is required by the following two bean methods. -<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database. -<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC. - -The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`. -The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`. -We finally activate Spring Data JDBC repositories by using the `@EnableJdbcRepositories`. -If no base package is configured, it uses the package in which the configuration class resides. -Extending `AbstractJdbcConfiguration` ensures various beans get registered. -Overwriting its methods can be used to customize the setup (see below). - -This configuration can be further simplified by using Spring Boot. -With Spring Boot a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies. -Everything else is done by Spring Boot. - -There are a couple of things one might want to customize in this setup. - -[[jdbc.dialects]] -== Dialects - -Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver. -By default, the `AbstractJdbcConfiguration` tries to determine the database in use and register the correct `Dialect`. -This behavior can be changed by overwriting `jdbcDialect(NamedParameterJdbcOperations)`. - -If you use a database for which no dialect is available, then your application won’t startup. In that case, you’ll have to ask your vendor to provide a `Dialect` implementation. Alternatively, you can: - -1. Implement your own `Dialect`. -2. Implement a `JdbcDialectProvider` returning the `Dialect`. -3. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line + -`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=` - diff --git a/src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc b/src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc deleted file mode 100644 index f8d7672139..0000000000 --- a/src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc +++ /dev/null @@ -1,75 +0,0 @@ -[[jdbc.custom-converters]] -= Custom Conversions - -Spring Data JDBC allows registration of custom converters to influence how values are mapped in the database. -Currently, converters are only applied on property-level. - -[[jdbc.custom-converters.writer]] -== Writing a Property by Using a Registered Spring Converter - -The following example shows an implementation of a `Converter` that converts from a `Boolean` object to a `String` value: - -[source,java] ----- -import org.springframework.core.convert.converter.Converter; - -@WritingConverter -public class BooleanToStringConverter implements Converter { - - @Override - public String convert(Boolean source) { - return source != null && source ? "T" : "F"; - } -} ----- - -There are a couple of things to notice here: `Boolean` and `String` are both simple types hence Spring Data requires a hint in which direction this converter should apply (reading or writing). -By annotating this converter with `@WritingConverter` you instruct Spring Data to write every `Boolean` property as `String` in the database. 
- -[[jdbc.custom-converters.reader]] -== Reading by Using a Spring Converter - -The following example shows an implementation of a `Converter` that converts from a `String` to a `Boolean` value: - -[source,java] ----- -@ReadingConverter -public class StringToBooleanConverter implements Converter { - - @Override - public Boolean convert(String source) { - return source != null && source.equalsIgnoreCase("T") ? Boolean.TRUE : Boolean.FALSE; - } -} ----- - -There are a couple of things to notice here: `String` and `Boolean` are both simple types hence Spring Data requires a hint in which direction this converter should apply (reading or writing). -By annotating this converter with `@ReadingConverter` you instruct Spring Data to convert every `String` value from the database that should be assigned to a `Boolean` property. - -[[jdbc.custom-converters.configuration]] -== Registering Spring Converters with the `JdbcConverter` - -[source,java] ----- -class MyJdbcConfiguration extends AbstractJdbcConfiguration { - - // … - - @Override - protected List userConverters() { - return Arrays.asList(new BooleanToStringConverter(), new StringToBooleanConverter()); - } - -} ----- - -NOTE: In previous versions of Spring Data JDBC it was recommended to directly overwrite `AbstractJdbcConfiguration.jdbcCustomConversions()`. -This is no longer necessary or even recommended, since that method assembles conversions intended for all databases, conversions registered by the `Dialect` used and conversions registered by the user. -If you are migrating from an older version of Spring Data JDBC and have `AbstractJdbcConfiguration.jdbcCustomConversions()` overwritten conversions from your `Dialect` will not get registered. - -[[jdbc.custom-converters.jdbc-value]] -== JdbcValue - -Value conversion uses `JdbcValue` to enrich values propagated to JDBC operations with a `java.sql.Types` type. -Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation. -This converter should convert the value to `JdbcValue` which has a field for the value and for the actual `JDBCType`. diff --git a/src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc b/src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc index 5ee622963a..4e3449f8a9 100644 --- a/src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc +++ b/src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc @@ -13,32 +13,41 @@ While this process could and probably will be improved, there are certain limita It does not know the previous state of an aggregate. So any update process always has to take whatever it finds in the database and make sure it converts it to whatever is the state of the entity passed to the save method. -[[jdbc.entity-persistence.state-detection-strategies]] -include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1] +See also xref:repositories/core-concepts.adoc#is-new-state-detection[Entity State Detection] for further details. -[[jdbc.entity-persistence.id-generation]] -== ID Generation +[[jdbc.loading-aggregates]] +== Loading Aggregates -Spring Data JDBC uses the ID to identify entities. -The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation. 
+

-When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
+Spring Data JDBC offers two ways to load aggregates:

-One important constraint is that, after saving an entity, the entity must not be new any more.
-Note that whether an entity is new is part of the entity's state.
-With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
-If you are not using auto-increment columns, you can use a `BeforeConvertCallback` to set the ID of the entity (covered later in this document).
+. The traditional approach, and before version 3.2 the only one, is really simple:
+Each query loads the aggregate roots, regardless of whether the query is based on a `CrudRepository` method, a derived query, or an annotated query.
+If the aggregate root references other entities, those are loaded with separate statements.

-[[jdbc.entity-persistence.optimistic-locking]]
-== Optimistic Locking
+. Spring Data JDBC 3.2 allows the use of _Single Query Loading_.
+With this, an arbitrary number of aggregates can be fully loaded with a single SQL query.
+This should be significantly more efficient, especially for complex aggregates consisting of many entities.
++
+Currently, Single Query Loading comes with the following restrictions:

-Spring Data JDBC supports optimistic locking by means of a numeric attribute that is annotated with
-https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root.
-Whenever Spring Data JDBC saves an aggregate with such a version attribute two things happen:
-The update statement for the aggregate root will contain a where clause checking that the version stored in the database is actually unchanged.
-If this isn't the case an `OptimisticLockingFailureException` will be thrown.
-Also the version attribute gets increased both in the entity and in the database so a concurrent action will notice the change and throw an `OptimisticLockingFailureException` if applicable as described above.
+1. It only works for aggregates that reference only one entity collection. The plan is to remove this constraint in the future.
+
+2. The aggregate must also not use `AggregateReference` or embedded entities. The plan is to remove this constraint in the future.
+
+3. The database dialect must support it. Of the dialects provided by Spring Data JDBC, all but H2 and HSQL support this. H2 and HSQL don't support analytic functions (also known as windowing functions).
+
+4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries. The plan is to remove this constraint in the future.

-This process also applies to inserting new aggregates, where a `null` or `0` version indicates a new instance and the increased instance afterwards marks the instance as not new anymore, making this work rather nicely with cases where the id is generated during object construction for example when UUIDs are used.
+5. Single Query Loading needs to be enabled in the `JdbcMappingContext` by calling `setSingleQueryLoadingEnabled(true)`, as shown in the configuration sketch at the end of this section.

-During deletes the version check also applies but no version is increased.
+NOTE: Single Query Loading is to be considered experimental.
+We appreciate feedback on how it works for you.
+
+NOTE: Single Query Loading could be abbreviated as SQL, but we highly discourage doing so, since confusion with Structured Query Language is almost guaranteed.
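Picking up restriction 5 above, the following is a minimal configuration sketch for enabling Single Query Loading. It assumes the `jdbcMappingContext` bean signature of current `AbstractJdbcConfiguration` releases; verify the signature of your version before copying.

[source,java]
----
import java.util.Optional;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.core.convert.JdbcCustomConversions;
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
import org.springframework.data.jdbc.repository.config.EnableJdbcRepositories;
import org.springframework.data.relational.RelationalManagedTypes;
import org.springframework.data.relational.core.mapping.NamingStrategy;

@Configuration
@EnableJdbcRepositories
class SingleQueryLoadingConfiguration extends AbstractJdbcConfiguration {

	@Override
	public JdbcMappingContext jdbcMappingContext(Optional<NamingStrategy> namingStrategy,
			JdbcCustomConversions customConversions, RelationalManagedTypes jdbcManagedTypes) {

		JdbcMappingContext mappingContext = super.jdbcMappingContext(namingStrategy, customConversions,
				jdbcManagedTypes);

		// Opt in to the experimental Single Query Loading described above.
		mappingContext.setSingleQueryLoadingEnabled(true);

		return mappingContext;
	}
}
----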
+
+include::partial$id-generation.adoc[]
+
 [[jdbc.entity-persistence.optimistic-locking]]
 == Optimistic Locking

-During deletes the version check also applies but no version is increased.
+include::partial$optimistic-locking.adoc[]
diff --git a/src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc b/src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc
deleted file mode 100644
index 1fa283439d..0000000000
--- a/src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-[[jdbc.examples-repo]]
-= Examples Repository
-:page-section-summary-toc: 1
-
-There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works.
diff --git a/src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc b/src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc
index 2faa3bf494..64947a4d6a 100644
--- a/src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc
+++ b/src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc
@@ -3,14 +3,15 @@

An easy way to bootstrap setting up a working environment is to create a Spring-based project in https://spring.io/tools[Spring Tools] or from https://start.spring.io[Spring Initializr].

-First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access.
+First, you need to set up a running database server.
+Refer to your vendor documentation on how to configure your database for JDBC access.

[[requirements]]
== Requirements

Spring Data JDBC requires https://spring.io/docs[Spring Framework] {springVersion} and above.

-In terms of databases, Spring Data JDBC requires a xref:jdbc/configuration.adoc#jdbc.dialects[dialect] to abstract common SQL functionality over vendor-specific flavours.
+In terms of databases, Spring Data JDBC requires a <<jdbc.dialects,dialect>> to abstract common SQL functionality over vendor-specific flavours.
Spring Data JDBC includes direct support for the following databases:

* DB2
* H2
* HSQLDB
* MariaDB
* Microsoft SQL Server
* MySQL
* Oracle
* Postgres

-If you use a different database then your application won’t startup.
-The xref:jdbc/configuration.adoc#jdbc.dialects[dialect] section contains further detail on how to proceed in such case.
+If you use a different database, then your application won’t start up.
+The <<jdbc.dialects,Dialects>> section contains further detail on how to proceed in such a case.
+
+[[jdbc.hello-world]]
+== Hello World

To create a Spring project in STS:

. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted.
Then enter a project and a package name, such as `org.spring.jdbc.example`.
. Add the following to the `pom.xml` file's `dependencies` element:
++
[source,xml,subs="+attributes"]
----
<dependencies>
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jdbc</artifactId>
    <version>{version}</version>
  </dependency>
</dependencies>
----
+
. Change the version of Spring in the pom.xml to be
+
[source,xml,subs="+attributes"]
----
-{springVersion}
+{springVersion}
----

-. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level of your `` element:
+. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level as your `<dependencies>` element:
+
[source,xml]
----
<repositories>
  <repository>
    <id>spring-milestone</id>
    <url>https://repo.spring.io/milestone</url>
  </repository>
</repositories>
----

@@ -66,3 +75,99 @@ Then enter a project and a package name, such as `org.spring.jdbc.example`.
The repository is also https://repo.spring.io/milestone/org/springframework/data/[browseable here]. +[[jdbc.logging]] +=== Logging + +Spring Data JDBC does little to no logging on its own. +Instead, the mechanics of `JdbcTemplate` to issue SQL statements provide logging. +Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis]. + +You may also want to set the logging level to `DEBUG` to see some additional information. +To do so, edit the `application.properties` file to have the following content: + +[source] +---- +logging.level.org.springframework.jdbc=DEBUG +---- + +// TODO: Add example similar to + +[[jdbc.examples-repo]] +== Examples Repository + +There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works. + +[[jdbc.java-config]] +== Configuration + +The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows: + +.Spring Data JDBC repositories using Java configuration +[source,java] +---- +@Configuration +@EnableJdbcRepositories // <1> +class ApplicationConfig extends AbstractJdbcConfiguration { // <2> + + @Bean + DataSource dataSource() { // <3> + + EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder(); + return builder.setType(EmbeddedDatabaseType.HSQL).build(); + } + + @Bean + NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4> + return new NamedParameterJdbcTemplate(dataSource); + } + + @Bean + TransactionManager transactionManager(DataSource dataSource) { // <5> + return new DataSourceTransactionManager(dataSource); + } +} +---- + +<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository` +<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC +<3> Creates a `DataSource` connecting to a database. +This is required by the following two bean methods. +<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database. +<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC. + +The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`. +The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`. +We finally activate Spring Data JDBC repositories by using the `@EnableJdbcRepositories`. +If no base package is configured, it uses the package in which the configuration class resides. +Extending `AbstractJdbcConfiguration` ensures various beans get registered. +Overwriting its methods can be used to customize the setup (see below). + +This configuration can be further simplified by using Spring Boot. +With Spring Boot a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies. +Everything else is done by Spring Boot. + +There are a couple of things one might want to customize in this setup. + +[[jdbc.dialects]] +== Dialects + +Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver. 
+By default, the `AbstractJdbcConfiguration` attempts to determine the dialect from the database configuration by obtaining a connection and registering the correct `Dialect`.
+You can override `AbstractJdbcConfiguration.jdbcDialect(NamedParameterJdbcOperations)` to customize dialect selection.
+
+If you use a database for which no dialect is available, then your application won’t start up.
+In that case, you’ll have to ask your vendor to provide a `Dialect` implementation.
+Alternatively, you can implement your own `Dialect`.
+
+[TIP]
+====
+Dialects are resolved by {spring-data-jdbc-javadoc}/org/springframework/data/jdbc/repository/config/DialectResolver.html[`DialectResolver`] from a `JdbcOperations` instance, typically by inspecting `Connection.getMetaData()`.
+You can let Spring auto-discover your `JdbcDialect` by registering a class that implements `org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider` through `META-INF/spring.factories`.
+`DialectResolver` discovers dialect provider implementations from the class path using Spring's `SpringFactoriesLoader`.
+To do so:
+
+. Implement your own `Dialect`.
+. Implement a `JdbcDialectProvider` returning the `Dialect`.
+. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line +
+`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>`
+
+A sketch of such a provider is shown below.
+====
diff --git a/src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc b/src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc
deleted file mode 100644
index 2af64758a1..0000000000
--- a/src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc
+++ /dev/null
@@ -1,28 +0,0 @@
-[[jdbc.loading-aggregates]]
-= Loading Aggregates
-
-Spring Data JDBC offers two ways how it can load aggregates.
-The traditional and before version 3.2 the only way is really simple:
-Each query loads the aggregate roots, independently if the query is based on a `CrudRepository` method, a derived query or a annotated query.
-If the aggregate root references other entities those are loaded with separate statements.
-
-Spring Data JDBC now allows the use of _Single Query Loading_.
-With this an arbitrary number of aggregates can be fully loaded with a single SQL query.
-This should be significant more efficient, especially for complex aggregates, consisting of many entities.
-
-Currently, this feature is very restricted.
-
-1. It only works for aggregates that only reference one entity collection.The plan is to remove this constraint in the future.
-
-2. The aggregate must also not use `AggregateReference` or embedded entities.The plan is to remove this constraint in the future.
-
-3. The database dialect must support it.Of the dialects provided by Spring Data JDBC all but H2 and HSQL support this.H2 and HSQL don't support analytic functions (aka windowing functions).
-
-4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries.The plan is to remove this constraint in the future.
-
-5. Single Query Loading needs to be enabled in the `JdbcMappingContext`, by calling `setSingleQueryLoadingEnabled(true)`
-
-Note: Single Query Loading is to be considered experimental. We appreciate feedback on how it works for you.
-
-Note:Single Query Loading can be abbreviated as SQL, but we highly discourage that since confusion with Structured Query Language is almost guaranteed.
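Referring back to the `Dialect` registration steps in the getting-started changes above, a minimal sketch of a dialect provider might look as follows. The class and package names are hypothetical, and the fixed `PostgresDialect` answer stands in for real connection-metadata inspection; the sketch assumes the provider contract of recent versions, `Optional<Dialect> getDialect(JdbcOperations)`.

[source,java]
----
package com.example.dialect; // hypothetical package

import java.util.Optional;

import org.springframework.data.jdbc.repository.config.DialectResolver;
import org.springframework.data.relational.core.dialect.Dialect;
import org.springframework.data.relational.core.dialect.PostgresDialect;
import org.springframework.jdbc.core.JdbcOperations;

public class MyJdbcDialectProvider implements DialectResolver.JdbcDialectProvider {

	@Override
	public Optional<Dialect> getDialect(JdbcOperations operations) {
		// A real implementation would inspect the connection metadata here.
		return Optional.of(PostgresDialect.INSTANCE);
	}
}
----

The matching `META-INF/spring.factories` entry would then read `org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=com.example.dialect.MyJdbcDialectProvider`.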
- diff --git a/src/main/antora/modules/ROOT/pages/jdbc/locking.adoc b/src/main/antora/modules/ROOT/pages/jdbc/locking.adoc deleted file mode 100644 index 57a12cc0fc..0000000000 --- a/src/main/antora/modules/ROOT/pages/jdbc/locking.adoc +++ /dev/null @@ -1,28 +0,0 @@ -[[jdbc.locking]] -= JDBC Locking - -Spring Data JDBC supports locking on derived query methods. -To enable locking on a given derived query method inside a repository, you annotate it with `@Lock`. -The required value of type `LockMode` offers two values: `PESSIMISTIC_READ` which guarantees that the data you are reading doesn't get modified and `PESSIMISTIC_WRITE` which obtains a lock to modify the data. -Some databases do not make this distinction. -In that cases both modes are equivalent of `PESSIMISTIC_WRITE`. - -.Using @Lock on derived query method -[source,java] ----- -interface UserRepository extends CrudRepository { - - @Lock(LockMode.PESSIMISTIC_READ) - List findByLastname(String lastname); -} ----- - -As you can see above, the method `findByLastname(String lastname)` will be executed with a pessimistic read lock. If you are using a databse with the MySQL Dialect this will result for example in the following query: - -.Resulting Sql query for MySQL dialect -[source,sql] ----- -Select * from user u where u.lastname = lastname LOCK IN SHARE MODE ----- - -Alternative to `LockMode.PESSIMISTIC_READ` you can use `LockMode.PESSIMISTIC_WRITE`. diff --git a/src/main/antora/modules/ROOT/pages/jdbc/logging.adoc b/src/main/antora/modules/ROOT/pages/jdbc/logging.adoc deleted file mode 100644 index a7c94bfe73..0000000000 --- a/src/main/antora/modules/ROOT/pages/jdbc/logging.adoc +++ /dev/null @@ -1,8 +0,0 @@ -[[jdbc.logging]] -= Logging -:page-section-summary-toc: 1 - -Spring Data JDBC does little to no logging on its own. -Instead, the mechanics of `JdbcTemplate` to issue SQL statements provide logging. -Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis]. - diff --git a/src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc b/src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc index ea78caf8f2..43100036d0 100644 --- a/src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc +++ b/src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc @@ -21,12 +21,11 @@ The `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name. The same name mapping is applied for mapping fields to column names. For example, the `firstName` field maps to the `FIRST_NAME` column. You can control this mapping by providing a custom `NamingStrategy`. +See <> for more detail. Table and column names that are derived from property or class names are used in SQL statements without quotes by default. -You can control this behavior by setting `JdbcMappingContext.setForceQuote(true)`. +You can control this behavior by setting `RelationalMappingContext.setForceQuote(true)`. -* Nested objects are not supported. - -* The converter uses any Spring Converters registered with it to override the default mapping of object properties to row columns and values. +* The converter uses any Spring Converters registered with `CustomConversions` to override the default mapping of object properties to row columns and values. * The fields of an object are used to convert to and from columns in the row. Public `JavaBean` properties are not used. 
@@ -34,6 +33,7 @@ Public `JavaBean` properties are not used. * If you have a single non-zero-argument constructor whose constructor argument names match top-level column names of the row, that constructor is used. Otherwise, the zero-argument constructor is used. If there is more than one non-zero-argument constructor, an exception is thrown. +Refer to xref:object-mapping.adoc#mapping.object-creation[Object Creation] for further details. [[jdbc.entity-persistence.types]] == Supported Types in Your Entity @@ -69,6 +69,17 @@ Alternatively you may annotate the attribute with `@MappedCollection(idColumn="y * `List` is mapped as a `Map`. +[[mapping.usage.annotations]] +=== Mapping Annotation Overview + +include::partial$mapping-annotations.adoc[] + +See xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.optimistic-locking[Optimistic Locking] for further reference. + +The mapping metadata infrastructure is defined in the separate `spring-data-commons` project that is technology-agnostic. +Specific subclasses are used in the JDBC support to support annotation based metadata. +Other strategies can also be put in place (if there is demand). + [[jdbc.entity-persistence.types.referenced-entities]] === Referenced Entities @@ -116,158 +127,82 @@ p1.bestFriend = AggregateReference.to(p2.id); * Types for which you registered suitable [[jdbc.custom-converters, custom conversions]]. -[[jdbc.entity-persistence.naming-strategy]] -== `NamingStrategy` - -When you use the standard implementations of `CrudRepository` that Spring Data JDBC provides, they expect a certain table structure. -You can tweak that by providing a {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context. - -[[jdbc.entity-persistence.custom-table-name]] -== `Custom table names` - -When the NamingStrategy does not matching on your database table names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation. -The element `value` of this annotation provides the custom table name. -The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database: - -[source,java] ----- -@Table("CUSTOM_TABLE_NAME") -class MyEntity { - @Id - Integer id; - - String name; -} ----- - -[[jdbc.entity-persistence.custom-column-name]] -== `Custom column names` - -When the NamingStrategy does not matching on your database column names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation. -The element `value` of this annotation provides the custom column name. -The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database: - -[source,java] ----- -class MyEntity { - @Id - Integer id; - - @Column("CUSTOM_COLUMN_NAME") - String name; -} ----- - -The {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] -annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship). -`idColumn` element of the annotation provides a custom name for the foreign key column referencing the id column in the other table. 
-In the following example the corresponding table for the `MySubEntity` class has a `NAME` column, and the `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column of the `MyEntity` id for relationship reasons: +:mapped-collection: true +:embedded-entities: true +include::partial$mapping.adoc[] -[source,java] ----- -class MyEntity { - @Id - Integer id; +[[mapping.explicit.converters]] +== Overriding Mapping with Explicit Converters - @MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME") - Set subEntities; -} +Spring Data allows registration of custom converters to influence how values are mapped in the database. +Currently, converters are only applied on property-level. -class MySubEntity { - String name; -} ----- +[[custom-converters.writer]] +=== Writing a Property by Using a Registered Spring Converter -When using `List` and `Map` you must have an additional column for the position of a dataset in the `List` or the key value of the entity in the `Map`. -This additional column name may be customized with the `keyColumn` Element of the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation: +The following example shows an implementation of a `Converter` that converts from a `Boolean` object to a `String` value: [source,java] ---- -class MyEntity { - @Id - Integer id; +import org.springframework.core.convert.converter.Converter; - @MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME") - List name; -} +@WritingConverter +public class BooleanToStringConverter implements Converter { -class MySubEntity { - String name; + @Override + public String convert(Boolean source) { + return source != null && source ? "T" : "F"; + } } ---- -[[jdbc.entity-persistence.embedded-entities]] -== Embedded entities +There are a couple of things to notice here: `Boolean` and `String` are both simple types hence Spring Data requires a hint in which direction this converter should apply (reading or writing). +By annotating this converter with `@WritingConverter` you instruct Spring Data to write every `Boolean` property as `String` in the database. -Embedded entities are used to have value objects in your java data model, even if there is only one table in your database. -In the following example you see, that `MyEntity` is mapped with the `@Embedded` annotation. -The consequence of this is, that in the database a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected. +[[custom-converters.reader]] +=== Reading by Using a Spring Converter -However, if the `name` column is actually `null` within the result set, the entire property `embeddedEntity` will be set to null according to the `onEmpty` of `@Embedded`, which ``null``s objects when all nested properties are `null`. + -Opposite to this behavior `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set. +The following example shows an implementation of a `Converter` that converts from a `String` to a `Boolean` value: -.Sample Code of embedding objects -==== [source,java] ---- -class MyEntity { - - @Id - Integer id; +@ReadingConverter +public class StringToBooleanConverter implements Converter { - @Embedded(onEmpty = USE_NULL) <1> - EmbeddedEntity embeddedEntity; -} - -class EmbeddedEntity { - String name; + @Override + public Boolean convert(String source) { + return source != null && source.equalsIgnoreCase("T") ? 
Boolean.TRUE : Boolean.FALSE; + } } ---- -<1> ``Null``s `embeddedEntity` if `name` in `null`. -Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property. -==== - -If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation. -This element represents a prefix and is prepend for each column name in the embedded object. +There are a couple of things to notice here: `String` and `Boolean` are both simple types hence Spring Data requires a hint in which direction this converter should apply (reading or writing). +By annotating this converter with `@ReadingConverter` you instruct Spring Data to convert every `String` value from the database that should be assigned to a `Boolean` property. -[TIP] -==== -Make use of the shortcuts `@Embedded.Nullable` & `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly. +[[jdbc.custom-converters.configuration]] +=== Registering Spring Converters with the `JdbcConverter` [source,java] ---- -class MyEntity { +class MyJdbcConfiguration extends AbstractJdbcConfiguration { - @Id - Integer id; + // … + + @Override + protected List userConverters() { + return Arrays.asList(new BooleanToStringConverter(), new StringToBooleanConverter()); + } - @Embedded.Nullable <1> - EmbeddedEntity embeddedEntity; } ---- -<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`. -==== - -Embedded entities containing a `Collection` or a `Map` will always be considered non empty since they will at least contain the empty collection or map. -Such an entity will therefore never be `null` even when using @Embedded(onEmpty = USE_NULL). - -[[jdbc.entity-persistence.read-only-properties]] -== Read Only Properties - -Attributes annotated with `@ReadOnlyProperty` will not be written to the database by Spring Data JDBC, but they will be read when an entity gets loaded. - -Spring Data JDBC will not automatically reload an entity after writing it. -Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns. - -If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables. -Spring Data JDBC will not perform any insert, delete or update for these rows. - -[[jdbc.entity-persistence.insert-only-properties]] -== Insert Only Properties +NOTE: In previous versions of Spring Data JDBC it was recommended to directly overwrite `AbstractJdbcConfiguration.jdbcCustomConversions()`. +This is no longer necessary or even recommended, since that method assembles conversions intended for all databases, conversions registered by the `Dialect` used and conversions registered by the user. +If you are migrating from an older version of Spring Data JDBC and have `AbstractJdbcConfiguration.jdbcCustomConversions()` overwritten conversions from your `Dialect` will not get registered. -Attributes annotated with `@InsertOnlyProperty` will only be written to the database by Spring Data JDBC during insert operations. -For updates these properties will be ignored. +[[jdbc.custom-converters.jdbc-value]] +=== JdbcValue -`@InsertOnlyProperty` is only supported for the aggregate root. +Value conversion uses `JdbcValue` to enrich values propagated to JDBC operations with a `java.sql.Types` type. 
+Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation.
+This converter should convert the value to `JdbcValue`, which has a field for the value and for the actual `JDBCType`.
diff --git a/src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc b/src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc
index 69c8d311b9..f60852e336 100644
--- a/src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc
+++ b/src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc
@@ -82,7 +82,8 @@ Typically, you want the `readOnly` flag to be set to true, because most of the q
 In contrast to that, `deleteInactiveUsers()` uses the `@Modifying` annotation and overrides the transaction configuration.
 Thus, the method runs with the `readOnly` flag set to `false`.

-NOTE: It is highly recommended to make query methods transactional. These methods might execute more then one query in order to populate an entity.
+NOTE: It is highly recommended to make query methods transactional.
+These methods might execute more than one query in order to populate an entity.
 Without a common transaction Spring Data JDBC executes the queries in different connections.
 This may put excessive strain on the connection pool and might even lead to dead locks when multiple methods request a fresh connection while holding on to one.

 NOTE: It is definitely reasonable to mark read-only queries as such by setting the `readOnly` flag.
 This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject `INSERT` and `UPDATE` statements inside a read-only transaction).
 Instead, the `readOnly` flag is propagated as a hint to the underlying JDBC driver for performance optimizations.

+[[jdbc.locking]]
+== JDBC Locking
+
+Spring Data JDBC supports locking on derived query methods.
+To enable locking on a given derived query method inside a repository, you annotate it with `@Lock`.
+The required value of type `LockMode` offers two values: `PESSIMISTIC_READ`, which guarantees that the data you are reading doesn't get modified, and `PESSIMISTIC_WRITE`, which obtains a lock to modify the data.
+Some databases do not make this distinction.
+In those cases, both modes are equivalent to `PESSIMISTIC_WRITE`.
+
+.Using @Lock on derived query method
+[source,java]
+----
+interface UserRepository extends CrudRepository<User, Long> {
+
+  @Lock(LockMode.PESSIMISTIC_READ)
+  List<User> findByLastname(String lastname);
+}
+----
+
+As you can see above, the method `findByLastname(String lastname)` will be executed with a pessimistic read lock.
+If you are using a database with the MySQL dialect, this will, for example, result in the following query:
+
+.Resulting SQL query for MySQL dialect
+[source,sql]
+----
+Select * from user u where u.lastname = :lastname LOCK IN SHARE MODE
+----
+
+As an alternative to `LockMode.PESSIMISTIC_READ`, you can use `LockMode.PESSIMISTIC_WRITE`.
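For reference, the repository discussed in the transactions section above (including the `deleteInactiveUsers()` method the text refers to) might look like the following sketch. The `User` aggregate and the exact delete statement are assumptions made for illustration.

[source,java]
----
import java.util.List;

import org.springframework.data.jdbc.repository.query.Modifying;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.transaction.annotation.Transactional;

interface UserRepository extends CrudRepository<User, Long> {

	// Read-only transaction: a hint to the JDBC driver, not a write guard.
	@Transactional(readOnly = true)
	List<User> findByLastname(String lastname);

	// Overrides the read-only default for a modifying statement.
	@Modifying
	@Transactional
	@Query("DELETE FROM user WHERE active = false")
	void deleteInactiveUsers();
}
----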
diff --git a/src/main/antora/modules/ROOT/pages/query-by-example.adoc b/src/main/antora/modules/ROOT/pages/query-by-example.adoc index c8c486c04f..b8c3c9a140 100644 --- a/src/main/antora/modules/ROOT/pages/query-by-example.adoc +++ b/src/main/antora/modules/ROOT/pages/query-by-example.adoc @@ -1,67 +1,33 @@ -[[query-by-example.running]] -= Query by Example +include::{commons}@data-commons::query-by-example.adoc[] -include::{commons}@data-commons::page$query-by-example.adoc[leveloffset=+1] +Here's an example: -In Spring Data JDBC and R2DBC, you can use Query by Example with Repositories, as shown in the following example: - -.Query by Example using a Repository -[source,java] +[source,java,indent=0] ---- -public interface PersonRepository - extends CrudRepository, - QueryByExampleExecutor { … } - -public class PersonService { - - @Autowired PersonRepository personRepository; - - public List findPeople(Person probe) { - return personRepository.findAll(Example.of(probe)); - } -} +include::example$r2dbc/QueryByExampleTests.java[tag=example] ---- -NOTE: Currently, only `SingularAttribute` properties can be used for property matching. - -The property specifier accepts property names (such as `firstname` and `lastname`). You can navigate by chaining properties together with dots (`address.city`). You can also tune it with matching options and case sensitivity. - -The following table shows the various `StringMatcher` options that you can use and the result of using them on a field named `firstname`: - -[cols="1,2", options="header"] -.`StringMatcher` options -|=== -| Matching -| Logical result - -| `DEFAULT` (case-sensitive) -| `firstname = ?0` +<1> Create a domain object with the criteria (`null` fields will be ignored). +<2> Using the domain object, create an `Example`. +<3> Through the repository, execute the query (use `findOne` for a single item). -| `DEFAULT` (case-insensitive) -| `LOWER(firstname) = LOWER(?0)` +This illustrates how to craft a simple probe using a domain object. +In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`. +`null` fields are ignored. -| `EXACT` (case-sensitive) -| `firstname = ?0` - -| `EXACT` (case-insensitive) -| `LOWER(firstname) = LOWER(?0)` - -| `STARTING` (case-sensitive) -| `firstname like ?0 + '%'` - -| `STARTING` (case-insensitive) -| `LOWER(firstname) like LOWER(?0) + '%'` - -| `ENDING` (case-sensitive) -| `firstname like '%' + ?0` - -| `ENDING` (case-insensitive) -| `LOWER(firstname) like '%' + LOWER(?0)` +[source,java,indent=0] +---- +include::example$r2dbc/QueryByExampleTests.java[tag=example-2] +---- -| `CONTAINING` (case-sensitive) -| `firstname like '%' + ?0 + '%'` +<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields) +<2> For the `name` field, use a wildcard that matches against the end of the field +<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases). +<4> Ignore the `role` field when forming the query. +<5> Plug the custom `ExampleMatcher` into the probe. -| `CONTAINING` (case-insensitive) -| `LOWER(firstname) like '%' + LOWER(?0) + '%'` +It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query. +For example, you can apply a `toUpperCase()` to a `String`-based property before the query is created. -|=== +Query By Example really shines when you don't know all the fields needed in a query in advance.
+If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query. diff --git a/src/main/antora/modules/ROOT/pages/r2dbc.adoc b/src/main/antora/modules/ROOT/pages/r2dbc.adoc index dff72745a0..10cad9aeb6 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc.adoc @@ -12,5 +12,16 @@ This chapter points out the specialties for repository support for JDBC. This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories]. You should have a sound understanding of the basic concepts explained there. +R2DBC contains a wide range of features: + +* Spring configuration support with xref:r2dbc/getting-started.adoc#r2dbc.connectionfactory[Java-based `@Configuration`] classes for an R2DBC driver instance. +* xref:r2dbc/entity-persistence.adoc[`R2dbcEntityTemplate`] as central class for entity-bound operations that increases productivity when performing common R2DBC operations with integrated object mapping between rows and POJOs. +* Feature-rich xref:r2dbc/mapping.adoc[object mapping] integrated with Spring's Conversion Service. +* xref:r2dbc/mapping.adoc#mapping.usage.annotations[Annotation-based mapping metadata] that is extensible to support other metadata formats. +* xref:r2dbc/repositories.adoc[Automatic implementation of Repository interfaces], including support for xref:repositories/custom-implementations.adoc[custom query methods]. + +For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality. +`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations. + diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/core.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/core.adoc deleted file mode 100644 index a5a2b94e4b..0000000000 --- a/src/main/antora/modules/ROOT/pages/r2dbc/core.adoc +++ /dev/null @@ -1,15 +0,0 @@ -[[r2dbc.core]] -= R2DBC Core Support -:page-section-summary-toc: 1 - -R2DBC contains a wide range of features: - -* Spring configuration support with Java-based `@Configuration` classes for an R2DBC driver instance. -* `R2dbcEntityTemplate` as central class for entity-bound operations that increases productivity when performing common R2DBC operations with integrated object mapping between rows and POJOs. -* Feature-rich object mapping integrated with Spring's Conversion Service. -* Annotation-based mapping metadata that is extensible to support other metadata formats. -* Automatic implementation of Repository interfaces, including support for custom query methods. - -For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality. -`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations. 
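+
+As a minimal sketch of such ad-hoc CRUD operations (the `Person` entity and the `connectionFactory` variable are assumptions for illustration):
+
+[source,java]
+----
+// assumes static imports:
+// org.springframework.data.relational.core.query.Criteria.where
+// org.springframework.data.relational.core.query.Query.query
+R2dbcEntityTemplate template = new R2dbcEntityTemplate(connectionFactory);
+
+Mono<Person> saved = template.insert(new Person("Frodo"));  // INSERT runs upon subscription
+
+Flux<Person> people = template.select(Person.class)
+        .matching(query(where("name").is("Frodo")))
+        .all();                                             // SELECT mapped onto Person
+----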
- diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/template.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/entity-persistence.adoc similarity index 77% rename from src/main/antora/modules/ROOT/pages/r2dbc/template.adoc rename to src/main/antora/modules/ROOT/pages/r2dbc/entity-persistence.adoc index 8c4e338a13..04315f659d 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc/template.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc/entity-persistence.adoc @@ -1,5 +1,5 @@ -[[r2dbc.entityoperations]] -= Data Access API +[[r2dbc.entity-persistence]] += Persisting Entities `R2dbcEntityTemplate` is the central entrypoint for Spring Data R2DBC. It provides direct entity-oriented methods and a more narrow, fluent interface for typical ad-hoc use-cases, such as querying, inserting, updating, and deleting data. @@ -17,7 +17,8 @@ The actual statements are sent to the database upon subscription. There are several convenient methods on `R2dbcEntityTemplate` for saving and inserting your objects. To have more fine-grained control over the conversion process, you can register Spring converters with `R2dbcCustomConversions` -- for example `Converter<Person, OutboundRow>` and `Converter<Row, Person>`. -The simple case of using the save operation is to save a POJO. In this case, the table name is determined by name (not fully qualified) of the class. +The simple case of using the save operation is to save a POJO. +In this case, the table name is determined by the name (not fully qualified) of the class. You may also call the save operation with a specific collection name. You can use mapping metadata to override the collection in which to store the object. @@ -45,9 +46,9 @@ Table names can be customized by using the fluent API. == Selecting Data The `select(…)` and `selectOne(…)` methods on `R2dbcEntityTemplate` are used to select data from a table. -Both methods take a xref:r2dbc/template.adoc#r2dbc.datbaseclient.fluent-api.criteria[`Query`] object that defines the field projection, the `WHERE` clause, the `ORDER BY` clause and limit/offset pagination. +Both methods take a <<r2dbc.datbaseclient.fluent-api.criteria,`Query`>> object that defines the field projection, the `WHERE` clause, the `ORDER BY` clause and limit/offset pagination. Limit/offset functionality is transparent to the application regardless of the underlying database. -This functionality is supported by the xref:r2dbc/core.adoc#r2dbc.drivers[`R2dbcDialect` abstraction] to cater for differences between the individual SQL flavors. +This functionality is supported by the xref:r2dbc/getting-started.adoc#r2dbc.dialects[`R2dbcDialect` abstraction] to cater for differences between the individual SQL flavors. .Selecting entities using the `R2dbcEntityTemplate` [source,java,indent=0] ---- include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=select] ---- @@ -65,6 +66,7 @@ Consider the following simple query: ---- include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=simpleSelect] ---- + <1> Using `Person` with the `select(…)` method maps tabular results on `Person` result objects. <2> Fetching `all()` rows returns a `Flux<Person>` without limiting results. @@ -134,6 +136,7 @@ Consider the following simple typed insert operation: ---- include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=insert] ---- + <1> Using `Person` with the `into(…)` method sets the `INTO` table, based on mapping metadata. It also prepares the insert statement to accept `Person` objects for inserting. <2> Provide a scalar `Person` object.
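+
+As a hypothetical usage sketch of the fluent insert API referenced by the included snippet (the `Person` entity is an assumption for illustration):
+
+[source,java]
+----
+Mono<Person> saved = template.insert(Person.class)
+        .using(new Person("Bilbo")); // the statement executes when the Mono is subscribed
+----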
@@ -155,6 +158,7 @@ Person modified = … include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=update] ---- + <1> Update `Person` objects and apply mapping based on mapping metadata. <2> Set a different table name by calling the `inTable(…)` method. <3> Specify a query that translates into a `WHERE` clause. @@ -173,7 +177,58 @@ Consider the following simple insert operation: ---- include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=delete] ---- + <1> Delete `Person` objects and apply mapping based on mapping metadata. <2> Set a different table name by calling the `from(…)` method. <3> Specify a query that translates into a `WHERE` clause. <4> Apply the delete operation and return the number of affected rows. + +[[r2dbc.entity-persistence.saving]] +Using Repositories, saving an entity can be performed with the `ReactiveCrudRepository.save(…)` method. +If the entity is new, this results in an insert for the entity. + +If the entity is not new, it gets updated. +Note that whether an instance is new is part of the instance's state. + +NOTE: This approach has some obvious downsides. +If only a few of the referenced entities have actually changed, the deletion and insertion is wasteful. +While this process could and probably will be improved, there are certain limitations to what Spring Data R2DBC can offer. +It does not know the previous state of an aggregate. +So any update process always has to take whatever it finds in the database and make sure it converts it to the state of the entity passed to the save method. + +include::partial$id-generation.adoc[] + +[[r2dbc.entity-persistence.optimistic-locking]] +== Optimistic Locking + +include::partial$optimistic-locking.adoc[] + +[source,java] +---- +@Table +class Person { + + @Id Long id; + String firstname; + String lastname; + @Version Long version; +} + +R2dbcEntityTemplate template = …; + +Person daenerys = template.insert(new Person("Daenerys")).block(); <1> + +Person other = template.select(Person.class) + .matching(query(where("id").is(daenerys.getId()))) + .first().block(); <2> + +daenerys.setLastname("Targaryen"); +template.update(daenerys).block(); <3> + +template.update(other).subscribe(); // emits OptimisticLockingFailureException <4> +---- + +<1> Initially insert row. `version` is set to `0`. +<2> Load the just inserted row. `version` is still `0`. +<3> Update the row with `version = 0`. Set the `lastname` and bump `version` to `1`. +<4> Try to update the previously loaded row that still has `version = 0`. The operation fails with an `OptimisticLockingFailureException`, as the current `version` is `1`. diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc index 5d8a47fe1a..cc0812661f 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc @@ -1,8 +1,39 @@ [[r2dbc.getting-started]] = Getting Started -An easy way to set up a working environment is to create a Spring-based project through https://start.spring.io[start.spring.io]. -To do so: +An easy way to bootstrap a working environment is to create a Spring-based project in https://spring.io/tools[Spring Tools] or from https://start.spring.io[Spring Initializr]. + +First, you need to set up a running database server. +Refer to your vendor documentation on how to configure your database for R2DBC access.
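+
+If you only want to experiment, an in-memory database avoids any server setup.
+The following sketch (URLs are illustrative) obtains a `ConnectionFactory` through the R2DBC SPI:
+
+[source,java]
+----
+import io.r2dbc.spi.ConnectionFactories;
+import io.r2dbc.spi.ConnectionFactory;
+
+// in-memory H2; swap the URL for your vendor's driver, e.g. r2dbc:postgresql://localhost/mydb
+ConnectionFactory connectionFactory = ConnectionFactories.get("r2dbc:h2:mem:///testdb");
+----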
+ +[[requirements]] +== Requirements + +Spring Data R2DBC requires https://spring.io/docs[Spring Framework] {springVersion} and above. + +In terms of databases, Spring Data R2DBC requires a <<r2dbc.dialects,`Dialect`>> to abstract common SQL functionality over vendor-specific flavours. +Spring Data R2DBC includes direct support for the following databases: + +* https://github.com/r2dbc/r2dbc-h2[H2] (`io.r2dbc:r2dbc-h2`) +* https://github.com/mariadb-corporation/mariadb-connector-r2dbc[MariaDB] (`org.mariadb:r2dbc-mariadb`) +* https://github.com/r2dbc/r2dbc-mssql[Microsoft SQL Server] (`io.r2dbc:r2dbc-mssql`) +* https://github.com/asyncer-io/r2dbc-mysql[MySQL] (`io.asyncer:r2dbc-mysql`) +* https://github.com/jasync-sql/jasync-sql[jasync-sql MySQL] (`com.github.jasync-sql:jasync-r2dbc-mysql`) +* https://github.com/r2dbc/r2dbc-postgresql[Postgres] (`io.r2dbc:r2dbc-postgresql`) +* https://github.com/oracle/oracle-r2dbc[Oracle] (`com.oracle.database.r2dbc:oracle-r2dbc`) + +If you use a different database, your application won’t start up. +The <<r2dbc.dialects,Dialects>> section contains further detail on how to proceed in such a case. + +[[r2dbc.hello-world]] +== Hello World + +To create a Spring project in STS: + +. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted. +Then enter a project and a package name, such as `org.spring.r2dbc.example`. +. Add the following to the `pom.xml` file's `dependencies` element: ++ . Add the following to the pom.xml files `dependencies` element: + @@ -32,7 +63,7 @@ To do so: + [source,xml,subs="+attributes"] ---- -{springVersion} +{springVersion} ---- . Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level as your `<dependencies/>` element: @@ -110,14 +141,15 @@ There is a https://github.com/spring-projects/spring-data-examples[GitHub reposi [[r2dbc.connecting]] == Connecting to a Relational Database with Spring -One of the first tasks when using relational databases and Spring is to create a `io.r2dbc.spi.ConnectionFactory` object by using the IoC container.Make sure to use a xref:r2dbc/getting-started.adoc#r2dbc.drivers[supported database and driver]. +One of the first tasks when using relational databases and Spring is to create an `io.r2dbc.spi.ConnectionFactory` object by using the IoC container. +Make sure to use a <<requirements,supported database and driver>>. [[r2dbc.connectionfactory]] -== Registering a `ConnectionFactory` Instance using Java-based Metadata +== Registering a `ConnectionFactory` Instance using Java Configuration The following example shows an example of using Java-based bean metadata to register an instance of `io.r2dbc.spi.ConnectionFactory`: -.Registering a `io.r2dbc.spi.ConnectionFactory` object using Java-based bean metadata +.Registering a `io.r2dbc.spi.ConnectionFactory` object using Java Configuration [source,java] ---- @Configuration @@ -135,24 +167,24 @@ This approach lets you use the standard `io.r2dbc.spi.ConnectionFactory` instanc `AbstractR2dbcConfiguration` also registers `DatabaseClient`, which is required for database interaction and for Repository implementation.
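+
+A minimal configuration could look like the following sketch (the H2 in-memory URL is an assumption for illustration):
+
+[source,java]
+----
+@Configuration
+class MyR2dbcConfiguration extends AbstractR2dbcConfiguration {
+
+    @Override
+    public ConnectionFactory connectionFactory() {
+        // any supported driver works here; H2 keeps the sketch self-contained
+        return ConnectionFactories.get("r2dbc:h2:mem:///testdb");
+    }
+}
+----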
-Spring Data R2DBC ships with dialect implementations for the following drivers: - -* https://github.com/r2dbc/r2dbc-h2[H2] (`io.r2dbc:r2dbc-h2`) -* https://github.com/mariadb-corporation/mariadb-connector-r2dbc[MariaDB] (`org.mariadb:r2dbc-mariadb`) -* https://github.com/r2dbc/r2dbc-mssql[Microsoft SQL Server] (`io.r2dbc:r2dbc-mssql`) -* https://github.com/jasync-sql/jasync-sql[jasync-sql MySQL] (`com.github.jasync-sql:jasync-r2dbc-mysql`) -* https://github.com/r2dbc/r2dbc-postgresql[Postgres] (`io.r2dbc:r2dbc-postgresql`) -* https://github.com/oracle/oracle-r2dbc[Oracle] (`com.oracle.database.r2dbc:oracle-r2dbc`) +[[r2dbc.dialects]] +== Dialects +Spring Data R2DBC uses a `Dialect` to encapsulate behavior that is specific to a database or its driver. Spring Data R2DBC reacts to database specifics by inspecting the `ConnectionFactory` and selects the appropriate database dialect accordingly. -You need to configure your own {spring-data-r2dbc-javadoc}/api/org/springframework/data/r2dbc/dialect/R2dbcDialect.html[`R2dbcDialect`] if the driver you use is not yet known to Spring Data R2DBC. +If you use a database for which no dialect is available, then your application won’t start up. +In that case, you’ll have to ask your vendor to provide a `Dialect` implementation. +Alternatively, you can implement your own `Dialect`. -TIP: Dialects are resolved by {spring-data-r2dbc-javadoc}/org/springframework/data/r2dbc/dialect/DialectResolver.html[`DialectResolver`] from a `ConnectionFactory`, typically by inspecting `ConnectionFactoryMetadata`. +[TIP] +==== +Dialects are resolved by {spring-data-r2dbc-javadoc}/org/springframework/data/r2dbc/dialect/DialectResolver.html[`DialectResolver`] from a `ConnectionFactory`, typically by inspecting `ConnectionFactoryMetadata`. + You can let Spring auto-discover your `R2dbcDialect` by registering a class that implements `org.springframework.data.r2dbc.dialect.DialectResolver$R2dbcDialectProvider` through `META-INF/spring.factories`. `DialectResolver` discovers dialect provider implementations from the class path using Spring's `SpringFactoriesLoader`. +To do so: + +. Implement your own `Dialect`. +. Implement a `R2dbcDialectProvider` returning the `Dialect`. +. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line + +`org.springframework.data.r2dbc.dialect.DialectResolver$R2dbcDialectProvider=` +==== diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc index b66fa9af40..d0eb3cd969 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc @@ -2,7 +2,7 @@ = Kotlin This part of the reference documentation explains the specific Kotlin functionality offered by Spring Data R2DBC. -See xref:kotlin.adoc for the general functionality provided by Spring Data. +See xref:kotlin.adoc[] for the general functionality provided by Spring Data. To retrieve a list of `SWCharacter` objects in Java, you would normally write the following: diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc index 70b0b57ce2..57c72a73e2 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc @@ -21,13 +21,13 @@ The `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name. The same name mapping is applied for mapping fields to column names. 
For example, the `firstName` field maps to the `FIRST_NAME` column. You can control this mapping by providing a custom `NamingStrategy`. -See xref:r2dbc/mapping.adoc#mapping.configuration[Mapping Configuration] for more detail. +See <<mapping.configuration,Mapping Configuration>> for more detail. Table and column names that are derived from property or class names are used in SQL statements without quotes by default. -You can control this behavior by setting `R2dbcMappingContext.setForceQuote(true)`. +You can control this behavior by setting `RelationalMappingContext.setForceQuote(true)`. * Nested objects are not supported. -* The converter uses any Spring Converters registered with it to override the default mapping of object properties to row columns and values. +* The converter uses any Spring Converters registered with `CustomConversions` to override the default mapping of object properties to row columns and values. * The fields of an object are used to convert to and from columns in the row. Public `JavaBean` properties are not used. @@ -35,11 +35,12 @@ Public `JavaBean` properties are not used. * If you have a single non-zero-argument constructor whose constructor argument names match top-level column names of the row, that constructor is used. Otherwise, the zero-argument constructor is used. If there is more than one non-zero-argument constructor, an exception is thrown. +Refer to xref:object-mapping.adoc#mapping.object-creation[Object Creation] for further details. [[mapping.configuration]] == Mapping Configuration -By default (unless explicitly configured) an instance of `MappingR2dbcConverter` is created when you create a `DatabaseClient`. +By default (unless explicitly configured), an instance of `MappingR2dbcConverter` is created when you create a `DatabaseClient`. You can create your own instance of the `MappingR2dbcConverter`. By creating your own instance, you can register Spring converters to map specific classes to and from the database. @@ -53,7 +54,7 @@ Spring Data converts the letter casing of such a name to that form which is also Therefore, you can use unquoted names when creating tables, as long as you do not use keywords or special characters in your names. For databases that adhere to the SQL standard, this means that names are converted to upper case. The quoting character and the way names get capitalized is controlled by the used `Dialect`. -See xref:r2dbc/core.adoc#r2dbc.drivers[R2DBC Drivers] for how to configure custom dialects. +See xref:r2dbc/getting-started.adoc#r2dbc.dialects[Dialects] for how to configure custom dialects. .@Configuration class to configure R2DBC mapping support [source,java] ---- @@ -147,11 +148,11 @@ The following table explains how property types of an entity affect mapping: |`Collection<T>` |Array of `T` -|Conversion to Array type if supported by the configured xref:r2dbc/core.adoc#r2dbc.drivers[driver], not supported otherwise. +|Conversion to Array type if supported by the configured xref:r2dbc/getting-started.adoc#requirements[driver], not supported otherwise. |Arrays of primitive types, wrapper types and `String` |Array of wrapper type (e.g. `int[]` -> `Integer[]`) -|Conversion to Array type if supported by the configured xref:r2dbc/core.adoc#r2dbc.drivers[driver], not supported otherwise. +|Conversion to Array type if supported by the configured xref:r2dbc/getting-started.adoc#requirements[driver], not supported otherwise. |Driver-specific types |Passthru @@ -169,66 +170,17 @@ Drivers can contribute additional simple types such as Geometry types.
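+
+Tying the mapping configuration above together with a converter, a hypothetical sketch could look as follows; the `Color` enum and its converter are illustrative assumptions, and registration is assumed to go through the `getCustomConverters()` hook of `AbstractR2dbcConfiguration`:
+
+[source,java]
+----
+enum Color { RED, GREEN }
+
+@ReadingConverter
+class ColorReadingConverter implements Converter<String, Color> {
+
+    @Override
+    public Color convert(String source) {
+        return Color.valueOf(source); // map the raw column value onto the enum
+    }
+}
+
+@Configuration
+class MyR2dbcConfiguration extends AbstractR2dbcConfiguration {
+
+    @Override
+    public ConnectionFactory connectionFactory() {
+        return ConnectionFactories.get("r2dbc:h2:mem:///testdb");
+    }
+
+    @Override
+    protected List<Object> getCustomConverters() {
+        return List.of(new ColorReadingConverter());
+    }
+}
+----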
[[mapping.usage.annotations]] === Mapping Annotation Overview -The `MappingR2dbcConverter` can use metadata to drive the mapping of objects to rows. -The following annotations are available: - -* `@Id`: Applied at the field level to mark the primary key. -* `@Table`: Applied at the class level to indicate this class is a candidate for mapping to the database. -You can specify the name of the table where the database is stored. -* `@Transient`: By default, all fields are mapped to the row. -This annotation excludes the field where it is applied from being stored in the database. -Transient properties cannot be used within a persistence constructor as the converter cannot materialize a value for the constructor argument. -* `@PersistenceConstructor`: Marks a given constructor -- even a package protected one -- to use when instantiating the object from the database. -Constructor arguments are mapped by name to the values in the retrieved row. -* `@Value`: This annotation is part of the Spring Framework. -Within the mapping framework it can be applied to constructor arguments. -This lets you use a Spring Expression Language statement to transform a key’s value retrieved in the database before it is used to construct a domain object. -In order to reference a column of a given row one has to use expressions like: `@Value("#root.myProperty")` where root refers to the root of the given `Row`. -* `@Column`: Applied at the field level to describe the name of the column as it is represented in the row, letting the name be different from the field name of the class. -Names specified with a `@Column` annotation are always quoted when used in SQL statements. -For most databases, this means that these names are case-sensitive. -It also means that you can use special characters in these names. -However, this is not recommended, since it may cause problems with other tools. -* `@Version`: Applied at field level is used for optimistic locking and checked for modification on save operations. -The value is `null` (`zero` for primitive types) is considered as marker for entities to be new. -The initially stored value is `zero` (`one` for primitive types). -The version gets incremented automatically on every update. -See xref:r2dbc/repositories.adoc#r2dbc.optimistic-locking[Optimistic Locking] for further reference. +include::partial$mapping-annotations.adoc[] +See xref:r2dbc/entity-persistence.adoc#r2dbc.entity-persistence.optimistic-locking[Optimistic Locking] for further reference. The mapping metadata infrastructure is defined in the separate `spring-data-commons` project that is technology-agnostic. Specific subclasses are used in the R2DBC support to support annotation based metadata. Other strategies can also be put in place (if there is demand). -[[mapping.custom.object.construction]] -=== Customized Object Construction - -The mapping subsystem allows the customization of the object construction by annotating a constructor with the `@PersistenceConstructor` annotation.The values to be used for the constructor parameters are resolved in the following way: - -* If a parameter is annotated with the `@Value` annotation, the given expression is evaluated, and the result is used as the parameter value. -* If the Java type has a property whose name matches the given field of the input row, then its property information is used to select the appropriate constructor parameter to which to pass the input field value. 
-This works only if the parameter name information is present in the Java `.class` files, which you can achieve by compiling the source with debug information or using the `-parameters` command-line switch for `javac` in Java 8. -* Otherwise, a `MappingException` is thrown to indicate that the given constructor parameter could not be bound. - -[source,java] ----- -class OrderItem { - - private @Id final String id; - private final int quantity; - private final double unitPrice; - - OrderItem(String id, int quantity, double unitPrice) { - this.id = id; - this.quantity = quantity; - this.unitPrice = unitPrice; - } - - // getters/setters omitted -} ----- +include::partial$mapping.adoc[] [[mapping.explicit.converters]] -=== Overriding Mapping with Explicit Converters +== Overriding Mapping with Explicit Converters When storing and querying your objects, it is often convenient to have a `R2dbcConverter` instance to handle the mapping of all Java types to `OutboundRow` instances. However, you may sometimes want the `R2dbcConverter` instances to do most of the work but let you selectively handle the conversion for a particular type -- perhaps to optimize performance. @@ -281,7 +233,7 @@ public class PersonWriteConverter implements Converter { ---- [[mapping.explicit.enum.converters]] -==== Overriding Enum Mapping with Explicit Converters +=== Overriding Enum Mapping with Explicit Converters Some databases, such as https://github.com/pgjdbc/r2dbc-postgresql#postgres-enum-types[Postgres], can natively write enum values using their database-specific enumerated column type. Spring Data converts `Enum` values by default to `String` values for maximum portability. diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc deleted file mode 100644 index 542e1eded3..0000000000 --- a/src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc +++ /dev/null @@ -1,38 +0,0 @@ -[[r2dbc.repositories.queries.query-by-example]] -= Query By Example - -Spring Data R2DBC also lets you use xref:query-by-example.adoc[Query By Example] to fashion queries. -This technique allows you to use a "probe" object. -Essentially, any field that isn't empty or `null` will be used to match. - -Here's an example: - -[source,java,indent=0] ----- -include::example$r2dbc/QueryByExampleTests.java[tag=example] ----- - -<1> Create a domain object with the criteria (`null` fields will be ignored). -<2> Using the domain object, create an `Example`. -<3> Through the `R2dbcRepository`, execute query (use `findOne` for a `Mono`). - -This illustrates how to craft a simple probe using a domain object. -In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`. -`null` fields are ignored. - -[source,java,indent=0] ----- -include::example$r2dbc/QueryByExampleTests.java[tag=example-2] ----- - -<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields) -<2> For the `name` field, use a wildcard that matches against the end of the field -<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases). -<4> Ignore the `role` field when forming the query. -<5> Plug the custom `ExampleMatcher` into the probe. - -It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query. 
-For example, you can apply a `toUpperCase()` to a `String` -based property before the query is created. - -Query By Example really shines when you don't know all the fields needed in a query in advance. -If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query. diff --git a/src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc b/src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc index 71b4906389..f2d94a8669 100644 --- a/src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc +++ b/src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc @@ -71,63 +71,8 @@ The preceding example creates an application context with Spring's unit test sup Inside the test method, we use the repository to query the database. We use `StepVerifier` as a test aid to verify our expectations against the results. -[[r2dbc.entity-persistence.state-detection-strategies]] -include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1] - -[[r2dbc.entity-persistence.id-generation]] -=== ID Generation - -Spring Data R2DBC uses the ID to identify entities. -The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation. - -When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database. - -Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value. -That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`. - -One important constraint is that, after saving an entity, the entity must not be new anymore. -Note that whether an entity is new is part of the entity's state. -With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column. - -[[r2dbc.optimistic-locking]] -=== Optimistic Locking - -The `@Version` annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version. -Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime. -In that case, an `OptimisticLockingFailureException` is thrown. -The following example shows these features: - -[source,java] ----- -@Table -class Person { - - @Id Long id; - String firstname; - String lastname; - @Version Long version; -} - -R2dbcEntityTemplate template = …; - -Mono daenerys = template.insert(new Person("Daenerys")); <1> - -Person other = template.select(Person.class) - .matching(query(where("id").is(daenerys.getId()))) - .first().block(); <2> - -daenerys.setLastname("Targaryen"); -template.update(daenerys); <3> - -template.update(other).subscribe(); // emits OptimisticLockingFailureException <4> ----- -<1> Initially insert row. `version` is set to `0`. -<2> Load the just inserted row. `version` is still `0`. -<3> Update the row with `version = 0`.Set the `lastname` and bump `version` to `1`. -<4> Try to update the previously loaded row that still has `version = 0`.The operation fails with an `OptimisticLockingFailureException`, as the current `version` is `1`. 
- [[projections.resultmapping]] -==== Result Mapping +=== Result Mapping A query method returning an Interface- or DTO projection is backed by results produced by the actual query. Interface projections generally rely on mapping results onto the domain type first to consider potential `@Column` type mappings and the actual projection proxy uses a potentially partially materialized entity to expose projection data. diff --git a/src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc b/src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc index 4ae3ce6763..ad0eda73dd 100644 --- a/src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc +++ b/src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc @@ -1 +1,3 @@ include::{commons}@data-commons::page$repositories/core-concepts.adoc[] + +include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1] diff --git a/src/main/antora/modules/ROOT/partials/id-generation.adoc b/src/main/antora/modules/ROOT/partials/id-generation.adoc new file mode 100644 index 0000000000..e4f91b8311 --- /dev/null +++ b/src/main/antora/modules/ROOT/partials/id-generation.adoc @@ -0,0 +1,16 @@ +[[entity-persistence.id-generation]] +== ID Generation + +Spring Data uses the identifier property to identify entities. +The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation. + +When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database. + +Spring Data does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value. +That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`. + +xref:repositories/core-concepts.adoc#is-new-state-detection[Entity State Detection] explains in detail the strategies to detect whether an entity is new or whether it is expected to exist in your database. + +One important constraint is that, after saving an entity, the entity must not be new anymore. +Note that whether an entity is new is part of the entity's state. +With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column. diff --git a/src/main/antora/modules/ROOT/partials/mapping-annotations.adoc b/src/main/antora/modules/ROOT/partials/mapping-annotations.adoc new file mode 100644 index 0000000000..e98d076c5d --- /dev/null +++ b/src/main/antora/modules/ROOT/partials/mapping-annotations.adoc @@ -0,0 +1,24 @@ +The `RelationalConverter` can use metadata to drive the mapping of objects to rows. +The following annotations are available: + +* `@Id`: Applied at the field level to mark the primary key. +* `@Table`: Applied at the class level to indicate this class is a candidate for mapping to the database. +You can specify the name of the table where the entity is stored. +* `@Transient`: By default, all fields are mapped to the row. +This annotation excludes the field where it is applied from being stored in the database. +Transient properties cannot be used within a persistence constructor as the converter cannot materialize a value for the constructor argument. +* `@PersistenceCreator`: Marks a given constructor or static factory method -- even a package protected one -- to use when instantiating the object from the database.
+Constructor arguments are mapped by name to the values in the retrieved row. +* `@Value`: This annotation is part of the Spring Framework. +Within the mapping framework it can be applied to constructor arguments. +This lets you use a Spring Expression Language statement to transform a key’s value retrieved in the database before it is used to construct a domain object. +In order to reference a column of a given row one has to use expressions like: `@Value("#root.myProperty")` where root refers to the root of the given `Row`. +* `@Column`: Applied at the field level to describe the name of the column as it is represented in the row, letting the name be different from the field name of the class. +Names specified with a `@Column` annotation are always quoted when used in SQL statements. +For most databases, this means that these names are case-sensitive. +It also means that you can use special characters in these names. +However, this is not recommended, since it may cause problems with other tools. +* `@Version`: Applied at field level, it is used for optimistic locking and checked for modification on save operations. +A value of `null` (`zero` for primitive types) is considered a marker for entities to be new. +The initially stored value is `zero` (`one` for primitive types). +The version gets incremented automatically on every update. diff --git a/src/main/antora/modules/ROOT/partials/mapping.adoc b/src/main/antora/modules/ROOT/partials/mapping.adoc new file mode 100644 index 0000000000..7dce4ab17c --- /dev/null +++ b/src/main/antora/modules/ROOT/partials/mapping.adoc @@ -0,0 +1,190 @@ +[[entity-persistence.naming-strategy]] +== Naming Strategy + +By convention, Spring Data applies a `NamingStrategy` to determine table, column, and schema names, defaulting to https://en.wikipedia.org/wiki/Snake_case[snake case]. +An object property named `firstName` becomes `first_name`. +You can tweak that by providing a {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context. + +[[entity-persistence.custom-table-name]] +== Override table names + +When the table naming strategy does not match your database table names, you can override the table name with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation. +The element `value` of this annotation provides the custom table name. +The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database: + +[source,java] +---- +@Table("CUSTOM_TABLE_NAME") +class MyEntity { + @Id + Integer id; + + String name; +} +---- + +[[entity-persistence.custom-column-name]] +== Override column names + +When the column naming strategy does not match your database column names, you can override the column name with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation. +The element `value` of this annotation provides the custom column name.
+The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database: + +[source,java] +---- +class MyEntity { + @Id + Integer id; + + @Column("CUSTOM_COLUMN_NAME") + String name; +} +---- + +ifdef::mapped-collection[] + +The {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] +annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship). +The `idColumn` element of the annotation provides a custom name for the foreign key column referencing the id column in the other table. +In the following example the corresponding table for the `MySubEntity` class has a `NAME` column and a `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column that references the `MyEntity` id: + +[source,java] +---- +class MyEntity { + @Id + Integer id; + + @MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME") + Set<MySubEntity> subEntities; +} + +class MySubEntity { + String name; +} +---- + +When using `List` and `Map` you must have an additional column for the position of a dataset in the `List` or the key value of the entity in the `Map`. +This additional column name may be customized with the `keyColumn` element of the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation: + +[source,java] +---- +class MyEntity { + @Id + Integer id; + + @MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME") + List<MySubEntity> name; +} + +class MySubEntity { + String name; +} +---- +endif::[] + +ifdef::embedded-entities[] + +[[entity-persistence.embedded-entities]] +== Embedded entities + +Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database. +In the following example you see that `MyEntity` is mapped with the `@Embedded` annotation. +The consequence of this is that a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected in the database. + +However, if the `name` column is actually `null` within the result set, the entire property `embeddedEntity` will be set to `null` according to the `onEmpty` element of `@Embedded`, which ``null``s objects when all nested properties are `null`. + +In contrast to this behavior, `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set. + +.Sample Code of embedding objects +==== +[source,java] +---- +class MyEntity { + + @Id + Integer id; + + @Embedded(onEmpty = USE_NULL) <1> + EmbeddedEntity embeddedEntity; +} + +class EmbeddedEntity { + String name; +} +---- + +<1> ``Null``s `embeddedEntity` if `name` is `null`. +Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property. +==== + +If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation. +This element represents a prefix that is prepended to each column name in the embedded object. + +[TIP] +==== +Make use of the shortcuts `@Embedded.Nullable` & `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly.
+ +[source,java] +---- +class MyEntity { + + @Id + Integer id; + + @Embedded.Nullable <1> + EmbeddedEntity embeddedEntity; +} +---- + +<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`. +==== + +Embedded entities containing a `Collection` or a `Map` will always be considered non-empty since they will at least contain the empty collection or map. +Such an entity will therefore never be `null`, even when using `@Embedded(onEmpty = USE_NULL)`. +endif::[] + +[[entity-persistence.read-only-properties]] +== Read Only Properties + +Attributes annotated with `@ReadOnlyProperty` will not be written to the database by Spring Data, but they will be read when an entity gets loaded. + +Spring Data will not automatically reload an entity after writing it. +Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns. + +If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables. +Spring Data will not perform any insert, delete or update for these rows. + +[[entity-persistence.insert-only-properties]] +== Insert Only Properties + +Attributes annotated with `@InsertOnlyProperty` will only be written to the database by Spring Data during insert operations. +For updates these properties will be ignored. + +`@InsertOnlyProperty` is only supported for the aggregate root. + +[[mapping.custom.object.construction]] +== Customized Object Construction + +The mapping subsystem allows the customization of the object construction by annotating a constructor with the `@PersistenceConstructor` annotation. The values to be used for the constructor parameters are resolved in the following way: + +* If a parameter is annotated with the `@Value` annotation, the given expression is evaluated, and the result is used as the parameter value. +* If the Java type has a property whose name matches the given field of the input row, then its property information is used to select the appropriate constructor parameter to which to pass the input field value. +This works only if the parameter name information is present in the Java `.class` files, which you can achieve by compiling the source with debug information or using the `-parameters` command-line switch for `javac`. +* Otherwise, a `MappingException` is thrown to indicate that the given constructor parameter could not be bound. + +[source,java] +---- +class OrderItem { + + private @Id final String id; + private final int quantity; + private final double unitPrice; + + OrderItem(String id, int quantity, double unitPrice) { + this.id = id; + this.quantity = quantity; + this.unitPrice = unitPrice; + } + + // getters/setters omitted +} +---- diff --git a/src/main/antora/modules/ROOT/partials/optimistic-locking.adoc b/src/main/antora/modules/ROOT/partials/optimistic-locking.adoc new file mode 100644 index 0000000000..5819ce4173 --- /dev/null +++ b/src/main/antora/modules/ROOT/partials/optimistic-locking.adoc @@ -0,0 +1,12 @@ +Spring Data supports optimistic locking by means of a numeric attribute that is annotated with +https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root. +Whenever Spring Data saves an aggregate with such a version attribute, two things happen: + +* The update statement for the aggregate root will contain a where clause checking that the version stored in the database is actually unchanged.
+* If this isn't the case, an `OptimisticLockingFailureException` will be thrown. + +Also, the version attribute gets increased both in the entity and in the database, so a concurrent action will notice the change and throw an `OptimisticLockingFailureException`, if applicable, as described above. + +This process also applies to inserting new aggregates: a `null` or `0` version indicates a new instance, and the subsequently increased version marks the instance as no longer new. This works rather nicely with cases where the id is generated during object construction, for example when UUIDs are used. + +During deletes the version check also applies, but no version is increased.
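+
+A minimal sketch of such a versioned aggregate (names are illustrative):
+
+[source,java]
+----
+@Table("PERSON")
+class Person {
+
+    @Id Long id;
+    String name;
+
+    @Version Long version; // null (or 0 for primitives) marks the instance as new
+}
+----
+
+Because a concurrent modification surfaces as an `OptimisticLockingFailureException`, callers typically reload the aggregate and retry the failed operation.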