Developer guide
This guide targets developers who want to understand the internal workings of Spring Data, especially those who want to write their own modules or implement new features in existing modules.
As a user of a Spring Data module you do NOT implement interfaces as you would with most frameworks; instead you start by specifying an interface and Spring Data creates the implementation internally for you.
This implementation is based on three sources:
- A basic implementation of standard interfaces provided by Spring Data. For example, for the Mongo module there is SimpleMongoRepository implementing MongoRepository.
- Analysis of the interface that is to be implemented. Spring Data analyses the names of methods, their arguments and return value types, and the annotations on those methods, and creates implementation strategies based on that information.
- Information about the Java classes to be persisted. This includes information that can be obtained through reflection, like class names, property names and types, and annotations on properties. It also includes store-specific information. For example, many stores provide some kind of metadata, which Spring Data will use.
For each module there is a *RepositoryFactory which creates repositories implementing the passed-in interface by creating a proxy that contains a chain of strategies for implementing methods. These strategies differ for each module, but often include:
- implementation based on some annotation on the method, so one can provide literal queries in a store-dependent query language (see the sketch after this list)
- implementation based on the name of the method (so-called query methods)
- implementation by the basic repository implementation mentioned above. This strategy is the bare minimum and is available in all stores.
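To make these strategies concrete, here is a minimal sketch of what a user-defined repository interface for the Mongo module could look like; the Person type, the method names, and the query string are illustrative assumptions, not part of Spring Data:

import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

interface PersonRepository extends MongoRepository<Person, String> {

    // implemented from the method name (a "query method")
    List<Person> findByLastname(String lastname);

    // implemented from a literal query in the store's query language
    @Query("{ 'firstname' : ?0 }")
    List<Person> findByFirstname(String firstname);

    // CRUD methods like save(…) and delete(…) are served by the basic
    // SimpleMongoRepository implementation mentioned above
}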
The process outlined above has large parts that are independent of a specific store. For example, most if not all stores need to access properties (i.e. getting and setting values) and metadata about properties (e.g. what annotations are present), abstracting over the exact implementation (i.e. is there a setter, or do we access a potentially private field directly; is an annotation on the field, the getter, the setter, or in a superclass or interface?). Also, the structure of a repository and a repository factory is always the same, with surprisingly few pieces that are store-dependent.
For all of this there are implementations in Spring Data Commons, plus interfaces or abstract classes to fill in the store-dependent blanks.
The Spring Data Commons module provides a sophisticated system to gather and use entity mapping and conversion functionality. The core model abstractions are PersistentEntity and PersistentProperty. These abstractions can be created and used through a MappingContext. On top of that we provide an EntityConverter abstraction consisting of EntityReader and EntityWriter.
The base of any mapping is the MappingContext. Each store has its own implementation, <Store>MappingContext, whose main purpose is providing a consistent set of instances of mapping-related classes to the various pieces of the mapping infrastructure.
The main abstractions are EntityWriter and EntityReader, which provide write/read methods that take a sink/source to write to or read from.
If a store reads and writes from the same kind of object, e.g. the DBObject of MongoDB, the two interfaces get combined into an EntityConverter.
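To illustrate the shapes of these interfaces, here is a minimal sketch assuming a hypothetical store whose native representation is a Map<String, Object>; the class names and the elided conversion logic are illustrative only:

import java.util.Map;
import org.springframework.data.convert.EntityReader;
import org.springframework.data.convert.EntityWriter;

class MapEntityWriter implements EntityWriter<Object, Map<String, Object>> {

    @Override
    public void write(Object source, Map<String, Object> sink) {
        // convert each property of the source entity and put it into the sink
    }
}

class MapEntityReader implements EntityReader<Object, Map<String, Object>> {

    @Override
    public <R> R read(Class<R> type, Map<String, Object> source) {
        // instantiate the requested type and populate its properties from the source
        throw new UnsupportedOperationException("conversion logic elided");
    }
}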
For reading and writing, the different pieces of an entity need to get converted into something that can be added to a sink or read from a source. This is a process consisting of multiple steps:
- Figure out what conversion is required. For example, an entity might contain an attribute of type Enum, but the store doesn't support enums directly, so it needs to get converted. What the conversion target is depends on the source type, but possibly also on annotations or other configuration. You might want to consider annotations on the attribute to decide whether to store an Enum as its ordinal number (Integer) or its name (String). This decision is based on the <Store>PersistentProperty, which might also provide additional information about how to store attribute information, for example the name of a column to use or the attribute name in a JSON format.
- Do the actual conversion. This is done through a ConversionService if it is a one-to-one conversion, i.e. one type needs to get converted to exactly one other type, just as in the enum example. More complex mappings such as those of entities are the task of the EntityWriter, EntityReader, or EntityConverter. The same might apply to other cases like Maps that need special handling in order to be converted to the target structure.
- If an attribute is itself an entity, the process typically gets applied recursively. The decision whether an attribute is an entity is again made by the <Store>PersistentProperty and is by default based on a list of "simple types". Attributes that aren't compatible with one such simple type are considered an entity.
A ConversionService is a Spring Framework interface that does conversions for you.
Given a source object and a target type it converts one into the other.
In the context of Spring Data it is used to convert non-entity types from the types used in the object model to types used by the persistent storage.
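A minimal usage sketch, using Spring's DefaultConversionService, which comes with common converters pre-registered:

import org.springframework.core.convert.ConversionService;
import org.springframework.core.convert.support.DefaultConversionService;

ConversionService conversionService = new DefaultConversionService();

// a one-to-one conversion: source object plus target type in, converted value out
Integer converted = conversionService.convert("42", Integer.class);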
As described in the previous section about EntityWriter and EntityReader, they need a reference to a ConversionService.
It is important to keep this ConversionService separate from other instances that might exist in an application, such as the one that converts types transmitted in HTTP requests to types used in controller methods.
In Spring Data you construct a ConversionService from a CustomConversions instance.
A CustomConversions instance contains Converters, the pieces that do the job inside a ConversionService, plus additional information:
- Is the Converter intended to be used when writing to the persistence store?
- Is the Converter intended to be used when reading from the persistence store?
- A SimpleTypeHolder, which gets filled based on the converters. If a Converter uses a given type as its source type and this Converter is intended for writing, this type is considered a simple type. Let's say you register a Converter in your CustomConversions that converts an Address to a String (a sketch of such a Converter follows this list). This means you can save (i.e. write) Address instances without treating them as separate entities, which would require some sort of separate "table"; therefore the type is considered simple. The SimpleTypeHolder also contains some generic default types that probably all stores can handle directly (primitives, String, ...) and possibly some store-dependent ones. Maybe your store knows how to handle BigDecimal or has its own Struct type that it can handle without any conversion. This SimpleTypeHolder should find its way into the MappingContext, where it is used to distinguish entities from simple types.
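A minimal sketch of such a Converter; the Address class and its accessors are hypothetical:

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

// registering this Converter in your CustomConversions makes Address a
// "simple type" for the writing side, so it is stored as a plain String
@WritingConverter
class AddressToStringConverter implements Converter<Address, String> {

    @Override
    public String convert(Address source) {
        return source.getStreet() + ", " + source.getCity(); // hypothetical accessors
    }
}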
An important part of the entity conversion on the reading side is creating instances of the domain class, so the EntityInstantiator API allows plugging in custom code for creating those instances.
By default the conversion system inspects the properties of the type being read to find out which nested types it has to instantiate. So assume the following domain classes:
class Address { … }
class Person {
Address address;
}
If the domain model works with type hierarchies, inspecting the property types might not be sufficient to read data into an object graph. Assume the domain model slightly modified and a Person instance created as follows:
class Address { … }
class Person {
Object address;
}
Person person = new Person();
person.address = new Address();
In this case there are two important things to consider: first, the type information of the address has to be stored when persisting the Person instance. Second, when reading the object we need to detect the type to be instantiated for the data eventually forming the Address instance.
To achieve this, Spring Data Commons provides the TypeMapper API. The interface has to be typed to the actual store's native data representation (e.g. a DBObject in MongoDB). Its core responsibility is writing type information to the store object and reading it back in turn. We also ship a configurable DefaultTypeMapper that takes a TypeAliasAccessor, a MappingContext, as well as TypeInformationMapper implementations to delegate to.
The TypeAliasAccessor is the core interface to implement how a type alias actually gets persisted to the store (through writeTypeTo(…)) and read back in (through readAliasFrom(…)). It is not responsible for interpreting the token written; it solely encapsulates the reading and writing aspect and has to be implemented in a store-specific way.
The type alias is some arbitrary token which can be mapped to a Java type in a unique way. This can be the fully-qualified Java type name or another unique hash-like token.
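For example, a type can declare a short alias through the @TypeAlias annotation from Spring Data Commons (the token "A" is just an illustration):

import org.springframework.data.annotation.TypeAlias;

// the token "A" is persisted instead of the fully-qualified class name
@TypeAlias("A")
class Address { … }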
The TypeInformationMapper is responsible for mapping the arbitrary alias to a TypeInformation instance, i.e. actually resolving the type. Spring Data Commons ships with a variety of implementations out of the box:
- SimpleTypeInformationMapper - stores the fully-qualified class name on writing and, on reading, interprets the token returned by the TypeAliasAccessor as a class name and tries to load the class.
- MappingContextTypeInformationMapper - uses a MappingContext which inspects the persisted entities for the @TypeAlias annotation. It will return the configured value if annotated, or null if no type alias information is present with the PersistentEntity (see PersistentEntity#getTypeAlias()).
- ConfigurableTypeInformationMapper - takes a Map<? extends Class<?>, String> to register a manual mapping from a type to a given alias (a sketch follows this list).
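A minimal sketch of the latter, reusing the Address type and alias from above:

import java.util.Collections;
import org.springframework.data.convert.ConfigurableTypeInformationMapper;

// manually map Address to the alias "A"
ConfigurableTypeInformationMapper mapper =
    new ConfigurableTypeInformationMapper(Collections.singletonMap(Address.class, "A"));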
These implementations can be configured as a chain on the DefaultTypeMapper. The class will then consult them in a row, using the alias or type of the first one returning a non-null value. This allows some types to be mapped to aliases via @TypeAlias whereas all others fall back on the fully-qualified class name.
However, there are a few constructors on the DefaultTypeMapper that make the setup quite easy. The one taking only a TypeAliasAccessor registers a SimpleTypeInformationMapper by default. Assume we have a TypeAliasAccessor for a Map like this:
class MapTypeAliasAccessor implements TypeAliasAccessor<Map<String, Object>> {

    // key under which the type alias is stored in the map
    public static final String TYPE_ALIAS_KEY = "_class";

    public Object readAliasFrom(Map<String, Object> source) {
        return source.get(TYPE_ALIAS_KEY);
    }

    public void writeTypeTo(Map<String, Object> sink, Object alias) {
        sink.put(TYPE_ALIAS_KEY, alias);
    }
}
We could then set up a DefaultTypeMapper instance and use it as follows:
MapTypeAliasAccessor accessor = new MapTypeAliasAccessor();
TypeMapper<Map<String, Object>> mapper = new DefaultTypeMapper<Map<String, Object>>(accessor);

Map<String, Object> store = new HashMap<String, Object>();
mapper.writeType(HashMap.class, store);

// Make sure we have the type information captured
assertThat(store.get(MapTypeAliasAccessor.TYPE_ALIAS_KEY), is(HashMap.class.getName()));

// Make sure we can obtain the type from the plain store source
assertThat(mapper.readType(store), is(ClassTypeInformation.from(HashMap.class)));
If we hand it a MappingContext and our Address is annotated with @TypeAlias("A"), the DefaultTypeMapper would work as follows:
MapTypeAliasAccessor accessor = new MapTypeAliasAccessor();
MappingContext<?, ?> context = … // obtain a MappingContext
TypeMapper<Map<String, Object>> mapper = new DefaultTypeMapper<Map<String, Object>>(accessor, context, Collections.emptyList());

Map<String, Object> store = new HashMap<String, Object>();
mapper.writeType(Address.class, store);

// Make sure the alias is written, not the class name
assertThat(store.get(MapTypeAliasAccessor.TYPE_ALIAS_KEY), is("A"));

// Make sure we discover Address to be the target type
assertThat(mapper.readType(store), is(ClassTypeInformation.from(Address.class)));
Note that storing type information for Person would still store the fully-qualified class name, as the MappingContext does not find any type alias mapping information.
Usually EntityConverter implementations will set up a DefaultTypeMapper in their constructors and hand in a MappingContext, so that type information is transparently stored as fully-qualified class names and the customization via @TypeAlias is considered out of the box. The only store-specific part is the TypeAliasAccessor implementation, which has to be adapted to the store's data structure. Have a look at MongoDB's DBObjectTypeAliasAccessor for example.
Beyond that it's good practice to expose the TypeMapper as a configurable property of the EntityConverter implementation, so that users gain full control over the type mapping setup if necessary.
We provide a set of repository interfaces that either declare a user's repository interface as a Spring Data interface or even pull in functionality that can be implemented generically.
- Repository - a plain marker interface to let the Spring Data infrastructure pick up user-defined repositories.
- CrudRepository - extends Repository and adds basic persistence methods like saving, finding and deleting entities.
- PagingAndSortingRepository - extends CrudRepository and adds methods for accessing entities page by page and sorted by given criteria (see the usage sketch after this list).
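As a small, hedged usage sketch: the Person type and the repository instance are assumed to exist, and PageRequest.of(…) and Sort.by(…) are the factory methods of newer Spring Data versions (older versions use constructors instead):

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.repository.PagingAndSortingRepository;

interface PersonRepository extends PagingAndSortingRepository<Person, Long> {}

// given a PersonRepository obtained from the Spring context, fetch the first
// page of 20 persons, sorted by lastname
Page<Person> page = repository.findAll(PageRequest.of(0, 20, Sort.by("lastname")));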
TODO
When building a store implementation for a data store we do not already support, the most interesting parts are the mapping and conversion system as well as the repository abstraction. If the store you target already supports entity mapping (like JPA, for example) you can implement the repository abstraction directly on top of it. Otherwise you need to integrate with the mapping and conversion system. The following sections describe the important abstractions and classes you'll have to take a look at and extend or implement.
As an example of an implementation of store support with Spring Data mapping, have a look at Spring Data MongoDB; for a plain repository abstraction integration, consider taking a look at Spring Data JPA.
The very core of the repository abstraction is the factory that creates repository instances. RepositoryFactorySupport requires the following methods to be implemented:
- getEntityInformation(…) - returns the EntityInformation, which encapsulates ways to determine whether an entity is new, to look up the identifier of the entity, as well as the type of the id. PersistentEntityInformation is the class you'll probably want to extend. In order to do so you need a PersistentEntity. You get that by requesting it from a MappingContext. The mapping context will ensure that the PersistentEntity is properly initialized (e.g. all PersistentProperty instances are found) and that the information gets memoized and not created over and over again. You can simply instantiate a MappingContext in your repository factory.
- getRepositoryBaseClass(…) - returns the type of the backing instance of the repositories, which usually implements CRUD methods etc. This is needed to inspect the user's repository interface for query methods before actually creating the instance.
- getTargetRepository(…) - returns an instance of the type returned by getRepositoryBaseClass(…). This instance will usually implement one of the Spring Data repository interfaces (a skeleton follows this list).
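A rough skeleton of such a factory; MyStoreRepositoryFactory and SimpleMyStoreRepository are hypothetical names, the lookup logic is elided, and the exact method signatures vary slightly between Spring Data versions:

import org.springframework.data.repository.core.EntityInformation;
import org.springframework.data.repository.core.RepositoryInformation;
import org.springframework.data.repository.core.RepositoryMetadata;
import org.springframework.data.repository.core.support.RepositoryFactorySupport;

class MyStoreRepositoryFactory extends RepositoryFactorySupport {

    @Override
    public <T, ID> EntityInformation<T, ID> getEntityInformation(Class<T> domainClass) {
        // request the PersistentEntity from a MappingContext and wrap it in a
        // PersistentEntityInformation (elided in this sketch)
        throw new UnsupportedOperationException("elided");
    }

    @Override
    protected Object getTargetRepository(RepositoryInformation information) {
        // create the store-specific base implementation backing the repository
        return new SimpleMyStoreRepository(getEntityInformation(information.getDomainType()));
    }

    @Override
    protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata) {
        return SimpleMyStoreRepository.class;
    }
}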
PersistentEntityInformation vs PersistentEntity
The original idea was to have a separate interface for use cases that need only very little information about an entity, so those could use the simpler PersistentEntityInformation. But currently it looks like this doesn't really work. So go ahead and mainly use PersistentEntity, and when you need isNew() from PersistentEntityInformation, just create one from your PersistentEntity. There are considerations to merge these two classes in the future.
There seem to be two kinds of stores as far as the repository implementation is concerned:
- Those that read and write based on one class. For example, Mongo reads and writes DBObjects.
- Those where reading and writing are asymmetrical or based on a mapping of the persistence layer. Examples of this are JDBC, which needs PreparedStatements with parameters for writing and produces ResultSets for reading, or JPA, which takes and produces POJOs directly.
For the first kind you want to use the mapping and conversion system. For the second you will probably implement the necessary methods in the repository directly.
When creating domain model instances in your repositories, you want to use a ClassGeneratingEntityInstantiator. It creates instances and can handle constructor parameters that set properties (if the names and types of the constructor parameters and the properties match), so the users of your repository don't have to provide parameterless constructors in their domain classes.
For manipulating domain model instances, i.e. setting and getting property values, use an accessor that you can get from a PersistentEntity:
Object instance = ....; // the domain model instance
PersistentEntity entity = ....; // the PersistentEntity for the class of the domain model instance
PersistentProperty property = entity.getPersistentProperty("name"); // look up the name property
entity.getPropertyAccessor(instance).setProperty(property, someValue); // set the name property to someValue
The PropertyAccessor will take care to use getter/setter/field access as required.
TODO
Spring namespace support is considered a deprecated technology, therefore newer features no longer ship with support for XML-based configuration.
TODO