
Predefined Concrete Mapper Class Templates

direct_mapper<T>

direct_mapper<T> is an implementation of abstract_mapper<T> that represents T by calling the backend library's basic conversions for T, and nothing else: it does not add any data transformation of its own.

Each backend library provides a fixed set of basic conversions, which convert in both directions between C++ data types and the DBMS's column-oriented formats. They do not use mappers, because they are at a level below mappers. They are only available for a few C++ types, and it is a different few for each backend. The details are here and here, but one thing is worth saying now:

direct_mapper<T> does not suffer from any of the disadvantages, described below, of the other mapper class templates for numeric types, so it is the mapper of choice wherever it is available, i.e. on any database whose backend library provides basic conversions for T.

numeric_cast_mapper<T, OtherMapper>

numeric_cast_mapper<T, OtherMapper> is an implementation of abstract_mapper<T> that represents T by:

- calling boost::numeric_cast to convert between T and the type that OtherMapper maps, and

- delegating to OtherMapper to represent that type.

E.g. the following mapper class is one possible implementation of abstract_mapper<uint16_t> [16] (assuming that direct_mapper<int32_t> is available [17] ):

numeric_cast_mapper<uint16_t, direct_mapper<int32_t>>

To convert a uint16_t to a column, this mapper calls boost::numeric_cast<int32_t>() and then delegates to direct_mapper<int32_t>. To convert a column to uint16_t, it delegates to direct_mapper<int32_t> and then calls boost::numeric_cast<uint16_t>().

When boost::numeric_cast fails, it throws boost::bad_numeric_cast, which quince does not catch.

An int32_t can absorb any value of a uint16_t, so the cast that this mapper makes during outbound conversions always succeeds. Inbound conversions, on the other hand, can go less well. If a certain record had a uint16_t field that was near its maximum value, and you used update() with a server-side expression to triple it, then the next attempt to read it back would throw boost::bad_numeric_cast.

I'm not sure whether to count that as a disadvantage of numeric_cast_mapper. Perhaps throwing boost::bad_numeric_cast is as good a way as any to respond to arithmetic overflow.

The clear disadvantage of numeric_cast_mapper is that, in all the cases where it is useful, it has a cost in space; e.g. using 32 bits to hold 16 bits' worth of information.

reinterpret_cast_mapper<T, OtherMapper>

reinterpret_cast_mapper<T, OtherMapper> is an implementation of abstract_mapper<T> that represents T by:

- reinterpreting T's bit pattern as the type that OtherMapper maps, and

- delegating to OtherMapper to represent that type.

E.g. here is another possible implementation of abstract_mapper<uint16_t> (this time assuming that direct_mapper<int16_t> is available [18] ):

reinterpret_cast_mapper<uint16_t, direct_mapper<int16_t>>

To convert a uint16_t to a column, this mapper invokes reinterpret_cast<int16_t>() and then delegates to direct_mapper<int16_t>. To convert a column to uint16_t, it delegates to direct_mapper<int16_t> and then calls reinterpret_cast<uint16_t>().

reinterpret_cast_mapper doesn't waste any space. It has the following disadvantages instead:

- Server-side comparisons and sorts can give wrong results, because values in the upper half of T's range are stored as negative numbers.

- Server-side arithmetic can produce wrong results.

- Negative stored values look wrong when you inspect the data on the server.

reinterpret_cast_mapper<T, OtherMapper, Offset>

This variant of reinterpret_cast_mapper takes a constant offset of type T as a third template parameter. It behaves like the other form of reinterpret_cast_mapper except that:

- during outbound conversions, it adds Offset (with unsigned wraparound) before delegating to OtherMapper, and

- during inbound conversions, it subtracts Offset after delegating to OtherMapper.

E.g. here is another possible implementation of abstract_mapper<uint16_t> (again assuming that direct_mapper<int16_t> is available):

reinterpret_cast_mapper<uint16_t, direct_mapper<int16_t>, 0x8000>

By choosing the offset carefully, we solve the problem of comparisons and sorts.

On the other hand we still have the problem of server-side arithmetic producing wrong results. And the problem of server-side values looking wrong just got extended to all values, not just negative ones.



[16] It happens to be quince_postgresql's default implementation.

[17] It is, on quince_postgresql.

[18] It is, on quince_postgresql.

