> 6) ================================
>
>> https://anarthal.github.io/mysql/mysql/ref/boost__mysql__value/get_std_optional.html
>> https://anarthal.github.io/mysql/mysql/ref/boost__mysql__value/get_optional.html
>
> Sounds also unusual to have two member functions for the different
> versions of optional. I suggest using the boost version until we
> transition to C++17.
I probably won't have the time to do a proper review, but had a quick
look at the documentation and this interface surprised me a bit. I was
expecting optional (whether std or boost) to be used to map nullable
values, but it seems that's not the case. It seems this interface will
return an empty optional if using a wrong type (like trying to get an
int from a string column), but the doc is silent on what it will do if
trying to get an int from a nullable int which is actually NULL (my
understanding is that it would return an empty optional). It bothers me
a bit that these two use cases are handled the same way, because one of
them is a valid use, and the other is a mismatch between the program and
the database schema that I would like to diagnose.
I understand this issue is non-trivial, since when the value is
retrieved from the row object, the information about the type of the
data in the DB (and thus its nullability) is lost. However, it seems
odd. It may be of interest to store whether the field is null in an
independent way from its type, instead of relying on a single null type.
Or maybe I just missed something obvious.
Regards,
Julien
That's right, value::get_optional<uint64_t>() will return an empty optional
either if your value is not an int (e.g. a string) or if it is an actual NULL value.
You can distinguish both using value::is_null(). For the use case you are
proposing, I would suggest this kind of code:
value v = /* get your value */;
if (v.is_null())
{
    // handle NULL case
}
else
{
    uint64_t my_int = v.get<uint64_t>(); // This will throw on type mismatch
}
Of course, if we end up implementing reading rows into compile-time known
data structures, we can do a better job here.
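To illustrate the idea, here is a purely hypothetical sketch; this interface does
not exist in the library, and the struct plus the read_one overload taking a user
type are invented only to show how nullability could be encoded in the destination type:

#include <optional>
#include <string>

// Hypothetical: members whose C++ type is std::optional would accept NULL,
// while a NULL arriving in a plain member would be an error worth diagnosing.
struct employee
{
    std::string first_name;        // would map to a NOT NULL column
    std::optional<double> salary;  // would map to a nullable column
};

// employee e;
// r.read_one(e); // could report NULL-in-non-optional separately from type mismatch

With the destination type known at compile time, a NULL arriving in a non-optional
member could be diagnosed as a schema mismatch instead of being conflated with a
legitimate NULL.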
>
> I understand this issue is non-trivial, since when the value is
> retrieved from the row object, the information about the type of the
> data in the DB (and thus its nullability) is lost. However, it seems
> odd. It may be of interest to store whether the field is null in a
> independent way from its type, instead of relying on a single null type.
In addition to value::is_null, you can also access field metadata
using resultset.fields()[index]. This returns a field_metadata object
https://anarthal.github.io/mysql/mysql/ref/boost__mysql__field_metadata.html
which contains an is_not_null() function that will return true if you
created your field with a NOT NULL clause.
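As a small sketch of how these two pieces can be combined (the resultset variable r,
the column index and the surrounding setup are assumptions for illustration, not
code taken from the documentation):

const boost::mysql::field_metadata& meta = r.fields()[0];
boost::mysql::value v = /* value for column 0 of some row */;

if (meta.is_not_null())
{
    // Column declared NOT NULL: a NULL here can only mean a schema/program
    // mismatch, so read with get<>() and let it throw on mismatch.
    std::uint64_t id = v.get<std::uint64_t>();
}
else
{
    // Nullable column: an empty optional is a legitimate NULL.
    std::optional<std::uint64_t> maybe_id = v.get_std_optional<std::uint64_t>();
}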
Sorry for not being clear (and for sending an example that doesn't fit directly).
That code is meant to be library code - more or less a collection of
basic MySQL concepts that can be reused - and this sample transformer lets you
treat empty strings as NULL in MySQL.
The implementation is a dummy - only there to give a feeling for the data flow.
My adaptation is used with SQLite; the index is the parameter index
that then maps to the SQLite bind functions or, as in this case, checks
whether the value is NULL and returns "".
There are several other "typical" helpers for adaptation problems,
and the transformer is also used by all fetch routines.
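To make the transformer idea a bit more concrete, here is a minimal, hypothetical
sketch of a NullAsEmptyString-style transformer; the to_sql/from_sql hooks are
invented for illustration and do not correspond to any real interface in either library:

#include <optional>
#include <string>

// Hypothetical transformer: binds an empty std::string as SQL NULL and reads
// SQL NULL back as an empty std::string.
struct NullAsEmptyString
{
    // Input direction (e.g. when binding a statement parameter)
    static std::optional<std::string> to_sql(const std::string& s)
    {
        if (s.empty())
            return std::nullopt; // treat the empty string as NULL
        return s;
    }

    // Output direction (e.g. when reading a fetched column)
    static std::string from_sql(const std::optional<std::string>& v)
    {
        return v.value_or(std::string());
    }
};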
> Let's also take a look at actually using the statements:
>
> My_select my_select( db_connection, "select a, b, c from test
> where d == ?1" );
>
> {
> // fetch into ref tuple
> int some_int{};
> float some_float{};
> std::string some_string;
> my_select( { some_int, some_float, some_string }, { 123 } );
> }
>
> How is that different from the following snippet?
>
> resultset r = conn.query("select a, b, c from test where d == ?1" );
> tuple<int, float, string> row;
> r.read_one(row);
My goal was to keep the SQL string combined with the Prepared_fetch_1
instantiation, but string use in templates is a little limited.
I also map input types for inserts or WHERE clauses - that's also possible
with split tuples for the input/output data, but then it's even more
separated from the statement (which is tied to the input/output types).
Knowing as much as possible beforehand may allow deeper optimization;
for example, the my_select instance can use prepared statements by default
(and this is connection oriented with SQLite).
the "readers" are just variants (that also allow to beeing const - see
const auto tuple):
// fetch into ref tuple
my_select( { ein_int, ein_float, ein_string }, { 123 } );
// return value tuple
const auto [ein_int2, ein_float2, ein_string2] = my_select( { 123 } );
// fetch into class/struct...
Result3 result;
my_select( result, { 123 } );
The real optimization party starts with multi-row fetches:
// multi row fetch
using My_select = Prepared_fetch<std::tuple<int>,
                                 std::tuple<int, float, NullAsEmptyString>>;
My_select my_select( db_connection, "select a, b, c from test where d == ?1" );

std::vector<Result2> result;
my_select.fetch_copy( std::back_inserter( result ), 34 );
my_select.fetch_copy( result, 34 );

auto fetch_func = []( const int& /*ein_int_*/, const float& /*ein_float_*/,
                      const std::string& /*ein_string_*/ ) {};
my_select.fetch_func( fetch_func, 34 );

auto fetch_func_cancel = []( const int& /*ein_int_*/, const float& /*ein_float_*/,
                             const std::string& /*ein_string_*/ ) { return false; };
my_select.fetch_func_with_cancel( fetch_func_cancel, 34 );
Because I know at instantiation time which parts are fixed, the variant sizes,
etc., I can further reduce the memory overhead - you could directly combine
the protocol result parsing with the result-set content.
It's not implemented in my SQLite wrapper so far, but the interface allows
such optimization (if the backend is deep enough - like yours).
That means fetch_copy can be prepared at compile time for exactly the data
involved, which would allow zero-copy or reduced-copy concepts.
> >
> >
> > the Input- and Output Transformers help to work with basic types
> > and also with SQL/MySQL special types like Null-string etc. - its not
> > possible to map MySQL types always 1:1 to C++ and back, sometimes you
> > want to
> > behave the transformation different
> >
> > the Transformers aren't visible when used with basic types
> >
> >
> > its just an example to promote the Idea: https://pastebin.com/raw/vepbTAKL
> >
> >
> > the best combination would be some sort of fluent SQL interface like:
> > https://github.com/rbock/sqlpp11
>
> This library is supposed to be a protocol driver library, so it
> provides primitives
> close to the MySQL protocol. sqlpp11 is great, but it's a higher level library.
> I don't think it makes sense trying to incorporate this kind of features here.
> It would make more sense for a higher level library like sqlpp11 to build
> on top of Boost.MySQL, instead.
Boost only provides low-level building blocks for genuinely low-level concepts
(smart pointers, maps, etc.), but most other libraries always introduce
very high-level concepts.