I've filled out odbc_parameters::bind() except for blob and is_null().
In my experience with SQLBindParameter() in ODBC, binding a null requires a datatype. Can the internals of the library be changed to allow the datatype for null to be specified?
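To illustrate the point, here is a minimal sketch (all names are hypothetical, not part of the proposal or of ODBC) of a parameter representation in which a null still carries its SQL type, mirroring how SQLBindParameter() requires a ParameterType even when the length/indicator is SQL_NULL_DATA:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Hypothetical sketch: a null parameter still carries its SQL type, so the
// library can hand a concrete ParameterType to the driver even for nulls.
enum class sql_type { integer, varchar, blob };

struct parameter {
    sql_type type;                     // always known, even when the value is null
    std::optional<std::string> value;  // nullopt plays the role of SQL NULL
};

inline parameter make_null(sql_type t) { return {t, std::nullopt}; }
inline parameter make_value(sql_type t, std::string v) { return {t, std::move(v)}; }

inline bool is_null(const parameter& p) { return !p.value.has_value(); }
```

A design along these lines would let bind() pass the datatype through to SQLBindParameter() regardless of whether the value is null.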
Thanks,
Ben
--
You received this message because you are subscribed to the Google Groups "SG11 - Databases" group.
Ok, if more libs need this type, we should add it.
The most annoying case I remember is: QVariant(QVariant::Int) == QVariant(0). The first is a QVariant holding a null of type Int, and the latter is a QVariant holding an integer of value zero. Both compare equal, and they are not able to change this misbehavior, because too much code relies on it.
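For contrast, a small sketch (not QVariant code; it just uses the standard library) showing how a type like std::optional avoids that equality surprise, because a disengaged "null int" never compares equal to a held zero:

```cpp
#include <cassert>
#include <optional>

// With std::optional, "null of type int" and "int with value zero" are
// distinct states, avoiding the QVariant(QVariant::Int) == QVariant(0) trap.
inline bool null_equals_zero() {
    std::optional<int> null_int;   // disengaged: holds no value
    std::optional<int> zero = 0;   // engaged: holds the value 0
    return null_int == zero;       // false: disengaged != engaged
}
```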
I was worried I was giving up too easily and so did a search this morning: http://msdn.microsoft.com/en-us/library/ms709448(v=vs.85).aspx
I can't say I'm happy to be proven 'right' in this case.
Regards,
Ben
Thanks Johann, will do.
Will you add me as a user to your github project (my username is BenHanson), or should I branch?
Regards,
Ben
Hi Johann,
I have been reading your document N3886 together with your sample implementation on GitHub and comparing it with ODBC, which I have worked on for 10 years.
Please note that the comments below are a bit revised from the ones I emailed you.
Anyway, these are my revised comments:
I hope the above helps.
Kind regards,
class statement {
    Connection conn_;
    std::string sql_;
public:
    template<class _Bean, typename... _Members>
    void execute_into(std::initializer_list<parm> parms, _Bean& bean, _Members... mem) {...}
};
struct Bean {
int cod;
std::string nm;
};
Bean b;
statement X(conn1,"select cod, nm from tb where cod = ?");
X.execute_into({2}, b, &Bean::cod, &Bean::nm);
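To make the pointer-to-member idea concrete, here is a toy sketch of the binding mechanics (no real connection or SQL; a tuple stands in for a fetched row, and all names outside Bean are illustrative only):

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <utility>

struct Bean {
    int cod;
    std::string nm;
};

// Assign the I-th "column" of the row to the I-th member pointer, pairwise.
template <class Row, class B, class... Members, std::size_t... I>
void fill_impl(const Row& row, B& bean, std::index_sequence<I...>, Members... mem) {
    ((bean.*mem = std::get<I>(row)), ...);
}

// What execute_into() would do with each fetched row, minus the database.
template <class Row, class B, class... Members>
void fill_from_row(const Row& row, B& bean, Members... mem) {
    fill_impl(row, bean, std::index_sequence_for<Members...>{}, mem...);
}
```

Usage would look like: fill_from_row(std::make_tuple(2, std::string("widget")), b, &Bean::cod, &Bean::nm); — each member pointer receives the column at its position.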
class map_element {
public:
template<typename _Mem>
map_element(std::string field_name, _Mem mem_foo) {...}
template<typename _Mem>
map_element(int field_index, _Mem mem_foo) {...}
};
and pass a set of map_element objects to the execute_into function:
map_element cod_mapped("cod",&Bean::cod);
map_element nm_mapped(2,&Bean::nm);
Bean b;
X.execute_into({2},b,cod_mapped,nm_mapped);
enum null_handle { // this should be done by policies instead of enum, but this way is easier to understand
DEFAULT_WHEN_NULL,
ACCEPT_NULL,
THROW_EXCEPTION_WHEN_NULL
};
enum cast_handle {
TRY_BEST_CAST,
THROW_EXCEPTION_WHEN_DIFFERENT
};
template<typename _Mem>
class map_element {
public:
    map_element(std::string field_name, _Mem mem_foo, null_handle null_h, cast_handle cast_h) {...}
};
struct Bean {
int cod;
nullable<int> index;
std::string nm;
};
auto cod_mapped = map_element("cod", &Bean::cod, DEFAULT_WHEN_NULL, THROW_EXCEPTION_WHEN_DIFFERENT);
auto idx_mapped = map_element("index", &Bean::index, ACCEPT_NULL, THROW_EXCEPTION_WHEN_DIFFERENT);
auto nm_mapped = map_element(3, &Bean::nm, DEFAULT_WHEN_NULL, TRY_BEST_CAST);
Bean b;
X.execute_into({2},b,cod_mapped,idx_mapped, nm_mapped);
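A sketch of what applying the null_handle policy to a fetched, possibly-null column might look like (illustrative only; std::optional stands in for the driver's null indicator, and ACCEPT_NULL is modeled as only valid for nullable members):

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>

enum null_handle { DEFAULT_WHEN_NULL, ACCEPT_NULL, THROW_EXCEPTION_WHEN_NULL };

// What the library might do when materializing a plain int member from a
// possibly-null column. ACCEPT_NULL cannot apply here because a plain int
// has no null state; in the real design it would target nullable<T> members.
inline int apply_null_policy(std::optional<int> column, null_handle policy) {
    if (column) return *column;
    switch (policy) {
        case DEFAULT_WHEN_NULL: return 0;  // substitute a value-initialized default
        case THROW_EXCEPTION_WHEN_NULL: throw std::runtime_error("null column");
        default: throw std::logic_error("ACCEPT_NULL needs a nullable member");
    }
}
```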
template<class _Bean>
class result {
    _Bean current_; // we maintain the last copy of the bean here
public:
    template<typename... _Maps>
    result(_Maps... maps);
    iterator begin();   // on first use, moves to the first row; subsequent '++' calls go to the next
    iterator current(); // or we can get a current iterator, pointing to the current_ bean
    iterator end();
    // move/next/first/last and other common result functions
};
class statement {
    template<typename _Bean, typename... _Maps>
    result<_Bean> execute(std::initializer_list<parm> parms, _Maps... maps);
};
now we use:
result<Bean> rs = X.execute({2}, cod_mapped, idx_mapped, nm_mapped); // the Bean type can be derived from the maps
std::vector<Bean> v;
std::copy(rs.current(), rs.end(), std::back_inserter(v));
On 21 May 2014 21:09, Johann Anhofer <johann....@gmail.com> wrote:
>
> As described in N3886, iterators are a dangerous thing in combination with
> result sets.
> One may write: std::sort(rs.begin(), rs.end()); or std::unique(rs.begin(),
> rs.end()), which is perfectly legal for normal containers, but result sets
> are not containers, so I don't like iterators for them.
Iterators are not reserved to containers only.
Think of a result set as a view of your data.
A view can have iterators.
This is a very well established concept in C++ (Boost.Range, for example).
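As a sketch of the view idea (a stand-in example, not proposal code): if the view's iterator advertises only std::input_iterator_tag, then std::sort(rs.begin(), rs.end()) will not even compile, which addresses the N3886 concern without banning iterators altogether.

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <vector>

// A minimal single-pass view; a vector plays the role of a database cursor.
// Because the iterator category is input_iterator_tag, random-access
// algorithms such as std::sort are rejected at compile time.
class int_result_view {
    const std::vector<int>* src_;
public:
    explicit int_result_view(const std::vector<int>& src) : src_(&src) {}

    class iterator {
        const std::vector<int>* src_ = nullptr;
        std::size_t pos_ = 0;
    public:
        using iterator_category = std::input_iterator_tag;
        using value_type = int;
        using difference_type = std::ptrdiff_t;
        using pointer = const int*;
        using reference = const int&;

        iterator() = default;
        iterator(const std::vector<int>* src, std::size_t pos) : src_(src), pos_(pos) {}
        reference operator*() const { return (*src_)[pos_]; }
        iterator& operator++() { ++pos_; return *this; }
        iterator operator++(int) { iterator tmp = *this; ++pos_; return tmp; }
        bool operator==(const iterator& o) const { return pos_ == o.pos_; }
        bool operator!=(const iterator& o) const { return !(*this == o); }
    };

    iterator begin() const { return iterator(src_, 0); }
    iterator end() const { return iterator(src_, src_->size()); }
};
```

Single-pass semantics still support the useful cases, e.g. std::copy into a std::vector or range-for over the rows.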
IMHO: it looks rather complicated to use,
but if you use a code generator it may be ok.
As described in N3886, iterators are a dangerous thing in combination with result sets.
Yes, it may be complicated, mostly when you involve pointers to members, but it commonly gets faster and statically typed. This way you don't even need the "any" type; if you have a map you can "search" for it, and the low-level db functions can be optimized to read from the net stream directly into the bean members, saving some copies (I think).
namespace detail
{
template <typename Statement, typename Class, class Reflection>
class insert_statement_iterator_proxy
{
public:
explicit insert_statement_iterator_proxy(Statement & statement):
statement(&statement)
{
}
Class & operator =(Class & dest)
{
assert(statement != nullptr);
insert<Reflection>(*statement, dest);
return dest;
}
private:
Statement * statement;
};
} // namespace detail
template <typename Statement, typename Class, class Reflection>
class insert_statement_iterator: public boost::iterator_facade<
insert_statement_iterator<Statement, Class, Reflection>,
Class,
std::output_iterator_tag,
detail::insert_statement_iterator_proxy<Statement, Class, Reflection>>
{
public:
insert_statement_iterator() : statement(nullptr) {}
explicit insert_statement_iterator(Statement & statement_ref):
statement(&statement_ref)
{
assert(statement != nullptr);
}
private:
friend class boost::iterator_core_access;
typename std::tuple_element<0, Statement>::type & current_statement() const
{
assert(statement != nullptr);
return std::get<0>(*statement);
}
detail::insert_statement_iterator_proxy<Statement, Class, Reflection>
/*reference*/ dereference() const {
assert(statement != nullptr);
return detail::insert_statement_iterator_proxy<Statement, Class, Reflection>(*statement);
}
bool equal(insert_statement_iterator const & other) const
{
return statement == other.statement;
}
// all the action is in the proxy assignment
void increment() {}
Statement * statement;
};
I'm the author of N3886 and I want to know if my design is basically accepted and if I should invest more time to push it any further. It turns out that in the next four months I've got enough free time, but I want some ideas and feedback from this group, to steer it in the right direction.
Johann
I'll post more specifics on this approach, but just wanted to reply to your suggestion, which I heartily endorse as a way to provide member access.
In regards to the result set, I'm not certain that it's necessary to parameterize it on the bean. Assuming there are different result set types for different databases (at least the underlying implementations must differ to account for wrapping the different APIs), it gets more complicated to have a template in each result set. The way I've addressed this is to have overloads for getting from the result set; when you want to call the setter/mutator for the bean, you know its type and can specify a temporary of that type, which will get picked up by overload resolution.
I think reflection is a key component to simplify type safety.
A standard database library should look at the best examples from other languages, including Java's Hibernate, the .NET Entity Framework, and Ruby's Active Record, all of which use reflection.
I have implemented the outlines of a data mapping library ...
The actual usage would look something like this:
std::vector<la_carte_example::system_requirements> rqmts;
rqmts.push_back({10,100});
sqlite::database database("rqmts.db");
auto statement = std::make_tuple(
prepare(
database,
insert_into_system_requirements_query()));
la_carte::insert<system_requirements_reflection>(statement, rqmts.begin(), rqmts.end());
Note that there's a lot more going on here, but suffice it to say the tuples hold the type lists, and different tuples also form a hierarchy of statements to allow for hierarchical object networks.
I have actual code that I plan on sharing soon as well.
I think reflection is a key component to simplify type safety.
Indeed, I don't know if you are aware, but we proposed N3951/N4027 and N3984; you can use attributes and reflected types to help make the maps.
struct X { int i; std::string s; };
I think reflection is a key component to simplify type safety. Indeed, I don't know if you are aware, but we proposed N3951/N4027 and N3984; you can use attributes and reflected types to help make the maps.
select<mapping>(sql_statement, data_obj)
My dream is SQL without a result set: if I could just pick a scalar, fetch a single bean, fill an array, or apply a function to all elements, I wouldn't even need the result set:
class Bean {
    const std::string& get_description() const {...}
    void set_description(std::string desc) {...}
public:
    [[db_attrib("cod", DEFAULT_WHEN_NULL)]] int cod;
    [[db_attrib("name", DEFAULT_WHEN_NULL)]] std::string name;
    [[db_attrib("description", DEFAULT_WHEN_NULL), get_description, set_description]];
};
I'm the author of N3886 and I want to know if my design is basically accepted and if I should invest more time to push it any further. It turns out that in the next four months I've got enough free time, but I want some ideas and feedback from this group, to steer it in the right direction.
Johann
I'm not suggesting that result set is tied to the type.
There should be (at least) four basic CRUD operations, which can be defined as free function templates, parameterized on data source/sink (e.g., SQL database), user domain type (i.e., bean), and a mapping. With type deduction, it is possible to eliminate two explicit template parameters if the two are passed by reference, such as:
select<mapping>(sql_statement, data_obj)
Mapping is orthogonal to the statically defined data source ("statement") and to the bean type (user domain type). The data source may expose a variety of type-specific functions (e.g., sqlite3_bind_int(), sqlite3_column_int(), and sqlite3_column_text16()) some of which may not map to standard types (e.g., sqlite3_column_text16) and there may also be gaps (e.g., unsigned long long). Necessarily at least some aspect of the result set's type is based on the data source's type collection.
select<mapping>(sql_statement, data_obj);
select<mapping>(sql_statement, data_obj2);
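A minimal sketch of that free-function shape (all names here are hypothetical; a std::map of strings stands in for a row fetched from a statement, and only the mapping is an explicit template parameter while the source and target are deduced):

```cpp
#include <cassert>
#include <map>
#include <string>

struct person { int id = 0; std::string name; };

// The mapping knows how to move columns from a row-like source into the bean.
struct person_mapping {
    template <class Row>
    static void read(const Row& row, person& out) {
        out.id = std::stoi(row.at("id"));
        out.name = row.at("name");
    }
};

// select<mapping>(source, obj): mapping explicit, source and object deduced.
// Named select_one here to avoid clashing with POSIX select(); it reads a
// single row for brevity, where real code would loop over the result.
template <class Mapping, class Source, class Object>
void select_one(const Source& source, Object& obj) {
    Mapping::read(source, obj);
}
```

The same source can then be read into different object types simply by supplying a different mapping, which matches the orthogonality argument above.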
Annotations are problematic if the text names are specific column names because a change in the column names could mean recompilation.
I'm not suggesting that result set is tied to the type.
There should be (at least) four basic CRUD operations, which can be defined as free function templates, parameterized on data source/sink (e.g., SQL database), user domain type (i.e., bean), and a mapping. With type deduction, it is possible to eliminate two explicit template parameters if the two are passed by reference, such as:
select<mapping>(sql_statement, data_obj)
Mapping is orthogonal to the statically defined data source ("statement") and to the bean type (user domain type). The data source may expose a variety of type-specific functions (e.g., sqlite3_bind_int(), sqlite3_column_int(), and sqlite3_column_text16()), some of which may not map to standard types (e.g., sqlite3_column_text16()), and there may also be gaps (e.g., unsigned long long). Necessarily at least some aspect of the result set's type is based on the data source's type collection.
In this case, do you want to use the same sql_statement with different types of data_obj?
select<mapping>(sql_statement, data_obj);
select<mapping>(sql_statement, data_obj2);
And does sql_statement need to be static? What if I have dynamic SQL?
Annotations are problematic if the text names are specific column names because a change in the column names could mean recompilation.
Do you want to generate beans based on the table definition at compile time?
Johann and William,
I think standardizing just the SQL interface is already a big task.
A similar effort can be done for the other types of persistence (document stores like Amazon S3, key-value pairs, etc.). After that it might be possible to come up with a high-level, reduced-functionality interface that handles all data sources.
It would be easy to get started for new apps and still drop down to specific interface later on, when the app grows big enough.
On Wednesday, June 25, 2014 2:24:25 PM UTC-4, Csaba Csoma wrote:
Johann and William,
I think standardizing just the SQL interface is already a big task.
Agreed. The problem with many attempts at standardizing is that "perfect is the enemy of the good." And the good that is out there isn't nearly good enough compared to the excellent examples of the STL and many of the Boost libraries. The database libraries tend to be highly locked into vendor specifics, add unnecessary runtime overhead, and are just not as general as what we've become accustomed to. This isn't just databases; there still isn't a standard socket library.
A similar effort can be done for the other types of persistence (document stores like Amazon S3, key-value pairs, etc.). After that it might be possible to come up with a high-level, reduced-functionality interface that handles all data sources.
Those are probably an easier place to start because they tend to be NoSQL-ish and you can define a lot of operations without worrying about the messiness of query dialects.
It would be easy to get started for new apps and still drop down to specific interface later on, when the app grows big enough.
I think a reasonable proof-of-concept, Boost-quality library including SQL, one that is dialect-free and sufficient for people to use in production code, would take 3-6 person-years to develop. I've written libraries in 4 months of labor that were good enough for limited production use and compelling enough over alternative libraries (e.g., the SQLite C API, PostgreSQL, and SOCI). But it's a lot of work, and I'm not sure who is willing to pay for it. So we're left with this as a hobby/part-time effort in most cases.