CREATE LINKED TABLE IF NOT EXISTS TLKT.EMPLOYEE('org.postgresql.Driver', 'jdbc:postgresql://127.0.0.1/hrla', 'postgres', '***','NOMINA.EMPLOYEE');
TRUNCATE TABLE TLKT.EMPLOYEE;
commit;
Error: Cannot truncate "TLKT.EMPLOYEE"; SQL statement:
TRUNCATE TABLE TLKT.EMPLOYEE [90106-127]
SQLState: 90106
ErrorCode: 90106
Error occured in:
TRUNCATE TABLE TLKT.EMPLOYEE
TRUNCATE is only supported for 'regular' tables. For linked tables,
you need to use DELETE FROM ...
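For the record, the workaround for the statement above is simply:

```sql
-- TRUNCATE fails on linked tables; DELETE works (but deletes row by row).
DELETE FROM TLKT.EMPLOYEE;
COMMIT;
```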
I will document that, and I will add a feature request.
Regards,
Thomas

I'm now investigating linked tables for my use case, and I had thought
about such a "pass-through" optimization too, for when the query has no
dependencies on H2 database objects other than the linked table itself,
and the table has no per-row triggers.
Maybe I'll try to implement it a little later.
> If H2, instead of failing in advance when it doesn't understand a
> statement (in pass-through mode), just let the other database handle
> the statement and acted according to the remote database's exceptions
> or results, many things would work. If some operation doesn't fulfill
> the conditions H2 needs to handle such a situation or its results, in
> the end it will fail the same way it fails now, so there is no loss.
How will H2 decide which linked database to send the query to if it
can't even parse it? And why should it parse the query at all if the
query is targeted at another database? I think the right solution is to
use stored procedures returning a ResultSet. But to make this more
convenient, I think it is possible to add to H2 a new database object,
DBLINK, with syntax like
CREATE DBLINK dblink_to_oracle (driver, url, user, password);
and a function like
QUERY_DBLINK( dblink_to_oracle,
    SELECT xxx FROM some_oracle_table
    WHERE yyy = ?
    CONNECT BY PRIOR a = b
)
which can be used in more complex queries as a function table. The query
inside this function would not be parsed; only the "?" placeholders
would need to be found, to set parameters and execute the query over
this dblink. I think you want something like this. But this will not be
a one-day job, and I'm not sure whether Thomas will approve this feature.
> linked tables creation such as "pass-through"
Why don't you connect to the target database directly?
I have some data integration tasks too, and I agree that creating a
linked table for every simple thing is not very convenient, especially
when parameterized queries are needed.
Direct access to external data sources would be much better.
Also I think linked tables should be built on top of a dblink, so that
when something changes, the parameters need to be changed only on the
dblink and not on every linked table.
H2 could become a mega-cool data integration solution then :)
> DBLINK could implement a getConnection() method
This sounds like DriverManager. You can register your own custom JDBC
driver that returns the 'right' connection.
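A minimal sketch of that idea. The `jdbc:routing:` prefix, the class name, and the delegation rule are all made up for illustration: the driver recognizes its own URLs and hands the remainder to whatever real driver accepts it.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Logger;

// A custom JDBC driver registered with DriverManager that returns
// the "right" connection for URLs it recognizes.
public class RoutingDriver implements Driver {

    static final String PREFIX = "jdbc:routing:";

    static {
        try {
            // Make DriverManager aware of this driver once the class loads.
            DriverManager.registerDriver(new RoutingDriver());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public boolean acceptsURL(String url) {
        return url != null && url.startsWith(PREFIX);
    }

    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // JDBC contract: let other registered drivers try
        }
        // Strip our prefix and delegate to the driver for the real URL.
        String target = "jdbc:" + url.substring(PREFIX.length());
        return DriverManager.getConnection(target, info);
    }

    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }

    public int getMajorVersion() { return 1; }

    public int getMinorVersion() { return 0; }

    public boolean jdbcCompliant() { return false; }

    public Logger getParentLogger() { return Logger.getGlobal(); }
}
```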
> Not all databases support that setup via connection URL.
> Ex. SET ISOLATION LEVEL , COLLATING , CLIENT ENCODING, etc.
So far nobody requested such features. Could you provide more details
about what you need?
> Direct access to external data sources would be much better.
Do you mean by setting a connection _object_ versus declarative
(database URL and so on like now)?
> change parameters only for dblink
> but not to all linked tables if something changed.
I will add a feature request for "Support CREATE DATABASE LINK".
Regards,
Thomas
Yes, but not only. I also mean an ability to query data from a
connection and execute arbitrary commands against it without creating
any other H2 objects (like linked tables), preferably with support for
parameters - linked tables are not suited for this.
> Yes, but not only. I also mean an ability to query data from a
> connection and execute arbitrary commands against it without creating
> any other H2 objects (like linked tables), preferably with support for
> parameters - linked tables are not suited for this.
What's the difference to using JDBC yourself?
Regards,
Thomas
Of course you can in Java, but not in the middle of an SQL script.
I'm just thinking of the possibility of using Csv and the other H2 tools
over an external connection.
>> Not all databases support that setup via connection URL.
>> Ex. SET ISOLATION LEVEL , COLLATING , CLIENT ENCODING, etc.
>>
> So far nobody requested such features. Could you provide more details
> about what you need?
>
For example, a concrete real case: I have to query data from a legacy
database at ISOLATION LEVEL UNCOMMITTED READ, to create an H2 table
using a query that mixes data from that linked table and other H2
tables. That database doesn't accept setting the isolation level via the
JDBC URL, so I can't use linked tables for this case.
Another case: I need/wish to define a linked table with a query that
uses SQL grammar that H2 doesn't accept.
>> Direct access to external data sources would be much better.
>>
> Do you mean by setting a connection _object_ versus declarative
> (database URL and so on like now)?
>
>
No, I mean getting a ResultSet from an external source in such a way
that H2 doesn't parse the SQL destined for the linked database, and just
uses the returned ResultSet.
>> change parameters only for dblink
>> but not to all linked tables if something changed.
>>
> I will add a feature request for "Support CREATE DATABASE LINK".
Regards,
Dario
Ryan
We don't use Access, but yes, that's the idea.
In the MS world you have some interesting artifacts to support data
exchange, integration and ETL applications, such as DTS, but it's very
common that this type of application uses an Access db and VB as
intermediary. Because our main targets are *nix+Java server environments
with heterogeneous db vendors and legacy systems, H2 fills this role
very well, both as dbms and as ETL tool.
A LINK DATABASE feature and LINKED_QUERY() functions are all we need to
remove some uncomfortable limitations of linked tables.
Dario.
You can also use JDBC inside H2, using custom functions. Example:
create alias query as 'ResultSet query(Connection conn, String sql)
    throws SQLException {
  return conn.createStatement().executeQuery(sql);
}';
call query('select * from dual');
drop alias query;
If needed, this can be extended to access external databases.
Regards,
Thomas
> You can also use JDBC inside H2, using custom functions.
> If needed, this can be extended to access external databases.
Since this is a common task, the request was to standardize it in H2.
When the database link object is implemented (you promised to add the
feature request), it will be straightforward to add functions for
external data access.
--
-- THIS ONE CAN'T BE USED - I don't know why?
--
create alias LINKED_SCRIPT as ' void doextscript( String driverClassName,
    String url, String username, String passwd, String scriptFileName,
    String charsetName, boolean continueOnError) throws SQLException {
  try {
    java.sql.Driver aDrv = (java.sql.Driver)
        Class.forName(driverClassName).newInstance();
    java.sql.DriverManager.registerDriver(aDrv);
  } catch (Exception e) {;}
  org.h2.tools.RunScript.execute(url, username, passwd, scriptFileName,
      charsetName, continueOnError);
}';
--
-- THIS ONE WORKS FINE and we use it sometimes, but it has the same
-- limitations as linked tables.
--
create alias LINKED_QUERY as 'ResultSet doextquery( Connection con,
    String driverClassName, String url, String username, String passwd,
    String query) throws SQLException {
  try {
    java.sql.Driver aDrv = (java.sql.Driver)
        Class.forName(driverClassName).newInstance();
    java.sql.DriverManager.registerDriver(aDrv);
  } catch (Exception e) {;}
  return java.sql.DriverManager.getConnection(url, username, passwd)
      .createStatement().executeQuery(query);
}';
Now, using it in a SELECT looks like:
select a.* from LINKED_QUERY(
'org.postgresql.Driver','jdbc:postgresql://127.0.0.1:5432/mydb','sa','********',
'select * from mySchema.someTable' ) as a;
You must agree that this isn't a nice syntax ;-) and it is very
redundant with the LINKED TABLE's connection management. I think the
pool of external connections must be shared by all these "LINKED"
artifacts.
When we know at design time what needs to be executed, we can write a
jar of compiled Java UDFs that uses the resources of H2 and the
applications, and all goes right.
But many times we DON'T know in advance what needs to be executed, as
with "CUSTOM SCRIPTS" that we or the final user write in SQL as the
application's custom procedures.
In data integration applications, new data inputs are often handled as
custom procedures (scripts) executed from an application that has an
embedded H2 database.
For those cases, these UDFs are tolerable but not comfortable or
efficient.
I insist that almost everything needed is already inside H2: drivers and
external connection handling. We only need some expressive and clean way
to access those external / linked connections in a more general, less
restrictive fashion.
Regards,
Dario
On 07/02/10 14:04, Thomas Mueller wrote:
> UDFs
I guess you mean user defined functions?
> -- THIS ONE CAN'T BE USED - I don't know why?
I don't understand, why not? Does it throw an exception?
> select a.* from LINKED_QUERY(
> 'org.postgresql.Driver','jdbc:postgresql://127.0.0.1:5432/mydb','sa','********',
> 'select * from mySchema.someTable' ) as a;
> you must agree that isn't a nice syntax ;-)
> and this is very redundant with LINKED TABLE's connections management.
You could create a method to map a 'data source' to a connection:
LINKED_DEFINE_DATASOURCE(dataSource, driver, url, user, password);
LINKED_QUERY(dataSource, query);
Or you could create a function that creates another function:
LINK(dataSource, driver, url, user, password);
This would then create the 'real' <dataSource>_QUERY alias.
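A sketch of how these two suggested functions might be used. Note that LINKED_DEFINE_DATASOURCE and LINKED_QUERY are only proposals in this thread, not existing H2 functions, and the data source name and URL below are made up:

```sql
-- Proposed, not implemented: define the data source once...
CALL LINKED_DEFINE_DATASOURCE('pg1', 'org.postgresql.Driver',
    'jdbc:postgresql://127.0.0.1:5432/mydb', 'sa', '********');

-- ...then reference it by name wherever a remote query is needed.
SELECT a.* FROM LINKED_QUERY('pg1', 'select * from mySchema.someTable') AS a;
```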
> many times we DON'T know in advance what needs to be executed, as
> with "CUSTOM SCRIPTS" that we or the final user write in SQL as the
> application's custom procedures.
I don't understand, how is that related to this problem?
> I insist that almost anything is just inside H2: drivers and external
> connections handling.
As I wrote, it doesn't have high priority for me. I think it doesn't
make sense to add functionality into the database engine itself if the
same functionality can "live" outside. Unless many people need it.
Regards,
Thomas
Me neither, and what's wrong? This is the error:
Error: Syntax error in SQL statement " void doextscript( String driverClassName, String url, String username, String passwd, String scriptFileName, String charsetName, boolean continueOnError)
throws SQLException {
try {
java.sql.Driver aDrv = (java.sql.Driver) Class.forName(driverClassName).newInstance();
java.sql.DriverManager.registerDriver(aDrv);
} catch (Exception e) {;}
org.h2.tools.RunScript.execute(url, username, passwd, scriptFileName, charsetName, continueOnError) ;
}"; SQL statement:
create alias LINKED_SCRIPT as ' void doextscript( String driverClassName, String url, String username, String passwd, String scriptFileName, String charsetName, boolean continueOnError)
throws SQLException {
try {
java.sql.Driver aDrv = (java.sql.Driver) Class.forName(driverClassName).newInstance();
java.sql.DriverManager.registerDriver(aDrv);
} catch (Exception e) {;}
org.h2.tools.RunScript.execute(url, username, passwd, scriptFileName, charsetName, continueOnError) ;
}' [42000-128]
SQLState: 42000
ErrorCode: 42000
Error occured in:
create alias LINKED_SCRIPT as ' void doextscript( String driverClassName, String url, String username, String passwd, String scriptFileName, String charsetName, boolean continueOnError)
throws SQLException {
try {
java.sql.Driver aDrv = (java.sql.Driver) Class.forName(driverClassName).newInstance();
java.sql.DriverManager.registerDriver(aDrv);
} catch (Exception e) {;}
org.h2.tools.RunScript.execute(url, username, passwd, scriptFileName, charsetName, continueOnError) ;
}'
Or nesting the two calls directly:
call LINKED_QUERY( LINKED_DEFINE_DATASOURCE( dataSource, driver, url, user, password) , query);
What would be the right way to look up the functions created in the db?
I tried to use the alias below, but DatabaseMetaData.getFunctions() is
in JDK 1.6:
create alias SHOW_FUNCTIONS as 'java.sql.ResultSet showfunctions(
    java.sql.Connection con, String schemaPattern,
    String functionNamePattern) throws SQLException {
  java.sql.DatabaseMetaData dbmd = con.getMetaData();
  return dbmd.getFunctions(null, schemaPattern, functionNamePattern);
}';
> Me neither, and what's wrong? This is the error:
It does work for me (no error). The root cause of the exception should
show the compiler error. For the next release, I will change the error
message so the actual problem (the compiler error) is included.
By the way, Class.forName(jdbcDriverClassName) is enough (no need to
instantiate and register the driver).
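Following that remark, the working LINKED_QUERY alias could be reduced to something like this (a sketch, untested; the unused Connection parameter is dropped):

```sql
create alias LINKED_QUERY as 'ResultSet doextquery( String driverClassName,
    String url, String username, String passwd, String query)
    throws Exception {
  -- Loading the driver class is enough; it registers itself.
  Class.forName(driverClassName);
  return java.sql.DriverManager.getConnection(url, username, passwd)
      .createStatement().executeQuery(query);
}';
```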
> What would be the right way to look up the functions created in the db?
> I tried to use the alias below, but DatabaseMetaData.getFunctions() is
> in JDK 1.6
DatabaseMetaData uses the INFORMATION_SCHEMA. See the source code of
DatabaseMetaData.getFunctions().
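For example, user defined functions can be listed directly from the information schema, with no JDK 1.6 dependency (column names as in the H2 1.x INFORMATION_SCHEMA; check the documentation of your version):

```sql
-- List user defined function aliases and their implementing Java methods.
SELECT ALIAS_NAME, JAVA_CLASS, JAVA_METHOD
FROM INFORMATION_SCHEMA.FUNCTION_ALIASES;
```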
> It may be better to remove it from the core of H2 and move it to an
> extension module (h2-linked.jar), like the "contrib" of other databases.
Yes, that's a very good idea. The question is what the 'right'
abstraction is, and what the extension module system should look like.
Currently I think it's user defined functions, but there should be a
better way to integrate them.
Regards,
Thomas
> Maybe with a new command like: LOAD_EXTENSION some_jar_fullpath
The bad thing with such an approach is that each time a database is
created, you will need to run an initialization script like
LOAD_EXTENSION some_ext1.jar;
LOAD_EXTENSION some_ext2.jar;
LOAD_EXTENSION some_ext3.jar;
etc...
It would be better to have a system property in which we list all the
class names of the plugins that should be instantiated and initialized
for each new database.
By the way, I'm thinking about pluggable table implementations, like
this:
CREATE TABLE T1
(ID INT, NAME VARCHAR)
ENGINE "com.aaa.MegaCoolTableFactory"
This class would have a factory method returning an org.h2.table.Table
instance.
Thoughts, suggestions?
What about using the Java "service provider mechanism" for extensions?
How this works is explained here:
http://java.sun.com/javase/6/docs/api/java/util/ServiceLoader.html -
this mechanism also work in Java 5, H2 would use
javax.imageio.spi.ServiceRegistry (which sounds a bit weird, but it's
basically the same thing).
To register an H2 extension, either
A) add the file "META-INF/services/org.h2.Service" to the jar file
with the list of extension classes,
B) or register the extensions at runtime using a Java API.
Registering a file system implementation is already possible using
org.h2.store.fs.FileSystem.register (some built-in file systems use
this mechanism already). What is needed is to extend this mechanism to
functions and other components. There are already some candidates, for
example org.h2.mode.FunctionsMySQL.
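A runnable sketch of the provider-file mechanism described above. The names H2Extension and HelloExtension are made up for illustration, and the provider file is written to a temp directory here only to make the example self-contained; a real extension jar would ship the file inside the jar itself.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Demonstrates the Java service provider mechanism end to end.
public class SpiDemo {

    // The service interface an extension provider would implement.
    public interface H2Extension {
        String name();
    }

    // A provider implementation, normally shipped in the extension jar.
    public static class HelloExtension implements H2Extension {
        public String name() { return "hello"; }
    }

    // Simulates the jar layout on disk, then loads all registered providers.
    public static List<String> loadExtensionNames() throws IOException {
        Path dir = Files.createTempDirectory("spi-demo");
        Path services = dir.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        // The provider file is named after the service interface and lists
        // the implementing classes, one per line.
        Files.write(services.resolve(H2Extension.class.getName()),
                HelloExtension.class.getName().getBytes("UTF-8"));
        // The file is found on the URL path; the provider class itself is
        // loaded through the parent class loader.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { dir.toUri().toURL() },
                SpiDemo.class.getClassLoader());
        List<String> names = new ArrayList<String>();
        for (H2Extension ext : ServiceLoader.load(H2Extension.class, loader)) {
            names.add(ext.name());
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("loaded extensions: " + loadExtensionNames());
    }
}
```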
Regards,
Thomas
This seems a standard and very clean way to handle alternative providers
for an actual or expected H2 service, but I can't see how this helps for
extensions.
In this approach, H2 extensibility must be defined in terms of "what can
be provided as an extension?", and I don't know if that question has a
concrete answer.
For an alternative table engine implementation (as Sergi proposes), a
full text search engine, or new index types, an SPI definition can fit
perfectly.
But for an extension stated as a pack of related UDFs and support code,
I don't believe that an SPI approach can be used.
I think that we are talking about two different things:
1) Plug-ins: SPI candidates, tightly related to central core functions.
2) Extensions: add-ons of UDF sets, with minimal core compromise, that
must be part of the database itself.
Plug-ins can be database configuration components; extensions, on the
other hand, must be database objects.
If someone creates a database and defines a view that uses a UDF, then
creates a backup and sends it to another place, all the components
needed to keep that view operative must be in the backup, like any other
database object.
For extensions, I suppose there are ways to simply modify the classpath
dynamically to make the classes contained in a jar available
(http://www.coderanch.com/t/402870/Beginning-Java/java/Adding-jar-classpath-dynamically);
maybe more tricky than standard or elegant, but much more direct.
> To register an H2 extension, either
>
> A) add the file "META-INF/services/org.h2.Service" to the jar file
> with the list of extension classes,
> B) or register the extensions at runtime using a Java API.
>
> Registering a file system implementation is already possible using
> org.h2.store.fs.FileSystem.register (some built-in file systems use
> this mechanism already). What is needed is extend this mechanism for
> functions and other components. There are already some candidates, for
> example org.h2.mode.FunctionsMySQL.
>
Yes, database compatibility modes can be implemented as SPIs, one for
MySql, PgSql, etc.
And since SPI class loading is lazy, even if all db mode plugins are
installed, only the one in use will be loaded.
This can reduce the core jar size and the resources needed at runtime.
regards,
Dario
> But for an extension stated as a pack of related UDFs and support code,
> I don't believe that an SPI approach can be used.
Why not?
> I think that we are talking about two different things:
>
> 1) Plug-ins: SPI candidates, tightly related to central core functions.
> 2) Extensions: add-ons of UDF sets, with minimal core compromise, that
> must be part of the database itself.
>
> Plug-ins can be database configuration components; extensions, on the
> other hand, must be database objects.
I don't see a clear distinction here. Fulltext indexes are also
'database objects' in a way, yet they are implemented as User Defined
Functions (UDFs).
> If someone creates a database and defines a view that uses a UDF, then
> creates a backup and sends it to another place, all the components
> needed to keep that view operative must be in the backup, like any
> other database object.
Yes. For CREATE ALIAS ... AS ..., the source code of the function is
actually stored in the database. Currently there is no way to create a
function using the bytecode (store the bytecode in the database). This
feature could be added. I will add a feature request for:
"User defined functions: allow to store the bytecode (of just the
class, or the jar file of the extension) in the database."
Regards,
Thomas
On 23/02/10 16:35, Thomas Mueller wrote:
>> But for an extension stated as a pack of related UDFs and support code,
>> I don't believe that an SPI approach can be used.
>>
> Why not?
>
SPI, as I understand it, is better suited to defining "alternative
implementations of some concrete function or characteristic" that is by
nature or convenience pluggable or replaceable, and bounded by
interfaces that define what can be replaced and the contract of how this
needs to be done to mate with the core.
Table implementations, index implementations, transport protocols, etc.
are good candidates for SPI, with specialized versions for distinct
uses.
I can't figure out why such a contract or interface would be necessary
just to add new functions.
In the case of an extension as a set of related UDFs, only a few
communication and installation facilities are needed:
1) Packaging & installing: automation of alias creation, and persistence
of the definitions as database objects.
2) Extending the VARIABLES mechanism into a more generic object holder,
accessible from both SQL and UDFs as a bus for collaboration between
them. THIS IS THE CENTRAL POINT and the main limitation of what can be
done right now.
>> I think that we are talking about two different things:
>>
>> 1) Plug-ins: SPI candidates, tightly related to central core functions.
>> 2) Extensions: add-ons of UDF sets, with minimal core compromise, that
>> must be part of the database itself.
>>
>> Plug-ins can be database configuration components; extensions, on the
>> other hand, must be database objects.
>>
> I don't see a clear distinction here. Fulltext indexes are also
> 'database objects' in a way, yet they are implemented as User Defined
> Functions (UDFs).
>
All of this can be implemented as UDFs, or with whatever the best
implementation option turns out to be; but since they are different in
purpose and scope, it's better to design with that in mind.
The main distinction is that extensions mostly ADD collateral functions,
and don't carry the hassle of REPLACING core functionality, even
critical functions, which needs to be contained and directed in some way
by interfaces that define the insertion points for plug-ins (which core
functions are provided by third parties).
>> If someone creates a database and defines a view that uses a UDF, then
>> creates a backup and sends it to another place, all the components
>> needed to keep that view operative must be in the backup, like any
>> other database object.
>>
> Yes. For CREATE ALIAS ... AS ..., the source code of the function is
> actually stored in the database. Currently there is no way to create a
> function using the bytecode (store the bytecode in the database). This
> feature could be added. I will add a feature request for:
>
> "User defined functions: allow to store the bytecode (of just the
> class, or the jar file of the extension) in the database."
>
This is a great starting point for consistency and portability; but at
the same time some definition / installation command or tool will be
necessary.
And if we add a simple jar layout to automate alias creation in the same
process, then we can rightly call that "an extension"! :-)
What do you think about simply using a database subdirectory to deploy
the extension jars, and tackling installation first?
A backup would only need to include that directory (until storing the
bytecode is possible) to get portability in its simplest form.
It's a shorter path that covers the whole way (packaging, deployment and
activation of an extension) and can be perfected later.
regards
Dario
But don't you think that UDFs without a shared variable space between
the SQL and UDF worlds are still very limited?
As I see it, UDFs are limited to behaving as static functions (invariant
functions, in IBM terminology).
How can these UDFs collaborate with each other, or leave some effect
active across repeated invocations?
Moreover, that shared variable space could be used to simulate a cursor
implementation, and would be useful for simplifying column functions.
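For reference, H2's existing session variables (which hold only simple values, not arbitrary objects; the table and column names below are illustrative) look like this:

```sql
-- A session variable set in SQL. UDFs cannot share such state with the
-- SQL side, which is the limitation being discussed.
SET @threshold = 100;
SELECT * FROM employee WHERE salary > @threshold;
```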
regards,
Dario.
This is all very abstract. New features should be use case driven, not
"abstract idea driven".
What about providing some example (client) source code, including some
documentation of what it does? That makes it _much_ easier to decide.
Regards,
Thomas
> But don't you think that UDFs without a shared variable space between
> the SQL and UDF worlds are still very limited?
> As I see it, UDFs are limited to behaving as static functions (invariant
> functions, in IBM terminology).
This is all not about extensions but about H2's concept of UDFs, so it
should be discussed separately.
About use cases:
1) Database initialization service. It will do something on each
database creation (run initialization scripts and so on).
This is a more powerful and sometimes more convenient replacement
for the ;INIT=... initialization parameter of the URL.
Potentially this could also be used for automated loading of jars into a
new db (if that gets implemented).
2) Functions service. It will contain a set of functions, but there
will be no need even to execute initialization scripts: they
will work like built-in H2 functions.
This will allow extending H2's set of built-in functions in an easy way:
simply put a jar with UDFs on the classpath. We have talked a lot
about whether we should include some functions in H2 or not.
So any functions that are usually not needed (for example
external data access and so on) should be placed in separate
plugins. On the H2 site these should also be separate downloads.
3) Tables service. To allow users to create their own table
implementation, best suited to their needs.
I need this for data transfer from another application to
my application via ODBC with maximum performance. Also
I think someone could create table implementations that
are faster than the standard one for some use cases.