I think I understand. What if creating a local temp table didn't
commit? I will investigate whether this can be supported. At some
point DDL operations should be transactional anyway (it's not high on
the priority list so far, however). What about a new syntax:
CREATE LOCAL TEMPORARY TABLE .... WITHOUT COMMIT;
It's a bit ugly, I agree, but it would solve the problem for you. This
way, I could add support for transactional DDL step by step, and when
everything is implemented, switch over so this becomes the default.
Regards,
Thomas
- rami
What's the best way to monitor connection count and resource consumption?
regards
Dario.
Like I said earlier, this is my preferred solution. If creating the
temp table does not commit then I can do it on that initial connection
and use a true local temp table.
> I will investigate whether this can be supported. At some
> point DDL operations should be transactional anyway (it's not high on
> the priority list so far, however). What about a new syntax:
>
> CREATE LOCAL TEMPORARY TABLE .... WITHOUT COMMIT;
>
> It's a bit ugly, I agree, but it would solve the problem for you. This
> way, I could add support for transactional DDL step by step, and when
> everything is implemented, switch over so this becomes the default.
Sounds reasonable to me short term, and totally something I could
leverage immediately (well, once it's available).
> What's the best way to monitor connection count and resource consumption?
Use SELECT * FROM INFORMATION_SCHEMA.SESSIONS
There is no easy way to measure the resource consumption of each
session individually; however, some values of this database
(especially info.CACHE_SIZE) are available using SELECT * FROM
INFORMATION_SCHEMA.SETTINGS.
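As a sketch, the two queries above could be combined into a small
monitoring script (plain H2 SQL; the COUNT and the LIKE filter are my
additions, and column names may vary between H2 versions):

```sql
-- Count open sessions (one row per session in INFORMATION_SCHEMA.SESSIONS).
SELECT COUNT(*) AS SESSION_COUNT FROM INFORMATION_SCHEMA.SESSIONS;

-- Inspect database-wide settings, e.g. the configured cache size.
SELECT * FROM INFORMATION_SCHEMA.SETTINGS
WHERE NAME LIKE '%CACHE%';
```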
Regards,
Thomas
I will try to dig into this with JVM instrumentation on a running instance.
regards,
Dario.
> I'm worried about the latest change log notice of CACHE_SIZE using
> half of the JVM -Xmx.
Do you mean this entry?: "When opening an existing database, the cache
size is set to at most half the amount of memory available for the
virtual machine (Runtime.getRuntime().maxMemory()), even if the cache
size setting stored in the database is larger. Setting the cache size
in the database URL or explicitly using SET CACHE_SIZE overrides this
value (even if larger than the physical memory)."
The "half of JVM" only applies if the cache size defined in the
cache_size setting is *larger* than that.
The reason for this change is: sometimes I get databases with the
cache_size set very large. As the cache_size setting is persisted (in
the database) this is problematic (slow performance, sometimes even
out of memory). The change in version 1.2.136 is just a safeguard
against forgetting to reset this value. Maybe it would be better to
not persist the cache_size setting in the database itself (not sure).
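As a sketch of the override mentioned in the changelog entry (the path
and value here are illustrative; CACHE_SIZE is in KB):

```sql
-- In the database URL: jdbc:h2:~/test;CACHE_SIZE=65536  (64 MB)
-- Or explicitly, on an open connection:
SET CACHE_SIZE 65536;
```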
Regards,
Thomas
> What I mean is that the amount of memory consumed at any time by the dbm
> (even in embedded mode) should respect an "upper bound", or at least be
> knowable and predictable.
It is, if you specify it in the connection setting (A).
However if you don't (B), and it is set in the database, then there
are two cases:
- you do want to use the previous setting
- you forgot that it was set previously
The "automatic limit" only applies to (B).
> if we can
> know what the used value is!
You can. But the point is, if you do want to use an explicit value,
then you need to set it each time you open the database.
> Why not let this value be dynamically adjusted based on the
> available memory in the JVM, conservatively?
That's what the automatic mechanism does.
> That way the dbm could use idle memory.
That's already available:
http://h2database.com/html/features.html#cache_settings
Regards,
Thomas
>> Why not let this value be dynamically adjusted based on the
>> available memory in the JVM, conservatively?
>> That way the dbm could use idle memory.
> That's what the automatic mechanism does.
>
I mean dynamically at runtime; if I understood correctly, this value
is determined at database start and never adjusted. Am I right?
What happens when we change CACHE_SIZE at runtime to a greater or
lower value: is it applied at the next database start, or ... ?
Is it a costly operation for the dbm to change CACHE_SIZE at runtime, or do nothing?
If you wonder why:
I'm thinking of the case of H2 in embedded mode, or started with a
connection pool as the DataSource factory of Tomcat or another
application server.
In the same JVM we have the application (or container) and the
embedded H2 instance sharing JVM resources.
To set up this scenario we need to estimate the application's or
container's memory needs (which can be measured by starting H2 as a
server in a separate process) and then embed H2 with a convenient
cache size.
Because application/container memory needs must be taken at the worst
peak plus a prudential margin, most of the time there will be memory
available in the JVM that H2 could use and release periodically.
Example:
setup: JVM -Xmx=512m
application/container use: from 60m to 450m, average 260m
If we start H2 without a specific cache size then it will use 256m,
which can lead to a suboptimal situation.
If we choose to set a fixed cache size to avoid that, then it must be
set to some value below 200m, hurting H2 performance even though 400m
of memory is available most of the time.
In an application we could have a thread adjusting the cache size
based on Runtime.freeMemory().
Could something like this be possible?
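The idea above could look roughly like this. It is a hypothetical
sketch: the class and method names are invented, the sizing policy
(half of the free heap, 1 MB floor) is arbitrary, and the real JDBC
call to issue SET CACHE_SIZE is only shown as a comment, so the code
just prints the statement a background thread could execute
periodically:

```java
// Hypothetical sketch: derive a conservative H2 CACHE_SIZE value (in KB)
// from the free JVM heap, as suggested above. Not H2's actual behavior.
public class CacheTuner {

    // Target half of the currently free heap, with a 1 MB floor.
    static long targetCacheKb(long freeBytes) {
        return Math.max((freeBytes / 2) / 1024, 1024);
    }

    public static void main(String[] args) {
        long free = Runtime.getRuntime().freeMemory();
        long kb = targetCacheKb(free);
        // A background thread could run this periodically on a live
        // connection: stmt.execute("SET CACHE_SIZE " + kb);
        System.out.println("SET CACHE_SIZE " + kb);
    }
}
```

With roughly 256m of the heap free, this policy would issue
SET CACHE_SIZE 131072 (128 MB), shrinking as the application's own
usage grows.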
regards
Dario
> I mean dynamically at runtime; if I understood correctly, this value
> is determined at database start and never adjusted. Am I right?
Yes, the value is not adjusted automatically. I don't think that would
be a good solution. Would you add a background thread to detect whether
other applications need memory? Actually there is no way to do that
correctly, except by using SoftReference/WeakReference/PhantomReference.
And that's what is already implemented:
http://h2database.com/html/features.html#cache_settings
> Is it a costly operation for the dbm to change CACHE_SIZE at runtime, or do nothing?
I don't understand the question, sorry.
> most of the time there will be memory
> available in the JVM that H2 could use and release periodically.
This is already available. I suggest reading the documentation:
http://h2database.com/html/features.html#cache_settings
"Also included is an experimental second level soft reference cache."
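As a sketch of how that page describes enabling it (assuming the SOFT_
prefix on the cache type turns on the soft reference cache; the path is
illustrative):

```sql
-- In the database URL:
-- jdbc:h2:~/test;CACHE_TYPE=SOFT_LRU
```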
Regards,
Thomas
> Was there an issue I could track for this?
I created the issue
http://code.google.com/p/h2database/issues/detail?id=214 - if you
'star' the feature you will get notified of changes.
I have implemented this feature now, if everything goes fine it will
be included in the next release (probably in two weeks).
Regards,
Thomas
--
Steve Ebersole <st...@hibernate.org>
http://hibernate.org
> I am not familiar with this issue tracker.
It's the one from Google Code. It's not as sophisticated as Jira, but
it works well for what I need.
> As far as I can tell it is still an open issue.
I closed it today: Implemented in version 1.2.139. Example usage:
set autocommit true;
create table test(id int);
insert into test values(1);
set autocommit false;
insert into test values(2);
create local temporary table temp(id int primary key,
    name varchar(255)) transactional;
insert into temp values(1, 'test');
rollback;
select * from test;
drop table test;
drop table temp;
set autocommit true;
create table test(id int);
insert into test values(1);
set autocommit false;
insert into test values(2);
create local temporary table temp(id int primary key,
    name varchar(255)) on commit drop transactional;
insert into temp values(1, 'test');
select * from temp;
rollback;
select * from test;
drop table test;
-- table already dropped, so this will fail:
select * from temp;
Regards,
Thomas