ERROR accessing ReadOnly H2 Database files


Elango

unread,
Apr 23, 2007, 8:58:13 AM4/23/07
to H2 Database
After seeing the performance and features of the H2 Database, I started
using it in my application.

The application uses a read-only database (on a CD-ROM)
with approximately 500,000 records.

I dumped the records in write mode and then changed the database files
to read-only.
I am unable to execute a select query against the read-only files,
but if I make the database writable, the program executes correctly.

I get the following Exception while running the program.

Exception Stack Trace:
org.h2.jdbc.JdbcSQLException: The database is read only [90097-44]
at org.h2.message.Message.getSQLException(Message.java:65)
at org.h2.message.Message.getSQLException(Message.java:47)
at org.h2.message.Message.getSQLException(Message.java:86)
at org.h2.engine.Database.checkWritingAllowed(Database.java:1204)
at org.h2.store.FileStore.checkWritingAllowed(FileStore.java:85)
at org.h2.store.FileStore.write(FileStore.java:198)
at org.h2.store.FileStore.writeDirect(FileStore.java:188)
at org.h2.store.FileStore.init(FileStore.java:100)
at org.h2.engine.Database.openFile(Database.java:305)
at org.h2.engine.Database.openFile(Database.java:294)
at org.h2.result.ResultDiskBuffer.<init>(ResultDiskBuffer.java:34)
at org.h2.result.LocalResult.addRow(LocalResult.java:204)
at org.h2.command.dml.Select.queryFlat(Select.java:283)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:326)
at org.h2.command.dml.Query.query(Query.java:93)
at org.h2.command.CommandContainer.query(CommandContainer.java:73)
at org.h2.command.Command.executeQueryLocal(Command.java:72)
at org.h2.command.Command.executeQuery(Command.java:61)
at org.h2.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:67)
at com.xyz.app.gui.TestReadOnlyDB.main(TestReadOnlyDB.java:21)

BUT I am still able to open the database using the SQuirreL SQL Client
in read-only mode.
Please help me solve this problem.

Thanks in advance.

Regards,
Elango Ramaraj


---------------------------------------
Source File
package com.xyz.app.gui;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TestReadOnlyDB {
    public static void main(String[] args) {
        Connection connection = null;
        Statement statement = null;
        ResultSet rset = null;
        try {
            Class.forName("org.h2.Driver");
            connection = DriverManager.getConnection(
                "jdbc:h2:file:D:/Project/app/H2DB/AppDB/AppData;DB_CLOSE_DELAY=2");
            statement = connection.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY);

            rset = statement.executeQuery("select * from PRICE_LIST");

            while (rset.next()) {
                System.out.println(rset.getString("CATALOG_NUMBER"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            closeConnection(rset, statement, connection);
        }
    }

    private static void closeConnection(ResultSet rset,
            Statement statement, Connection connection) {
        try {
            if (rset != null) { rset.close(); }
            if (statement != null) { statement.close(); }
            if (connection != null) { connection.close(); }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Elango

unread,
Apr 23, 2007, 11:59:20 AM4/23/07
to H2 Database
This problem occurs for larger database tables with more than
100,000 records. It works fine for smaller tables.

Thomas Mueller

unread,
Apr 25, 2007, 3:44:49 AM4/25/07
to h2-da...@googlegroups.com
Hi,

The problem is that for large result sets, the database engine tries
to create temporary files in the same directory as the database. This
is already fixed (the temporary files are now created in the temp
directory in this case), but the fix is not yet in a published release.
You can already try the latest build, where it is fixed:
code.google.com/p/h2database, downloads. Direct link:
http://h2database.googlecode.com/files/h2-2007-04-20.zip

I hope this helps. Please tell me if it works for you.
Thomas

Elango

unread,
Apr 25, 2007, 9:17:59 AM4/25/07
to H2 Database
Hi Mueller,

Thanks a lot. It is working with the latest build.

Is there any option to archive the database files as a jar file and
connect H2 to it?

Elango Ramaraj

Thomas Mueller

unread,
Apr 25, 2007, 6:42:26 PM4/25/07
to h2-da...@googlegroups.com
Hi,

No, currently not. The problem is that files in a jar file cannot be
read in random-access mode (they can only be read as a stream), and
the database engine is currently not optimized for this. It would be
possible to implement this feature, but it would be slow. The feature
has been on the 'todo' list for quite a long time, though... I think
it is an important feature.

Thomas

Shane Duan

unread,
Apr 25, 2007, 7:25:34 PM4/25/07
to h2-da...@googlegroups.com
Hi,

I have an open-source project, Cotta (http://cotta.sourceforge.net),
that wraps Java IO so that you can treat a jar file as a read-only
directory, in case you are interested.

It was created as a proof of concept that there should be a better
way to write Java IO-related programs. It is under the Apache license,
so feel free to do whatever you want with it.

Cheers
Shane


--
Shane
http://www.shaneduan.com

Elango

unread,
Apr 26, 2007, 2:42:25 PM4/26/07
to H2 Database
Hi
The database files exceed 600 MB, and the total may exceed the CD-ROM
capacity after adding the application, the JRE and some more
documents.
Moreover, the application uses the Lucene search engine for full-text
search; the Lucene index files add around another 50 MB.
I planned to zip the database files and put the zip on the CD-ROM. The
zip file would be extracted to a temporary directory when the
application is started.
But with this approach users have to wait 20 to 25 seconds for the
application to load.

Is there any solution for this? Or is there any way to extract only
the necessary db table files to reduce the start-up time?

I didn't find much documentation about Cotta, and I am not sure
whether it offers something like random access mode.
Is there any other way to reduce the file size? Is it possible to
integrate it with H2 DB?

Elango

(sorry for my bad English)

Shane Duan

unread,
Apr 26, 2007, 2:53:07 PM4/26/07
to h2-da...@googlegroups.com
Hi, sorry, I didn't mean to derail the thread. If you have questions
about Cotta, please send me a direct email.

H2 does not have anything to do with Cotta at the moment.

Cheers
Shane


--
Shane
http://www.shaneduan.com

Thomas Mueller

unread,
Apr 27, 2007, 5:35:06 AM4/27/07
to h2-da...@googlegroups.com
Hi,
One idea would be to unzip the data only once: if the data is already
in the temp dir, just check whether it is still current using a simple
checksum of the file sizes and dates, or something like this.

Another solution is to compress the data in the database itself and
connect to the CD-ROM 'path'. There are some compression algorithms in
H2; maybe you can use those, especially if you are using BLOB, CLOB or
large text files, or other compressible data. However, this wouldn't
work if there is a lot more data. And I am not sure how to find out
the CD-ROM path, but if you start the application from the CD-ROM it
should be possible, I think.

The 'database in a jar file' solution is not that great because random
access is not possible in jar / zip files. I know it is possible to
emulate random access using caching, but it would be slow. (There is a
workaround, splitting big files into multiple smaller files, but this
would not help the compression ratio.)
Thomas
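[Editor's note] The "unzip only once" idea above can be sketched in plain Java. This is an illustrative helper, not part of H2: the class name UnzipOnce, the marker-file name, and the fingerprint scheme (zip size plus last-modified date, as Thomas suggests) are all assumptions.

```java
import java.io.*;
import java.nio.file.*;
import java.util.zip.*;

// Sketch of the "unzip only once" idea: extract the zipped database files
// to a destination directory only when a stored fingerprint of the archive
// (its size and last-modified time) has changed. Names are illustrative.
public class UnzipOnce {

    // Fingerprint of the archive: size and last-modified date.
    static String fingerprintOf(File zip) {
        return zip.length() + ":" + zip.lastModified();
    }

    // Extract the archive into destDir unless the stored fingerprint matches.
    // Returns true if an extraction was actually performed.
    static boolean extractIfChanged(File zip, File destDir) throws IOException {
        File marker = new File(destDir, "META-INFO.fingerprint");
        String current = fingerprintOf(zip);
        if (marker.exists() && current.equals(
                new String(Files.readAllBytes(marker.toPath()), "UTF-8"))) {
            return false; // data in the temp dir is still current, skip the unzip
        }
        destDir.mkdirs();
        try (ZipInputStream in = new ZipInputStream(new FileInputStream(zip))) {
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                File out = new File(destDir, entry.getName());
                if (entry.isDirectory()) { out.mkdirs(); continue; }
                out.getParentFile().mkdirs();
                Files.copy(in, out.toPath(), StandardCopyOption.REPLACE_EXISTING);
            }
        }
        Files.write(marker.toPath(), current.getBytes("UTF-8"));
        return true;
    }
}
```

With this, the application would pay the 20-25 second extraction cost only on the first start (or after the database on the CD-ROM changes), then connect H2 to the extracted files in the temp directory.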

Elango

unread,
Apr 27, 2007, 11:04:26 AM4/27/07
to H2 Database
Hi,

Thanks for the response.
Please see my queries marked with <Q>.

> One idea would be to unzip the data only once (if the data is already
> in the temp dir, just check if the data is still current using a
> simple checksum of the file sizes and dates or something like this).

<Q> This seems to be a good option. I plan to keep a META-INFO file
(like a Java properties file) along with the zip file. This file
contains the date, time and list of database files. The criterion for
unzipping the data file will be a comparison against the META-INFO
file.

> Another solution is to try to compress the data in the database
> itself and connect to the CD-ROM 'path'. (There are some compression
> algorithms in H2, maybe you can use those, especially if you are
> using BLOB, CLOB or large text files, or other compressible data.)
> However this wouldn't work if there is a lot more data. And I am not
> sure how to find out the CD-ROM path, but if you start the
> application from CD-ROM it should be possible I think.

<Q> Will the compression algorithms (used in H2) work for VARCHAR
columns? My database will not contain any BLOB or CLOB.
<Q> Where can I find the documentation on compressing the data in the
database itself?
<Q> The Java application (that invokes H2) will reside on the CD-ROM,
so there will not be a problem in identifying or accessing the CD-ROM
path.

> The 'database in a jar file' solution is not that great because
> random access is not possible in jar / zip files. I know it is
> possible to emulate random access using caching, but it will be slow
> (there is a solution, splitting big files in multiple smaller files,
> but this would not help the compression ratio).

<Q> I dropped this idea.


- Elango

Thomas Mueller

unread,
Apr 27, 2007, 8:54:45 PM4/27/07
to h2-da...@googlegroups.com
Hi,
Compression of data inside the database is a bit tricky. The
algorithms currently implemented can only achieve compression when the
data in a single field (one row, one column) is quite large. That
means if you only use VARCHAR, and the average field is small (less
than about 60 characters), compression is not possible. Otherwise,
you could use the COMPRESS and EXPAND functions. For LOB compression
there is a setting (I forgot the name, SET COMPRESS_LOB or something
like this).
Thomas
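[Editor's note] The point about small fields can be demonstrated with a plain java.util.zip round-trip. This is not H2's own COMPRESS/EXPAND (those are SQL functions inside the engine); it is a generic deflate sketch showing why compressing each short VARCHAR value separately gains nothing, since per-value header overhead dominates.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Generic per-value compression round-trip (illustrative, not H2 internals).
// A short value comes out *larger* after deflate; only bigger, repetitive
// values actually shrink -- which is why per-field VARCHAR compression of
// ~60-character values is not worthwhile.
public class FieldCompression {

    static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    static byte[] expand(byte[] data) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }
}
```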

Elango

unread,
Apr 27, 2007, 10:16:16 PM4/27/07
to H2 Database
Hi Mueller,

Thanks for the quick response.

I would like to go with the first option,
i.e. using zip files and extracting them on first start (or whenever
the database changes).

Thank you again
Elango


