[sqlite] SQLite as a Logger: How to mimic "rotation of logs"?


Lynton Grice

May 10, 2011, 4:09:14 AM
to General Discussion of SQLite Database
Hi there,

SQLite is a perfect fit for a logger. The only question I have is that once
it is in production my database will grow rapidly, so how can I implement /
mimic a type of "rotating log"?

So in my mind I am thinking that perhaps I can LIMIT the size of the
SQLite DB to, say, 5 MB? And once the DB reaches that size it starts
INSERTING new logs over the earliest records in the database?

Is this possible?

Thanks for the help

Lynton

Stephan Beal

May 10, 2011, 5:44:10 AM
to General Discussion of SQLite Database
On Tue, May 10, 2011 at 10:09 AM, Lynton Grice <lynton...@logosworld.com> wrote:

> So in my mind I am thinking that perhaps I can LIMIT the size of the
> SQLIte DB to say 5 MB? And once the DB reaches that size it starts
> INSERTING new logs over the earliest records in the database?
>
> Is this possible?


The API doesn't provide a way to do this, but i have done the following in
the past... keep a counter in the log function, and every (say) 100 calls,
run a DELETE to clean up what you want. In my case i simply got the ID of
the last-inserted log entry and ran DELETE ... WHERE ID<(the_last_id-100).

Not a perfect solution, but for simple apps it suffices.
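
For a pure C app, a rough sketch of that counter approach could look like
this (the "logs(id, msg)" table and the limit of 100 are just placeholders,
and error handling is omitted):

#include <sqlite3.h>

/* Keep roughly the last 100 rows of a hypothetical logs(id, msg) table. */
static void log_msg(sqlite3 *db, const char *msg){
    static int calls = 0;
    char *sql = sqlite3_mprintf("INSERT INTO logs(msg) VALUES(%Q)", msg);
    sqlite3_exec(db, sql, 0, 0, 0);
    sqlite3_free(sql);

    if( ++calls % 100 == 0 ){   /* every 100 calls, trim the old entries */
        sqlite3_int64 last = sqlite3_last_insert_rowid(db);
        sql = sqlite3_mprintf("DELETE FROM logs WHERE id < %lld", last - 100);
        sqlite3_exec(db, sql, 0, 0, 0);
        sqlite3_free(sql);
    }
}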

--
----- stephan beal
http://wanderinghorse.net/home/stephan/

Enrico Thierbach

May 10, 2011, 5:52:52 AM
to General Discussion of SQLite Database
Hi Lynton,

I don't think sqlite (or any SQL database, for that matter) is a perfect fit for a logger, because there is a certain amount of write overhead.
Why do you think you would want to do this?

/eno

Stephan Beal

May 10, 2011, 6:06:52 AM
to General Discussion of SQLite Database
On Tue, May 10, 2011 at 11:52 AM, Enrico Thierbach <e...@open-lab.org> wrote:

> I don't think sqlite (or any SQL database, for that matter) is a perfect
> fit for a logger, because there is a certain amount of write overhead.
> Why do you think you would want to do this?
>

ALL insertions in a db are, in effect, some form of logging. In embedded
apps with no stdout/stderr (e.g. WinCE), using sqlite as a logging
destination can be quite useful (and easy to set up).

@Lynton: i've got some C++ code implementing such a logger here:
http://fossil.wanderinghorse.net/repos/sqlite-stuff/index.cgi/finfo?name=sq3/sq3_log_db.hpp
http://fossil.wanderinghorse.net/repos/sqlite-stuff/index.cgi/finfo?name=sq3/sq3_log_db.cpp

that might be useful as a basis (e.g. it includes a printf-like log()
routine and a trim() function to remove all but the last N entries).

Enrico Thierbach

May 10, 2011, 6:15:13 AM
to General Discussion of SQLite Database

On 10.05.2011, at 12:06, Stephan Beal wrote:

> On Tue, May 10, 2011 at 11:52 AM, Enrico Thierbach <e...@open-lab.org> wrote:
>
>> I don't think sqlite (or any SQL database, for that matter) is a perfect
>> fit for a logger, because there is a certain amount of write overhead.
>> Why do you think you would want to do this?
>>
>
> ALL db insertions in a db are, in effect, some form of logging. In embedded
> apps with no stdout/stderr (e.g. WinCE) using sqlite as a logging
> destination can be quite useful (and easy to set up).

Yes and no: logging is an (append-only) write to an already opened file or network socket, and no indexes need to be updated, while inserting a record into a database has to fiddle with internal database structures, which is less performant than just writing a few bytes to an already open handle.

Of course, constraints on an embedded device are different from those on, say, a Unix server, and logging to a database is easy to set up, especially if the database is already there :). In other scenarios file system logging generally wins, and not only performance-wise, but because there are plenty of tools to work with those files; unless, of course, there is a specific need to use a database.

/eno

Lynton Grice

May 10, 2011, 6:33:09 AM
to General Discussion of SQLite Database
Hi Enrico,

Well I have been looking at using a nice lightweight C logger.....and
played around with LOG4C and Pantheios, and also checked out using
syslog-ng, matlog and standard syslog....but in the end decided that for
this specific application an SQLite logger would work well.....

I am already using SQLite as a queue so the administrators are already
comfortable using the SQLite command line ;-)

Lynton

Lynton Grice

May 10, 2011, 6:35:15 AM
to General Discussion of SQLite Database
Hi Stephan,

Thanks for this, much appreciated. My application is written in pure C,
so I guess I will not be able to use your C++ code?

Chat later

Lynton


Lynton Grice

May 10, 2011, 6:42:14 AM
to General Discussion of SQLite Database

Hi all,

Thanks for your comments...much appreciated......

BTW: if I am using SQLite as a logger, what PRAGMA statement can I use
to FIX the sqlite database size at, say, "5 MB"?

Also, let's say I have an AUTOINCREMENT INTEGER PRIMARY KEY, what will
happen when it reaches 5 MB? Will it just keep returning SQLITE_FULL or
similar? I guess I am looking for a "round robin queue" here?

While I'm on it, if I have an AUTOINCREMENT INTEGER PRIMARY KEY with a
LOGGER implementation, and the integer reaches its limit (even though I
am deleting previous records), will the sqlite database assign "unused
primary keys" (previously deleted) to any NEW inserts?

Thanks for the help ;-)

Lynton

Stephan Beal

May 10, 2011, 6:42:46 AM
to General Discussion of SQLite Database
On Tue, May 10, 2011 at 12:35 PM, Lynton Grice <lynton...@logosworld.com> wrote:

> Thanks for this, much appreciated. My application is written in pure C,
> so I guess I will not be able to use your C++ code?
>

It's actually just a thin coating over C, and the whole class is quite
small, so it would be easy to strip out the bits you can re-use. The log
routine, for example, internally uses vsnprintf() (which is C99, not C89, by
the way). The trim() function just prepares and executes a statement.

Afriza N. Arief

May 10, 2011, 6:57:26 AM
to General Discussion of SQLite Database
On May 10, 2011 4:09 PM, "Lynton Grice" <lynton...@logosworld.com> wrote:
>
> Hi there,

>
> how can I implement /
> mimic a type of "rotating log"?
>
> So in my mind I am thinking that perhaps I can LIMIT the size of the
> SQLIte DB to say 5 MB? And once the DB reaches that size it starts
> INSERTING new logs over the earliest records in the database?
>

What I did was to create a separate thread / process which is suspended most
of the time. At a certain interval it wakes up and does cleanup according
to some conditions.

In my case, it DELETEs records older than 60 days using the
timestamp column I added. I used REAL for the timestamp as I read somewhere
that it will save space and be faster for sorting. It also uses its own DB
connection, so SQLite will deal with any concurrency issues.

DELETE FROM logs WHERE timestamp < julianday('now','-60 days')

Alternatively, you can keep only the last N records, for example the last 1000.

DELETE FROM logs WHERE rowid IN (SELECT rowid FROM logs ORDER BY timestamp
DESC LIMIT 1000,-1)

By issuing a DELETE statement, you will actually reuse the storage space for
subsequent new data. Issue VACUUM if you want to actually reclaim the unused
space.
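
A rough C sketch of such a periodic cleanup, using its own connection as
described above (file and table names are placeholders; the scheduling /
threading part is left out):

#include <sqlite3.h>

/* Called from a timer or a background thread every few minutes. */
static void prune_logs(const char *db_path){
    sqlite3 *db = 0;
    if( sqlite3_open(db_path, &db) == SQLITE_OK ){
        /* drop entries older than 60 days ... */
        sqlite3_exec(db,
            "DELETE FROM logs WHERE timestamp < julianday('now','-60 days')",
            0, 0, 0);
        /* ... or keep only the newest 1000 (LIMIT 1000,-1 = skip 1000, no limit) */
        sqlite3_exec(db,
            "DELETE FROM logs WHERE rowid IN "
            "(SELECT rowid FROM logs ORDER BY timestamp DESC LIMIT 1000,-1)",
            0, 0, 0);
    }
    sqlite3_close(db);
}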

Afriza N. Arief

Simon Slavin

May 10, 2011, 7:34:54 AM
to General Discussion of SQLite Database

On 10 May 2011, at 11:42am, Lynton Grice wrote:

> BTW: if I am using SQLIte as a logger, what PRAGMA statement can I use
> to say FIX the sqlite database size to say "5 MB"?

There isn't one. SQLite would not know which records to delete.

> Also, lets say I have a AUTOINCREMENT INTEGER PRIMARY KEY, what will
> happen when it reaches 5 MB? Will it just keep returning SQLITE_FULL or
> similar? I guess I am looking for a "round robin queue" here?

A round robin queue is fine. Every so often, to kill off old records do

SELECT max(rowid) FROM myTable

then in your code subtract from it however many rows you want to keep, then do

DELETE FROM myTable WHERE rowid < firstToRetain

It won't work perfectly but it's simple and fast.
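
In C that could be sketched roughly as follows (assuming the table is called
myTable and you want to keep the newest rows_to_keep records; error handling
omitted):

#include <sqlite3.h>

static void kill_old_records(sqlite3 *db, sqlite3_int64 rows_to_keep){
    sqlite3_stmt *st = 0;
    sqlite3_int64 max_rowid = 0;

    if( sqlite3_prepare_v2(db, "SELECT max(rowid) FROM myTable", -1, &st, 0) == SQLITE_OK
        && sqlite3_step(st) == SQLITE_ROW ){
        max_rowid = sqlite3_column_int64(st, 0);
    }
    sqlite3_finalize(st);

    if( sqlite3_prepare_v2(db, "DELETE FROM myTable WHERE rowid < ?", -1, &st, 0) == SQLITE_OK ){
        sqlite3_bind_int64(st, 1, max_rowid - rows_to_keep);  /* firstToRetain */
        sqlite3_step(st);
    }
    sqlite3_finalize(st);
}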

> While I'm on it, if I have an AUTOINCREMENT INTEGER PRIMARY KEY with a
> LOGGER implementation, and the integer reaches it's limit (even though I
> am deleting previous records), will the sqlite database assign "un-used
> primary keys" (previously deleted) to any NEW inserts?

A SQLite integer can get /really/ big: 2^63. There's no way you could ever write enough records to push it over the limit. Your hardware will disintegrate first.

Simon.

Lauri Nurmi

May 10, 2011, 8:57:51 AM
to General Discussion of SQLite Database
On Tue, 10-05-2011 at 12:34 +0100, Simon Slavin wrote:
> On 10 May 2011, at 11:42am, Lynton Grice wrote:
>
> > BTW: if I am using SQLIte as a logger, what PRAGMA statement can I use
> > to say FIX the sqlite database size to say "5 MB"?
>
> There isn't one. SQLite would not know which records to delete.

If such a pragma existed, SQLite wouldn't necessarily need to delete
anything; it could behave the way it does when trying to write to a
full disk.

Simon Slavin

May 10, 2011, 9:08:44 AM
to General Discussion of SQLite Database

On 10 May 2011, at 1:57pm, Lauri Nurmi wrote:

> On Tue, 10-05-2011 at 12:34 +0100, Simon Slavin wrote:
>> On 10 May 2011, at 11:42am, Lynton Grice wrote:
>>
>>> BTW: if I am using SQLIte as a logger, what PRAGMA statement can I use
>>> to say FIX the sqlite database size to say "5 MB"?
>>
>> There isn't one. SQLite would not know which records to delete.
>
> If such a pragma existed, SQLite wouldn't need to delete anything
> necessarily, it could behave like it behaves when trying to write to a
> full disk.

Good idea. Or introduce a new result code for 'Database has reached maximum allowed size'. Presumably it would be implemented by fixing the number of pages. Might be useful for small platforms like cellphones, where running out of memory is a disaster.

Simon.

Adam DeVita

May 10, 2011, 9:22:34 AM
to General Discussion of SQLite Database
Why not use INSERT OR REPLACE to your advantage?

If you set the maximum number of log entries you want to keep, and keep
track of the id used by your log insert statement, you could wrap it like this:

/* initialize... could also be initialized by getting the log entry id of
   the min date in your log at the beginning of your program */
int this_log_entry_id = 1;
....

if (this_log_entry_id > max_log_entries) {
    this_log_entry_id = 1;
} else {
    this_log_entry_id++;
}

call_insert_function(this_log_entry_id /* becomes the primary key that you
                     are inserting or replacing */, data_to_be_logged, .....

Adam

--
VerifEye Technologies Inc.
905-948-0015x245
151 Whitehall Dr, Unit 2
Markham ON, L3R 9T1
Canada

Enrico Thierbach

May 10, 2011, 9:28:09 AM
to General Discussion of SQLite Database
>
> A round robin queue is fine. Every so often, to kill off old records do
>
> SELECT max(rowid) FROM myTable
>
> then in your code subtract from it however many rows you want to keep, then do
>
> DELETE FROM myTable WHERE rowid < firstToRetain
>
> It won't work perfectly but it's simple and fast.
>

You could even do something like that in an ON INSERT trigger. And with an AUTOINCREMENT primary key (see http://www.sqlite.org/autoinc.html)
you would not even have to SELECT max(rowid), as the max rowid is then the rowid of the new record.

As a side note: even when run from a separate thread, inserting the log entries and deleting old entries would lock the database, thus affecting the main thread. You would probably want a separate database for that.
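
For illustration, a trigger along those lines could be installed from C like
this (a hypothetical "logs" table and a limit of 10000 rows; NEW.id plays the
role of max(rowid)):

#include <sqlite3.h>

/* Run once after opening the database. */
static int install_log_trigger(sqlite3 *db){
    return sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS logs("
        "  id  INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  msg TEXT);"
        "CREATE TRIGGER IF NOT EXISTS logs_trim "
        "AFTER INSERT ON logs BEGIN "
        "  DELETE FROM logs WHERE id <= NEW.id - 10000; "
        "END;",
        0, 0, 0);
}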

/eno

Nico Williams

May 10, 2011, 9:36:23 AM
to General Discussion of SQLite Database
Or just a function to return the size of the current DB. Mind you,
automatically deleting rows from a log table isn't enough: you may
have to periodically VACUUM the DB, or you may have to set up
auto_vacuum (and incremental_vacuum).

I have code like this in one DB:

CREATE TABLE IF NOT EXISTS LogConfig
  (opt TEXT PRIMARY KEY NOT NULL UNIQUE,
   val);
INSERT OR IGNORE INTO LogConfig (opt, val)
  VALUES ('replication_retain_time', '7 days');
INSERT OR IGNORE INTO LogConfig (opt, val)
  VALUES ('replication_retain_count', '10000');
INSERT OR IGNORE INTO LogConfig (opt, val)
  VALUES ('history_retain_time', '7 days');
INSERT OR IGNORE INTO LogConfig (opt, val)
  VALUES ('history_retain_count', '10000');
...
DROP TRIGGER IF EXISTS EntryLog_prune;
CREATE TRIGGER EntryLog_prune
AFTER INSERT ON EntryLog
FOR EACH ROW BEGIN
  DELETE FROM EntryLog
   WHERE mtime < strftime('%s', 'now', '-' || (
           SELECT lc.val
             FROM LogConfig lc
            WHERE lc.opt = 'history_retain_time')) AND
         tx < ((SELECT t.tx FROM TX t ORDER BY t.tx DESC LIMIT 1) - (
           SELECT val FROM LogConfig WHERE opt = 'history_retain_count'));
END;

It'd be simple to change the above to use a max DB size if there were
a function to return the current DB size. See the note above regarding
vacuum. Something like this:

CREATE TRIGGER EntryLog_prune
AFTER INSERT ON EntryLog
FOR EACH ROW BEGIN
  DELETE FROM EntryLog
   WHERE tx < (SELECT min(el.tx) FROM EntryLog el) + (
           SELECT lc.val FROM LogConfig lc WHERE lc.opt = 'delete_at_once')
     AND db_size() > (SELECT lc.val FROM LogConfig lc WHERE lc.opt = 'max_db_size');
END;
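
db_size() is not a built-in SQL function; one possible way to supply it is a
small user-defined function that multiplies PRAGMA page_count by PRAGMA
page_size, roughly like this:

#include <sqlite3.h>

static void db_size_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    sqlite3 *db = sqlite3_context_db_handle(ctx);
    sqlite3_stmt *st = 0;
    sqlite3_int64 pages = 0, page_size = 0;
    (void)argc; (void)argv;

    if( sqlite3_prepare_v2(db, "PRAGMA page_count", -1, &st, 0) == SQLITE_OK
        && sqlite3_step(st) == SQLITE_ROW ) pages = sqlite3_column_int64(st, 0);
    sqlite3_finalize(st);

    if( sqlite3_prepare_v2(db, "PRAGMA page_size", -1, &st, 0) == SQLITE_OK
        && sqlite3_step(st) == SQLITE_ROW ) page_size = sqlite3_column_int64(st, 0);
    sqlite3_finalize(st);

    sqlite3_result_int64(ctx, pages * page_size);
}

/* Register once per connection, e.g. right after sqlite3_open():
**   sqlite3_create_function(db, "db_size", 0, SQLITE_UTF8, 0,
**                           db_size_func, 0, 0);
*/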

Danny

May 10, 2011, 10:05:23 AM
to sqlite...@sqlite.org
Why not have TWO tables? Log_A and Log_B?

When Log_A is full, DELETE everything from Log_B and start logging to it. When Log_B is full, DELETE everything from Log_A and start logging to it again.

If you want, while logging to one, the other can be archived ...



Lynton Grice

May 10, 2011, 10:23:35 AM
to General Discussion of SQLite Database
Hi Simon,

Thanks for the feedback below, I will write some "delete logic" to run
from time to time.......;-)

Lynton

Lynton Grice

May 10, 2011, 10:32:24 AM
to General Discussion of SQLite Database
Hi Enrico,

I like the ON INSERT trigger.....good idea. So perhaps you have a
"setLogMaxSize" type function in C that allows the client program to say
"hey, I only want the log to hold a max of 10 000 records".....and then
I do a select count(*) inside the ON INSERT type trigger and delete
entries if "num records > max"......

Regarding the locking, I am using a WAL mode database, so I would say we
are talking microseconds anyway? So I am not too worried.....or should I be?

Thanks

Lynton

Stephan Beal

May 10, 2011, 10:38:31 AM
to General Discussion of SQLite Database
On Tue, May 10, 2011 at 4:32 PM, Lynton Grice <lynton...@logosworld.com> wrote:

> I like the ON INSERT trigger.....good idea. So perhaps you have a


> "setLogMaxSize" type function in C that allows the client program to say
> "hey, I only want the log to hold a max of 10 000 records".....and then
> I do a select count(*) inside the ON INSERT type trigger and delete
> entries if "num records > max"......
>

The problem i see with that is that once the log has overflowed one time,
further log calls will run a DELETE very often. Why?

max_log_count=10
current_log_count=9

log("foo");
current_log_count=10 == clean up ==
current_log_count=9

log("bar")
current_log_count=10 == clean up ==
current_log_count=9
...
ad nauseam

Lynton Grice

May 10, 2011, 10:39:47 AM
to General Discussion of SQLite Database
Hey Nico,

Now this is great.....in fact I was playing with an "update hook" the
other day....and was going to put the deletion logic under the
SQLITE_INSERT below....

But your code looks better ;-) Thanks !

void update_callback( void* udp, int type, const char* db_name,
                      const char* tbl_name, sqlite3_int64 rowid ){
    switch( type ){
        case SQLITE_INSERT:
            /* Do deletion logic */
            break;
        case SQLITE_DELETE:
            /* Do something */
            break;
    }
}

Lynton Grice

May 10, 2011, 10:56:43 AM
to General Discussion of SQLite Database
Yup, fair enough.......what I think would be better is to have a
variable set called something like "history_retain_time" (like Nico
said)......and perhaps a "log_check_interval" in DAYS or HOURS or
MINUTES....whatever suits the application.....

Then perhaps on each insert the code gets the current TIME, and if the
time since the last check is >= "log_check_interval" then it is time to
DELETE all records older than the "history_retain_time".....

I think that would be better.....
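
A rough C sketch of that time-based check (placeholder names, a julianday()
REAL timestamp column as in Afriza's example, single-threaded, no error
handling):

#include <sqlite3.h>
#include <time.h>

#define LOG_CHECK_INTERVAL  (60 * 60)    /* check at most once per hour */

/* Call this from the logging code after each insert. */
static void maybe_prune(sqlite3 *db){
    static time_t last_check = 0;
    time_t now = time(0);

    if( now - last_check >= LOG_CHECK_INTERVAL ){
        last_check = now;
        sqlite3_exec(db,
            "DELETE FROM logs WHERE timestamp < julianday('now','-7 days')",
            0, 0, 0);   /* '-7 days' plays the role of history_retain_time */
    }
}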

Chat later

Lynton



Jay A. Kreibich

May 10, 2011, 1:00:23 PM
to General Discussion of SQLite Database

On Tue, May 10, 2011 at 12:42:14PM +0200, Lynton Grice scratched on the wall:

>
> Hi all,
>
> Thanks for your comments...much appreciated......
>
> BTW: if I am using SQLIte as a logger, what PRAGMA statement can I use
> to say FIX the sqlite database size to say "5 MB"?

PRAGMA max_page_count = N

http://www.sqlite.org/pragma.html#pragma_max_page_count

This simply fixes the total size of the database. If you attempt to
perform an operation (such as an INSERT) that would push it over the
limit, you get an out-of-space error.

This PRAGMA needs to be set every time the database is opened.
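
For the 5 MB figure mentioned earlier, the page count would be derived from
the page size, along these lines (a sketch; call it right after sqlite3_open()):

#include <sqlite3.h>

/* Cap the database at roughly 5 MB. */
static void cap_db_size(sqlite3 *db){
    sqlite3_stmt *st = 0;
    sqlite3_int64 page_size = 1024;           /* fallback guess */
    char *sql;

    if( sqlite3_prepare_v2(db, "PRAGMA page_size", -1, &st, 0) == SQLITE_OK
        && sqlite3_step(st) == SQLITE_ROW ){
        page_size = sqlite3_column_int64(st, 0);
    }
    sqlite3_finalize(st);

    sql = sqlite3_mprintf("PRAGMA max_page_count = %lld",
                          (sqlite3_int64)5 * 1024 * 1024 / page_size);
    sqlite3_exec(db, sql, 0, 0, 0);           /* e.g. 5120 pages at 1 KB pages */
    sqlite3_free(sql);
}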

> Also, lets say I have a AUTOINCREMENT INTEGER PRIMARY KEY, what will
> happen when it reaches 5 MB? Will it just keep returning SQLITE_FULL or
> similar?

Yes.

> I guess I am looking for a "round robin queue" here?

I'd do something like this. This keeps a constant number of messages
in the log. The "msg_id" provides a message counter, while the
"msg_seq" is used to keep the round-robin offset. The use of a view
is required to enable an INSTEAD OF trigger. There might be a way to
do this directly against the table, but I'm not all that
experienced with SQLite triggers.

===========================================================================

-- Create storage table:
CREATE TABLE log_t (
    msg_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    msg_seq  INTEGER UNIQUE,
    time     TEXT DEFAULT CURRENT_TIMESTAMP,
    msg      TEXT );

-- Init sqlite_sequence table:
INSERT INTO log_t ( msg_seq, msg ) VALUES ( -1, 'init' );
DELETE FROM log_t WHERE msg_seq = -1;

-- Create view:
CREATE VIEW log AS SELECT msg_id, msg_seq, time, msg FROM log_t;

CREATE TRIGGER log_seq_trg
INSTEAD OF INSERT ON log
FOR EACH ROW
BEGIN
    INSERT OR REPLACE INTO log_t ( msg_seq, msg ) VALUES (
        ( SELECT seq + 1 FROM sqlite_sequence WHERE name = 'log_t' ) %

        -- THIS VALUE CONTROLS THE NUMBER OF MESSAGES KEPT IN THE LOG:
        10,

        NEW.msg );
END;

===========================================================================

To test:

===========================================================================
INSERT INTO log ( msg ) VALUES ( 'a' );
INSERT INTO log ( msg ) VALUES ( 'b' );
INSERT INTO log ( msg ) VALUES ( 'c' );
INSERT INTO log ( msg ) VALUES ( 'd' );
INSERT INTO log ( msg ) VALUES ( 'e' );
INSERT INTO log ( msg ) VALUES ( 'f' );
INSERT INTO log ( msg ) VALUES ( 'g' );
INSERT INTO log ( msg ) VALUES ( 'h' );
INSERT INTO log ( msg ) VALUES ( 'i' );
INSERT INTO log ( msg ) VALUES ( 'j' );
INSERT INTO log ( msg ) VALUES ( 'k' );
INSERT INTO log ( msg ) VALUES ( 'l' );
INSERT INTO log ( msg ) VALUES ( 'm' );
INSERT INTO log ( msg ) VALUES ( 'n' );
INSERT INTO log ( msg ) VALUES ( 'o' );
INSERT INTO log ( msg ) VALUES ( 'p' );
INSERT INTO log ( msg ) VALUES ( 'q' );
INSERT INTO log ( msg ) VALUES ( 'r' );
INSERT INTO log ( msg ) VALUES ( 's' );
INSERT INTO log ( msg ) VALUES ( 't' );
INSERT INTO log ( msg ) VALUES ( 'u' );
INSERT INTO log ( msg ) VALUES ( 'v' );
INSERT INTO log ( msg ) VALUES ( 'w' );
INSERT INTO log ( msg ) VALUES ( 'x' );
INSERT INTO log ( msg ) VALUES ( 'y' );
INSERT INTO log ( msg ) VALUES ( 'z' );
===========================================================================

sqlite> SELECT * FROM log;
18|8|2011-05-10 13:38:14|q
19|9|2011-05-10 13:38:14|r
20|0|2011-05-10 13:38:14|s
21|1|2011-05-10 13:38:14|t
22|2|2011-05-10 13:38:14|u
23|3|2011-05-10 13:38:14|v
24|4|2011-05-10 13:38:14|w
25|5|2011-05-10 13:38:14|x
26|6|2011-05-10 13:38:14|y
27|7|2011-05-10 13:38:14|z

===========================================================================

Note that because I'm grabbing the msg_seq value out of the
sqlite_sequence table, there is some chance of minor screw-ups if the
logging system is deeply involved in transactions and check-points
that might invalidate a sequence number. Don't do that.


> While I'm on it, if I have an AUTOINCREMENT INTEGER PRIMARY KEY with a
> LOGGER implementation, and the integer reaches it's limit

It won't. It's a 63 bit value**. Even if you're inserting a million
rows per second, it will take almost 300,000 years to run out of
numbers. The only way you'll run out is if a very high value is
manually inserted, throwing off the sequence.

** OK, technically it is a 64 bit signed value, but sequences start
at 1 by default, limiting them to half the number domain-- hence 63.

> will the sqlite database assign "un-used
> primary keys" (previously deleted) to any NEW inserts?

No, not with an AUTOINCREMENT: http://www.sqlite.org/autoinc.html

"If the table has previously held a row with the largest possible
ROWID, then new INSERTs are not allowed and any attempt to insert
a new row will fail with an SQLITE_FULL error."

-j

--
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
but showing it to the wrong people has the tendency to make them
feel uncomfortable." -- Angela Johnson

Jean-Christophe Deschamps

May 10, 2011, 1:26:38 PM
to j...@kreibi.ch, General Discussion of SQLite Database
Without a view (but with a trigger) and certainly open to improvement
(9 is the MAX_ENTRIES parameter):

CREATE TABLE "log" (
  "id"   INTEGER NOT NULL PRIMARY KEY ON CONFLICT REPLACE AUTOINCREMENT,
  "seq"  INTEGER CONSTRAINT "ix1Seq" UNIQUE ON CONFLICT REPLACE,
  "data" CHAR);

CREATE TRIGGER "trInsLog"
AFTER INSERT
ON "log"
BEGIN
  update log set seq = id % 9 where id = (select max(id) from log);
END;

insert into log (data) values ('a');
insert into log (data) values ('b');
insert into log (data) values ('c');
insert into log (data) values ('d');
insert into log (data) values ('e');
insert into log (data) values ('f');
insert into log (data) values ('g');
insert into log (data) values ('h');
insert into log (data) values ('i');
insert into log (data) values ('j');
insert into log (data) values ('k');
insert into log (data) values ('l');
insert into log (data) values ('m');
insert into log (data) values ('o');
insert into log (data) values ('p');
insert into log (data) values ('q');
insert into log (data) values ('r');
insert into log (data) values ('s');
insert into log (data) values ('t');
insert into log (data) values ('u');
insert into log (data) values ('v');
insert into log (data) values ('w');
insert into log (data) values ('x');
insert into log (data) values ('y');
insert into log (data) values ('z');
insert into log (data) values ('Here you are!');

Nico Williams

May 10, 2011, 1:30:39 PM
to j...@kreibi.ch, General Discussion of SQLite Database
On Tue, May 10, 2011 at 12:00 PM, Jay A. Kreibich <j...@kreibi.ch> wrote:
>> I guess I am looking for a "round robin queue" here?
>
>  I'd do something like this.  This keeps a constant number of messages
>  in the log.  The "msg_id" provides a message counter, while the
>  "msg_seq" is used to keep the round-robin offset.  The use of a view
>  is required to enable a INSTEAD OF trigger.  There might be a way to
>  do this directly against the a table, but I'm not all that
>  experienced with SQLite triggers.

You can get the effect of INSTEAD OF triggers on actual tables by
using a BEFORE INSERT trigger that ends with a SELECT RAISE(IGNORE):

CREATE TABLE t1(a);
CREATE TABLE t2(a);
CREATE TRIGGER t BEFORE INSERT ON t1 BEGIN
  INSERT INTO t2 SELECT NEW.a;
  SELECT RAISE(IGNORE);
END;

Thanks for the tip about the max_page_count pragma!

And, thinking about the way SQLite3 works today, re-using rowids --
using rowids as a circular list -- should help keep the page count
down, particularly if there's a maximum size for the rows being added
into that table.

Nico
--

Lynton Grice

May 10, 2011, 2:09:54 PM
to j...@kreibi.ch, General Discussion of SQLite Database
Hi Jay,

Wow, thanks for your detailed message below.....much appreciated ;-)

I will try the PRAGMA and also the "msg_seq".....great.....

Lynton


Gerry Snyder

May 10, 2011, 2:23:28 PM
to General Discussion of SQLite Database

If this has already been suggested, I apologize.

Add an integer column with a UNIQUE ON CONFLICT REPLACE constraint. Then
after you figure out how many entries are enough (maxcount), insert each
row, specifying that column as mod((last_insert_rowid()+1), maxcount) or
however you specify a modulus or remainder. That column will just wrap
around when it hits maxcount; and you extract based on rowid, not that
column, to keep things in proper order.
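
A sketch of that idea, with maxcount hard-coded at 10000 (it assumes a
single connection doing the log inserts, since last_insert_rowid() is
per-connection):

#include <sqlite3.h>

static const char *log_schema =
    "CREATE TABLE IF NOT EXISTS log("
    "  id   INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  slot INTEGER UNIQUE ON CONFLICT REPLACE,"   /* wraps at maxcount */
    "  msg  TEXT);";

/* slot cycles 0..9999; the conflicting (oldest) row is replaced. */
static const char *log_insert =
    "INSERT INTO log(slot, msg) "
    "VALUES ((last_insert_rowid() + 1) % 10000, ?1)";

static void log_msg(sqlite3 *db, const char *msg){
    sqlite3_stmt *st = 0;
    sqlite3_exec(db, log_schema, 0, 0, 0);     /* no-op after the first call */
    if( sqlite3_prepare_v2(db, log_insert, -1, &st, 0) == SQLITE_OK ){
        sqlite3_bind_text(st, 1, msg, -1, SQLITE_TRANSIENT);
        sqlite3_step(st);
    }
    sqlite3_finalize(st);
}

/* Read back in insertion order with: SELECT msg FROM log ORDER BY id; */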

And don't worry about maxing out on rowid.

HTH,

Gerry
