OperationalError: database is locked


Mike Anson

Nov 9, 2012, 9:56:59 AM
to web...@googlegroups.com
Greetings...

I may have a problem with my database (sqlite) locking.

Traceback (most recent call last):
  File "/home/web2py/src/web2py/gluon/scheduler.py", line 218, in executor
    result = dumps(_function(*args,**vars))
  File "applications/ircmessage/models/tasks.py", line 57, in send_unsent_messages
    for row in db(db.messages.status=='unsent').select(db.messages.id, db.messages.message, db.messages.uid):
  File "/home/web2py/src/web2py/gluon/dal.py", line 8787, in select
    return adapter.select(self.query,fields,attributes)
  File "/home/web2py/src/web2py/gluon/dal.py", line 2127, in select
    return super(SQLiteAdapter, self).select(query, fields, attributes)
  File "/home/web2py/src/web2py/gluon/dal.py", line 1615, in select
    return self._select_aux(sql,fields,attributes)
  File "/home/web2py/src/web2py/gluon/dal.py", line 1580, in _select_aux
    self.execute(sql)
  File "/home/web2py/src/web2py/gluon/dal.py", line 1693, in execute
    return self.log_execute(*a, **b)
  File "/home/web2py/src/web2py/gluon/dal.py", line 1687, in log_execute
    ret = self.cursor.execute(*a, **b)
OperationalError: database is locked

I have a scheduler that runs every minute and gets records (messages) out of the DB with a status of unsent. The messages then get posted to an IRC channel and updated to sent so they are not included the next time the scheduler runs the script.

This seems to work just fine.

The problem appears when I inject a new message (which by default has its status set to unsent) via a bash script. The message inserts fine, but my script that posts it to the IRC channel doesn't post anything; it simply updates the status to sent without actually sending it. By sending it I mean posting a message over sockets.

Here is my model for sending unsent messages:
for row in db(db.messages.status=='unsent').select(db.messages.id, db.messages.message, db.messages.uid):
    message_id = row.id
    message_message = row.message
    message_uid = row.uid

    # socket connection already opened earlier in the script
    s.send("PRIVMSG %s :%s - %s\r\n" % (channel, message_uid, message_message))
    print "message %s has been sent" % message_id

    ## Only seems to print the message when a delay is here.
    time.sleep(5)

    ## Set message record to sent and update the modified field
    modified_stamp = strftime("%Y-%m-%d %H:%M:%S")
    db.messages[message_id] = dict(status='sent', modified=modified_stamp)

db.commit()

Inserting a message via jsonrpc (shown) and cURL (not shown):

@service.jsonrpc
def savemessage(message, uid):
    db.messages.insert(message=message, uid=uid)
    db.commit()

    message = {"status": "saved"}

    return message

Should I simply switch to a postgresql or mysql database to prevent this locking?

Thanks for any help/advice in advance

Niphlod

Nov 9, 2012, 10:31:03 AM
to web...@googlegroups.com
If you can, use a separate db for the scheduler. SQLite doesn't handle concurrent writes well (with its default operational settings), so having the scheduler operate on the same database as your message insertions can lead to locks.

Just do

db = DAL('whatever.db')
db2 = DAL('whatever_scheduler.db')

db.define_table('messages', .....)

from gluon.scheduler import Scheduler
mysched = Scheduler(db2)

and to queue tasks you can then use

db2.scheduler_tasks.validate_and_insert(****)

or, with the new API

mysched.queue_task(***)
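Niphlod's point can be seen outside web2py as well: SQLite's write lock is per database file, so a writer on one file never blocks a writer on another. A minimal plain-sqlite3 sketch of that behaviour (throwaway file names, not web2py code):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
app_db = os.path.join(tmp, "whatever.db")              # application data
sched_db = os.path.join(tmp, "whatever_scheduler.db")  # scheduler bookkeeping

for path in (app_db, sched_db):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (v TEXT)")
    conn.commit()
    conn.close()

# One connection holds a write transaction on the application file...
busy = sqlite3.connect(app_db, timeout=0, isolation_level=None)
busy.execute("BEGIN IMMEDIATE")

# ...so a second writer on the SAME file fails at once ('database is locked')
other = sqlite3.connect(app_db, timeout=0, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:
    locked = True

# ...while a writer on the SEPARATE scheduler file is unaffected.
sched = sqlite3.connect(sched_db, timeout=0)
sched.execute("INSERT INTO t VALUES ('task queued')")
sched.commit()

print(locked)  # True: same-file writers collide, cross-file writers do not
```

`timeout=0` makes the second writer fail immediately instead of retrying, which is what the scheduler effectively experiences under contention.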

Vasile Ermicioi

Nov 9, 2012, 10:31:31 AM
to web...@googlegroups.com
what version of sqlite?

import sqlite3
print sqlite3.sqlite_version

for sqlite >=3.7 you can do

db.executesql('PRAGMA journal_mode=WAL')


and I am sure that you will not get this error anymore.
I remember I had something like that some time ago and that worked
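For anyone curious what the pragma actually changes, here is a plain-sqlite3 sketch (temporary file, table name made up, not web2py code) showing that WAL is recorded in the database file itself and that an open reader no longer blocks a committing writer:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "storage.sqlite")

# Enable WAL once; the journal mode is stored inside the database file.
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, status TEXT)")
conn.commit()
conn.close()

# A fresh connection still sees WAL, so the pragma only has to run once.
reader = sqlite3.connect(path, isolation_level=None)
persisted = reader.execute("PRAGMA journal_mode").fetchone()[0]

# In WAL mode a reader holding an open transaction...
reader.execute("BEGIN")
reader.execute("SELECT COUNT(*) FROM messages").fetchone()

# ...no longer blocks a concurrent writer's commit.
writer = sqlite3.connect(path, timeout=0)
writer.execute("INSERT INTO messages (status) VALUES ('unsent')")
writer.commit()  # would raise 'database is locked' under the default journal
reader.execute("COMMIT")

print(mode, persisted)  # wal wal
```

This is exactly why the error disappears for the scheduler-reads-while-app-writes pattern, though WAL still allows only one writer at a time.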

Vasile Ermicioi

Nov 9, 2012, 10:32:52 AM
to web...@googlegroups.com
PS: 
it is enough to execute that only once 
db.executesql('PRAGMA journal_mode=WAL')

then you can remove it

Mike Anson

Nov 9, 2012, 10:42:45 AM
to web...@googlegroups.com
Thank you again Niphlod. That seems like a sensible way for me to separate the two out. I'll give that a try.

Mike Anson

Nov 9, 2012, 10:44:20 AM
to web...@googlegroups.com
I really appreciate your input, Vasile. I will circle back to your advice should Niphlod's suggestion of separating the scheduler DB from my messaging DB not help me out here.

Mike Anson

Nov 13, 2012, 2:54:56 PM
to web...@googlegroups.com
Hi Vasile,

Where would I put this in my script to run it once? Within the script itself, say near the db.commit()?

Cheers,

-Mike


On Friday, 9 November 2012 10:33:01 UTC-5, Vasile Ermicioi wrote:

Vasile Ermicioi

Nov 14, 2012, 12:11:37 AM
to web...@googlegroups.com
first be sure you have sqlite >= 3.7:
1) open python
2) import sqlite3
3) print sqlite3.sqlite_version

if you don't have sqlite >= 3.7 you can't use WAL
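The same check can be scripted with only the stdlib; a small sketch (version numbers will vary by machine):

```python
import sqlite3

# Version of the SQLite library linked into Python (not the module version).
print(sqlite3.sqlite_version)

# WAL was introduced in SQLite 3.7.0, so gate the pragma on the version tuple.
wal_supported = sqlite3.sqlite_version_info >= (3, 7, 0)
print("WAL supported:", wal_supported)
```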

put the code in models/db.py, after the database connection:

db = DAL('sqlite://storage.sqlite')
db.executesql('PRAGMA journal_mode=WAL')

but you can remove it after one run



www.diazluis.com

Nov 14, 2012, 8:01:23 AM
to web...@googlegroups.com
Greetings.
I have the same problem, but sometimes it throws a cache miss, and I do not use the cache; I have not configured it.

I had implemented db.executesql('PRAGMA journal_mode=WAL')


errors:

default.py           OperationalError: database is locked
10_base_config.py    OperationalError: database is locked
planilla.py          ValueError: invalid literal for int() with base 10: 'cache'

Mike Anson

Nov 14, 2012, 11:00:08 AM
to web...@googlegroups.com
I do indeed have greater than 3.7, so I'll see if that prevents the locking. So far so good, although it was very sporadic: the scheduler ran over 70 times before it crapped out, prior to adding the line you suggested.

Thanks again for your responses.

-Mike

kenny c

Oct 14, 2015, 10:03:30 PM
to web2py-users
Hi Niphlod,

Thank you for this information. 

Could you tell me how you manage to copy new data into db from db2 after finishing the scheduler job?

Thank you.

kenny c

Oct 14, 2015, 10:39:17 PM
to web...@googlegroups.com
If SQLite cannot handle many concurrent users writing/reading, should I move to Postgres or MongoDB? There will be over 10k rows added every 10 minutes.

Thanks.

Niphlod

Oct 15, 2015, 11:11:48 AM
to web2py-users
mongodb isn't a supported backend for the scheduler.
And to answer your previous question: db2 is there just to support the scheduler's "internals". If your task still wants to write to db, there's nothing wrong with that.