As mentioned in another thread, I recently moved one of my tables from sqlite to postgres. Importing the data went well, and for a while inserts went well too. But then I started getting ERROR 23505 (duplicate key). Can I blame this on pg8000? [I have an open question elsewhere on getting hooked up to psycopg2.]
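For reference, the postgres connection is made roughly like this (a sketch with placeholder credentials, host, and database name, not the real ones):

# Sketch of the DAL connection in the model file; the ':pg8000' suffix tells
# pydal to use the pg8000 driver rather than psycopg2. Placeholders throughout.
dby = DAL('postgres:pg8000://user:password@localhost/updater')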
The table def:
db.define_table('updreq_y',
    Field('unitaddr', 'string'),
    Field('unitname', 'string', requires=IS_LENGTH(60)),
    Field('unitid', 'string', default='', requires=IS_LENGTH(60)),
    Field('country', 'string', requires=IS_LENGTH(2)),
    Field('reqtime', 'datetime'),
    Field('firsttime', 'datetime', default=request.now),
    Field('upcount', 'integer', default=1)
)
There's also a before_update hook to bump upcount, but no additional constraints.
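The hook is roughly this (a sketch, not the exact code; pydal passes each _before_update callback the Set being updated plus the new field values):

# Rough sketch of the before_update hook that bumps upcount (real code may differ).
def _bump_upcount(query_set, fields):
    row = query_set.select(db.updreq_y.upcount).first()
    fields['upcount'] = ((row.upcount or 0) if row else 0) + 1

db.updreq_y._before_update.append(_bump_upcount)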
Looking at the table from psql:
postgres=# \d+ updreq_y
Table "public.updreq_y"
Column | Type | Modifiers
| Storage | Stats target | Description
-----------+-----------------------------+--------------------------------------
-----------------+----------+--------------+-------------
id | integer | not null default nextval('updreq_y_id
_seq'::regclass) | plain | |
unitaddr | character varying(512) |
| extended | |
unitname | character varying(512) |
| extended | |
unitid | character varying(512) |
| extended | |
country | character varying(512) |
| extended | |
reqtime | timestamp without time zone |
| plain | |
firsttime | timestamp without time zone |
| plain | |
upcount | integer |
| plain | |
Indexes:
"updreq_y_pkey" PRIMARY KEY, btree (id)
Has OIDs: no
The imported data (a CSV dump from sqlite) had gaps in the id numbering, and when I switched the postgres table on, the inserts that succeeded for a while were filling in those gaps. Then I started getting errors on insert attempts. I was using update_or_insert(), and thought maybe pg8000 didn't handle that correctly, so I switched to doing my own select and, if no matching rows come back, a plain insert(). Still getting errors. The value of max(id) hasn't changed yet.
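For what it's worth, here's how I'm checking max(id), using the DAL max() aggregate against the same postgres connection ('dby') the insert uses:

# Current max(id) in the postgres table.
max_id = dby.updreq_y.id.max()
print(dby().select(max_id).first()[max_id])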
The field unitid represents data that's being phased in; in the interim, I'm using unitaddr and unitname for the select conditions. Neither is unique by itself, but together they seem to avoid conflicts.
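The select-then-insert fallback looks roughly like this (a sketch; 'varkeys' holds the incoming values, same as in the insert shown next):

# Look for an existing row by unitaddr + unitname; only insert when nothing matches.
rows = dby((dby.updreq_y.unitaddr == request.client) &
           (dby.updreq_y.unitname == varkeys[0])).select()
if not rows:
    pass  # no match: do the plain insert() shown below (this is where the 23505 appears)
else:
    pass  # match found: update the existing row instead (update path omitted)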
The insert looks like:
k = dby.updreq_y.insert(
    unitaddr = request.client,
    unitname = varkeys[0],
    unitid = varkeys[1],
    country = country,
    reqtime = request.now,
    firsttime = request.now)
and the traceback looks like this:
Traceback (most recent call last):
  File "/home/www-data/web2py/gluon/restricted.py", line 219, in restricted
    exec(ccode, environment)
  File "/home/ec2-user/web2py/web2py-2.15.4/web2py/applications/updater/controllers/default.py", line 556, in <module>
  File "/home/www-data/web2py/gluon/globals.py", line 414, in <lambda>
    self._caller = lambda f: f()
  File "/home/www-data/web2py/gluon/tools.py", line 3981, in f
    return action(*a, **b)
  File "/home/ec2-user/web2py/web2py-2.15.4/web2py/applications/updater/controllers/default.py", line 149, in getXML
    firsttime = request.now)
  File "/home/www-data/web2py/gluon/packages/dal/pydal/objects.py", line 734, in insert
    ret = self._db._adapter.insert(self, row.op_values())
  File "/home/www-data/web2py/gluon/packages/dal/pydal/adapters/base.py", line 486, in insert
    raise e
ProgrammingError: ('ERROR', '23505', 'duplicate key value violates unique constraint "updreq_y_pkey"')
Oh, and this is web2py 2.15.4 on Python 2.7.16, AWS Linux.
Any ideas of what I screwed up?
/dps