
[BUGS] BUG #14273: Tuple concurrently updated error while creating function using async call from Node.js with PostgreSQL


hargud...@gmail.com

Aug 2, 2016, 9:35:13 AM

The following bug has been logged on the website:

Bug reference: 14273
Logged by: Kishor Hargude
Email address: hargud...@gmail.com
PostgreSQL version: 9.4.8
Operating system: Ubuntu
Description:

Hi,

I am facing a strange issue on PostgreSQL 9.4.8.

Below is a description of the issue.

We have connected our Node.js application to PostgreSQL 9.4.8. Several
functions are created in the database through asynchronous calls from
Node.js to the PostgreSQL server, but while creating those functions we
encounter a "tuple concurrently updated" error. Because of this error our
Node.js application is terminated, which we cannot afford.
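
Roughly, the pattern on our side is like the sketch below (this assumes
node-postgres with pg.Pool; the connection settings and the statement text
are placeholders, and the real DDL is the DROP/CREATE shown in the PG logs
further down):

// Sketch only: the same DDL issued from several pooled connections at about
// the same time.
var pg = require('pg');
var pool = new pg.Pool({ /* connection settings (placeholder) */ });

// Actual statement elided; see the PG log excerpt below for the real
// DROP FUNCTION IF EXISTS ... / CREATE OR REPLACE FUNCTION ... text.
var ddl = '...';

// Each call may be served by a different pooled connection, so two backends
// can end up updating the same pg_proc row at the same time.
for (var i = 0; i < 5; i++) {
  pool.query(ddl, function (err) {
    if (err) {
      console.error(err); // "tuple concurrently updated" surfaces here
    }
  });
}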

Below are the exact error lines from the Node.js app logs.

[error: tuple concurrently updated]
name: 'error',
length: 83,
severity: 'ERROR',
code: 'XX000',
detail: undefined,
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'heapam.c',
line: '4097',
routine: 'simple_heap_update' }
error: tuple concurrently updated
at Connection.parseE
(/var/www/tea/tea/gamma/node_modules/pg/lib/connection.js:539:11)
at Connection.parseMessage
(/var/www/tea/tea/gamma/node_modules/pg/lib/connection.js:366:17)
at Socket.<anonymous>
(/var/www/tea/tea/gamma/node_modules/pg/lib/connection.js:105:22)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:764:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:426:10)
at emitReadable (_stream_readable.js:422:5)
at readableAddChunk (_stream_readable.js:165:9)
at Socket.Readable.push (_stream_readable.js:127:10)
at TCP.onread (net.js:528:21)

********************************************
Below are the exact error lines from the PostgreSQL logs.

tuple concurrently updated
2016-08-02 10:13:06 IST:[unknown]:teadbuser:teadb_mixdb STATEMENT: DROP
FUNCTION IF EXISTS get_changeoverview_new_added_components(integer,
character varying,character varying,character varying, numeric, numeric);

CREATE OR REPLACE FUNCTION
get_changeoverview_new_added_components(
.
.
.
.

We tried executing those functions with transaction_isolation set to
serializable, but we encountered the same issue.

Please suggest how I can avoid this error. Or is this a bug in the severity
of the message ("tuple concurrently updated") for this kind of issue? Should
it be only a warning?

--
Kishor.


--
Sent via pgsql-bugs mailing list (pgsql...@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Alvaro Herrera

Aug 2, 2016, 2:00:41 PM

hargud...@gmail.com wrote:

> We have connected our Node.js application to PostgreSQL 9.4.8. Several
> functions are created in the database through asynchronous calls from
> Node.js to the PostgreSQL server, but while creating those functions we
> encounter a "tuple concurrently updated" error.

Yeah, this is known. We haven't worried too much, because the use case
seems thin: why are you deleting and creating the function over and
over? Wouldn't it be better to create the function once and be done
with it?

> Because of this error our Node.js application is terminated, which we
> cannot afford.

Surely the application should be able to do something else upon
receiving an error, rather than crashing.
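
For instance, a rough sketch (placeholders only, not the reporter's actual
code) of catching the failure in node-postgres instead of letting it take
the process down:

// Sketch: handle the query error rather than crashing.
var pg = require('pg');
var pool = new pg.Pool({ /* connection settings (placeholder) */ });
var ddl = '...'; // the DROP/CREATE FUNCTION statement (elided)

pool.query(ddl, function (err) {
  if (err && /tuple concurrently updated/.test(err.message)) {
    // Another connection is (re)creating the same function at the same time;
    // log it (or retry later) instead of exiting.
    console.warn('concurrent function creation:', err.message);
    return;
  }
  if (err) {
    console.error('unexpected error while creating function:', err);
    return;
  }
  // created successfully
});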

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Tom Lane

Aug 2, 2016, 3:31:00 PM

Alvaro Herrera <alvh...@2ndquadrant.com> writes:
> hargud...@gmail.com wrote:
>> We have connected our Node.js application to PostgreSQL 9.4.8. Several
>> functions are created in the database through asynchronous calls from
>> Node.js to the PostgreSQL server, but while creating those functions we
>> encounter a "tuple concurrently updated" error.

> Yeah, this is known. We haven't worried too much, because the use case
> seems thin: why are you deleting and creating the function over and
> over? Wouldn't it be better to create the function once and be done
> with it?

Note that each one of those create calls, before failing, will have had
to wait for the previous updater to commit or not. We could imagine
installing alternate behavior such as "don't fail if the new state of the
pg_proc row is exactly what you wanted anyway"; but I do not see any
useful semantics that would not involve waiting to see if the prior
transaction commits. And then there's the cost of the bloat that you're
creating in the pg_proc catalog. So quite aside from the possibility of
a concurrent-update failure, you'd be much better off from a performance
standpoint if you take the trouble to not repetitively create the same
function.
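
As a rough sketch of that (placeholder names and settings, nothing from the
original report), "create it once" could look like running the DDL a single
time at application startup and only calling the function afterwards:

// Sketch: install the function once at startup instead of per request.
var pg = require('pg');
var pool = new pg.Pool({ /* connection settings (placeholder) */ });
var createFunctionSql = '...'; // the CREATE OR REPLACE FUNCTION text (elided)

pool.query(createFunctionSql, function (err) {
  if (err) {
    console.error('could not install function:', err);
    process.exit(1);
  }
  // hypothetical: start serving requests only after the function exists
});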

regards, tom lane

Michael Paquier

Aug 2, 2016, 8:14:58 PM

On Wed, Aug 3, 2016 at 3:00 AM, Alvaro Herrera <alvh...@2ndquadrant.com> wrote:
> hargud...@gmail.com wrote:
>
>> We have connected our Node.js application to PostgreSQL 9.4.8. Several
>> functions are created in the database through asynchronous calls from
>> Node.js to the PostgreSQL server, but while creating those functions we
>> encounter a "tuple concurrently updated" error.
>
> Yeah, this is known. We haven't worried too much, because the use case
> seems thin: why are you deleting and creating the function over and
> over? Wouldn't it be better to create the function once and be done
> with it?

Hm. These days I have been pinged about the fact that it is possible
to reach this error on a relation that has a lot of updates and where
autovacuum is made more aggressive; well, that increases the likelihood
of hitting it. simple_heap_delete() is not used except for catalogs, so
could it be related to one of the pg_stat catalogs complaining about a
VACUUM ANALYZE running in parallel?

>> Because of this error our Node.js application is terminated, which we
>> cannot afford.
>
> Surely the application should be able to do something else upon
> receiving an error, rather than crashing.

That's for sure. A crash is a bad idea, and I recall that this is not
a critical failure.
--
Michael

Kishor Hargude

Aug 3, 2016, 8:42:03 AM

Hi,

Thanks everyone for your valuable inputs.

I got it now: the issue may be happening because the pg_proc catalog row for
the same function is updated by different connections at the same time (due
to asynchronous Node.js calls creating the same function over multiple
database connections).
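
As one possible mitigation consistent with that (a sketch with placeholders;
this was not suggested above), we could funnel all function-creation DDL
through a single client, one statement at a time, so two connections never
update the same pg_proc row concurrently:

// Sketch: run the DDL statements sequentially on one dedicated client.
var pg = require('pg');
var client = new pg.Client({ /* connection settings (placeholder) */ });

var ddlStatements = ['...', '...']; // the CREATE OR REPLACE statements (elided)

client.connect(function (err) {
  if (err) { return console.error(err); }
  var i = 0;
  (function next(e) {
    if (e) { return console.error(e); }
    if (i >= ddlStatements.length) { return client.end(); }
    client.query(ddlStatements[i++], next); // one statement at a time
  })();
});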


--
Kishor 