Problems inserting segments into PostgreSQL database


Pablo Nebrera

Jun 7, 2013, 2:10:29 PM
to druid-de...@googlegroups.com
Hello,

I have seen this error on the realtime node when it inserts a segment into the database.

[2013-06-07 18:06:00,819] ERROR DbSegmentPublisher:98 - Exception inserting into DB
org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "end"
  Position: 65 [statement:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", located:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", rewritten:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{created_date:'2013-06-07T18:06:00.818Z',id:'rb_monitor_2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z_2013-06-07T17:54:00.000Z_-1',dataSource:'rb_monitor',partitioned:-1,start:'2013-06-07T17:54:00.000Z',payload:'{"dataSource":"rb_monitor","interval":"2013-06-07T17:54:00.000Z/2013-06-07T17:55:00.000Z","version":"2013-06-07T17:54:00.000Z","loadSpec":{"type":"s3_zip","bucket":"redborder","key":"rb/rb_monitor/2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z/2013-06-07T17:54:00.000Z/-1/index.zip"},"dimensions":"monitor,type,value","metrics":"events,sum_value","shardSpec":{"type":"linear","partitionNum":-1},"binaryVersion":9,"size":2038,"identifier":"rb_monitor_2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z_2013-06-07T17:54:00.000Z_-1"}',used:true,end:'2013-06-07T17:55:00.000Z',version:'2013-06-07T17:54:00.000Z'}, finder:[]}]
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:262)
at com.metamx.druid.realtime.DbSegmentPublisher.publishSegment(DbSegmentPublisher.java:57)
at com.metamx.druid.realtime.plumber.RealtimePlumberSchool$1$7$1.doRun(RealtimePlumberSchool.java:574)
at com.metamx.druid.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "end"
  Position: 65 [statement:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", located:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", rewritten:"INSERT INTO prod_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{created_date:'2013-06-07T18:06:00.818Z',id:'rb_monitor_2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z_2013-06-07T17:54:00.000Z_-1',dataSource:'rb_monitor',partitioned:-1,start:'2013-06-07T17:54:00.000Z',payload:'{"dataSource":"rb_monitor","interval":"2013-06-07T17:54:00.000Z/2013-06-07T17:55:00.000Z","version":"2013-06-07T17:54:00.000Z","loadSpec":{"type":"s3_zip","bucket":"redborder","key":"rb/rb_monitor/2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z/2013-06-07T17:54:00.000Z/-1/index.zip"},"dimensions":"monitor,type,value","metrics":"events,sum_value","shardSpec":{"type":"linear","partitionNum":-1},"binaryVersion":9,"size":2038,"identifier":"rb_monitor_2013-06-07T17:54:00.000Z_2013-06-07T17:55:00.000Z_2013-06-07T17:54:00.000Z_-1"}',used:true,end:'2013-06-07T17:55:00.000Z',version:'2013-06-07T17:54:00.000Z'}, finder:[]}]
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1306)
at org.skife.jdbi.v2.Update.execute(Update.java:57)
at com.metamx.druid.realtime.DbSegmentPublisher$2.withHandle(DbSegmentPublisher.java:63)
at com.metamx.druid.realtime.DbSegmentPublisher$2.withHandle(DbSegmentPublisher.java:59)
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:259)
... 6 more
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or near "end"
  Position: 65
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1300)
... 10 more



Thanks


Pablo

Eric Tschetter

Jun 7, 2013, 2:14:23 PM
to druid-de...@googlegroups.com
Hrm, that could be because "end" is a reserved word in Postgres.  Perhaps if you single or double quote the name of the column it'll work (and be compatible with MySQL as well)?

DbSegmentPublisher is the place that has the SQL.
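
For reference, a quoted version of that insert would look something like this (just a sketch, with the column list taken from the statement in the error above):

    INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload)
    VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)

Keep in mind that stock MySQL treats double quotes as string quotes unless ANSI_QUOTES is enabled in sql_mode, so backticks may be needed there.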

--Eric





Pablo Nebrera

Jun 7, 2013, 3:16:58 PM
to druid-de...@googlegroups.com
Added the quotes, and now I get a new exception:

[2013-06-07 19:15:00,441] INFO  S3DataSegmentPusher:70 - Deleting zipped index File[/tmp/druid1271564176898721993index.zip]
[2013-06-07 19:15:00,441] INFO  S3DataSegmentPusher:70 - Deleting descriptor file[/tmp/druid2093764762439022784descriptor.json]
[2013-06-07 19:15:00,447] ERROR DbSegmentPublisher:98 - Exception inserting into DB
org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: column "used" is of type smallint but expression is of type boolean
  Hint: You will need to rewrite or cast the expression.
  Position: 145 [statement:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", located:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", rewritten:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{created_date:'2013-06-07T19:15:00.444Z',id:'rb_monitor_2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z_2013-06-07T19:12:00.000Z_-1',dataSource:'rb_monitor',partitioned:-1,start:'2013-06-07T19:12:00.000Z',payload:'{"dataSource":"rb_monitor","interval":"2013-06-07T19:12:00.000Z/2013-06-07T19:13:00.000Z","version":"2013-06-07T19:12:00.000Z","loadSpec":{"type":"s3_zip","bucket":"redborder","key":"rb/rb_monitor/2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z/2013-06-07T19:12:00.000Z/-1/index.zip"},"dimensions":"monitor,type,value","metrics":"events,sum_value","shardSpec":{"type":"linear","partitionNum":-1},"binaryVersion":9,"size":2415,"identifier":"rb_monitor_2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z_2013-06-07T19:12:00.000Z_-1"}',used:true,end:'2013-06-07T19:13:00.000Z',version:'2013-06-07T19:12:00.000Z'}, finder:[]}]
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:262)
at com.metamx.druid.realtime.DbSegmentPublisher.publishSegment(DbSegmentPublisher.java:57)
at com.metamx.druid.realtime.plumber.RealtimePlumberSchool$1$7$1.doRun(RealtimePlumberSchool.java:574)
at com.metamx.druid.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: column "used" is of type smallint but expression is of type boolean
  Hint: You will need to rewrite or cast the expression.
  Position: 145 [statement:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", located:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload)", rewritten:"INSERT INTO prod_segments (id, dataSource, created_date, start, "end", partitioned, version, used, payload) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", arguments:{ positional:{}, named:{created_date:'2013-06-07T19:15:00.444Z',id:'rb_monitor_2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z_2013-06-07T19:12:00.000Z_-1',dataSource:'rb_monitor',partitioned:-1,start:'2013-06-07T19:12:00.000Z',payload:'{"dataSource":"rb_monitor","interval":"2013-06-07T19:12:00.000Z/2013-06-07T19:13:00.000Z","version":"2013-06-07T19:12:00.000Z","loadSpec":{"type":"s3_zip","bucket":"redborder","key":"rb/rb_monitor/2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z/2013-06-07T19:12:00.000Z/-1/index.zip"},"dimensions":"monitor,type,value","metrics":"events,sum_value","shardSpec":{"type":"linear","partitionNum":-1},"binaryVersion":9,"size":2415,"identifier":"rb_monitor_2013-06-07T19:12:00.000Z_2013-06-07T19:13:00.000Z_2013-06-07T19:12:00.000Z_-1"}',used:true,end:'2013-06-07T19:13:00.000Z',version:'2013-06-07T19:12:00.000Z'}, finder:[]}]
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1306)
at org.skife.jdbi.v2.Update.execute(Update.java:57)
at com.metamx.druid.realtime.DbSegmentPublisher$2.withHandle(DbSegmentPublisher.java:63)
at com.metamx.druid.realtime.DbSegmentPublisher$2.withHandle(DbSegmentPublisher.java:59)
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:259)
... 6 more
Caused by: org.postgresql.util.PSQLException: ERROR: column "used" is of type smallint but expression is of type boolean
  Hint: You will need to rewrite or cast the expression.
  Position: 145
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1300)
... 10 more
[2013-06-07 19:15:07,920] INFO  LoggingEmitter:70 - Event [{"feed":"metrics","timestamp":"2013-06-07T19:15:07.920Z","service":"realtime","host":"192.168.106.205:8082","metric":"jvm/mem/max","value":15271002112,"user1":"heap"}]
[2013-06-07 19:15:07,920] INFO  LoggingEmitter:70 - Event [{"feed":"metrics","timestamp":"2013-06-07T19:15:07.920Z","service":"realtime","host":"192.168.106.205:8082","metric":"jvm/mem/committed","value":78774272,"user1":"heap"}]
[2013-06-07 19:15:07,921] INFO  LoggingEmitter:70 - Event [{"feed":"metrics","timestamp":"2013-06-07T19:15:07.920Z","service":"realtime","host":"192.168.106.205:8082","metric":"jvm/mem/used","value":24895648,"user1":"heap"}]
[2013-06-07 19:15:07,921] INFO  LoggingEmitter:70 - Event [{"feed":"me

Eric Tschetter

Jun 7, 2013, 3:21:17 PM
to druid-de...@googlegroups.com
That looks like an issue with the used column...  Does PG not have boolean columns?
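
Postgres does have a native BOOLEAN type; the error suggests the prod_segments table here was created with used as SMALLINT instead. One way to reconcile it (just a sketch, assuming the existing smallint column) would be to convert the column type in place:

    ALTER TABLE prod_segments
      ALTER COLUMN used TYPE BOOLEAN
      USING (used <> 0);  -- convert existing 0/1 values to false/true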


Pablo Nebrera

Jun 7, 2013, 3:26:14 PM
to druid-de...@googlegroups.com
With that change I get another error, this time on druid_master:

Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: org.postgresql.util.PSQLException: ERROR: operator does not exist: boolean = integer
  Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
  Position: 45 [statement:"SELECT payload FROM prod_segments WHERE used=1", located:"SELECT payload FROM prod_segments WHERE used=1", rewritten:"SELECT payload FROM prod_segments WHERE used=1", arguments:{ positional:{}, named:{}, finder:[]}]
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1306)
at org.skife.jdbi.v2.Query.fold(Query.java:172)
at org.skife.jdbi.v2.Query.list(Query.java:84)
at org.skife.jdbi.v2.Query.list(Query.java:78)
at com.metamx.druid.db.DatabaseSegmentManager$9.withHandle(DatabaseSegmentManager.java:400)
at com.metamx.druid.db.DatabaseSegmentManager$9.withHandle(DatabaseSegmentManager.java:396)
at org.skife.jdbi.v2.DBI.withHandle(DBI.java:259)
... 13 more
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: boolean = integer
  Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
  Position: 45
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1300)
... 19 more


Do I change the 1 to true?

Pablo Nebrera

Jun 7, 2013, 3:29:13 PM
to druid-de...@googlegroups.com
I have checked, and there are many places where used is compared to 1 or 0. I have changed them to true or false respectively.

Will that be compatible with MySQL?
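
For reference, the select from the error above would then become something like this (a sketch only; MySQL accepts TRUE and FALSE as literals for 1 and 0, so the same form should work there too):

    SELECT payload FROM prod_segments WHERE used = true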

Pablo Nebrera

Jun 7, 2013, 3:53:26 PM
to druid-de...@googlegroups.com
I have modified it and now it is inserting into the database. I will open a pull request when everything works.

The problem is that I still see all segments on the realtime node. How can I check why they are not being handed off to the compute node?

Thanks 


Pablo



Pablo Nebrera

Jun 7, 2013, 4:01:20 PM
to druid-de...@googlegroups.com
Eric, I think it is working now with Postgres. I will keep you informed of the progress.

Thanks

Pablo