Error 500 java.lang.NullPointerException


Tausif Shaikh

Jul 6, 2016, 3:57:16 AM
to Druid User
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/dakh-index.json localhost:8090/druid/indexer/v1/task

Warning: Couldn't read data from file "quickstart/dakh-index.json", this makes 
Warning: an empty POST.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    java.lang.NullPointerException: task</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>

A line in my JSON file looks like this:
{"HTTP_USER_AGENT": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36", "PORTFOLIO_ID": null, "NAME": "no_op", "POPUP_ID": 5, "REMOTE_ADDR": "122.15.120.178", "COUNTRY": "IN", "CREATED_AT": "2016-06-07 08:34:33", "FRAMEWORK_ID": null, "DOMAIN_NAME": "unknown", "TEMPLATE_ID": null, "TOKEN": null, "BUCKET_ID": null, "EMAIL": null, "ID": 1}

Kindly help.
Attachment: dakh-index-task.json

David Lim

Jul 6, 2016, 1:37:33 PM
to Druid User
Hi Tausif,

"Couldn't read data from file "quickstart/dakh-index.json"" is a curl error: curl couldn't find a file at that path, so it sent an empty POST (as the warning says), and the overlord failed with the NullPointerException on the missing task body. Based on the file you attached, you probably want 'dakh-index-task.json'.

Tausif Shaikh

Jul 11, 2016, 6:31:10 AM
to Druid User
Hey, thanks for that. Now I have a new error:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /druid/indexer/v1/task. Reason:
<pre>    javax.servlet.ServletException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('"' (code 34)): was expecting comma to separate ARRAY entries
 at [Source: HttpInputOverHTTP@5c8a7de0; line: 1, column: 768]</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
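Note: Jackson reports "was expecting comma to separate ARRAY entries" when two array elements have no comma between them. A hypothetical sketch of the kind of mistake that triggers it, using dimension names from this spec:

"dimensions": ["TOKEN" "BUCKET_ID"]     <-- missing comma, fails at the second '"'
"dimensions": ["TOKEN", "BUCKET_ID"]    <-- valid JSON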


Tausif Shaikh

Jul 11, 2016, 7:45:30 AM
to Druid User
Solved. It was a JSON error.

Tausif Shaikh

Jul 11, 2016, 8:52:31 AM
to Druid User
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/dakh-index-task.json localhost:8090/druid/indexer/v1/task
This printed: {"task":"index_hadoop_dakhevents_2016-07-11T18:12:41.846Z"}


The task status was FAILED. What could be the reason?
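Note: when a task reports FAILED, the task log is the place to look. Assuming the default quickstart setup, the overlord serves status at GET localhost:8090/druid/indexer/v1/task/<taskId>/status, and its console at localhost:8090/console.html links to each task's full log.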

Tausif Shaikh

Jul 11, 2016, 8:57:28 AM
to Druid User



This is the task payload:
{"task":"index_hadoop_dakhevents_2016-07-11T18:23:35.696Z","payload":{"id":"index_hadoop_dakhevents_2016-07-11T18:23:35.696Z","spec":{"dataSchema":{"dataSource":"dakhevents","parser":{"type":"string","parseSpec":{"format":"json","dimensionsSpec":{"dimensions":["HTTP_USER_AGENT","PORTFOLIO_ID","NAME","POPUP_ID","REMOTE_ADDR","COUNTRY","FRAMEWORK_ID","DOMAIN_NAME","TEMPLATE_ID","TOKEN","BUCKET_ID","EMAIL"]},"timestampSpec":{"format":"auto","column":"CREATED_AT"}}},"metricsSpec":[{"type":"count","name":"count"},{"type":"hyperUnique","name":"email_unique","fieldName":"EMAIL"}],"granularitySpec":{"type":"uniform","segmentGranularity":"DAY","queryGranularity":{"type":"none"},"intervals":["2016-01-01T00:00:00.000Z/2016-01-01T00:00:00.000Z"]}},"ioConfig":{"type":"hadoop","inputSpec":{"type":"static","paths":"quickstart/dakh_events.json"},"metadataUpdateSpec":null,"segmentOutputPath":null},"tuningConfig":{"type":"hadoop","workingPath":null,"version":"2016-07-11T18:23:35.695Z","partitionsSpec":{"type":"hashed","targetPartitionSize":5000000,"maxPartitionSize":7500000,"assumeGrouped":false,"numShards":-1,"partitionDimensions":[]},"shardSpecs":{},"indexSpec":{"bitmap":{"type":"concise"},"dimensionCompression":null,"metricCompression":null},"maxRowsInMemory":75000,"leaveIntermediate":false,"cleanupOnFailure":true,"overwriteFiles":false,"ignoreInvalidRows":false,"jobProperties":{},"combineText":false,"useCombiner":false,"buildV9Directly":false,"numBackgroundPersistThreads":0},"uniqueId":"998a1a38917146e880f294b6bec72986"},"hadoopDependencyCoordinates":null,"classpathPrefix":null,"context":null,"groupId":"index_hadoop_dakhevents_2016-07-11T18:23:35.696Z","dataSource":"dakhevents","resource":{"availabilityGroup":"index_hadoop_dakhevents_2016-07-11T18:23:35.696Z","requiredCapacity":1}}} 

Tauseef Shaikh

Jul 11, 2016, 9:17:08 AM
to druid...@googlegroups.com
Full log:

2016-07-11T18:36:50,708 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Starting flush of map output
2016-07-11T18:36:50,719 INFO [Thread-21] org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2016-07-11T18:36:50,720 WARN [Thread-21] org.apache.hadoop.mapred.LocalJobRunner - job_local704288055_0001
java.lang.Exception: com.metamx.common.RE: Failure on row[{"HTTP_USER_AGENT": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36", "PORTFOLIO_ID": null, "NAME": "no_op", "POPUP_ID": 5, "REMOTE_ADDR": "122.15.120.178", "COUNTRY": "IN", "CREATED_AT": "2016-06-07 08:34:33", "FRAMEWORK_ID": null, "DOMAIN_NAME": "unknown", "TEMPLATE_ID": null, "TOKEN": null, "BUCKET_ID": null, "EMAIL": null, "ID": 1}]
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) [hadoop-mapreduce-client-common-2.3.0.jar:?]
Caused by: com.metamx.common.RE: Failure on row[{"HTTP_USER_AGENT": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36", "PORTFOLIO_ID": null, "NAME": "no_op", "POPUP_ID": 5, "REMOTE_ADDR": "122.15.120.178", "COUNTRY": "IN", "CREATED_AT": "2016-06-07 08:34:33", "FRAMEWORK_ID": null, "DOMAIN_NAME": "unknown", "TEMPLATE_ID": null, "TOKEN": null, "BUCKET_ID": null, "EMAIL": null, "ID": 1}]
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:88) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]
	at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityMapper.run(DetermineHashedPartitionsJob.java:283) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_91]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_91]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_91]
Caused by: com.metamx.common.parsers.ParseException: Unparseable timestamp found!
	at io.druid.data.input.impl.MapInputRowParser.parse(MapInputRowParser.java:72) ~[druid-api-0.9.1.1.jar:0.9.1.1]
	at io.druid.data.input.impl.StringInputRowParser.parseMap(StringInputRowParser.java:136) ~[druid-api-0.9.1.1.jar:0.9.1.1]
	at io.druid.data.input.impl.StringInputRowParser.parse(StringInputRowParser.java:131) ~[druid-api-0.9.1.1.jar:0.9.1.1]
	at io.druid.indexer.HadoopDruidIndexerMapper.parseInputRow(HadoopDruidIndexerMapper.java:98) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:69) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]
	at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityMapper.run(DetermineHashedPartitionsJob.java:283) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_91]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_91]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_91]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_91]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_91]
Caused by: java.lang.IllegalArgumentException: Invalid format: "2016-06-07 08:34:33" is malformed at " 08:34:33"
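Note: the root cause is on the last line above: "2016-06-07 08:34:33" uses a space where ISO 8601 expects a "T", so the spec's "format": "auto" timestamp parser rejects it at the space. One fix, as a minimal sketch assuming every row shares this pattern, is an explicit Joda-time pattern in the timestampSpec:

"timestampSpec": {
  "column": "CREATED_AT",
  "format": "yyyy-MM-dd HH:mm:ss"
}

The alternative is to emit ISO 8601 timestamps (e.g. "2016-06-07T08:34:33") in the source data.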






Tauseef Shaikh

Jul 11, 2016, 9:55:06 AM
to druid...@googlegroups.com
Done again, the index was added. The issue was that the timestamp format was incorrect.

Tauseef Shaikh

Jul 11, 2016, 10:29:19 AM
to druid...@googlegroups.com
Can you help me with the best book, tutorial, or website for learning and understanding JSON queries to Druid?
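Note: the "Querying" section of the official Druid documentation covers the native JSON query types. As a hypothetical sketch (not from this thread), a minimal timeseries query against the datasource built above, POSTed to the broker's /druid/v2 endpoint, could look like:

{
  "queryType": "timeseries",
  "dataSource": "dakhevents",
  "granularity": "day",
  "intervals": ["2016-06-01/2016-07-01"],
  "aggregations": [
    {"type": "longSum", "fieldName": "count", "name": "rows"},
    {"type": "hyperUnique", "fieldName": "email_unique", "name": "unique_emails"}
  ]
}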