java.lang.IllegalStateException: Joda-time 2.2 or later version is required, but found version: null
at com.amazonaws.util.DateUtils.handleException(DateUtils.java:147)
at com.amazonaws.util.DateUtils.parseRFC822Date(DateUtils.java:195)
at com.amazonaws.services.s3.internal.ServiceUtils.parseRfc822Date(ServiceUtils.java:73)
at com.amazonaws.services.s3.internal.AbstractS3ResponseHandler.populateObjectMetadata(AbstractS3ResponseHandler.java:115)
at com.amazonaws.services.s3.internal.S3MetadataResponseHandler.handle(S3MetadataResponseHandler.java:32)
at com.amazonaws.services.s3.internal.S3MetadataResponseHandler.handle(S3MetadataResponseHandler.java:25)
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:974)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3736)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1027)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1005)
at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy30.retrieveMetadata(Unknown Source)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:743)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1402)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:637)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:803)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:186)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:133)
at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.init(MapTask.java:822)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:425)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
Caused by: java.lang.IllegalArgumentException: Invalid format: "Fri, 05 Jun 2015 08:24:51 GMT" is malformed at "GMT"
at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:747)
at com.amazonaws.util.DateUtils.parseRFC822Date(DateUtils.java:193)
Googling, the only thing I found was this: http://mail-archives.us.apache.org/mod_mbox/spark-user/201504.mbox/%3CCADRmTZJm3r5+6B6zq2HXqy9xyewSp_47HeH2KQDjS7JtdjCqTw@mail.gmail.com%3E
There the issue was resolved by moving from AMI version 3.6.0 to 3.5.0. I tried that, but I am still getting the same error. Any ideas?
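If it helps to narrow it down: a tiny standalone check like the sketch below (my own illustration, not Snowplow or SDK code) retries the parse from the stack trace with whatever joda-time jar is on the classpath. The RFC 822 pattern is an assumption modelled on what the SDK's DateUtils appears to parse; the point is that joda-time 2.1, which some EMR AMIs put on the Hadoop classpath, fails it with exactly the "malformed at GMT" error above, while joda-time 2.2+ parses it fine.

import java.util.Locale;

import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

// Hypothetical diagnostic class, not part of the job.
public class Rfc822ParseCheck {
    public static void main(String[] args) {
        // Assumed RFC 822 pattern; 'GMT' is a literal, matching the date in the error message.
        DateTimeFormatter rfc822 = DateTimeFormat
                .forPattern("EEE, dd MMM yyyy HH:mm:ss 'GMT'")
                .withLocale(Locale.US)
                .withZoneUTC();
        // Succeeds on joda-time 2.2+; on 2.1 and older it throws
        // IllegalArgumentException: Invalid format: ... is malformed at "GMT".
        System.out.println(rfc822.parseMillis("Fri, 05 Jun 2015 08:24:51 GMT"));
    }
}

Run it with the same joda-time jar the task JVM sees, to confirm which copy of the library is actually being picked up.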
Which EMR job step is throwing the exception, and how far into its operation?
A
Cheers,
Alex
2015-06-05 07:36:19,947 INFO [IPC Server handler 33 on 35925] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1433489536106_0003_m_000010_0: Error: cascading.pipe.OperatorException: [com.snowplowanalytics....][com.twitter.scalding.RichPipe.each(RichPipe.scala:471)] operator Each failed executing operation
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:107)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
at cascading.flow.stream.FunctionEachStage$1.collect(FunctionEachStage.java:80)
at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:133)
at com.twitter.scalding.FlatMapFunction$$anonfun$operate$2.apply(Operations.scala:48)
at com.twitter.scalding.FlatMapFunction$$anonfun$operate$2.apply(Operations.scala:46)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at com.twitter.scalding.FlatMapFunction.operate(Operations.scala:46)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
at cascading.flow.stream.SourceStage.map(SourceStage.java:102)
at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:130)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
Caused by: java.lang.NullPointerException
at com.snowplowanalytics.snowplow.enrich.common.utils.JsonUtils$.stripInstanceEtc(JsonUtils.scala:240)
at com.snowplowanalytics.snowplow.enrich.common.utils.JsonUtils$.extractJson(JsonUtils.scala:204)
at com.snowplowanalytics.snowplow.enrich.common.utils.JsonUtils$.validateAndReformatJson(JsonUtils.scala:189)
at com.snowplowanalytics.snowplow.enrich.common.utils.JsonUtils$$anonfun$1.apply(JsonUtils.scala:59)
at com.snowplowanalytics.snowplow.enrich.common.utils.JsonUtils$$anonfun$1.apply(JsonUtils.scala:58)
at com.snowplowanalytics.snowplow.enrich.common.enrichments.EnrichmentManager$$anonfun$4.apply(EnrichmentManager.scala:103)
at com.snowplowanalytics.snowplow.enrich.common.enrichments.EnrichmentManager$$anonfun$4.apply(EnrichmentManager.scala:103)
at com.snowplowanalytics.snowplow.enrich.common.utils.MapTransformer$$anonfun$1.apply(MapTransformer.scala:158)
at com.snowplowanalytics.snowplow.enrich.common.utils.MapTransformer$$anonfun$1.apply(MapTransformer.scala:155)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at com.snowplowanalytics.snowplow.enrich.common.utils.MapTransformer$.com$snowplowanalytics$snowplow$enrich$common$utils$MapTransformer$$_transform(MapTransformer.scala:155)
at com.snowplowanalytics.snowplow.enrich.common.utils.MapTransformer$TransformableClass.transform(MapTransformer.scala:132)
at com.snowplowanalytics.snowplow.enrich.common.enrichments.EnrichmentManager$.enrichEvent(EnrichmentManager.scala:193)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$apply$3.apply(EtlPipeline.scala:81)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$apply$3.apply(EtlPipeline.scala:80)
at scalaz.NonEmptyList$class.map(NonEmptyList.scala:29)
at scalaz.NonEmptyListFunctions$$anon$4.map(NonEmptyList.scala:164)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2.apply(EtlPipeline.scala:80)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2.apply(EtlPipeline.scala:78)
at scalaz.Validation$class.map(Validation.scala:114)
at scalaz.Success.map(Validation.scala:329)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1.apply(EtlPipeline.scala:78)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1$$anonfun$apply$1.apply(EtlPipeline.scala:76)
at scala.Option.map(Option.scala:145)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1.apply(EtlPipeline.scala:76)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$$anonfun$1.apply(EtlPipeline.scala:74)
at scalaz.Validation$class.map(Validation.scala:114)
at scalaz.Success.map(Validation.scala:329)
at com.snowplowanalytics.snowplow.enrich.common.EtlPipeline$.processEvents(EtlPipeline.scala:74)
at com.snowplowanalytics.snowplow.enrich.hadoop.EtlJob$$anonfun$7.apply(EtlJob.scala:172)
at com.snowplowanalytics.snowplow.enrich.hadoop.EtlJob$$anonfun$7.apply(EtlJob.scala:171)
at com.twitter.scalding.MapFunction.operate(Operations.scala:58)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
... 21 more
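One way to satisfy that requirement is to pin joda-time 2.2 or later (for example 2.8.1) in the build, so a compatible version is at least available at runtime: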
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.8.1</version>
</dependency>
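For reference, the message itself comes from the version check in the SDK's com.amazonaws.util.DateUtils (the handleException frame near the top of the stack trace):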
/**
 * Returns the original runtime exception iff the joda-time being used
 * at runtime behaves as expected.
 *
 * @throws IllegalStateException if the joda-time being used at runtime
 *         doesn't appear to be of the right version.
 */
private static <E extends RuntimeException> E handleException(E ex) {
    if (JodaTime.hasExpectedBehavior())
        return ex;
    throw new IllegalStateException(
        "Joda-time 2.2 or later version is required, but found version: " + JodaTime.getVersion(), ex);
}
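The "found version: null" part is also worth noting: assuming JodaTime.getVersion() reads the version from the joda-time jar's manifest, a jar without an Implementation-Version entry (or a repackaged/shaded copy of org.joda.time) would report null no matter which release it really is. Here is a small probe under that assumption (JodaVersionProbe is just an illustrative name, not SDK code):

import org.joda.time.DateTimeZone;

// Hypothetical probe: report which jar supplies org.joda.time and what version its
// manifest declares, on the classpath the job actually runs with.
public class JodaVersionProbe {
    public static void main(String[] args) {
        // Where the org.joda.time classes were loaded from.
        System.out.println("Loaded from: "
                + DateTimeZone.class.getProtectionDomain().getCodeSource().getLocation());
        // Read from META-INF/MANIFEST.MF of that jar; may legitimately be null.
        Package p = DateTimeZone.class.getPackage();
        System.out.println("Implementation-Version: "
                + (p == null ? "n/a" : p.getImplementationVersion()));
    }
}

If that prints a location inside the AMI's Hadoop lib directory rather than the job's own jar, the pinned dependency above is being shadowed by an older joda-time.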
--
You received this message because you are subscribed to the Google Groups "Snowplow" group.
To unsubscribe from this group and stop receiving emails from it, send an email to snowplow-use...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.