A java.lang.ArrayIndexOutOfBoundsException occurred when I tried to build a simple LSTM model


yan lu

Sep 25, 2023, 3:50:01 AM
to User Group for BigDL

// Imports needed by the snippet (omitted in the original post):
import com.intel.analytics.bigdl.dataset.Sample
import com.intel.analytics.bigdl.nn.MSECriterion
import com.intel.analytics.bigdl.optim.Optimizer
import com.intel.analytics.bigdl.tensor.Tensor
import org.apache.spark.rdd.RDD

val data: Array[Tensor[Float]] = Array(
  Tensor[Float](Array(1, 2, 3)),
  Tensor[Float](Array(2, 0, 4))
)
val labels: Array[Tensor[Float]] = Array(
  Tensor[Float](Array(1)), // the [Float](...) part was lost in the original post; label shape assumed
  Tensor[Float](Array(1))
)

val samples: Array[Sample[Float]] = (data zip labels).map { case (feature, label) =>
  Sample[Float](feature, label)
}
val samplesRDD: RDD[Sample[Float]] = spark.sparkContext.parallelize(samples)

val optimizer = Optimizer(model = model,
  sampleRDD = samplesRDD,
  criterion = MSECriterion[Float](), // restored from "MSECriterionFloat" (brackets stripped by the forum)
  batchSize = 1)
optimizer.optimize()

The following error occurred when I ran the code above:
23/09/22 20:00:37 ERROR [main] DistriOptimizer$: Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 18) (executor driver): java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at com.intel.analytics.bigdl.tensor.TensorNumericMath$TensorNumeric$NumericFloat$.arraycopy$mcF$sp(TensorNumeric.scala:721)
at com.intel.analytics.bigdl.tensor.TensorNumericMath$TensorNumeric$NumericFloat$.arraycopy(TensorNumeric.scala:715)
at com.intel.analytics.bigdl.tensor.TensorNumericMath$TensorNumeric$NumericFloat$.arraycopy(TensorNumeric.scala:503)
at com.intel.analytics.bigdl.dataset.MiniBatch$.copy(MiniBatch.scala:464)
at com.intel.analytics.bigdl.dataset.MiniBatch$.copyWithPadding(MiniBatch.scala:380)
at com.intel.analytics.bigdl.dataset.ArrayTensorMiniBatch.set(MiniBatch.scala:209)
at com.intel.analytics.bigdl.dataset.ArrayTensorMiniBatch.set(MiniBatch.scala:111)
at com.intel.analytics.bigdl.dataset.SampleToMiniBatch$$anon$2.next(Transformer.scala:348)
at com.intel.analytics.bigdl.dataset.SampleToMiniBatch$$anon$2.next(Transformer.scala:323)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceLeft(TraversableOnce.scala:237)
at scala.collection.TraversableOnce.reduceLeft$(TraversableOnce.scala:220)
at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1431)
at org.apache.spark.rdd.RDD.$anonfun$reduce$2(RDD.scala:1097)
at org.apache.spark.SparkContext.$anonfun$runJob$6(SparkContext.scala:2322)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1516)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
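
A likely trigger (an editorial guess, not confirmed in the thread): Tensor[Float](Array(2, 0, 4)) uses the size-array constructor, so the second feature is a 2x0x4 tensor backed by an empty storage, while the first feature (1x2x3) holds six floats. When MiniBatch pads the samples to a common shape and calls System.arraycopy, copying more elements than the source storage actually holds raises exactly this exception. A minimal BigDL-free sketch of that failure mode, with the hypothetical sizes shown in the comments:

```scala
// Sketch of the underlying JVM failure, without BigDL: asking arraycopy
// to copy more elements than the source array holds is what HotSpot
// rejects with ArrayIndexOutOfBoundsException.
val src = new Array[Float](0) // storage of a 2 x 0 x 4 tensor: zero elements
val dst = new Array[Float](6) // batch buffer padded for 1 x 2 x 3 = 6 floats

val thrown =
  try {
    System.arraycopy(src, 0, dst, 0, dst.length) // overruns the empty source
    false
  } catch {
    case _: ArrayIndexOutOfBoundsException => true
  }

println(s"arraycopy threw ArrayIndexOutOfBoundsException: $thrown")
```

If this is the cause, giving every feature tensor the same, non-zero shape (or checking each sample's element count before building the RDD) should make the batching step succeed.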

huangka...@gmail.com

Sep 27, 2023, 11:02:50 PM
to User Group for BigDL
Follow-up of this issue is here: https://github.com/intel-analytics/BigDL/issues/9051