running kamanja for medical sample


Alison Apel

Dec 14, 2015, 5:48:27 PM
to kamanja
Hi,
When I run "kamanja start -v" for the medical sample, I get the output below. I'm concerned about this error:

ERROR [pool-8-thread-2] - Failed to get data from container:com.ligadata.kamanja.samples.messages.beneficiary.

Should I just ignore it?
thanks,
alison


WARN [main] - DATABASE_SCHEMA remains unset
WARN [main] - DO_AUTH remains unset
WARN [main] - MODEL_EXEC_LOG remains unset
WARN [main] - JAR_TARGET_DIR remains unset
WARN [main] - ADAPTER_SPECIFIC_CONFIG remains unset
WARN [main] - AUDIT_PARMS remains unset
WARN [main] - ZOOKEEPER_CONNECT_STRING remains unset
WARN [main] - SECURITY_IMPL_CLASS remains unset
WARN [main] - DATABASE remains unset
WARN [main] - AUDIT_IMPL_JAR remains unset
WARN [main] - SECURITY_IMPL_JAR remains unset
WARN [main] - FUNCTION_FILES_DIR remains unset
WARN [main] - MESSAGE_FILES_DIR remains unset
WARN [main] - COMPILER_WORK_DIR remains unset
WARN [main] - CONTAINER_FILES_DIR remains unset
WARN [main] - ZNODE_PATH remains unset
WARN [main] - NOTIFY_ENGINE remains unset
WARN [main] - DATABASE_HOST remains unset
WARN [main] - SCALA_HOME remains unset
WARN [main] - SERVICE_PORT remains unset
WARN [main] - CLASSPATH remains unset
WARN [main] - AUDIT_IMPL_CLASS remains unset
WARN [main] - SSL_PASSWD remains unset
WARN [main] - SSL_CERTIFICATE remains unset
WARN [main] - SERVICE_HOST remains unset
WARN [main] - CONCEPT_FILES_DIR remains unset
WARN [main] - DATABASE_LOCATION remains unset
WARN [main] - JAVA_HOME remains unset
WARN [main] - MANIFEST_PATH remains unset
WARN [main] - TYPE_FILES_DIR remains unset
WARN [main] - MODEL_FILES_DIR remains unset
WARN [main] - DO_AUDIT remains unset
WARN [main] - JAR_PATHS remains unset
WARN [main] - GIT_ROOT remains unset
WARN [main] - CONFIG_FILES_DIR remains unset
WARN [main] - ROOT_DIR remains unset
log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
WARN [main] - KamanjaManager is running now. Waiting for user to terminate with SIGTERM, SIGINT or SIGABRT signals
WARN [main-EventThread] - NodeId:1, IsLeader:false, Leader:1, AllParticipents:{1}
WARN [main-EventThread] - NodeId:1, IsLeader:true, Leader:1, AllParticipents:{1}
WARN [pool-2-thread-1] - Got Redistribution request. Participents are {1}. Looks like all nodes are not yet up. Waiting for 30000 milli seconds to see whether there are any more changes in participents
WARN [pool-2-thread-1] - Distribution NodeId:1, IsLeader:true, Leader:1, AllParticipents:{1}
ERROR [pool-8-thread-2] - Failed to get data from container:com.ligadata.kamanja.samples.messages.beneficiary.
StackTrace:java.io.IOError: java.io.IOException: Wrong index checksum, store was not closed properly and could be corrupted.
at org.mapdb.StoreDirect.checkHeaders(StoreDirect.java:258)
at org.mapdb.StoreDirect.<init>(StoreDirect.java:207)
at org.mapdb.DBMaker.extendStoreDirect(DBMaker.java:918)
at org.mapdb.DBMaker.makeEngine(DBMaker.java:722)
at org.mapdb.DBMaker.make(DBMaker.java:665)
at com.ligadata.keyvaluestore.HashMapAdapter.createTable(HashMapAdapter.scala:140)
at com.ligadata.keyvaluestore.HashMapAdapter.liftedTree1$1(HashMapAdapter.scala:196)
at com.ligadata.keyvaluestore.HashMapAdapter.com$ligadata$keyvaluestore$HashMapAdapter$$CreateContainer(HashMapAdapter.scala:195)
at com.ligadata.keyvaluestore.HashMapAdapter.com$ligadata$keyvaluestore$HashMapAdapter$$CheckTableExists(HashMapAdapter.scala:174)
at com.ligadata.keyvaluestore.HashMapAdapter.get(HashMapAdapter.scala:631)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$.com$ligadata$SimpleEnvContextImpl$SimpleEnvContextImpl$$callGetData(SimpleEnvContextImpl.scala:2295)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$$anonfun$com$ligadata$SimpleEnvContextImpl$SimpleEnvContextImpl$$LoadDataIfNeeded$1.apply(SimpleEnvContextImpl.scala:1011)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$$anonfun$com$ligadata$SimpleEnvContextImpl$SimpleEnvContextImpl$$LoadDataIfNeeded$1.apply(SimpleEnvContextImpl.scala:968)
at scala.collection.immutable.Range.foreach(Range.scala:141)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$.com$ligadata$SimpleEnvContextImpl$SimpleEnvContextImpl$$LoadDataIfNeeded(SimpleEnvContextImpl.scala:968)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$.localGetObject(SimpleEnvContextImpl.scala:684)
at com.ligadata.SimpleEnvContextImpl.SimpleEnvContextImpl$.getObject(SimpleEnvContextImpl.scala:1366)
at com.ligadata.KamanjaManager.LearningEngine.execute(LearningEngine.scala:224)
at com.ligadata.KamanjaManager.ExecContextImpl$$anonfun$execute$1.apply(ExecContext.scala:84)
at com.ligadata.KamanjaManager.ExecContextImpl$$anonfun$execute$1.apply(ExecContext.scala:82)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at com.ligadata.KamanjaManager.ExecContextImpl.execute(ExecContext.scala:82)
at com.ligadata.InputAdapters.KafkaSimpleConsumer$$anonfun$StartProcessing$3$$anon$1$$anonfun$run$1$$anonfun$apply$1.apply$mcV$sp(KafkaSimpleConsumer.scala:288)
at scala.util.control.Breaks.breakable(Breaks.scala:37)
at com.ligadata.InputAdapters.KafkaSimpleConsumer$$anonfun$StartProcessing$3$$anon$1$$anonfun$run$1.apply(KafkaSimpleConsumer.scala:264)
at com.ligadata.InputAdapters.KafkaSimpleConsumer$$anonfun$StartProcessing$3$$anon$1$$anonfun$run$1.apply(KafkaSimpleConsumer.scala:260)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at kafka.message.MessageSet.foreach(MessageSet.scala:67)
at com.ligadata.InputAdapters.KafkaSimpleConsumer$$anonfun$StartProcessing$3$$anon$1.run(KafkaSimpleConsumer.scala:260)
at scala.actors.threadpool.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1060)
at scala.actors.threadpool.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:574)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Wrong index checksum, store was not closed properly and could be corrupted.
... 35 more

ERROR [pool-8-thread-2] - Failed to get data from datastore. Waiting for another 15000 milli seconds and going to start them again.
Executing COPD Risk Assessment against Beneficiary message:
Message Name: Beneficiary
Message Version: 000000.000001.000000
Message Desynpuf ID: 00001C24EE7B06AC
Desynpuf ID: 00001C24EE7B06AC
COPD Risk Level: 
Is Over 40 Years Old: true
Has Smoking History: false
Has Environmental Exposure: false
Has Dyspnea: false
Has Chronic Cough: false
Has Chronic Sputum: false
Has AAT Deficiency: false
Inpatient Claim Costs: 0.0
Outpatient Claim Costs: 0.0
******************************************************************************
...

William Tarver

Dec 14, 2015, 5:59:55 PM
to Alison Apel, kamanja
This error is seen when using the hash map store. It means that your data store is corrupted and you need to clear your data. This usually occurs when two processes open a connection to the hash map at the same time. Shut down your processes, remove all files from $KAMANJA_HOME/storage, and start again.
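The manual cleanup described above could be sketched as a small shell helper. This is a hypothetical sketch, not a Kamanja-provided script: the function name is invented, KAMANJA_HOME is assumed to point at your install, and it assumes you have already shut down the engine so nothing still holds the MapDB files open.

```shell
# Hypothetical cleanup sketch for the corrupted hash-map store.
# Precondition: the Kamanja engine process has already been stopped.
clean_kamanja_storage() {
  local home="${1:?usage: clean_kamanja_storage <KAMANJA_HOME>}"
  # Remove the on-disk hash map files; they are recreated on the next start.
  rm -rf "${home}/storage"/*
}

# Example (destructive -- double-check the path before running):
# clean_kamanja_storage "$KAMANJA_HOME"
```

After the storage directory is cleared, `kamanja start -v` should rebuild the container store from scratch rather than hitting the checksum error.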


--
You received this message because you are subscribed to the Google Groups "kamanja" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kamanja+u...@ligadata.com.
To post to this group, send email to kam...@ligadata.com.
Visit this group at https://groups.google.com/a/ligadata.com/group/kamanja/.

Donald

Dec 17, 2015, 11:19:10 AM
to kamanja, ali...@ligadata.com
Did this fix your problem?

Alison Apel

Dec 17, 2015, 11:57:31 AM
to Donald, kamanja
I understand they are still working on a fix. Stay tuned..


Daniel Kozin

Dec 17, 2015, 1:32:21 PM
to Alison Apel, Donald, kamanja
Hi Donald,

The exceptions you are seeing below are due to the "corrupted" hash map. This issue can be solved by running William's new cleanup script, or by cleaning up your logs and storage directories manually. The cleanup utility has been checked in.

dan