Kamanja 1.2 Installation error


arun ravindranath

Dec 17, 2015, 7:19:50 AM
to kam...@ligadata.com
Team

After the installation, while uploading metadata we are getting the error below. There is an error during the initial upload, but the API still reports the upload as successful (see the result after the stack trace).

ERROR [main] - Stacktrace:org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 7 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family serializedInfo does not exist in region fatafatpronto:config_objects,,1450352299280.eb03defc3df43a551a9bb6faf1ef079c. in table 'NameSpace:config_objects', {NAME => 'key', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'value', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:659)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:615)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1901)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31451)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
: 7 times,
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:227)
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:207)
        at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1658)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1470)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1076)
        at com.ligadata.keyvaluestore.HBaseAdapter$$anonfun$put$1.apply(HBaseAdapter.scala:357)
        at com.ligadata.keyvaluestore.HBaseAdapter$$anonfun$put$1.apply(HBaseAdapter.scala:340)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at com.ligadata.keyvaluestore.HBaseAdapter.put(HBaseAdapter.scala:340)
        at com.ligadata.MetadataAPI.MetadataAPIImpl$.SaveObjectList(MetadataAPIImpl.scala:648)
        at com.ligadata.MetadataAPI.MetadataAPIImpl$.UploadConfig(MetadataAPIImpl.scala:5130)
        at com.ligadata.MetadataAPI.Utility.ConfigService$$anonfun$uploadClusterConfig$1.apply(ConfigService.scala:55)
        at com.ligadata.MetadataAPI.Utility.ConfigService$$anonfun$uploadClusterConfig$1.apply(ConfigService.scala:54)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at com.ligadata.MetadataAPI.Utility.ConfigService$.uploadClusterConfig(ConfigService.scala:54)
        at scala.com.ligadata.MetadataAPI.StartMetadataAPI$.route(StartMetadataAPI.scala:289)
        at scala.com.ligadata.MetadataAPI.StartMetadataAPI$.main(StartMetadataAPI.scala:102)
        at scala.com.ligadata.MetadataAPI.StartMetadataAPI.main(StartMetadataAPI.scala)

Result: {
  "APIResults" : {
    "Status Code" : 0,
    "Function Name" : "UploadConfig",
<bank specific value>
    "Result Description" : "Uploaded Config successfully"
  }
}

When we start the cluster it says no node configs are available. When I check by dumping the node details, it gives an error saying no configs are available.
gbrdsr000002264:/apps/kamanja/scripts$ kamanja dump all cfg objects
Using default configuration /apps/kamanja/Install/config/MetadataAPIConfig.properties
 WARN [main] - DATABASE_SCHEMA remains unset
 WARN [main] - DATABASE_LOCATION remains unset
 WARN [main] - DATABASE_HOST remains unset
 WARN [main] - ADAPTER_SPECIFIC_CONFIG remains unset
 WARN [main] - SSL_PASSWD remains unset
 WARN [main] - AUDIT_PARMS remains unset
 WARN [main] - DATABASE remains unset
log4j:WARN No appenders could be found for logger (org.apache.curator.framework.imps.CuratorFrameworkImpl).
log4j:WARN Please initialize the log4j system properly.
Result: {
  "APIResults" : {
    "Status Code" : -1,
    "Function Name" : "GetAllCfgObjects",
    "Result Data" : null,
    "Result Description" : "Failed to fetch all configs. No configs available."
  }
}
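
The NoSuchColumnFamilyException above indicates the writer is targeting a serializedInfo column family, while the existing table only defines the key and value families. One way to confirm this from the HBase shell (a sketch; the fatafatpronto namespace is taken from the error message, so substitute whatever namespace your cluster config uses):

hbase(main):001:0> describe 'fatafatpronto:config_objects'

If the output lists only the key and value families, the table was created with an older schema that lacks serializedInfo.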


William Tarver

Dec 17, 2015, 12:48:57 PM
to kam...@ligadata.com, aru...@gmail.com

Please send me your cluster config and MetadataAPI config and I'll see if I can reproduce it.


Ahmad Ahed Abu Dayyah

Dec 17, 2015, 12:50:34 PM
to kam...@ligadata.com
Hi Arun,

Please make sure that you clean the HBase tables from the previous installation.
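
For reference, a minimal cleanup sketch in the HBase shell, assuming the fatafatpronto namespace from the error log (list first so you catch every leftover metadata table, then disable and drop each one):

hbase(main):001:0> list 'fatafatpronto:.*'
hbase(main):002:0> disable 'fatafatpronto:config_objects'
hbase(main):003:0> drop 'fatafatpronto:config_objects'

After dropping the old tables, re-run the metadata upload so they are recreated with the current schema.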

Mahesh Lalwani

Dec 17, 2015, 1:57:50 PM
to kam...@ligadata.com
Arun,

The Kamanja team is looking into this error and trying to reproduce it here. In this regard, we look forward to the WebEx session to get more details about your config and to discuss the changes in the metadata schema, specifically the serializedInfo column.

Thanks,
- Mahesh.


Ramana Mandava

Dec 17, 2015, 2:06:19 PM
to kam...@ligadata.com, wil...@ligadata.com, aru...@gmail.com
It is likely we are accessing the old table. HBaseAdapter doesn't create the new table if the table already exists. I think it is more of an upgrade issue.
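
To illustrate the pattern described here (a simplified sketch, not the actual HBaseAdapter source; the table and family names are taken from the error log above):

import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory

object EnsureTableSketch {
  def main(args: Array[String]): Unit = {
    val conn  = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val admin = conn.getAdmin
    val table = TableName.valueOf("fatafatpronto:config_objects")
    try {
      if (!admin.tableExists(table)) {
        // Fresh install: the table is created with the current schema.
        val desc = new HTableDescriptor(table)
        desc.addFamily(new HColumnDescriptor("serializedInfo"))
        admin.createTable(desc)
      }
      // Upgrade case: the table already exists, so the branch above is skipped
      // and the old schema is kept. Writes to the serializedInfo family then
      // fail with NoSuchColumnFamilyException, as in the stack trace above.
    } finally {
      admin.close()
      conn.close()
    }
  }
}

An upgrade-aware version would compare the existing descriptor against the expected one and call admin.addColumn for any missing family; until then, dropping the old tables as Ahmad suggested is the workaround.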