host name problems


aaron morton

Aug 30, 2011, 9:22:00 PM
to Brisk Users
I was testing something, grabbed the current head of the cassandra 0.8 branch, compiled the jar and dropped it into a working brisk beta 2 install. All on my local machine.

Anyone know what could have changed?


On restart I got the error below; the INFO level messages are ones I added to the CFRR…

INFO [main] 2011-08-31 12:05:29,935 ColumnFamilyRecordReader.java (line 157) InetAddress.getAllByName() /172.16.1.4
INFO [main] 2011-08-31 12:05:29,935 ColumnFamilyRecordReader.java (line 167) split.getLocations() 0.0.0.0
INFO [main] 2011-08-31 12:05:29,935 ColumnFamilyRecordReader.java (line 172) InetAddress.getByName() /0.0.0.0
ERROR [main] 2011-08-31 12:05:29,936 SessionState.java (line 343) Failed with exception java.io.IOException:java.io.IOException: java.lang.RuntimeException: java.lang.UnsupportedOperationException: no local connection available
java.io.IOException: java.io.IOException: java.lang.RuntimeException: java.lang.UnsupportedOperationException: no local connection available
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:341)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:133)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1114)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.IOException: java.lang.RuntimeException: java.lang.UnsupportedOperationException: no local connection available
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getRecordReader(HiveCassandraStandardColumnInputFormat.java:109)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:306)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:320)
... 10 more
Caused by: java.lang.RuntimeException: java.lang.UnsupportedOperationException: no local connection available
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.initialize(ColumnFamilyRecordReader.java:137)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getRecordReader(HiveCassandraStandardColumnInputFormat.java:105)
... 12 more
Caused by: java.lang.UnsupportedOperationException: no local connection available
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getLocation(ColumnFamilyRecordReader.java:184)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.initialize(ColumnFamilyRecordReader.java:118)
... 13 more


It looked like a mismatch between the IP address in the splits and the current IP address. Odd, but I tried changing listen_address, rpc_address and the seed to use the 172 address.
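For reference, the relevant bits of cassandra.yaml ended up roughly like this (a sketch from memory, not a verbatim copy — the seed_provider block in the brisk yaml may look slightly different):

# bind the storage (gossip) and thrift interfaces to the real address instead of localhost
listen_address: 172.16.1.4
rpc_address: 172.16.1.4
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "172.16.1.4"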

Then I started getting errors that other things were trying to connect to localhost. So I changed localhost to 172… in the following files (rough sed sketch below):

hadoop/conf/masters
hadoop/conf/slaves
hadoop/webapps/conf/core-site.xml
hadoop/webapps/conf/masters
hadoop/webapps/conf/slaves
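Roughly, something like this (untested, from memory; adjust the address to your own, and it leaves a .bak copy of each file):

for f in hadoop/conf/masters hadoop/conf/slaves \
         hadoop/webapps/conf/core-site.xml hadoop/webapps/conf/masters hadoop/webapps/conf/slaves
do
  # replace every localhost reference with the real interface address
  sed -i.bak 's/localhost/172.16.1.4/g' "$f"
done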


Then I started getting errors on cassandra startup that the task tracker could not connect to 127.0.0.1. So I did this in the CLI:
[default@brisk_system] list jobtracker;
Using default limit of 100
-------------------
RowKey: currentJobTracker
=> (column=current, value=127.0.0.1, timestamp=1314398419720)

1 Row Returned.
[default@brisk_system] del jobtracker['currentJobTracker'];

Then I restarted and got this logged continually:
CassandraJobConf.java (line 102) Chose seed 172.16.1.4 as jobtracker

And I noticed it's not written back to the DB:

[default@brisk_system] list jobtracker;
Using default limit of 100
-------------------
RowKey: currentJobTracker

1 Row Returned.
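If it never gets written back on its own, I guess the row could be put back by hand from the CLI; something like this, going by the column layout above (untested, and assuming plain string values):

[default@brisk_system] set jobtracker['currentJobTracker']['current'] = '172.16.1.4';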

When I start Hive and try to use an existing DB I get this error…

WARN [main] 2011-08-31 13:08:10,866 CassandraProxyClient.java (line 347) Connection failed to Cassandra node: localhost:9160 unable to connect to server
WARN [main] 2011-08-31 13:08:10,866 CassandraProxyClient.java (line 354) No cassandra ring information found, no other nodes to connect to
ERROR [main] 2011-08-31 13:08:11,919 SessionState.java (line 343) FAILED: Error in metadata: org.apache.cassandra.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Could not connect to Cassandra. Reason: Error connecting to node localhost
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.cassandra.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Could not connect to Cassandra. Reason: Error connecting to node localhost
at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:255)
at org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:2959)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:193)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.cassandra.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Could not connect to Cassandra. Reason: Error connecting to node localhost
at org.apache.cassandra.hadoop.hive.metastore.CassandraClientHolder.<init>(CassandraClientHolder.java:60)
at org.apache.cassandra.hadoop.hive.metastore.CassandraHiveMetaStore.setConf(CassandraHiveMetaStore.java:69)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:316)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:268)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:413)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:194)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:159)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:108)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1855)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1865)
at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:251)
... 15 more
Caused by: java.io.IOException: Error connecting to node localhost
at org.apache.cassandra.hadoop.CassandraProxyClient.initialize(CassandraProxyClient.java:208)
at org.apache.cassandra.hadoop.CassandraProxyClient.<init>(CassandraProxyClient.java:180)
at org.apache.cassandra.hadoop.CassandraProxyClient.newProxyConnection(CassandraProxyClient.java:119)
at org.apache.cassandra.hadoop.hive.metastore.CassandraClientHolder.<init>(CassandraClientHolder.java:52)
... 27 more


Looks like the tables are still pointing at localhost.


-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

tjake

Aug 30, 2011, 10:07:19 PM
to Brisk Users
You can set the address of the metastore in hive-site.xml.
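Something along these lines (going from memory, so double-check the property names against your hive-site.xml):

<property>
  <!-- host/port the Cassandra-backed metastore connects to; names are from memory, not verified -->
  <name>cassandra.connection.host</name>
  <value>172.16.1.4</value>
</property>
<property>
  <name>cassandra.connection.port</name>
  <value>9160</value>
</property>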


Patricio Echagüe

Sep 8, 2011, 8:35:01 PM
to brisk...@googlegroups.com
Aaron, were you able to get around this issue?

Leonardo Stern

Sep 8, 2011, 9:32:46 PM
to brisk...@googlegroups.com
+1 to this question.

I needed to install a newer version of Cassandra on Brisk and bumped into this issue. At the end of the day I applied the patch Aaron attached to CASSANDRA-2981 on top of the 0.8.1 version. (I didn't try to solve the issue itself; I was worried about side effects.)


smh

Sep 26, 2011, 7:16:18 PM
to Brisk Users

Hi,

I am seeing the same issue that Aaron is talking about here. Can anyone point me to the right resolution?




Leonardo Stern

Sep 26, 2011, 7:26:49 PM
to brisk...@googlegroups.com
Hi
I'm using Cassandra 0.8.6 (got the binary from the site) on top of Brisk beta 2 and am not seeing this issue anymore (no config tweaking).
What's your Cassandra version?

Did you tweak your configuration? Did you use the same cluster with a previous Cassandra version?
Leonardo

smh

Sep 26, 2011, 8:02:16 PM
to Brisk Users

Thanks Leo for the reply.

I am using Brisk beta 2 and the Cassandra installation that comes out of the box.
I was trying to create a database in Hive for which a corresponding Cassandra keyspace existed, and I would get the exception I pasted below. I progressed somewhat by telling the Hive config to connect to the specific host and port, and this time I get the message that "the database X already exists". Encouraged, I tried the next Hive query, creating an external table like this:

CREATE EXTERNAL TABLE X.sample_table
(row_key string, column_name string, value string)
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler';
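For completeness, the next thing on my list is to point the table at the node explicitly via table properties. I am guessing at the property names here, so treat this as a sketch rather than something known to work:

CREATE EXTERNAL TABLE X.sample_table
(row_key string, column_name string, value string)
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
TBLPROPERTIES ("cassandra.host" = "<my-cassandra-host>", "cassandra.port" = "9160");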

With the CREATE TABLE above, I get the below exception in the logs:

2011-09-26 23:54:03,567 ERROR exec.DDLTask (SessionState.java:printError(343)) - FAILED: Error in metadata: MetaException(message:Unable to connect to the server unable to connect to server)
org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Unable to connect to the server unable to connect to server)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:476)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3146)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:213)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: MetaException(message:Unable to connect to the server unable to connect to server)
at org.apache.hadoop.hive.cassandra.CassandraManager.openConnection(CassandraManager.java:90)
at org.apache.hadoop.hive.cassandra.CassandraStorageHandler.preCreateTable(CassandraStorageHandler.java:143)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:344)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:470)
... 15 more

2011-09-26 23:54:03,571 ERROR ql.Driver (SessionState.java:printError(343)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

So I don't know what's going on here. BTW, I am interested to know how you upgraded an existing Cassandra 0.8.6 cluster to be Brisk-enabled. What did you have to do to get this working?




Leonardo Stern

Sep 26, 2011, 8:25:33 PM
to brisk...@googlegroups.com
There's an automap feature in Hive that creates tables for all the keyspaces / column families you have. Maybe all your tables are really already mapped.

Try
Drop Table X.SampleTable;

and also
use X;
show tables;

select * from X.SampleTable;

For Cassandra 0.8.6, I did:

1 - mv /usr/share/brisk/cassandra/lib /usr/share/brisk/cassandra/lib-bkp
2 - cp -r BINARYCASSANDRA/lib /usr/share/brisk/cassandra/

regards,
Leonardo

Subrahmanya Harve

Sep 26, 2011, 8:59:19 PM
to brisk...@googlegroups.com

I tried selecting from the table like you suggested, but it complained that the table does not exist, so I think no tables exist.
I will now try to go the route you went - upgrading to 0.8.6 to be Brisk-aware. Looks like all we have to do is copy the binaries from the raw Cassandra installation to the Brisk cassandra lib and things should work fine? Can you confirm again that you didn't have to tweak any settings for Hive to work well with your Cassandra?

Leonardo Stern

Sep 26, 2011, 9:12:07 PM
to brisk...@googlegroups.com
Yes, I can confirm that I didn't change any configuration or anything else.
I was having some problems with mixed component versions, so I picked a new Amazon EC2 instance, installed the Brisk b2 binaries and copied in the Cassandra lib (0.8.6).
I executed some Hive queries to make sure everything was OK (I was interested in finding out about my problem, so I was taking one step at a time).
Later I upgraded the Hive Cassandra handler… but you don't need to, only if you want to avoid some bugs related to counter columns.

regards,
Leonardo

smh

Sep 27, 2011, 5:35:28 PM
to Brisk Users
Thank you Leo.

I tried upgrading the Brisk installation to Cassandra 0.8.6, but I now see the original exception I was seeing (There was a problem with the Cassandra Hive MetaStore: Could not connect to Cassandra. Reason: Error connecting to node localhost). I then changed hive-site.xml to provide the Cassandra seed host and port, and then I see the exception I sent in my earlier mail.

Here is what I did:
- Downloaded the Brisk binaries and formed a cluster
- Downloaded Cassandra 0.8.6 and copied the lib from there into brisk/resources/cassandra/lib (rough commands below)
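Concretely, the copy step was roughly this (paths as the tarballs are laid out on my box; apache-cassandra-0.8.6 is wherever the 0.8.6 binary was unpacked):

# keep the original brisk lib around, then drop in the 0.8.6 jars
mv brisk/resources/cassandra/lib brisk/resources/cassandra/lib-orig
cp -r apache-cassandra-0.8.6/lib brisk/resources/cassandra/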

The version upgrade worked just fine and the brisk cluster formed well
with no issues, but the hive problem still exists. I wonder if there
is anyone else seeing similar issues.
