Row-Key query with scanner


sefoli

Sep 20, 2012, 9:04:24 AM
to hyperta...@googlegroups.com
Hi,

Could you give me an example using row_intervals in the ScanSpec? How can I get this query to return its cells with a scanner in Python?

HQL query: select * from [table] where row='xxx'
Scanner code: ?

regards,
Sefa

Christoph Rupp

Sep 20, 2012, 9:21:50 AM
to hyperta...@googlegroups.com
Hi Sefa,

you can find the ScanSpec object in /opt/hypertable/<version>/lib/py/gen-py/hyperthrift/gen/ttypes.py.

It has a field "row_intervals" (the first parameter of the constructor), which is a list of RowInterval objects (also defined in ttypes.py). Each RowInterval has 4 members:
start_row, start_inclusive, end_row, end_inclusive.

ri = RowInterval("start", True, "end", False)
ss = ScanSpec()
ss.row_intervals = [ ri ]

bye
Christoph
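Putting Christoph's answer together for the exact-match query `select * from [table] where row='xxx'`: use a single interval whose start and end row are both 'xxx', inclusive on both sides. A minimal runnable sketch, using stand-in classes that mirror the generated ttypes.py (the real classes live in hyperthrift.gen.ttypes; the client method names in the trailing comment are assumptions, check your generated client):

```python
# Stand-ins mirroring RowInterval/ScanSpec from the generated ttypes.py
# (hypothetical: the real classes take the same field names).
class RowInterval(object):
    def __init__(self, start_row=None, start_inclusive=True,
                 end_row=None, end_inclusive=True):
        self.start_row = start_row
        self.start_inclusive = start_inclusive
        self.end_row = end_row
        self.end_inclusive = end_inclusive

class ScanSpec(object):
    def __init__(self):
        self.row_intervals = None

# Equivalent of: SELECT * FROM [table] WHERE row = 'xxx'
# -> one interval with start == end == 'xxx', both ends inclusive.
ri = RowInterval("xxx", True, "xxx", True)
ss = ScanSpec()
ss.row_intervals = [ri]

# With the real Thrift bindings you would then hand `ss` to the broker,
# roughly (method names are assumptions, verify against your client):
#   scanner = client.open_scanner(ns, "table", ss)
#   cells = client.next_cells(scanner)
#   client.close_scanner(scanner)
```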


--
You received this message because you are subscribed to the Google Groups "Hypertable Development" group.
To view this discussion on the web visit https://groups.google.com/d/msg/hypertable-dev/-/qU0nFJl-WesJ.
To post to this group, send email to hyperta...@googlegroups.com.
To unsubscribe from this group, send email to hypertable-de...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/hypertable-dev?hl=en.

sefoli

Sep 21, 2012, 4:35:47 AM
to hyperta...@googlegroups.com, ch...@hypertable.com
Thanks Christoph,

I have another question: how do I scan for all rows that start with 'xxx'?

HQL query: select * from [table] where row =^ 'xxx'
Scanner: ?

Christoph Rupp

Sep 21, 2012, 5:29:48 AM
to hyperta...@googlegroups.com

You want to have a prefix search for all rows starting with "xxx"?

In that case you use an inclusive start row "xxx" and an exclusive end row of "xxy" (just increment the last byte).

In Python:
ri = RowInterval("xxx", True, "xxy", False)

bye
Christoph
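The "increment the last byte" trick can be wrapped in a small helper. This is a hypothetical sketch (not part of the Hypertable bindings); it also covers the corner case the simple increment misses, namely a prefix ending in 0xff bytes, which cannot be incremented:

```python
def prefix_end_row(prefix):
    """Exclusive end row for a prefix scan over `prefix`.

    Strips trailing 0xff bytes (they cannot be incremented), then
    increments the last remaining byte. Returns None if the prefix
    is all 0xff, meaning the scan should run to the end of the table.
    """
    s = prefix.encode("latin-1") if isinstance(prefix, str) else prefix
    while s and s[-1] == 0xFF:
        s = s[:-1]
    if not s:
        return None
    return s[:-1] + bytes([s[-1] + 1])

# "xxx" yields b"xxy", which you would pass as the exclusive end row:
#   ri = RowInterval("xxx", True, "xxy", False)
```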


Mehmet Ali Cetinkaya

Oct 3, 2012, 7:42:58 AM
to hyperta...@googlegroups.com
Hello,

I'm using standalone Hypertable. I backed up 180 million cells (a 9 GB gz file) and restored them to a table successfully.

At the same time, I'm running Hypertable (HT) on a Hadoop cluster of 3 machines. There I restored 500,000 cells, but a restore of 800,000 (and more) cells (a 373 MB gz file) into the same table does not complete.

It hangs after about 70%.

How can I resolve this problem?

regards,
mali

-----------------------------------------------------------------

root@dfs1:/opt/hypertable/current/bin# ./restore.sh crawler meta
Restoring 'crawler/meta' ...
cat: create-table-meta.hql: No such file or directory

Welcome to the hypertable command interpreter.
For information about Hypertable, visit http://hypertable.com

Type 'help' for a list of commands, or 'help shell' for a
list of shell meta commands.


Welcome to the hypertable command interpreter.
For information about Hypertable, visit http://hypertable.com

Type 'help' for a list of commands, or 'help shell' for a
list of shell meta commands.


  Elapsed time:  0.00 s

Loading 373,855,030 bytes of input data...

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
*************************

Mehmet Ali Cetinkaya

Oct 3, 2012, 7:44:59 AM
to hyperta...@googlegroups.com
And the logs are:

root@dfs1:~# tail -f /opt/hypertable/current/log/*
==> /opt/hypertable/current/log/archive <==
tail: error reading `/opt/hypertable/current/log/archive': Is a directory
tail: /opt/hypertable/current/log/archive: cannot follow end of this type of file; giving up on this name

==> /opt/hypertable/current/log/DfsBroker.hadoop.log <==
INFO: Opening file '/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0' flags=1 bs=0 handle = 12
Oct 3, 2012 2:29:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
INFO: Readdir('/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574')
Oct 3, 2012 2:29:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
INFO: Getting length of file '/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0' (accurate: true)
Exception in thread "ApplicationQueueThread 15" java.lang.IllegalAccessError: org/apache/hadoop/hdfs/DFSClient$DFSDataInputStream
at org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
at org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLength.java:53)
at org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:98)
at java.lang.Thread.run(Thread.java:679)

==> /opt/hypertable/current/log/Hyperspace.log <==
1349263233 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:1915) open(session_id=26, session_name = hypertable, fname=/hypertable/tables/0/0, flags=0x1, event_mask=0x0)
1349263233 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2044) handle 131 created ('/hypertable/tables/0/0', session=26(hypertable), flags=0x1, mask=0x0)
1349263233 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:698) exitting open(session_id=26, session_name = hypertable, fname=/hypertable/tables/0/0, flags=0x1, event_mask=0x0)
1349263233 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2368) attrget(session=26(hypertable), handle=131, attr=schema)
1349263233 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:750) close(session=26(hypertable), handle=131)
1349263237 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2407) readpathattr(session=26(hypertable), name=/hypertable/namemap/names/crawler, attr=id)
1349263247 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2407) readpathattr(session=26(hypertable), name=/hypertable/namemap/names/crawler/meta, attr=id)
1349263247 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2407) attrget(session=26(hypertable), name=/hypertable/tables/2/6, attr=schema)
1349263247 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2368) attrget(session=26(hypertable), handle=130, attr=Location)
1349263776 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2407) attrexists(session=2(Hypertable.Master), name=/hypertable/tables/2/6, attr=x)

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349263797 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationGatherStatistics.cc:57) Entering GatherStatistics-1193 state=INITIAL
1349263797 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationCollectGarbage.cc:38) Entering CollectGarbage-1194
1349263797 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:53) Entering LoadBalancer-1195
1349263797 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:72) Leaving LoadBalancer-1195
1349263827 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349263857 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349263887 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349263917 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349263947 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349263977 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349263776 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading CellStore 2/6/meta/qyoNKN5rd__dbHKv/cs0
1349263776 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1436) Hypertable::Exception: (a) 2/6[50b2ad7c-0a7e-4193-b18e-4eb8f145867e..��] - RANGE SERVER range not found
at void Hypertable::RangeServer::create_scanner(Hypertable::ResponseCallbackCreateScanner*, const Hypertable::TableIdentifier*, const Hypertable::RangeSpec*, const Hypertable::ScanSpec*, Hypertable::QueryCache::Key*) (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1319)
1349263777 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 2 k/v pairs, more=0
1349263777 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 2 k/v pairs, more=0
1349263777 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1436) Hypertable::Exception: (a) 2/6[50b2ad7c-0a7e-4193-b18e-4eb8f145867e..��] - RANGE SERVER range not found
at void Hypertable::RangeServer::create_scanner(Hypertable::ResponseCallbackCreateScanner*, const Hypertable::TableIdentifier*, const Hypertable::RangeSpec*, const Hypertable::ScanSpec*, Hypertable::QueryCache::Key*) (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1319)
1349263777 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading CellStore 2/6/default/qyoNKN5rd__dbHKv/cs0
1349263789 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RSStats.h:83) Maintenance stats scans=(8 22 1053 0.000002) updates=(3 0 0 0.000000 0)
1349263797 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3038) Entering get_statistics()

==> /opt/hypertable/current/log/MonitoringServer.log <==
from /usr/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/backends/base.rb:63:in `start'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/server.rb:159:in `start'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/controllers/controller.rb:86:in `start'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/runner.rb:187:in `send'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
from /usr/lib/ruby/gems/1.8/gems/thin-1.5.0/bin/thin:6
from /usr/bin/thin:19:in `load'
from /usr/bin/thin:19

==> /opt/hypertable/current/log/ThriftBroker.log <==
Hypertable.RangeServer.QueryCache.MaxMemory=4000000000
Hypertable.RangeServer.Range.SplitSize=2000000000
Hypertable.Verbose=true
ThriftBroker.Port=38080
pidfile=/opt/hypertable/current/run/ThriftBroker.pid
port=38080
reactors=8
verbose=true
1349246511 INFO ThriftBroker : (/root/src/hypertable/src/cc/Hyperspace/Session.cc:63) Hyperspace session setup to reconnect
1349246511 INFO ThriftBroker : (/root/src/hypertable/src/cc/ThriftBroker/ThriftBroker.cc:2487) Starting the server...

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264007 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264037 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264067 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264097 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264127 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264157 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264187 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264217 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264247 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264277 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264307 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264337 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264367 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding

==> /opt/hypertable/current/log/Hyperspace.log <==
1349264376 INFO Hyperspace.Master : (/root/src/hypertable/src/cc/Hyperspace/Master.cc:2407) attrexists(session=2(Hypertable.Master), name=/hypertable/tables/2/6, attr=x)

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264376 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationWaitForServers.cc:37) Entering WaitForServers-4 (state=INITIAL)
1349264376 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationWaitForServers.cc:47) Leaving WaitForServers-4 (state=COMPLETE)
1349264376 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationMoveRange.cc:97) Entering MoveRange-1148 2/6[50b2ad7c-0a7e-4193-b18e-4eb8f145867e..��] state=LOAD_RANGE
1349264376 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationGatherStatistics.cc:100) Leaving GatherStatistics-1193

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264377 ERROR Hypertable.RangeServer : load_range (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1784): Hypertable::Exception: Error getting length of DFS file: /hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0 - HYPERTABLE request timeout
at virtual int64_t Hypertable::DfsBroker::Client::length(const Hypertable::String&, bool) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:451)
at virtual int64_t Hypertable::DfsBroker::Client::length(const Hypertable::String&, bool) (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:445): Event: type=ERROR "HYPERTABLE request timeout" from=127.0.0.1:38030
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/TableInfo.cc:226) Unstaging range 2/6[50b2ad7c-0a7e-4193-b18e-4eb8f145867e..��] to TableInfo
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 2 k/v pairs, more=0

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264377 WARN Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/IOHandlerData.cc:591) Received response for non-pending event (id=2587,version=1,total_len=140)

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 2 k/v pairs, more=0
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 0 k/v pairs, more=0
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1570) Loading range: {TableIdentifier: id='2/6' generation=1} {RangeSpec: start='50b2ad7c-0a7e-4193-b18e-4eb8f145867e' end='��'} {RangeState: state=STEADY timestamp=0 soft_limit=666666666 transfer_log='/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574' split_point='' old_boundary_row=''} needs_compaction=0
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/TableInfo.cc:212) Staging range 2/6[50b2ad7c-0a7e-4193-b18e-4eb8f145867e..��] to TableInfo
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 4 k/v pairs, more=0
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:271) Memory Statistics (MB): VM=878.84, RSS=79.66, tracked=43.00, computed=3857.70 limit=19310.40
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:276) Memory Allocation: BlockCache=0.00% BlockIndex=0.00% BloomFilter=0.00% CellCache=1.11% ShadowCache=0.00% QueryCache=98.89%
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3777) Memory Usage: 45089992 bytes

==> /opt/hypertable/current/log/DfsBroker.hadoop.log <==
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
INFO: Closing noverify input stream for file /hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0 handle 12
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Mkdirs
INFO: Making directory '/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv'
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
INFO: Closing noverify input stream for file /hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0 handle 11

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264377 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationCollectGarbage.cc:47) Leaving CollectGarbage-1194

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3283) Exiting get_statistics()

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264377 WARN Hypertable.Master : (/root/src/hypertable/src/cc/AsyncComm/IOHandlerData.cc:591) Received response for non-pending event (id=2590,version=1,total_len=1194)

==> /opt/hypertable/current/log/DfsBroker.hadoop.log <==
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Mkdirs
INFO: Making directory '/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv'
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
INFO: Testing for existence of file '/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
INFO: Readdir('/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv')
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
INFO: Testing for existence of file '/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
INFO: Readdir('/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv')

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400) Successfully created scanner (id=0) on table '0/0', returning 4 k/v pairs, more=0
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading CellStore 2/6/meta/qyoNKN5rd__dbHKv/cs0

==> /opt/hypertable/current/log/DfsBroker.hadoop.log <==
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
INFO: Getting length of file '/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0' (accurate: false)
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
INFO: Opening file '/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0' flags=1 bs=0 handle = 13

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264377 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading CellStore 2/6/default/qyoNKN5rd__dbHKv/cs0

==> /opt/hypertable/current/log/DfsBroker.hadoop.log <==
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
INFO: Getting length of file '/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0' (accurate: false)
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
INFO: Opening file '/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0' flags=1 bs=0 handle = 14
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
INFO: Readdir('/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574')
Oct 3, 2012 2:39:37 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
INFO: Getting length of file '/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0' (accurate: true)
Exception in thread "ApplicationQueueThread 18" java.lang.IllegalAccessError: org/apache/hadoop/hdfs/DFSClient$DFSDataInputStream
at org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
at org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLength.java:53)
at org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:98)
at java.lang.Thread.run(Thread.java:679)

==> /opt/hypertable/current/log/Hypertable.RangeServer.log <==
1349264389 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RSStats.h:83) Maintenance stats scans=(5 12 609 0.000001) updates=(2 0 0 0.000000 0)
1349264397 INFO Hypertable.RangeServer : (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3038) Entering get_statistics()

==> /opt/hypertable/current/log/Hypertable.Master.log <==
1349264397 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationGatherStatistics.cc:57) Entering GatherStatistics-1215 state=INITIAL
1349264397 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationCollectGarbage.cc:38) Entering CollectGarbage-1216
1349264397 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:53) Entering LoadBalancer-1217
1349264397 INFO Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:72) Leaving LoadBalancer-1217
1349264427 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264458 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264488 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264518 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264548 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264578 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding
1349264608 WARN Hypertable.Master : (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220) Dropping OperationGatherStatistics because another one is outstanding

-----

root@dfs1:~# tail -f /hadoop/logs/*
==> /hadoop/logs/hadoop-root-namenode-dfs1.log <==
2012-10-03 14:29:36,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=mkdirs src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv dst=null perm=root:supergroup:rwxr-xr-x
2012-10-03 14:29:36,940 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.56 cmd=open src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:29:36,957 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.56 cmd=open src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:29:36,973 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=mkdirs src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv dst=null perm=root:supergroup:rwxr-xr-x
2012-10-03 14:29:36,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv dst=null perm=null
2012-10-03 14:29:36,982 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv dst=null perm=null
2012-10-03 14:29:36,987 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:29:37,034 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:29:37,078 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574 dst=null perm=null
2012-10-03 14:29:37,080 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0 dst=null perm=null

==> /hadoop/logs/hadoop-root-secondarynamenode-dfs1.log <==
2012-10-03 14:10:44,571 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root
2012-10-03 14:10:44,572 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-10-03 14:10:44,572 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-10-03 14:10:44,572 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 56
2012-10-03 14:10:44,578 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 10
2012-10-03 14:10:44,588 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /home/hadoop/double/hadoop-root/dfs/namesecondary/current/edits of size 4821 edits # 51 loaded in 0 seconds.
2012-10-03 14:10:44,595 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 10847 saved in 0 seconds.
2012-10-03 14:10:44,598 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2012-10-03 14:10:44,609 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL 0.0.0.0:50070putimage=1&port=50090&machine=172.16.200.52&token=-18:1354290518:0:1349262644000:1349259044405
2012-10-03 14:10:44,648 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 10847

==> /hadoop/logs/history <==
tail: error reading `/hadoop/logs/history': Is a directory
tail: /hadoop/logs/history: cannot follow end of this type of file; giving up on this name

==> /hadoop/logs/hadoop-root-namenode-dfs1.log <==
2012-10-03 14:39:37,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 4 Number of syncs: 0 SyncTimes(ms): 0 0 0 0 0 0 0 0 0 
2012-10-03 14:39:37,086 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=mkdirs src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv dst=null perm=root:supergroup:rwxr-xr-x
2012-10-03 14:39:37,125 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=mkdirs src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv dst=null perm=root:supergroup:rwxr-xr-x
2012-10-03 14:39:37,129 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv dst=null perm=null
2012-10-03 14:39:37,133 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv dst=null perm=null
2012-10-03 14:39:37,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/tables/2/6/meta/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:39:37,181 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/tables/2/6/default/qyoNKN5rd__dbHKv/cs0 dst=null perm=null
2012-10-03 14:39:37,226 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=listStatus src=/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574 dst=null perm=null
2012-10-03 14:39:37,228 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/172.16.200.52 cmd=open src=/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0 dst=null perm=null


From: Mehmet Ali Cetinkaya <malice...@yahoo.com>
To: "hyperta...@googlegroups.com" <hyperta...@googlegroups.com>
Sent: Wednesday, October 3, 2012 2:42 PM
Subject: [hypertable-dev] Restore Problem


Mehmet Ali Cetinkaya

Oct 4, 2012, 3:10:24 AM
to hyperta...@googlegroups.com
And I'm using Hypertable version 0.9.6.4...

mali


Christoph Rupp

Oct 4, 2012, 3:14:00 AM
to hyperta...@googlegroups.com
Hi Mehmet,

There are exceptions in your DfsBroker.hadoop.log file:


INFO: Getting length of file '/hypertable/servers/rs2/log/2/6/TiqgSR4nDYpHlWJH-1349262574/0' (accurate: true)
Exception in thread "ApplicationQueueThread 15" java.lang.IllegalAccessError: org/apache/hadoop/hdfs/DFSClient$DFSDataInputStream
at org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
at org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLength.java:53)
at org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:98)
at java.lang.Thread.run(Thread.java:679)

From http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/IllegalAccessError.html:

"Thrown if an application attempts to access or modify a field, or to call a method that it does not have access to.

Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed."

This sounds as if your installation is messed up; maybe you have an older version in your java classpath?

bye

Christoph


2012/10/3 Mehmet Ali Cetinkaya <malice...@yahoo.com>

Mehmet Ali Cetinkaya

unread,
Oct 16, 2012, 4:26:09 AM
to hyperta...@googlegroups.com
Hello,

I want to delete the guid cell whose row is "aime" from the sozluk table, like this:

delete guid from sozluk where row="aime";

  Elapsed time:  0.00 s
   Total cells:  1
    Throughput:  323.42 cells/s
       Resends:  0

ht said "I deleted 1 row", which is fine for me. But sometimes when I run a select query I can still find the deleted row, and I don't know why.
These are all my select queries after deleting the row.

How can I solve this issue?
Thanks,
mali

select guid:b from sozluk cell_limit 2;
aime    guid:b  0a2431a5-22b3-4461-9ac7-3bf67c1e2061
aime_pas        guid:b  1d1b7f35-a14f-4e24-9e98-c2f626f66c21

  Elapsed time:  0.01 s
Avg value size:  36.00 bytes
  Avg key size:  9.00 bytes
    Throughput:  7102.27 bytes/s
   Total cells:  2
    Throughput:  157.83 cells/s

select guid:b from sozluk cell_limit 2 display_timestamps;
2012-10-05 19:53:08.307870001   aime    guid:b  0a2431a5-22b3-4461-9ac7-3bf67c1e2061
2012-10-05 19:53:08.290855001   aime_pas        guid:b  1d1b7f35-a14f-4e24-9e98-c2f626f66c21

  Elapsed time:  0.01 s
Avg value size:  36.00 bytes
  Avg key size:  9.00 bytes
    Throughput:  7747.27 bytes/s
   Total cells:  2
    Throughput:  172.16 cells/s

select guid:b from sozluk where guid="0a2431a5-22b3-4461-9ac7-3bf67c1e2061" display_timestamps;
2012-10-05 19:53:08.307870001   aime    guid:b  0a2431a5-22b3-4461-9ac7-3bf67c1e2061

  Elapsed time:  1.58 s
Avg value size:  36.00 bytes
  Avg key size:  7.00 bytes
    Throughput:  27.19 bytes/s
   Total cells:  1
    Throughput:  0.63 cells/s

select guid:b from sozluk where guid="1d1b7f35-a14f-4e24-9e98-c2f626f66c21" display_timestamps;
2012-10-05 19:53:08.290855001   aime_pas        guid:b  1d1b7f35-a14f-4e24-9e98-c2f626f66c21

  Elapsed time:  1.59 s
Avg value size:  36.00 bytes
  Avg key size:  11.00 bytes
    Throughput:  29.62 bytes/s
   Total cells:  1
    Throughput:  0.63 cells/s


----

select guid:b from sozluk where row=^"aime_";

  Elapsed time:  0.00 s

select * from sozluk where row=^"aime_";

  Elapsed time:  0.00 s

select * from sozluk where row=^"aime_" cell_limit 1;

  Elapsed time:  0.00 s

----

select * from sozluk where row="aime" cell_limit 1 display_timestamps;
2012-10-04 18:43:00.608148001   aime    guid:b  d7e9c5cb-d647-4ba2-8ec1-3f505ced376b

  Elapsed time:  0.00 s
Avg value size:  36.00 bytes
  Avg key size:  6.00 bytes
    Throughput:  44025.16 bytes/s
   Total cells:  1
    Throughput:  1048.22 cells/s
   
select guid:b from sozluk where row="aime";
aime    guid:b  d7e9c5cb-d647-4ba2-8ec1-3f505ced376b

  Elapsed time:  0.00 s
Avg value size:  36.00 bytes
  Avg key size:  6.00 bytes
    Throughput:  46357.62 bytes/s
   Total cells:  1
    Throughput:  1103.75 cells/s

select guid:b from sozluk where row=^"aime" cell_limit 5;
aime    guid:b  d7e9c5cb-d647-4ba2-8ec1-3f505ced376b
aimed   guid:b  08864475-9914-4a6f-b86a-0ca9c6c21014
aimee   guid:b  f96c9014-8e9f-40b5-9057-6a379c7a682b
aimee_myers_dolich      guid:b  2d1042e1-325c-4de9-9ae6-89eeaa3f9308
aimee_sweet     guid:b  1ab3307e-67d7-4bca-95b5-48c93fbb970e

  Elapsed time:  0.00 s
Avg value size:  36.00 bytes
  Avg key size:  10.60 bytes
    Throughput:  193843.59 bytes/s
   Total cells:  5
    Throughput:  4159.73 cells/s

Mehmet Ali Cetinkaya

unread,
Oct 16, 2012, 4:50:47 AM
to hyperta...@googlegroups.com
Hello again;

My last mail was wrong because I mixed up the queries. Sorry for this mistake.

Anyway,

step 1: when I select the first three (cell_limit 3) "guid:b" cells from the sozluk table, the results are:

hypertable> select guid:b from sozluk cell_limit 3 display_timestamps;

2012-10-05 19:53:08.307870001   aime    guid:b  0a2431a5-22b3-4461-9ac7-3bf67c1e2061
2012-10-05 19:53:08.290855001   aime_pas        guid:b  1d1b7f35-a14f-4e24-9e98-c2f626f66c21
2012-10-08 17:20:00.197097001   herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7


  Elapsed time:  0.01 s
Avg value size:  36.00 bytes
  Avg key size:  9.33 bytes
    Throughput:  10813.39 bytes/s
   Total cells:  3
    Throughput:  238.53 cells/s

step 2: I run another select query, but I don't get any result.

hypertable> select guid:b from sozluk where row="herbier";

  Elapsed time:  0.00 s

step 3: I want to delete the cell:

hypertable> delete guid from sozluk where row="herbier";


  Elapsed time:  0.00 s
   Total cells:  1
    Throughput:  744.60 cells/s
       Resends:  0

step 4: I try the same select queries, and everything is the same.

hypertable> select guid:b from sozluk cell_limit 3;

aime    guid:b  0a2431a5-22b3-4461-9ac7-3bf67c1e2061
aime_pas        guid:b  1d1b7f35-a14f-4e24-9e98-c2f626f66c21
herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7


  Elapsed time:  0.01 s
Avg value size:  36.00 bytes
  Avg key size:  9.33 bytes
    Throughput:  11675.82 bytes/s
   Total cells:  3
    Throughput:  257.55 cells/s
hypertable> select guid:b from sozluk where row="herbier";

  Elapsed time:  0.00 s


step 5: when I select the same cell by guid:

hypertable> select guid:b from sozluk where guid="0c9702aa-ea09-4c77-9e59-7caaa3a726c7";
herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7
herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7

  Elapsed time:  1.57 s

Avg value size:  36.00 bytes
  Avg key size:  10.00 bytes
    Throughput:  58.62 bytes/s
   Total cells:  2
    Throughput:  1.27 cells/s

hypertable> select guid:b from sozluk where guid=^"0c9702aa-ea0" display_timestamps;
2012-10-08 17:20:00.197097001   herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7
2012-10-08 17:20:00.197041001   herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7

  Elapsed time:  1.57 s

Avg value size:  36.00 bytes
  Avg key size:  10.00 bytes
    Throughput:  58.69 bytes/s
   Total cells:  2
    Throughput:  1.28 cells/s

    
How can I solve this issue?

Thanks,
mali

Sent: Tuesday, October 16, 2012 11:26 AM
Subject: [hypertable-dev] is it delete problem?

Christoph Rupp

unread,
Oct 16, 2012, 4:54:20 AM
to hyperta...@googlegroups.com
Hi,

can you run the SELECTs again with RETURN_DELETES?

Thanks
Christoph

2012/10/16 Mehmet Ali Cetinkaya <malice...@yahoo.com>

Mehmet Ali Cetinkaya

unread,
Oct 16, 2012, 6:53:04 AM
to hyperta...@googlegroups.com
Hi Christoph,

hypertable> select guid:b from sozluk where guid="0c9702aa-ea09-4c77-9e59-7caaa3a726c7" RETURN_DELETES CELL_LIMIT 5;

herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7
herbier guid:b  0c9702aa-ea09-4c77-9e59-7caaa3a726c7
aaa_auto_transport      guid:a          DELETE CELL
aaa_auto_transport      guid:i172_16_200_34i            DELETE CELL
aaa_car guid:a          DELETE CELL

  Elapsed time:  0.01 s
Avg value size:  14.40 bytes
  Avg key size:  16.60 bytes
    Throughput:  11003.83 bytes/s
   Total cells:  5
    Throughput:  354.96 cells/s


hypertable> select guid:b from sozluk where row="herbier" RETURN_DELETES;
herbier guid            DELETE COLUMN FAMILY

  Elapsed time:  0.00 s
  Avg key size:  8.00 bytes
   Total cells:  1
    Throughput:  1338.69 cells/s

mali



Christoph Rupp

unread,
Oct 16, 2012, 9:57:10 AM
to hyperta...@googlegroups.com
This could be a bug in the scanner implementation; maybe it's not handling the deleted cells correctly. I tried to reproduce this: I created a table and inserted a couple of values, but everything worked as expected.

hypertable> create table sozluk(guid);
hypertable> insert into sozluk values ("aime", "guid:b", "0a2431a5-22b3-4461-9ac7-3bf67c1e2061");
hypertable> insert into sozluk values ("aime_pas", "guid:b", "1d1b7f35-a14f-4e24-9e98-c2f626f66c21");
hypertable> insert into sozluk values ("herbier", "guid:b", "0c9702aa-ea09-4c77-9e59-7caaa3a726c7");


hypertable> select guid:b from sozluk cell_limit 3;
aime    guid:b    0a2431a5-22b3-4461-9ac7-3bf67c1e2061
aime_pas    guid:b    1d1b7f35-a14f-4e24-9e98-c2f626f66c21
herbier    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7

hypertable> select guid:b from sozluk where row="herbier";
herbier    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7


hypertable> delete guid from sozluk where row="herbier";

hypertable> select guid:b from sozluk cell_limit 3;
aime    guid:b    0a2431a5-22b3-4461-9ac7-3bf67c1e2061
aime_pas    guid:b    1d1b7f35-a14f-4e24-9e98-c2f626f66c21

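For reference, the tombstone semantics the scanner is supposed to apply can be modeled in a few lines of Python (a toy sketch, not the actual RangeServer code): a delete written at timestamp T masks every version of the same (row, column) cell with timestamp <= T.

```python
# Toy model of delete-tombstone semantics (illustrative only, not
# Hypertable's implementation).
def scan(cells, deletes):
    """cells: (row, column, timestamp, value) tuples;
    deletes: (row, column, timestamp) tombstones.
    Returns the cells that survive the merge, newest first per key."""
    newest_delete = {}
    for row, col, ts in deletes:
        key = (row, col)
        newest_delete[key] = max(ts, newest_delete.get(key, 0))
    survivors = []
    for row, col, ts, value in sorted(cells, key=lambda c: (c[0], c[1], -c[2])):
        if ts <= newest_delete.get((row, col), -1):
            continue  # masked by a newer-or-equal tombstone
        survivors.append((row, col, ts, value))
    return survivors

cells = [
    ("aime", "guid:b", 100, "0a2431a5"),
    ("herbier", "guid:b", 200, "0c9702aa"),
    ("herbier", "guid:b", 190, "0c9702aa"),  # an older duplicate version
]
deletes = [("herbier", "guid:b", 250)]  # a DELETE for row 'herbier'
print(scan(cells, deletes))  # only the 'aime' cell survives
```

In the broken case above, the deleted cell still comes back from some scan paths, which is why reproducing it with a plain HQL sequence matters.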
Are you able to recreate the problem with a series of HQL commands?

Mehmet Ali Cetinkaya

unread,
Oct 16, 2012, 10:13:48 AM
to hyperta...@googlegroups.com
Thanks Christoph, I will try.

But first, can you look at my table's structure? Is there any problem with it?

hypertable> show create table sozluk;

CREATE TABLE sozluk (
  guid,
  ACCESS GROUP default (guid)
)


  Elapsed time:  0.00 s


hypertable> describe table sozluk;
<Schema>
  <AccessGroup name="default">
    <ColumnFamily>
      <Name>guid</Name>
      <Counter>false</Counter>
      <deleted>false</deleted>
    </ColumnFamily>
  </AccessGroup>
</Schema>

Elapsed time:  0.00 s




Christoph Rupp

unread,
Oct 16, 2012, 10:38:24 AM
to hyperta...@googlegroups.com
No, that looks fine. It's the same schema I used when I tried to reproduce this.

Mehmet Ali Cetinkaya

unread,
Oct 17, 2012, 4:48:38 AM
to hyperta...@googlegroups.com
Hi Christoph,

I tried your suggestion and it worked successfully.

But in my project I'm using the HQL "delete 'guid:b' from sozluk where row="herbier";" for deletes. I tried that too and didn't get any error.

Anyway, these are the steps of my work; maybe you can spot a mistake.

step1: insert into sozluk values ("aime", "guid:a", "0a2431a5-22b3-4461-9ac7-3bf67c1e2061");
step2: delete 'guid:a' from sozluk where row="aime";
step3: insert into sozluk values ("aime", "guid:crawi_172_16_16_16", "0a2431a5-22b3-4461-9ac7-3bf67c1e2061");
step4: delete 'guid:crawi_172_16_16_16' from sozluk where row="aime";
step5: insert into sozluk values ("aime", "guid:b", "0a2431a5-22b3-4461-9ac7-3bf67c1e2061");

I saw that sometimes step 5 runs 2 times in my Python code.

For example:

hypertable> select * from sozluk return_deletes;
aime    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7
aime    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7

  Elapsed time:  0.00 s
Avg value size:  18.00 bytes
  Avg key size:  10.20 bytes
    Throughput:  314732.14 bytes/s
   Total cells:  2
    Throughput:  11160.71 cells/s

hypertable> delete 'guid:b' from sozluk where row="aime";

  Elapsed time:  2.62 s
   Total cells:  1
    Throughput:  0.38 cells/s
       Resends:  0

hypertable> select * from sozluk return_deletes;

aime    guid:b        DELETE CELL

  Elapsed time:  0.00 s
Avg value size:  9.00 bytes
  Avg key size:  10.75 bytes
    Throughput:  165618.45 bytes/s
   Total cells:  4
    Throughput:  8385.74 cells/s

thanks,
mali


Christoph Rupp

unread,
Oct 17, 2012, 9:46:53 AM
to hyperta...@googlegroups.com
Hi Mali,


you write:

> i saw that sometimes step5 is working 2 times in my python code.
>
> for example;
>
> hypertable> select * from sozluk return_deletes;
> aime    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7
> aime    guid:b    0c9702aa-ea09-4c77-9e59-7caaa3a726c7

That's right. You can insert the same cell multiple times, and the cell will be created with a newer timestamp. If you run the SELECT query with DISPLAY_TIMESTAMPS then you will see that there are two versions.

If you want to avoid that then you can change your query:

    select * from sozluk return_deletes MAX_VERSIONS 1;

or change the column family:

    create table t (guid MAX_VERSIONS 1);
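To illustrate the effect (a simplified model in Python, not Hypertable internals): with MAX_VERSIONS 1, only the newest timestamped version of each (row, column) cell is kept.

```python
# Sketch of MAX_VERSIONS pruning (simplified model, not Hypertable's
# implementation): when the same cell is inserted repeatedly, only the
# N newest timestamped versions survive.
def apply_max_versions(cells, max_versions):
    """cells: (row, column, timestamp, value) tuples."""
    by_key = {}
    for row, col, ts, value in cells:
        by_key.setdefault((row, col), []).append((ts, value))
    kept = []
    for (row, col), versions in by_key.items():
        versions.sort(reverse=True)  # newest timestamp first
        for ts, value in versions[:max_versions]:
            kept.append((row, col, ts, value))
    return kept

# The same insert executed twice, as in step 5:
cells = [
    ("aime", "guid:b", 1001, "0c9702aa"),
    ("aime", "guid:b", 1002, "0c9702aa"),
]
print(apply_max_versions(cells, 1))  # only the ts=1002 version remains
```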

bye
Christoph

2012/10/17 Mehmet Ali Cetinkaya <malice...@yahoo.com>

Mehmet Ali Cetinkaya

unread,
Oct 18, 2012, 3:14:38 AM
to hyperta...@googlegroups.com
Thank you Christoph...


Mehmet Ali Cetinkaya

unread,
Oct 18, 2012, 3:32:09 AM
to hyperta...@googlegroups.com
Hello again,

I'm using Hypertable version 0.9.6.4. I can't type Turkish characters (ç, ş, ü, ğ, ı, İ...) in queries in the Hypertable shell,
but I know that I could type them in an older version of Hypertable.

mali

Christoph Rupp

unread,
Oct 18, 2012, 3:38:24 AM
to hyperta...@googlegroups.com
Yes, I can confirm that. The German ü also doesn't appear.

We switched to a different input library for the shell a few releases ago; I guess this introduced the problem. I will track it here:
http://code.google.com/p/hypertable/issues/detail?id=958

bye
Christoph

2012/10/18 Mehmet Ali Cetinkaya <malice...@yahoo.com>