[PATCH] materialized views: allow empty strings in views and indexes


Nadav Har'El

Sep 22, 2021, 11:47:58 AM
to scylla...@googlegroups.com, Nadav Har'El
Although Cassandra generally does not allow empty strings as partition keys
(though they *are* allowed as clustering keys!), it *does* allow empty strings
in regular columns to be indexed by a secondary index, or to become an empty
partition-key column in a materialized view. As noted in issues #9375
and #9364 and verified in a few xfailing cql-pytest tests, Scylla didn't
allow these cases - and this patch fixes that.

The patch is easy, almost "too easy" - it just removes an elaborate function
is_partition_key_empty() which the code used to check whether the view's
row would end up with an empty partition key, which was supposedly
forbidden. In fact, such rows should have been allowed, as they are
in Cassandra and as the secondary-index implementation requires, so
the entire function wasn't necessary. This patch also comments out a
part of a unit test which enshrined the wrong behavior.

Note that the removed function is_partition_key_empty() was *NOT* required
for the "IS NOT NULL" feature of materialized views - this continues to work
as expected after this patch, and we add another test to confirm it.
Being null and being an empty string are two different things.
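To illustrate the distinction, here is a pure-Python toy model - not code from the patch, and all names are made up - of how an "IS NOT NULL" view filter treats the two cases:

```python
# Toy model of view-row generation under an "IS NOT NULL" filter on the
# view's partition-key column. None models a CQL null (unset column);
# the empty string is a present value and passes the filter.

def view_row(base_row, view_key_column):
    """Return the generated view row, or None if the filter rejects it."""
    key = base_row.get(view_key_column)
    if key is None:
        return None  # null key: "IS NOT NULL" eliminates the row
    # An empty string is *not* null, so the view row is generated.
    rest = {k: v for k, v in base_row.items() if k != view_key_column}
    return {view_key_column: key, **rest}

base = [{'p': 123, 'v': ''}, {'p': 17, 'v': None}]
view = [r for r in (view_row(row, 'v') for row in base) if r is not None]
print(view)  # [{'v': '', 'p': 123}] - only the empty-string row survives
```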

After this patch we are left with one interesting difference from
Cassandra: although Cassandra allows a user to create a view row with an
empty-string partition key, and this row is fully visible when
scanning the view, the row can *not* be queried individually because
"WHERE v=''" is forbidden when v is the partition key (of the view).
Scylla doesn't reproduce this anomaly - such a point query does work
in Scylla after this patch. We add a new test for this case, and mark
it "cassandra_bug", i.e., it's a Cassandra behavior which we consider wrong
and don't want to emulate.

Fixes #9364
Fixes #9375

Signed-off-by: Nadav Har'El <n...@scylladb.com>
---
test/cql-pytest/test_materialized_view.py | 52 ++++++++++++++++++++---
test/cql-pytest/test_secondary_index.py | 1 -
db/view/view.cc | 34 ++-------------
test/boost/view_schema_pkey_test.cc | 7 +++
4 files changed, 58 insertions(+), 36 deletions(-)

diff --git a/test/cql-pytest/test_materialized_view.py b/test/cql-pytest/test_materialized_view.py
index d69677e80..cfb4ff45c 100644
--- a/test/cql-pytest/test_materialized_view.py
+++ b/test/cql-pytest/test_materialized_view.py
@@ -81,7 +81,6 @@ def test_mv_select_stmt_bound_values(cql, test_keyspace):
# because the "IS NOT NULL" clause in the view's declaration does not
# eliminate this row (an empty string is not considered NULL).
# This reproduces issue #9375.
-@pytest.mark.xfail(reason="issue #9375")
def test_mv_empty_string_partition_key(cql, test_keyspace):
schema = 'p int, v text, primary key (p)'
with new_test_table(cql, test_keyspace, schema) as table:
@@ -94,7 +93,50 @@ def test_mv_empty_string_partition_key(cql, test_keyspace):
# The view row with the empty partition key should exist.
# In #9375, this failed in Scylla:
assert list(cql.execute(f"SELECT * FROM {mv}")) == [('', 123)]
- # However, it is still impossible to select just this row,
- # because Cassandra forbids an empty partition key on select
- with pytest.raises(InvalidRequest, match='Key may not be empty'):
- cql.execute(f"SELECT * FROM {mv} WHERE v=''")
+
+# The previous test (test_mv_empty_string_partition_key) verifies that a
+# row with an empty-string partition key can appear in the view. This was
+# checked with a full-table scan. But curiously, Cassandra does NOT allow
+# to SELECT this specific row individually, because "WHERE v=''" is not
+# allowed when v is the partition key (of the view).
+# Scylla *does* allow such a query. This demonstrates a difference between
+# Scylla and Cassandra, but we believe that the Cassandra behavior is the
+# wrong one: It doesn't make sense to allow adding a row and making it
+# visible in a full-table scans, but not allowing to query it individually.
+# This is why we mark this test as a Cassandra bug.
+def test_mv_empty_string_partition_key_individual(cassandra_bug, cql, test_keyspace):
+ schema = 'p int, v text, primary key (p)'
+ with new_test_table(cql, test_keyspace, schema) as table:
+ with new_materialized_view(cql, table, '*', 'v, p', 'v is not null and p is not null') as mv:
+ cql.execute(f"INSERT INTO {table} (p,v) VALUES (123, '')")
+ # Note that because cql-pytest runs on a single node, view
+ # updates are synchronous, and we can read the view immediately
+ # without retrying. In a general setup, this test would require
+ # retries.
+ # The view row with the empty partition key should exist.
+ assert list(cql.execute(f"SELECT * FROM {mv} WHERE v=''")) == [('', 123)]
+
+# Test that the "IS NOT NULL" clause in the materialized view's SELECT
+# functions as expected - namely, rows which have their would-be view
+# key column unset (aka null) do not get copied into the view.
+def test_mv_is_not_null(cql, test_keyspace):
+ schema = 'p int, v text, primary key (p)'
+ with new_test_table(cql, test_keyspace, schema) as table:
+ with new_materialized_view(cql, table, '*', 'v, p', 'v is not null and p is not null') as mv:
+ cql.execute(f"INSERT INTO {table} (p,v) VALUES (123, 'dog')")
+ cql.execute(f"INSERT INTO {table} (p,v) VALUES (17, null)")
+ # Note that because cql-pytest runs on a single node, view
+ # updates are synchronous, and we can read the view immediately
+ # without retrying. In a general setup, this test would require
+ # retries.
+ # The row with 123 should appear in the view, but the row with
+ # 17 should not, because v *is* null.
+ assert list(cql.execute(f"SELECT * FROM {mv}")) == [('dog', 123)]
+ # The view row should disappear and reappear if its key is
+ # changed to null and back in the base table:
+ cql.execute(f"UPDATE {table} SET v=null WHERE p=123")
+ assert list(cql.execute(f"SELECT * FROM {mv}")) == []
+ cql.execute(f"UPDATE {table} SET v='cat' WHERE p=123")
+ assert list(cql.execute(f"SELECT * FROM {mv}")) == [('cat', 123)]
+ cql.execute(f"DELETE v FROM {table} WHERE p=123")
+ assert list(cql.execute(f"SELECT * FROM {mv}")) == []
diff --git a/test/cql-pytest/test_secondary_index.py b/test/cql-pytest/test_secondary_index.py
index 362a87c3d..672cecb41 100644
--- a/test/cql-pytest/test_secondary_index.py
+++ b/test/cql-pytest/test_secondary_index.py
@@ -289,7 +289,6 @@ def test_multi_column_with_regular_index(cql, test_keyspace):
# wrong or unusual about an empty string, and it should be supported just
# like any other string.
# Reproduces issue #9364
-@pytest.mark.xfail(reason="issue #9364")
def test_index_empty_string(cql, test_keyspace):
schema = 'p int, v text, primary key (p)'
# Searching for v='' without an index (with ALLOW FILTERING), works
diff --git a/db/view/view.cc b/db/view/view.cc
index 28e1dbd75..274c80795 100644
--- a/db/view/view.cc
+++ b/db/view/view.cc
@@ -309,32 +309,6 @@ static bool update_requires_read_before_write(const schema& base,
return false;
}

-static bool is_partition_key_empty(
- const schema& base,
- const schema& view_schema,
- const partition_key& base_key,
- const clustering_row& update) {
- // Empty partition keys are not supported on normal tables - they cannot
- // be inserted or queried, so enforce those rules here.
- if (view_schema.partition_key_columns().size() > 1) {
- // Composite partition keys are different: all components
- // are then allowed to be empty.
- return false;
- }
- auto* base_col = base.get_column_definition(view_schema.partition_key_columns().front().name());
- switch (base_col->kind) {
- case column_kind::partition_key:
- return base_key.get_component(base, base_col->position()).empty();
- case column_kind::clustering_key:
- return update.key().get_component(base, base_col->position()).empty();
- default:
- // No multi-cell columns in the view's partition key
- auto& c = update.cells().cell_at(base_col->id);
- atomic_cell_view col_value = c.as_atomic_cell(*base_col);
- return !col_value.is_live() || col_value.value().empty();
- }
-}
-
// Checks if the result matches the provided view filter.
// It's currently assumed that the result consists of just a single row.
class view_filter_checking_visitor {
@@ -692,7 +666,7 @@ static void add_cells_to_view(const schema& base, const schema& view, row base_c
* This method checks that the base row does match the view filter before applying anything.
*/
void view_updates::create_entry(const partition_key& base_key, const clustering_row& update, gc_clock::time_point now) {
- if (is_partition_key_empty(*_base, *_view, base_key, update) || !matches_view_filter(*_base, _view_info, base_key, update, now)) {
+ if (!matches_view_filter(*_base, _view_info, base_key, update, now)) {
return;
}
deletable_row& r = get_view_row(base_key, update);
@@ -710,7 +684,7 @@ void view_updates::create_entry(const partition_key& base_key, const clustering_
void view_updates::delete_old_entry(const partition_key& base_key, const clustering_row& existing, const clustering_row& update, gc_clock::time_point now) {
// Before deleting an old entry, make sure it was matching the view filter
// (otherwise there is nothing to delete)
- if (!is_partition_key_empty(*_base, *_view, base_key, existing) && matches_view_filter(*_base, _view_info, base_key, existing, now)) {
+ if (matches_view_filter(*_base, _view_info, base_key, existing, now)) {
do_delete_old_entry(base_key, existing, update, now);
}
}
@@ -821,11 +795,11 @@ bool view_updates::can_skip_view_updates(const clustering_row& update, const clu
void view_updates::update_entry(const partition_key& base_key, const clustering_row& update, const clustering_row& existing, gc_clock::time_point now) {
// While we know update and existing correspond to the same view entry,
// they may not match the view filter.
- if (is_partition_key_empty(*_base, *_view, base_key, existing) || !matches_view_filter(*_base, _view_info, base_key, existing, now)) {
+ if (!matches_view_filter(*_base, _view_info, base_key, existing, now)) {
create_entry(base_key, update, now);
return;
}
- if (is_partition_key_empty(*_base, *_view, base_key, update) || !matches_view_filter(*_base, _view_info, base_key, update, now)) {
+ if (!matches_view_filter(*_base, _view_info, base_key, update, now)) {
do_delete_old_entry(base_key, existing, update, now);
return;
}
diff --git a/test/boost/view_schema_pkey_test.cc b/test/boost/view_schema_pkey_test.cc
index 54b5eea09..e4bc901ee 100644
--- a/test/boost/view_schema_pkey_test.cc
+++ b/test/boost/view_schema_pkey_test.cc
@@ -727,6 +727,12 @@ SEASTAR_TEST_CASE(test_base_non_pk_columns_in_view_partition_key_are_non_emtpy)
});
}

+ // The following if'ed-out tests verified behavior that we
+ // no longer believe to be correct, and which also turns out to be
+ // incompatible with Cassandra (see issue #9375): these tests checked
+ // that if a view row would get an empty string as a partition key,
+ // the row should not be generated. But this is not the case.
+#if 0
auto views_not_matching = {
"create materialized view %s as select * from cf "
"where p1 is not null and p2 is not null and c is not null and v is not null "
@@ -776,5 +782,6 @@ SEASTAR_TEST_CASE(test_base_non_pk_columns_in_view_partition_key_are_non_emtpy)
auto msg = e.execute_cql(format("select p1, p2, c, v from {}", name)).get0();
assert_that(msg).is_rows().is_empty();
});
+#endif
});
}
--
2.31.1

Nadav Har'El

Sep 22, 2021, 11:55:38 AM
to scylladb-dev, Tomasz Grabiec, Piotr Sarna, Botond Dénes
Note: as I noted in https://github.com/scylladb/scylla/issues/9352, we should consider whether we want to make a decision on that issue before committing this patch (which fixes two other issues - https://github.com/scylladb/scylla/issues/9364 and https://github.com/scylladb/scylla/issues/9375).

The point is that before this patch, there was no way to get empty strings as partition keys in Scylla - CQL and Thrift forbid it, and materialized views and secondary indexes prevented them. After this patch, MV (and SI) *can* create empty strings as partition keys (as is necessary for CQL compatibility). So we don't want to fix 9352 *later* and suddenly find these empty partition keys changing their token. If we want to change this token, we should do it now, before anything in Scylla is using such keys.

To stress again, this patch fixing 9364 and 9375 does not *require* changing the empty-string token (fixing 9352). The tests for 9364 and 9375 work just fine without touching 9352. I just think we should look ahead: if one day we want to do something about 9352, we should do it now - before this patch.
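To make the concern concrete, here is a toy sketch - not Scylla's murmur3 partitioner; MIN_TOKEN and the hash function are stand-ins - of why the empty-key token must be settled before empty keys can reach disk: data is placed by token, so changing the token function later makes previously written empty-key partitions unfindable.

```python
# Toy partitioner with and without the empty-key special case. Changing
# the rule after data exists "moves" the empty-key partition's token.

import zlib

MIN_TOKEN = -2**63

def toy_hash(key: bytes) -> int:
    return zlib.crc32(key)  # stand-in for the real murmur3 hash

def token_old(key: bytes) -> int:
    if not key:
        return MIN_TOKEN    # special case: empty key -> minimum token
    return toy_hash(key)

def token_new(key: bytes) -> int:
    return toy_hash(key)    # no special case: empty key hashed normally

# A partition written under the old rule...
stored = {token_old(b''): 'empty-key partition'}
# ...is no longer found if lookups use the new rule:
print(token_new(b'') in stored)  # False: the partition "moved"
```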

--
Nadav Har'El
n...@scylladb.com

Botond Dénes

Sep 23, 2021, 2:29:25 AM
to Nadav Har'El, scylla...@googlegroups.com
Does this also check that the partition with the empty key can indeed be written to disk? I recall problems in the past related to empty keys, where the sstable writer would refuse to seal the sstable if it contained an empty key.

Nadav Har'El

Sep 23, 2021, 3:16:23 AM
to Botond Dénes, scylladb-dev
This is a very good question. Indeed, it doesn't force a flush. I'll add a flush to the test, and also try BYPASS CACHE to force a read from disk.

Nadav Har'El

Sep 23, 2021, 3:30:26 AM
to Botond Dénes, scylladb-dev
Good catch!!
Flushing indeed breaks Scylla in a bad way - it goes into an apparent loop of

ERROR 2021-09-23 10:24:04,950 [shard 0] table - failed to write sstable /tmp/scylla-test-319084/data/cql_test_1632381734904/cql_test_1632381734915-f9fa8e201c3e11ec86ca87f7a141297e/md-24-big-Data.db: sstables::malformed_sstable_exception (first key of summary of /tmp/scylla-test-319084/data/cql_test_1632381734904/cql_test_1632381734915-f9fa8e201c3e11ec86ca87f7a141297e/md-24-big-Data.db is empty)

Scylla seems to try this write every 10 seconds, and fail every time - and the flush call never completes.
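For reference, a toy Python sketch of the failing check - the names are made up, the real check lives in the sstable writer - showing why the flush loops: sealing validates the summary's first key, an empty key fails the check, and failed flushes are retried indefinitely.

```python
# Toy model of the sstable-seal validation that rejects an empty first
# key. The exception mirrors malformed_sstable_exception in spirit only.

class MalformedSStableException(Exception):
    pass

def seal_sstable(summary_first_key: bytes) -> str:
    """Seal an sstable; refuse if the summary's first key is empty."""
    if len(summary_first_key) == 0:
        raise MalformedSStableException('first key of summary is empty')
    return 'sealed'

try:
    seal_sstable(b'')            # the view partition's empty key
except MalformedSStableException as e:
    print('flush failed:', e)    # retried every 10s, never completes
```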

So we cannot commit my patch before I solve this. And of course the next version will have a flush in this test, to prevent this bug from regressing.

Does anyone remember why we had such an aversion to empty partition keys? Why did we go to such lengths to forbid them everywhere when they could have just worked? Was it just the fear that somebody would confuse them with null (an *unset* value for the column), or something more fundamental?

Botond Dénes

Sep 23, 2021, 4:17:09 AM
to Nadav Har'El, Raphael S. Carvalho, scylladb-dev
This is exactly what I saw at a user some time ago. Failed flushes are retried infinitely. If you kill the node, on the next startup commitlog replay gets the empty key back into the memtable and it starts all over again. To make nodes with such a commitlog startable I developed 6083ed668, which I guess we will have to revert once we decide to allow empty keys.
Now that I think of it maybe they were using MV and that is how empty keys got into their system.



Git archeology turned up this commit: 6083ed668, carbon-dated to 2016 AD.
Looks like before that commit we only returned an error when attempting to get the first/last keys and they were empty. This might just have been a defense against unset keys, as it is said commit that actually starts setting these keys. Maybe the defense was just automatically copied to the set path. Raphael, do you remember any particular reason?

Nadav Har'El

Sep 23, 2021, 4:24:09 AM
to Botond Dénes, scylladb-dev
The good news is that if I simply remove the code that checks and prints the above message, the flush succeeds fine, and the resulting sstables are readable. But the bad news is some reads do not work as intended:

Specifically,
    SELECT * FROM mv BYPASS CACHE
works perfectly with the empty partition key, but
    SELECT * FROM mv WHERE v='' BYPASS CACHE
does not find the row :-(

Unfortunately, removing the special-case MINIMUM token described in https://github.com/scylladb/scylla/issues/9352 does not seem to solve this problem. Perhaps there is yet another piece of code which short-circuits the v='' and needs to be changed...
   

Nadav Har'El

Sep 23, 2021, 4:46:40 AM
to Botond Dénes, scylladb-dev
Just one correction: the "WHERE v=''" query works fine from the memtable, so it's not short-circuited at the CQL level but perhaps somewhere at the sstable level. I was hoping that maybe this is issue 9352 - where the empty key is searched incorrectly because of the wrong token - but fixing 9352 didn't seem to help, so I need to find which other problems remain...


   

Nadav Har'El

Sep 27, 2021, 3:38:35 AM
to Botond Dénes, scylladb-dev
On Thu, Sep 23, 2021 at 11:46 AM Nadav Har'El <n...@scylladb.com> wrote:
The good news is that if I simply remove the code that checks and prints the above message, the flush succeeds fine, and the resulting sstables are readable. But the bad news is some reads do not work as intended:

Specifically,
    SELECT * FROM mv BYPASS CACHE
works perfectly with the empty partition key, but
    SELECT * FROM mv WHERE v='' BYPASS CACHE
does not find the row :-(

I'm still trying to debug this, and unfortunately, I find that my understanding of the various layers that a read query involves has diminished, and I can't find the culprit :-( I wonder if anyone with a fresher understanding of the read code can help me pinpoint the code to debug. The facts are:

1. With a tiny patch (removing a check...), I manage to write an empty-string partition key to the view table, and "nodetool flush" it to sstables. So far so good.
2. The write worked and the partition is fine in the sstable (at least its data file): "SELECT * FROM mv BYPASS CACHE" shows this partition with the empty key.
3. However, trying a point query to select the one specific partition with "SELECT * FROM mv WHERE v='' BYPASS CACHE" doesn't find this partition!
4. The problem is not that "WHERE v=''" doesn't work in general - the same query does work fine when reading from memtable (without a flush and bypass cache). So we can rule out various short-circuits or bugs in the CQL parsing layer.
5. https://github.com/scylladb/scylla/issues/9352 suggested that maybe a reader can't find the right partition because it tokenises the empty key wrong. So I modified murmur3_partitioner::get_token(bytes_view key) to remove the empty key special case, it didn't seem to help...

Does anybody have ideas on what code to debug to find where the partition-range read fails from sstables but succeeds from memtables, or which logging messages to enable to understand what's going on?

Thanks,
Nadav.

Piotr Sarna

Sep 27, 2021, 3:41:35 AM
to Nadav Har'El, Botond Dénes, scylladb-dev
You could also try enabling CQL tracing - maybe the output will be helpful in some way - e.g. you'd notice that the serialized key value is subtly wrong, or similar. Especially if you compare the correct memtable read against the incorrect sstable read.


Botond Dénes

Sep 27, 2021, 3:49:32 AM
to Nadav Har'El, scylladb-dev
If you suspect the bug to be in the sstable reader (which is my understanding of the above) you can enable trace level sstable logging (--logger-log-level sstable=trace), it will help us understand what happens.



Nadav Har'El

Sep 27, 2021, 8:04:15 AM
to Piotr Sarna, Botond Dénes, scylladb-dev
On Mon, Sep 27, 2021 at 10:41 AM Piotr Sarna <sa...@scylladb.com> wrote:
You could also try enabling CQL tracing - maybe the output will be helpful in some way - e.g. you'd notice that the serialized key value is subtly wrong, or similar. Especially if you compare the correct memtable read against the incorrect sstable read.

Thanks. Botond's suggestion of enabling trace-level logging could be helpful, but I found it very difficult to understand which log message is related to which query. The nice thing about your suggestion of CQL tracing is that it's much easier to see what is relevant to which query, and I can see what changes between different queries. For example, for WHERE v='' BYPASS CACHE (and nothing in memtables) I see:

Start querying singular range {{0, pk{0000}}}
Reading key {0, pk{0000}} from sstable /tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-2-big-Data.db
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-2-big-Index.db: scheduling bulk DMA read of size 4 at offset 0
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-2-big-Index.db: finished bulk DMA read of size 4 at offset 0, successfully read 4 bytes
Querying is done

Whereas for WHERE v='xyz' (which is also in the sstable) I see:

Reading key {-5638396539200223566, pk{000378797a}} from sstable /tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-4-big-Data.db
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-4-big-Index.db: scheduling bulk DMA read of size 7 at offset 0
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-4-big-Index.db: finished bulk DMA read of size 7 at offset 0, successfully read 7 bytes
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-4-big-Data.db: scheduling bulk DMA read of size 33 at offset 0
/tmp/scylla-test-329044/data/cql_test_1632743672071/cql_test_1632743672087-ad582b601f8911ec9b5a007cb8b1ddb4/md-4-big-Data.db: finished bulk DMA read of size 33 at offset 0, successfully read 33 bytes
Querying is done

This makes it clear that the bug is that in the empty-pk case, after the index query we do NOT do a data query: either we wrongly believe we didn't find the empty pk in the index file - or maybe we didn't write it correctly to the index file. I'll need to figure out which of the two problems we have.

This seems very close to https://github.com/scylladb/scylla/issues/9352, but strangely trying a trivial fix to that (removing the special case returning MIN_TOKEN) doesn't seem to help.
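For what it's worth, here is a toy model - made-up code, not Scylla's - of the token-mismatch hypothesis: an sstable index of (token, key) entries sorted by token, where the write-time token of the empty key differs from the read-time token, so the binary search misses and no data-file read is scheduled. As noted above, removing the MIN_TOKEN special case didn't fix the real bug, so this remains just one hypothesis.

```python
# Toy sstable index lookup. The token functions are stand-ins: the
# "write" side special-cases the empty key, the "read" side does not,
# so a point lookup of the empty key misses its index entry.

import bisect

def write_index(keys, token_fn):
    """Build a sorted (token, key) index, as done at write time."""
    return sorted((token_fn(k), k) for k in keys)

def point_lookup(index, key, token_fn):
    """Binary-search the index for (token, key); True if found."""
    t = token_fn(key)
    i = bisect.bisect_left(index, (t, key))
    return i < len(index) and index[i] == (t, key)

write_token = lambda k: -1 if not k else sum(k)  # empty key special-cased
read_token = lambda k: sum(k)                    # no special case

index = write_index([b'', b'xyz'], write_token)
print(point_lookup(index, b'xyz', read_token))  # True: normal key found
print(point_lookup(index, b'', read_token))     # False: tokens disagree
```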

Nadav Har'El

Sep 27, 2021, 11:03:38 AM
to Piotr Sarna, Botond Dénes, scylladb-dev
Unfortunately in the few hours that I worked on this today (yet another Jewish holiday... But this is the last one this month!) I was not able to find the problem. Almost all the code in this area has changed since I last read (or wrote...) it, so I had a lot of new code to try to understand, and the various CQL tracing and logging messages were helpful to understand which code paths are being taken - but I still don't understand why the partition with the empty partition key isn't being read from the data file. I'll have to continue debugging this on Wednesday :-(