Reverse the order of the check: it was originally intended to guard
the assignment to the HashSplitter uncached field (allowing a simple
assignment), which means it must ensure that any size_t value will
fit in an off_t.
Thanks to Mark J Hewitt for reporting the problem, which appeared on a
32-bit system where, unsurprisingly, sizeof(off_t) was 8 and
sizeof(size_t) was 4.
lib/bup/_hashsplit.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/bup/_hashsplit.c b/lib/bup/_hashsplit.c
index 289509638..ade413181 100644
--- a/lib/bup/_hashsplit.c
+++ b/lib/bup/_hashsplit.c
@@ -383,7 +383,7 @@ static int HashSplitter_uncache(HashSplitter *self, int last)
size_t pages = len / page_size;
// now track where and how much to uncache
- off_t start = self->uncached; // see assumptions (off_t <= size_t)
+ off_t start = self->uncached; // see assumptions (size_t <= off_t)
// Check against overflow up front
size_t pgstart = self->uncached / page_size;
@@ -659,7 +659,7 @@ int hashsplit_init(void)
{
// Assumptions the rest of the code can depend on.
assert(sizeof(Py_ssize_t) <= sizeof(size_t));
- assert(sizeof(off_t) <= sizeof(size_t));
+ assert(sizeof(size_t) <= sizeof(off_t));
assert(CHAR_BIT == 8);
assert(sizeof(Py_ssize_t) <= sizeof(size_t));
--
2.39.2