This engine is part of a production workflow built on MementoDB.
It converts a 12-column merged transaction CSV into a deterministic 16-column Digi@Flow format, with a focus on reproducibility and stable financial outcomes in a mobile execution environment.
I’m documenting the approach here and am interested in how others think about rounding, tax normalization, and transaction modeling.
The engine operates on a snapshot of merged transaction data.
In this model:

- each row is treated as an immutable input state
- there are no external lookups or cross-row dependencies
- all calculations are finalized per row
- upstream inconsistencies are normalized at this layer
This keeps the output deterministic regardless of how the source data was produced.
In practice, financial pipelines in MementoDB often require:

- deterministic rounding at the smallest currency unit (0.01)
- row-level finalization (no cross-row adjustments)
- tolerance for inconsistent tax rate formats (0.08 / 8 / 1.08)
- safe handling of quoted CSV fields
- predictable behavior under mobile constraints
This engine is used in production, so operational stability is prioritized over mathematical symmetry.
All monetary values are finalized per row using Math.floor() with a small epsilon to suppress floating-point noise.
Tax rate normalization: accepts multiple input formats (0.08, 8, 1.08) and normalizes all of them to the fractional form (0.08).
Service vs. product logic: if cost == unit price and the margin resolves to zero, the full ex-tax amount is treated as profit (service/labor model).
A lightweight parser handles quoted fields.
Optimized for controlled input, not full RFC4180 compliance.
Designed for MementoDB’s JavaScript runtime with mobile-first constraints.
These choices are intentional:

- floor-based rounding ensures consistent accumulation
- epsilon is used only to neutralize floating-point artifacts
- date formatting follows the local runtime timezone
- service detection is heuristic and model-dependent
Curious how others approach this layer:

- Do you prefer floor, round, or banker’s rounding for accumulation consistency?
- How do you normalize tax inputs in loosely structured data?
- Do you model service vs. product explicitly, or infer it like this?
Interested in different trade-offs and design directions.