Has anyone ever faced an issue with ID columns hitting the limit of the Java int data type?

92 views

Syed

unread,
Aug 3, 2018, 8:01:14 AM
to iDempiere
Hi Community,

Can iDempiere work without any issues if high-volume tables such as M_Transaction and Fact_Acct have more records than the Java int max value, 2,147,483,647?

Regards,
Syed.
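For reference, this is what the limit looks like in practice: Java's int is a signed 32-bit value, so incrementing past Integer.MAX_VALUE silently wraps to a negative number, while long has plenty of headroom. A minimal sketch:

```java
// Demonstrates the boundary behaviour of int vs. long ID values in Java.
public class IntOverflowDemo {
    public static void main(String[] args) {
        int id = Integer.MAX_VALUE;              // 2,147,483,647
        id++;                                    // wraps around silently
        System.out.println(id);                  // prints -2147483648
        long nextId = (long) Integer.MAX_VALUE + 1;
        System.out.println(nextId);              // prints 2147483648
    }
}
```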

Ayaz Ahmed

unread,
Aug 4, 2018, 12:57:41 AM
to idem...@googlegroups.com
Yes, we faced it and could not solve the issue.

--
You received this message because you are subscribed to the Google Groups "iDempiere" group.
To unsubscribe from this group and stop receiving emails from it, send an email to idempiere+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/idempiere/6956a5bc-a047-4e95-a30b-5a920999a549%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.




bizk...@gmail.com

unread,
Aug 4, 2018, 3:33:28 AM
to iDempiere
2.1 Trillion is quite a massive number.

If there were a robust way to
  • archive data from X years back,
  • restart the numbering of active transactions from 1,000,000, and
  • add an ID1 field containing the old transaction ID (for audit purposes),
we wouldn't have to worry about running out of numbers.
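A rough in-memory sketch of that renumbering idea (the restart point and the old-to-new mapping standing in for the proposed ID1 audit column are illustrative; a real migration would do this in SQL against the actual tables, inside a transaction):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: assigns fresh IDs starting at 1,000,000 while keeping a
// map back to the old ID (the "ID1" audit column suggested above).
public class RenumberSketch {
    static Map<Long, Long> renumber(long[] oldIds) {
        Map<Long, Long> oldToNew = new LinkedHashMap<>();
        long next = 1_000_000L;              // restart point suggested above
        for (long oldId : oldIds) {
            oldToNew.put(oldId, next++);     // new ID; old ID kept for audit
        }
        return oldToNew;
    }

    public static void main(String[] args) {
        long[] oldIds = {2_147_000_001L, 2_147_000_002L, 2_147_000_003L};
        renumber(oldIds).forEach((oldId, newId) ->
                System.out.println(oldId + " -> " + newId));
    }
}
```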



Syed

unread,
Aug 4, 2018, 4:13:47 AM
to iDempiere
It is 2.1 billion, not trillion, but it is still a massive number, and only a few businesses are capable of reaching it.

Changing the int data type to long / BigInteger seems like the right solution to me, rather than archiving transactions, because it preserves consistency when reporting and auditing the entire history of transactions.

Another approach, starting a new iDempiere instance with opening entries, would also overcome this limitation.

Carlos Antonio Ruiz Gomez

unread,
Aug 4, 2018, 6:26:09 AM
to idem...@googlegroups.com
Sounds interesting - how did you arrive at such a number?
Is it because you have 2 billion records in your table?
Or do you have fewer records but the sequence went wild?

If you have 2 billion records in your table - which table is that?  How is the DB performance with that number of rows?  PostgreSQL?  Oracle?

If you don't have that many records but your sequence went wild - then I think it is easy to write a program to re-sequence it.


BTW - the idea of changing everything to long sounds interesting (that's possible after Java 8).
Would that affect performance?
Would all plugins and Jasper reports that use int need to be modified as well?


Another thing we have explored is changing the _ID columns to be UUID strings instead of numeric - but from what I researched, that's a massive change - probably not backward compatible.


Regards,

Carlos Ruiz





bizk...@gmail.com

unread,
Aug 4, 2018, 6:44:52 AM
to iDempiere
Good points, Carlos. I do think an archival process becomes key, given performance considerations.

It would be nice if we could archive and delete old records, even if we don't change the sequence. That could be something for the future.

Syed

unread,
Aug 4, 2018, 7:35:32 AM
to iDempiere
> Sounds interesting - how did you arrive at such a number?
> Is it because you have 2 billion records in your table?
> Or do you have fewer records but the sequence went wild?
> If you have 2 billion records in your table - which table is that?  How is the DB performance with that number of rows?  PostgreSQL?  Oracle?
No Carlos, actually I don't have that many transactions, but I see the limitation of the int data type, which can hold values only up to 2,147,483,647.
I guess the M_Transaction and Fact_Acct tables are likely to reach billions of records in high-volume businesses and SaaS environments.

> BTW - the idea of changing everything to long sounds interesting (that's possible after Java 8).
> Would that affect performance?
I am sure that changing int to long will simply increase the capacity; it neither increases nor decreases performance.
To increase performance, we have to optimize the database and/or upgrade the database server hardware.

> Would all plugins and Jasper reports that use int need to be modified as well?
Yes.



Carlos Antonio Ruiz Gómez

unread,
Aug 5, 2018, 7:20:46 AM
to iDempiere
Hi Syed, yes, I understood your message as theoretical, but Ayaz said they had already confronted the situation, so my question was trying to get info about that specific case to understand it better.

iDempiere was born in Sep 2006 (ADempiere times) - so, if you started your implementation 12 years ago, to arrive at this sequence you would need to push about 15 million records every month into the same table.

That's 500K records daily, or, as a continuous flow, about 6 records per second.

The scenario seems feasible, but not plausible. If you are going to have that number of records, I guess you would take care of that from the early stages.
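The back-of-the-envelope arithmetic above is easy to reproduce (integer division here, so the figures are approximate):

```java
// Checks the rough rate needed to exhaust a 32-bit sequence in ~12 years.
public class SequenceRateCheck {
    public static void main(String[] args) {
        long intMax = 2_147_483_647L;        // Java int upper bound
        long months = 12 * 12;               // ~12 years since Sep 2006
        long perMonth = intMax / months;     // ~14.9 million records/month
        long perDay = perMonth / 30;         // ~497K records/day
        long perSecond = perDay / 86_400;    // ~5-6 records/second
        System.out.println(perMonth + "/month, " + perDay + "/day, ~"
                + perSecond + "/sec");
    }
}
```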

Regards,

Carlos Ruiz

Ayaz Ahmed

unread,
Aug 6, 2018, 7:02:51 AM
to idem...@googlegroups.com
Hi,
In my case, the client's IDs started from 9,000,000,000, and I was trying to import their data into iDempiere.
I faced the above issue, so we decided to regenerate their IDs. They were migrating from an old solution to iDempiere.


Carlos Antonio Ruiz Gomez

unread,
Aug 6, 2018, 9:17:03 AM
to idem...@googlegroups.com
Thanks, Ayaz, for the info - it's clearer now.
----

bizk...@gmail.com

unread,
Aug 7, 2018, 12:26:52 AM
to iDempiere
Carlos - out of curiosity, has anyone tried or built anything for archiving old data?

Nicolas Micoud

unread,
Aug 7, 2018, 2:58:33 AM
to iDempiere
There is the housekeeping process (http://wiki.idempiere.org/en/Housekeeping).

Regards,

Nicolas

shiju01

unread,
Aug 7, 2018, 4:05:55 AM
to iDempiere
Dear Community

This issue can be solved permanently by converting the data type of the ID columns from numeric to string.

The long story can be found here:

http://wiki.openbravo.com/wiki/Projects:UUIDs_Usage/Technical_Documentation



UUID identifiers implementation

These are the steps to be performed for the UUID implementation:

  • Change all the primary key columns from number(10) to varchar2(32) to be able to store UUIDs.
  • Do the same for all the foreign key columns.
  • Look for non-db-FK columns that actually store references to other PKs (for example ad_tree_node.node_id, fact_acct.record_id, fact_acct.record_id2, etc.) and change their data types.
  • Create a UUID generator in Java and in the DB.
  • Adapt PL/SQL code to the new data types.
    • Types for PK and FK columns must be transformed to varchar2(32).
  • Adapt WAD code to the new data types.
  • Adapt manual code to the new data types.
    • xsql files needn't explicitly cast char columns as they do with numbers, so casts for these fields must be removed.
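For the "UUID generator in Java" step, one possible sketch is java.util.UUID with the hyphens stripped, so the 32 hex characters fit the varchar2(32) columns described above (whether Openbravo generates them exactly this way is not confirmed here):

```java
import java.util.UUID;

public class UuidKeyGenerator {
    // Random (version 4) UUID rendered as 32 hex characters with no hyphens,
    // so it fits a varchar2(32) primary key column.
    static String newKey() {
        return UUID.randomUUID().toString().replace("-", "");
    }

    public static void main(String[] args) {
        String key = newKey();
        System.out.println(key + " (" + key.length() + " chars)");
    }
}
```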