About your suggestions:
> 1. slow foreign keys. We added indexes on big tables holding huge data, like m_transaction, c_orderline, etc.
Yes, indexes on foreign keys heavily improve delete/update
performance.
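As a minimal sketch of that fix (the exact index names are made up, and you should first check which foreign-key columns are actually unindexed in your database):

```sql
-- Illustrative only: add indexes on child-side FK columns of large
-- tables so cascading DELETEs don't sequential-scan the child table.
CREATE INDEX m_transaction_product ON m_transaction (m_product_id);
CREATE INDEX c_orderline_order ON c_orderline (c_order_id);
```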
> 2. the form is slow - installed on 8.2z, selecting all checkboxes (from 85 to 250) took a long time; probably something can be optimized in Java
I haven't tried it lately; when I tested it was OK. Maybe there
is room for improvement. AFAIR, clicking on a table detects all
its children and automatically enables them, so that could be a
heavy process.
> 3. run in the background - I tested a big tenant, but for one table with approx. 300k workflows to delete, after 2 hours I got a Java transaction timeout
Yes, we have this transaction timeout limit hardcoded in
iDempiere, I think it can be changed via customization.
> summary: I was able to delete small init tenants or small
data, but couldn't delete tenants with 5 years of data (approx.
30k invoices + related data).
> Wondering whether anybody has successfully deleted huge data
with the plugin on AWS RDS or a bare-metal server.
In general, for complete tenants, I would go with the script
mentioned at the beginning of this message.
> I suppose implementing some changes in the long term would
allow deleting big tenants' or orgs' data
> 1. a delete/retention schema (defining the same as the
checkboxes, with the right order loaded from the App Dict/FKs)
Yes, this sounds like a good improvement; it would also be good to allow defining it in batches, as in your next point.
> 2. a process that does the same as the form, based on that
schema, and allows deleting data in batches (with limit/offset
and continuous commits) *
The form allows deleting partial data; indeed, it was originally
intended to delete just transactional data, so you can do it in
batches. I have done this, but selecting the batches manually is
a bit overwhelming.
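The limit/continuous-commit idea from point 2 can be sketched in plain PostgreSQL. This is only an illustration, not the plugin's code: the table (AD_WF_Process, since point 3 mentions workflow data), the tenant ID, and the batch size are all placeholders.

```sql
-- Illustrative batched delete: remove at most 10000 rows per
-- statement, commit, and repeat until 0 rows are affected.
-- This keeps each transaction short, avoiding the timeout from
-- point 3. AD_Client_ID 1000001 is a hypothetical tenant.
DELETE FROM ad_wf_process
 WHERE ad_wf_process_id IN (
       SELECT ad_wf_process_id
         FROM ad_wf_process
        WHERE ad_client_id = 1000001
        LIMIT 10000);
COMMIT;
```

A driving script (or a PL/pgSQL procedure with transaction control) would re-run this statement in a loop, which is essentially what an automated batch process would do instead of selecting batches manually in the form.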