I think there are several aspects to this. Depending on what you want - reliable iteration and movement of a pointer along a text, reliable search, or something else - you need to agree on a standard. Or possibly on several within the same program.
Where I deal with Farsi etc. texts in programs (searching poor-quality user input against long, previously prepared texts of often higher quality, i.e. with ZWNJ and diacritics reliably present in the text), we use “decomposed, stripped of diacritics, ZWNJ etc., then normalised” for search, which will give you the minimal count. Similarly, for “give me the third and seventh letter of your password” you need a count based on stripped characters with ZWNJ deleted.
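A minimal sketch of that folding, assuming Python and its unicodedata module (the function name and the choice of NFC for the final normalisation step are mine, purely illustrative):

    import unicodedata

    ZWNJ = "\u200c"  # zero-width non-joiner

    def search_key(text: str) -> str:
        """Fold text for search: decompose, drop combining marks
        (diacritics) and ZWNJ, then renormalise. This yields the
        minimal letter count."""
        decomposed = unicodedata.normalize("NFD", text)
        stripped = "".join(
            ch for ch in decomposed
            if unicodedata.category(ch) != "Mn" and ch != ZWNJ
        )
        return unicodedata.normalize("NFC", stripped)

    # e.g. a Persian verb form written with a ZWNJ between its two parts:
    word = "می\u200cروم"          # 6 code points as typed
    print(len(word))              # 6
    print(len(search_key(word)))  # 5 - ZWNJ removed, the minimal count

The same stripped count is the one you would hand to the “third and seventh letter of your password” question.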
Simple decomposed, unstripped text will always give the highest count of your alternatives. We use that for all pointing and iterating on the texts where user input is not involved.
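For comparison, a small Python illustration of why plain NFD gives the highest count (the word is just one convenient example where the composed and decomposed forms differ):

    import unicodedata

    word = "آب"  # "water": alef-with-madda (U+0622) + beh
    print(len(unicodedata.normalize("NFC", word)))  # 2 - composed form
    print(len(unicodedata.normalize("NFD", word)))  # 3 - the madda splits off: the highest count

Positions taken over the NFD form address every base letter and mark individually, which is what makes it convenient for pointing and iterating.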
I am sure there are other applications - the bottom line is, you need to think about and agree (with anyone else contributing, at the very least) on what you want to achieve, and hence which way of processing Unicode is best in your particular use case. But no way is “wrong”, and hence none of the results is wrong.
Peter