I've started making changes to the CN1 source code to improve how densities are handled. One issue I have is that the density constants are currently small int values that do not actually match the PPI (pixels per inch) density. It would make more sense to have them be PPI values (as Android does). Is there a reason they are not PPI values directly (there might be some non-obvious good reason that I don't see)? Do you see any objection to changing them to PPI values? It would also require handling a special case for MultipleImages created before this change since, if I understood correctly, these density values are used to identify the matching image for the device resolution. So if a density encoded in a MultipleImage is <= 80, it would have to be converted to PPI before the comparison (but PPI values below 80 don't make sense on any device in use nowadays, so this check shouldn't be a problem).
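To make the backward-compatibility idea concrete, here is a minimal sketch of the legacy-to-PPI normalization I have in mind. The legacy constant values and their PPI equivalents below are assumptions for illustration (roughly following Android's density buckets); the real constants live in CN1's Display class and would need to be checked there:

```java
import java.util.Map;
import java.util.TreeMap;

public class DensityMigration {
    // Legacy CN1-style density constants (values assumed for illustration).
    static final int DENSITY_VERY_LOW = 10;
    static final int DENSITY_LOW = 20;
    static final int DENSITY_MEDIUM = 30;
    static final int DENSITY_HIGH = 40;
    static final int DENSITY_VERY_HIGH = 60;
    static final int DENSITY_HD = 80;

    // Approximate PPI equivalents (assumed, loosely based on Android buckets).
    private static final Map<Integer, Integer> LEGACY_TO_PPI = new TreeMap<>();
    static {
        LEGACY_TO_PPI.put(DENSITY_VERY_LOW, 88);
        LEGACY_TO_PPI.put(DENSITY_LOW, 120);
        LEGACY_TO_PPI.put(DENSITY_MEDIUM, 160);
        LEGACY_TO_PPI.put(DENSITY_HIGH, 240);
        LEGACY_TO_PPI.put(DENSITY_VERY_HIGH, 320);
        LEGACY_TO_PPI.put(DENSITY_HD, 540);
    }

    /**
     * Normalizes a density value to PPI: values <= 80 are treated as legacy
     * constants and converted, anything larger is assumed to already be a
     * PPI value (no real device is below ~80 PPI).
     */
    public static int toPpi(int density) {
        if (density > DENSITY_HD) {
            return density; // already a PPI value
        }
        Integer ppi = LEGACY_TO_PPI.get(density);
        if (ppi != null) {
            return ppi;
        }
        throw new IllegalArgumentException("Unknown legacy density: " + density);
    }
}
```

With this, old MultipleImage data and new PPI-based data can pass through the same lookup path without ambiguity.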
Apart from that, for iOS devices, I was thinking of having a hard-coded conversion map between device models and their PPI (the list of iOS devices is short, so it is quite easy to maintain; if the device is not in the list it would fall back to the PPI estimate, as for Android devices anyway, so it doesn't really matter if the list is not fully up to date). For Android, I was thinking of using xdpi and ydpi and checking whether the PPI obtained from these values deviates too much from the estimated PPI (based on device resolution); if it doesn't, I keep that value, which should be more precise than the estimate, otherwise I fall back to the estimate (since xdpi and ydpi can be erroneous).
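Both ideas above can be sketched in a few lines. The iOS model identifiers and PPI values in the map are just examples (the real list would be maintained alongside the port), and the 25% tolerance on the Android side is an arbitrary choice for this sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class DeviceDensity {
    // Hard-coded iOS model -> PPI map (entries are examples, not exhaustive).
    private static final Map<String, Integer> IOS_PPI = new HashMap<>();
    static {
        IOS_PPI.put("iPhone8,1", 326);  // iPhone 6s
        IOS_PPI.put("iPhone10,3", 458); // iPhone X
        IOS_PPI.put("iPad6,11", 264);   // iPad (5th gen)
    }

    /** Returns the known PPI for an iOS model, or the estimate if unknown. */
    public static int iosPpi(String model, int estimatedPpi) {
        return IOS_PPI.getOrDefault(model, estimatedPpi);
    }

    /**
     * Android: trust the reported xdpi/ydpi only if their average does not
     * deviate too much from the resolution-based estimate (some devices
     * report bogus values); otherwise fall back to the estimate.
     */
    public static int androidPpi(float xdpi, float ydpi, int estimatedPpi) {
        float reported = (xdpi + ydpi) / 2f;
        float deviation = Math.abs(reported - estimatedPpi) / estimatedPpi;
        return deviation <= 0.25f ? Math.round(reported) : estimatedPpi;
    }
}
```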
For other platforms, I don't know what the best approach would be (on Windows desktops, the DPI is a system setting that is 96 by default and can be changed by the user, so maybe taking that value when it is not 96 could be an idea...?), so I would default to the PPI estimate based on resolution for now...
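For what it's worth, on the desktop port the Windows idea could look something like this (a rough sketch using AWT's reported screen resolution; whether this is the right source for the setting is an open question):

```java
import java.awt.GraphicsEnvironment;
import java.awt.Toolkit;

public class DesktopDensity {
    /**
     * Sketch for desktop: take the system DPI when the user has changed it
     * from the 96 default, otherwise fall back to the resolution-based
     * estimate passed in by the caller.
     */
    public static int desktopPpi(int estimatedPpi) {
        if (GraphicsEnvironment.isHeadless()) {
            return estimatedPpi; // no display info available
        }
        int systemDpi = Toolkit.getDefaultToolkit().getScreenResolution();
        return systemDpi != 96 ? systemDpi : estimatedPpi;
    }
}
```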
Does that sound good to you? If so, I'll probably send a pull request with these changes in the coming days.