Hi,
I'm working on a Google Drive storage reduction project in a large Workspace environment and have been analyzing duplicate files using the admin tools, GAM, and GAT+. The reporting side is clear; what I'm trying to pin down is what cleanup is actually doable.
Has anyone successfully deleted duplicate files at scale while preserving sharing, or is that not feasible because ownership and permissions are attached to each individual file? Also, if the duplicates are shared with different users, have you found any workable cleanup approaches?
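For context, here is roughly how I have been building the duplicate inventory with GAM. Treat it as a sketch rather than exact commands: the field names follow the Drive API (md5Checksum is only populated for binary files, not native Docs/Sheets), and syntax differs a bit between GAM7 and GAMADV-XTD3.

```
# Export one user's file inventory with the fields needed to spot duplicates.
# "shared" is the Drive API boolean; md5checksum is empty for native Google files.
gam redirect csv ./filelist.csv user someuser@example.com print filelist fields id,name,md5checksum,size,shared

# Then group the rows by md5checksum offline (spreadsheet, SQL, etc.)
# to identify the duplicate sets.
```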
I'm also curious whether anyone has used commands or workflows to safely delete only non-shared duplicates. I wanted to see if others have tackled this already, or if there is a recommended way to test outside of live data, such as a separate Workspace test environment.
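For the non-shared-only case, the workflow I have in mind is: filter that inventory down to rows where shared is False and the md5checksum appears more than once, keep one copy per checksum, and feed the remainder back to GAM as a batch. Here dupes_nonshared.csv and its owner/id columns are placeholders for whatever the filtering step produces, and I would trash rather than purge so it stays reversible:

```
# dupes_nonshared.csv is assumed to have columns: owner,id
# (one row per surplus copy, keeping one copy of each checksum).
# GAMADV-XTD3 syntax, if I'm reading the docs right; "trash" moves files to the
# owner's trash rather than hard-deleting, so mistakes can be undone for ~30 days.
gam csv dupes_nonshared.csv gam user "~owner" trash drivefile "~id"
```

Does that look sane, or is there a safer pattern people use?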
Thank you.