Interesting. Unfortunately, it's hard to tell from this information alone whether this is a user error or something else. A couple of salient points:
First, all autocomplete fields in AtoM have a known issue: if you type too quickly and don't actually select from the autocomplete results shown in the drop-down (instead just tabbing out, hitting enter, etc.), you can end up creating duplicate records. It's possible that's what's happening here.
As a second, minor point of information (even though I don't think it's what's happening in this particular case), it's worth noting how the new global physical storage report works. From the TIP admonition at the very end of this section of the docs:
Remember, the resulting report is focused on container relations, and not just on the containers themselves. Because of this, the same physical storage container might be described in multiple rows of your export. Each row in the CSV report represents a relation (or for unlinked containers, a lack of one), so if a single storage container is linked to 5 archival description records and 5 accession records, that storage container would appear in 10 rows in the exported report.
However, since your screenshot shows the user interface and not a report page, this is more of an FYI than a theory about the cause.
Given the above, however, if you are saying that these duplicates are appearing AFTER running a report, then it's possible a new application bug is at play. Do any of the common maintenance tasks (rebuilding the nested set, clearing the cache, restarting php-fpm, rebuilding the search index) resolve this?
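For reference, those maintenance steps might look something like this. The AtoM root path and the php-fpm service name below are assumptions - they vary by install and PHP version, so adjust for your environment:

```shell
cd /usr/share/nginx/atom              # your AtoM root may differ

php symfony propel:build-nested-set   # rebuild the nested set
php symfony cc                        # clear the application cache
sudo systemctl restart php7.4-fpm     # restart php-fpm (unit name varies by PHP version)
php symfony search:populate           # repopulate the search index
```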
Causes aside, there's some good news at least: we have a command-line task that should help you resolve this issue, assuming these really are all accidental exact-duplicate locations.
If the name, location, AND container type are identical, then running the following task preserves the oldest of the duplicate locations, moves all accession and description relations to that record, and then deletes any remaining duplicates:
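If I recall correctly, the task in question is the physical object normalization task - please double-check the exact task name against the docs for your AtoM version before running it:

```shell
# Run from your AtoM root directory
php symfony physicalobject:normalize
```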
Be sure to review the CLI options - for example, there is a --dry-run option that shows how many containers would be affected before you actually run the task, and a --verbose option that outputs more detail in the CLI as the task progresses.
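A sketch of the safest workflow, using the task name from my previous reply (an assumption - confirm both the task name and flag spellings with your version's --help output):

```shell
# 1. Preview: report how many containers would be merged, without changing anything
php symfony physicalobject:normalize --dry-run

# 2. If the preview looks right, run for real with detailed progress output
php symfony physicalobject:normalize --verbose
```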
If that DOES succeed in clearing things up, I would be very curious to hear whether you can produce new duplicates by running a storage report (rather than by creating/linking new storage locations, which might have been a user error). Let us know how everything goes!