Depending on how long a site has been on Rootsweb, I would also do a pull for two other URLs for the same county:
if they return results, the total number of files will likely vary with how long the site existed at that URL.
Two cases in point - a county in New York - using only HTTrack:
A-sites.rootsweb - 872 files, 68 folders - 41.22 MB
B-rootsweb.ancestry - 4974 files, 162 folders - 352.89 MB
C-rootsweb.com - 5216 files, 300 folders - 153 MB
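The three pulls above can be generated mechanically before feeding them to HTTrack. A minimal sketch, assuming the full hostnames behind the A/B/C labels and using "~nyqueens" purely as a hypothetical county path:

```python
# Build candidate Rootsweb URL variants for one county site.
# Hostnames are assumed expansions of the A/B/C pulls above;
# the county path ("~nyqueens") is a hypothetical example.

HOSTS = [
    "sites.rootsweb.com",      # A - sites.rootsweb
    "rootsweb.ancestry.com",   # B - rootsweb.ancestry
    "www.rootsweb.com",        # C - rootsweb.com
]

def candidate_urls(county_path):
    """Return one URL per historical Rootsweb hostname."""
    return [f"http://{host}/{county_path}/" for host in HOSTS]

for url in candidate_urls("~nyqueens"):
    print(url)  # run each as a separate HTTrack (or wget) pull
```

Each URL then gets its own mirror run, which is why the file and folder counts can differ so much between pulls of the "same" county.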
Another county - using Wayback and HTTrack:
=====================================================
So in the case of Queens, created by different people over the years for NYGenWeb, you have this mass of files,
and the first thought is: what kind of mess is this to deal with? Speaking from experience...
Alas for me, I will either end up downloading and uploading en masse, or take it folder by folder of what I have access to
and figure it out as I go.
I plan, as Joy said some time ago, to scrape Rootsweb for as much as I can for each county I maintain.
Best of luck to you all.
Tim Stowell