Hi all,
I was spidering a website to gather endpoints when the program started scanning some unusual URLs. Something like:
I figured ZAP didn't like a page or two and didn't know what to do with them, but the scan tried reading about 10,000 pages like that. All of them returned a 404 "not found" error, none of them added any pages to the "eyeball", and it took about 2 hours.
Do I have a setting wrong? Should I have told the program to exclude a certain type of page? I'm not really sure what it was doing, so I ended the scan before it finished.
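For what it's worth, I did find the spider's `excludeFromScan` action in the ZAP API, and I'm wondering if that's the setting I should have used. A rough sketch of what I was thinking (the regex is just a placeholder, since I'm not sure what pattern those URLs actually follow, and `exclude_noise` assumes the `zapv2` Python client with ZAP running locally):

```python
import re

# Placeholder regex for the "unusual" URLs -- I'd need to figure out
# what they actually have in common before using something real here.
NOISE_PATTERN = r".*\?session=.*"

def is_noise(url: str) -> bool:
    """Check a URL against the exclusion regex (the same regex ZAP would apply)."""
    return re.match(NOISE_PATTERN, url) is not None

def exclude_noise(zap, pattern: str = NOISE_PATTERN) -> None:
    """Tell ZAP's spider to skip URLs matching the pattern.

    `zap` is assumed to be a connected zapv2.ZAPv2 client instance.
    """
    zap.spider.exclude_from_scan(pattern)
```

Is that the right direction, or is there a GUI option I'm missing?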
Sorry if this is a noob question, but I have never seen this addressed here.