Hello,
I have a basic test website that I am trying to spider using the ZAP API, but the URLs found by the spider look questionable.
I have installed the Python OWASP ZAP API client, version 2.4-0.0.10, with Python 3.6. The website runs on IIS and has no authentication.
The site is up and I can access it fine in a browser.
However, when I spider it using the ZAP API, the only URLs it returns after the spidering completes are robots.txt and sitemap.xml
(I checked that the spider status was 100% before retrieving the URLs).
The problem is that neither of these URLs/files actually exists on the server.
There are other files and subfolders (virtual directories), each containing some files, under the wwwroot folder, but none of these are found or returned.
I tried adjusting the default max depth and max children options on the spider object like this:

zap.spider.set_option_max_depth(6)     # should be more than enough, as the website is only two levels deep (root plus some subdirectories)
zap.spider.set_option_max_children(0)  # 0 is supposed to be interpreted as unlimited
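
For context, here is a minimal sketch of the full flow I am running (the target URL and API key are placeholders, and the polling loop is just how I wait for completion):

import time
from zapv2 import ZAPv2

apikey = 'changeme'           # placeholder
target = 'http://localhost/'  # placeholder for my IIS test site

zap = ZAPv2(apikey=apikey)    # assumes ZAP is already listening on 127.0.0.1:8080

zap.spider.set_option_max_depth(6)
zap.spider.set_option_max_children(0)

zap.urlopen(target)                  # open the target once so ZAP knows about it
scan_id = zap.spider.scan(target)    # start the spider

while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)                    # wait until the spider reports 100%

for url in zap.spider.results(scan_id):
    print(url)                       # only robots.txt and sitemap.xml show up here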
That did not help. I have seen the spider return a lot more, including the actual folders/files within a (different) website, but that only
happened once and I cannot reproduce it.
One other observation: it seems that for the spidering (or anything else, for that matter) to work, I need to launch ZAP in daemon mode first,
e.g. zap -daemon. I thought that creating a ZAP object like the one below would make that unnecessary, but correct me if I am wrong.
from zapv2 import ZAPv2
...
zap = ZAPv2(apikey=apikey)  # passing apikey by keyword; the first positional parameter of ZAPv2 is proxies, not apikey
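
For reference, this is the fuller form of the constructor as I understand it, with the proxy address spelled out (127.0.0.1:8080 is ZAP's default, which I am assuming here):

zap = ZAPv2(apikey=apikey,
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})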
What am I missing or doing wrong? Thanks,
Jay