Good morning all,
Is it possible to exclude certain URLs from the active scan job when using the Automation Framework?
Currently, we have the following layers in place to prevent the active scan from attacking third-party services integrated into our web app:
We use the `-configfile` param, pointing to a file with 20+ exclusion regexes (the command we launch ZAP with is sketched after the snippet below):
```
network.globalExclusions.exclusions.exclusion(0).name="Extensions"
network.globalExclusions.exclusions.exclusion(0).enabled=true
network.globalExclusions.exclusions.exclusion(0).value="(?i)^.*\.(?:gif|jpe?g|png|ico|icns|bmp|svg|webp|mp4|css|pdf|avi|mov|wmv|webm)$"
network.globalExclusions.exclusions.exclusion(1).name="DataDog"
network.globalExclusions.exclusions.exclusion(1).enabled=true
network.globalExclusions.exclusions.exclusion(1).value=".*datadoghq\\.com.*"
```
(If this is not the correct format, please let me know; there isn't much documentation on this specific point.)
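For what it's worth, this is roughly how we launch ZAP; the paths and file names are placeholders rather than our real ones:

```
zap.sh -cmd -autorun /zap/wrk/plan.yaml -configfile /zap/wrk/exclusions.conf
```

(`-autorun` runs our automation plan and `-configfile` points at the exclusions file shown above.)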
Inside the context block of our automation plan, we specify exclusions again (a sketch of the full context is included after this list):
```yaml
excludePaths:
  - "(?!.*\\.${TARGET}).*"
  - ".*thirdparty2.com"
  - ".*thirdparty3.com"
  - ".*\\.(png|jpeg|jpg|mp4|mp3|wav|gif|svg|ico|woff|woff2|ttf|css|js)$"
```
Yet inspecting the traffic through a proxy reveals that the vast majority (80%+) of the requests sent during the active scan job go to URLs matching the excluded patterns.
I previously asked about the feasibility of excluding such sites from the AJAX Spider and was told that it was not doable. Is the answer the same for the active scan?
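In case it's relevant, the active scan job itself just points at that context; this is a trimmed-down sketch and the policy name is a placeholder:

```yaml
- type: activeScan
  parameters:
    context: "our-app"          # the context that declares the excludePaths above
    policy: "Default Policy"    # placeholder for our actual scan policy
```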
Furthermore, we use the `export` job to export a sites tree, and over 90% of that tree is made up of excluded sites. I noticed there is a `prune` job as well, but our sites tree ended up so large that the prune job could not handle the file, which caused an error in the logs.
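For completeness, this is roughly how we run those two jobs; I'm writing the parameter names from memory, so treat them as approximate rather than exact:

```yaml
- type: export
  parameters:
    context: "our-app"               # placeholder context name
    source: "sitesTree"              # from memory; the exact value may differ
    fileName: "/zap/wrk/sites.yaml"  # placeholder output path
- type: prune
  parameters:
    fileName: "/zap/wrk/sites.yaml"  # the exported sites tree that prune then chokes on
```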
Thanks in advance, and any pointers would be super helpful as I'm a bit stuck on what to do next.
Cheers,
Peter