Questions About ZAP Features


Alexis N.

Feb 27, 2026, 3:48:56 AM
to ZAP User Group

Hello ZAP Team & Community 😀!

I hope you’re all doing well!

I have a few questions about some specific features in ZAP, and I was hoping you might be able to help me understand them a bit better.

First, I’d love to know more about how the threshold mechanism works exactly. How does ZAP decide whether a finding is more or less likely to be a false positive? Does it run additional checks or apply some kind of internal validation logic?

Also, when a technology is excluded, are the related rules automatically excluded as well? From the tests I’ve done and from what I can see in the reports it looks like they are, but I’d really appreciate a quick confirmation just to be sure.

Another question: is there a way to see the exact list of pages that were analyzed during a scan? Ideally, I’d like to retrieve the full list of scanned URLs, not only the total number. I know the statistics provide a URL count, but I couldn’t find a way to extract the complete list, unless I missed something.

Finally, I’ve done some work on my side to update the rule mapping based on the recent OWASP Top 10 2025 changes, since I noticed the documentation hasn’t been updated yet. If that’s helpful, I’d be more than happy to share it!

Thanks a lot in advance for your help, and for all the great work you’re doing on ZAP.

Have a great day and a nice weekend!

Kind regards,
Alexis

Simon Bennetts

Feb 27, 2026, 5:36:32 AM
to ZAP User Group
Hiya Alexis,

Replies inline:

> Hello ZAP Team & Community 😀 !
>
> I hope you’re all doing well!
>
> I have a few questions about some specific features in ZAP, and I was hoping you might be able to help me understand them a bit better.
>
> First, I’d love to know more about how the threshold mechanism works exactly. How does ZAP decide whether a finding is more or less likely to be a false positive? Does it run additional checks or apply some kind of internal validation logic?

It depends on the rule. You'll have to look at the code, which is linked off every alert: https://www.zaproxy.org/docs/alerts/

> Also, when a technology is excluded, are the related rules automatically excluded as well? From the tests I’ve done and from what I can see in the reports it looks like they are, but I’d really appreciate a quick confirmation just to be sure.

Yes.

> Another question: is there a way to see the exact list of pages that were analyzed during a scan? Ideally, I’d like to retrieve the full list of scanned URLs, not only the total number. I know statistics provide a URL count, but I couldn’t find a way to extract the complete list unless I missed something.

How are you running ZAP?
The URLs are all shown in the ZAP Sites Tree, which is also available via the API and the Automation Framework "export" job. 
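As a rough illustration of the Automation Framework approach mentioned above, an "export" job entry in a plan might look like the sketch below. The parameter names and values here are assumptions made for illustration only; check the current Automation Framework documentation for the exact options (the API route for the same data should be the `core` component's `urls` view):

```yaml
# Hypothetical Automation Framework job entry (parameter names unverified):
jobs:
  - type: export                  # the "export" job referred to above
    parameters:
      type: urls                  # assumed: export the list of URLs rather than e.g. HAR
      fileName: /tmp/urls.txt     # placeholder output path
      context: "Default Context"  # placeholder context name
```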

> Finally, I’ve done some work on my side to update the rule mapping based on the recent OWASP Top 10 2025 changes, since I noticed the documentation hasn’t been updated yet. If that’s helpful, I’d be more than happy to share it!

That is being worked on - this PR added the tags: #7137
I think a PR for the rule mappings should be submitted soon - we would definitely appreciate your review when it's been submitted!

> Thanks a lot in advance for your help, and for all the great work you’re doing on ZAP.

Thanks!

Simon 

Alexis N.

Feb 27, 2026, 10:10:01 AM
to ZAP User Group

Hello Simon,

Thanks a lot for your answers; they were very clear and helpful. I realize that for my first question I should have been a bit more precise.

In the Automation Framework, we can configure rule thresholds (Off, Low, Medium, High) for active and passive scan rules. As I understand it, this threshold affects how strict a rule is, with lower thresholds tending to report more potential issues (and therefore more false positives), and higher thresholds being more conservative.

What I’m trying to better understand is what this evaluation is actually based on. Is the threshold generally driven by the same type of logic across most rules? What are the typical criteria used “behind the scenes” to decide whether a finding should be reported or filtered out when adjusting the threshold (while using the Automation Framework)?

I’d really like to understand this mechanism in more depth so I can better explain and justify how we tune thresholds when reviewing reports.

Regarding the PR, I’d be very happy to review it once it’s submitted and compare it with the work I’ve done on my side.

Thanks again for your time and for all the great work on ZAP.

Have a nice weekend!

Best regards,
Alexis

Simon Bennetts

Feb 27, 2026, 11:07:24 AM
to ZAP User Group
Hi Alexis,

It's just a combination of the opinions of the implementor and reviewers :)
How confident are we that the relevant alert is valid?
Hopefully the results are vaguely consistent, but all of the tests are different, so it's hard to be completely consistent.
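[Editor's note: the pattern described above - each rule gating weaker evidence behind lower thresholds - can be sketched as follows. This is a hypothetical Python illustration, not ZAP's actual code (which is Java); the `AlertThreshold` enum and the 1-3 evidence score are made up for the example:]

```python
from enum import IntEnum

class AlertThreshold(IntEnum):
    """Hypothetical stand-in for a rule's configured alert threshold."""
    LOW = 1      # report even weak indications (more false positives)
    MEDIUM = 2   # the usual default balance
    HIGH = 3     # report only strong evidence (fewer false positives)

def should_raise_alert(evidence_strength: int, threshold: AlertThreshold) -> bool:
    """Raise the alert only if the evidence meets the configured bar.

    evidence_strength is a made-up 1..3 score the implementor might assign:
    1 = weak hint, 2 = plausible, 3 = confirmed.
    """
    return evidence_strength >= threshold

# A weak hint is reported at LOW but filtered out at HIGH:
print(should_raise_alert(1, AlertThreshold.LOW))   # True
print(should_raise_alert(1, AlertThreshold.HIGH))  # False
```

Since each rule's implementor decides what "weak" versus "confirmed" evidence means for that particular check, results across rules are only loosely comparable, which is exactly the inconsistency noted above.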

I recommend sticking with the default thresholds and only changing them if you have reason to.

Cheers,

Simon