Problems trying to use ZAP (student here, trying to learn)


Victor

Jun 17, 2024, 9:06:22 AM
to ZAP User Group
Hello,

I'm a cybersecurity student from France (which may explain any language mistakes in this message), and I'm trying to learn how to use ZAP.
I'm doing an internship at a company that develops a web application for company management (billing, budget management, ...). My mission as an intern is to learn how to use ZAP and build an automated way to run scans.

So far, I have managed to run ZAP against my test environment (a test server running Nextcloud). I was able to run manual and automated tests (with the Automation Framework). I even managed to make authentication work; it was handled almost instantly by ZAP.

Now that I am on my company's test instance, I'm having trouble making it work. Their app is quite a modern one, which makes it harder to scan than my Nextcloud. I know that the login process is managed by a different service/server than the main app. From what I've heard, there are a number of redirections between the login page and the main app page. That doesn't seem to be a problem for ZAP's authentication test, but it might be one of the sources of my problems.
Here are the points I'm having trouble with:

- The crawlers (Spider and AJAX Spider) don't seem to scan everything on the page. I cannot see the entire website architecture in ZAP. They also seem to loop at some points.
I know the crawlers are not working well because even if I run a manual scan and log in to the website manually, the crawlers still don't cover the entire website. It's as if the tabs in the menu are never clicked by the crawlers.

- I am not sure authentication really works, but I don't really understand how to check whether it worked, since I seem to have a problem with the crawlers too. The authentication test passes, but I'm not sure the session is kept alive when I run other jobs after the "requestor" one in the Automation Framework.

- When the AJAX Spider runs in my AF plan, it opens a Chrome window and tries to log in again, but fails because it enters the email address twice (or the email address is kept from the previous login; either way I end up with two email addresses in the field, which blocks the login process). Even if I enter the right credentials for it, the same window opens a few seconds after the first one closes. Is my crawler logging out again and again?

Here is my .yaml file (modified for privacy).
Any help is welcome. Yes, I'm probably doing something wrong, but I've been trying to make it work for more than two weeks, I've watched a lot of your videos, and I feel I cannot really go further without help.

Thanks in advance to anyone who takes some time to help me :)



---------------------------yaml configuration file--------------------------------

---
env:
  contexts:
  - name: "company_test_env"
    urls:
    - "https://company_test_env_addresse.com"
    - "https://company_login_page_test_env.com"
    excludePaths:
    authentication:
      method: "browser"
      parameters:
        loginPageUrl: "https://company_login_page_test_env.com"
        browserId: "chrome"
        loginPageWait: 30
      verification:
        method: "poll"
        loggedInRegex: "\\Q 200 OK\\E"
        loggedOutRegex: "\\Q 302 Found\\E"
        pollFrequency: 60
        pollUnits: "requests"
        pollUrl: "https://company_test_env_addresse.com"
        pollPostData: ""
    sessionManagement:
      method: "headers"
      parameters:
        Cookie: "cookies here (automatically generated by the requestor)"
    technology:
      exclude: []
      include: []
    users:
    - name: "myn...@company.com"
      credentials:
        password: "Super secret and unbreakable password"
        username: "myn...@company.com"
  parameters:
    failOnError: true
    failOnWarning: false
    progressToStdout: true
  vars: {}
jobs:
- parameters:
    scanOnlyInScope: true
    enableTags: false
    disableAllRules: false
  rules: []
  name: "passiveScan-config"
  type: "passiveScan-config"
- parameters:
    user: "myn...@company.com"
  requests:
  - url: "https://company_login_page_test_env.com"
    name: ""
    method: ""
    httpVersion: ""
    headers: []
    data: ""
  name: "requestor"
  type: "requestor"
- parameters:
    context: ""
    user: "myn...@company.com"
    url: "https://company_test_env_addresse.com"
    maxDuration: 30
    maxDepth: 5
    maxChildren: 0
    acceptCookies: false
    handleODataParametersVisited: false
    handleParameters: "IGNORE_COMPLETELY"
    maxParseSizeBytes: 0
    parseComments: false
    parseGit: false
    parseRobotsTxt: false
    parseSitemapXml: false
    parseSVNEntries: false
    postForm: false
    processForm: false
    requestWaitTime: 0
    sendRefererHeader: false
    userAgent: ""
  tests:
  - onFail: "INFO"
    statistic: "automation.spider.urls.added"
    site: ""
    operator: ">="
    value: 100
    name: "At least 100 URLs found"
    type: "stats"
  name: "spider"
  type: "spider"
- parameters:
    context: ""
    user: "myn...@company.com"
    url: "https://company_test_env_addresse.com"
    maxDuration: 60
    maxCrawlDepth: 10
    numberOfBrowsers: 16
    browserId: "chrome-headless"
    maxCrawlStates: 0
    eventWait: 1000
    reloadWait: 1000
    clickDefaultElems: true
    clickElemsOnce: true
    randomInputs: true
    inScopeOnly: true
    runOnlyIfModern: false
  tests:
  - onFail: "INFO"
    statistic: "spiderAjax.urls.added"
    site: ""
    operator: ">="
    value: 100
    name: "At least 100 URLs found"
    type: "stats"
  name: "spiderAjax"
  type: "spiderAjax"
- parameters:
    maxDuration: 0
  name: "passiveScan-wait"
  type: "passiveScan-wait"
- parameters:
    context: ""
    user: "myn...@company.com"
    policy: ""
    maxRuleDurationInMins: 0
    maxScanDurationInMins: 0
    addQueryParam: false
    delayInMs: 0
    handleAntiCSRFTokens: false
    injectPluginIdInHeader: false
    scanHeadersAllRequests: false
    threadPerHost: 16
    maxAlertsPerRule: 0
  policyDefinition:
    defaultStrength: "medium"
    defaultThreshold: "medium"
    rules: []
  name: "activeScan"
  type: "activeScan"
- parameters:
    template: "risk-confidence-html"
    theme: "original"
    reportDir: "/home/myname/Documents/Zap_reports"
    reportFile: ""
    reportTitle: "ZAP Scanning Report"
    reportDescription: ""
    displayReport: false
  risks:
  - "info"
  - "low"
  - "medium"
  - "high"
  confidences:
  - "falsepositive"
  - "low"
  - "medium"
  - "high"
  - "confirmed"
  sections:
  - "siteRiskCounts"
  - "responseBody"
  - "appendix"
  - "alertTypes"
  - "responseHeader"
  - "alertTypeCounts"
  - "riskConfidenceCounts"
  - "alerts"
  - "aboutThisReport"
  - "contents"
  - "requestBody"
  - "reportDescription"
  - "reportParameters"
  - "requestHeader"
  - "summaries"
  name: "report"
  type: "report"

Victor

Jun 18, 2024, 10:23:09 AM
to ZAP User Group
Update on my research:

I changed the way I maintain the session: I now use automatic authentication and automatic session management. Authentication was once again tested with the authentication test tool and passed.
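For reference, the switch described above would look something like this in the env section of the plan (a sketch only, reusing the context from my earlier file; the "autodetect" methods come from ZAP's Authentication Helper add-on and may need that add-on installed):

```yaml
env:
  contexts:
  - name: "company_test_env"
    urls:
    - "https://company_test_env_addresse.com"
    - "https://company_login_page_test_env.com"
    authentication:
      method: "autodetect"      # let ZAP detect the auth mechanism
    sessionManagement:
      method: "autodetect"      # let ZAP detect how the session is tracked
```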

I now think the problem comes only from the AJAX Spider. I don't think it understands how to click the links to explore the app.
I tried adding some of the URLs to the context (URLs like app.companyname.com/list/clients), but I don't think these are checked by the crawler, the passive scan, or the active scan. I don't see the addresses in the Sites tree on the left of ZAP's GUI. What's strange is that when I scan the app manually, the URLs I visit appear in this tree under the "favorite" folder. Any idea what my problem could be and how I could solve it?


Thanks again to anyone who takes the time to help me :)

Victor

Jun 19, 2024, 6:10:41 AM
to ZAP User Group
Update:

It seems my problem comes from a framework used in our app called Vue.js. I've seen multiple people describing the same problem with Vue.js, and, from what I understood, the only workarounds are either:

- manually crawling the entire website and exporting the links into the context (which I'm not sure works, because in my last .yaml I already tried manually adding 10 URLs that then did not appear in my site tree after the scan), or

- modifying the crawler's .java code as described here:
https://groups.google.com/g/zaproxy-users/c/_JR-fiMiJuM/m/9Y8GUQECCwAJ
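For the first workaround, ZAP's Import/Export add-on also provides an Automation Framework job that can feed manually collected URLs into the Sites tree, which might be less fragile than listing them in the context. A sketch, assuming that add-on is installed (the file path is hypothetical; the file is plain text with one URL per line):

```yaml
- type: "import"
  name: "import-manual-urls"
  parameters:
    type: "url"     # plain-text file containing one URL per line
    fileName: "/home/myname/Documents/manually_crawled_urls.txt"
```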

Feel free to give me any information you might have to help me resolve my issues :)

Thanks ! Best regards

Vic

Simon Bennetts

Jun 27, 2024, 11:44:18 AM
to ZAP User Group
Hi Vic,

Sounds like you've managed to make a lot of progress on your own :)

Exploring modern web apps is hard :(
However, there are some options that may help you: https://www.zaproxy.org/docs/desktop/addons/ajax-spider/options/

In your AF file you have "clickDefaultElems: true".
Have a look at the HTML elements used to navigate your app, especially the ones ZAP doesn't seem to explore.
For example, I've seen apps that use "div" elements for navigation.
The AJAX Spider will not click on those by default, because there could be lots of them and they are not really designed for navigation.
But if your app uses them, or other "non default" elements, then you can configure the AJAX Spider to click them via "elements".
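As a sketch of what that could look like in the spiderAjax job from the earlier plan (assuming the AF job exposes the desktop "Elements to Click" option as an "elements" list that is only honoured when clickDefaultElems is false; the extra element names here are examples, so check the options page above for the exact parameter names):

```yaml
- type: "spiderAjax"
  name: "spiderAjax"
  parameters:
    context: "company_test_env"
    user: "myn...@company.com"
    url: "https://company_test_env_addresse.com"
    browserId: "chrome-headless"
    clickDefaultElems: false   # must be false for the custom list to be used
    elements:                  # default clickables plus non-default navigation elements
    - "a"
    - "button"
    - "input"
    - "div"
    - "span"
```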

If that doesn't help, then have a look at the Client Side add-on: https://www.zaproxy.org/docs/desktop/addons/client-side-integration/ajax-scan/

If you can share any HTML fragments that the AJAX Spider struggles with, that would be appreciated.
Ideally we'd like standalone test cases we can, er, test :D

Oh, and we do have other options for exploring web apps: https://www.zaproxy.org/docs/getting-further/automation/exploring-your-app/

Cheers,

Simon