performance


Jakub Kuchar

Sep 8, 2023, 6:05:25 AM
to ModSecurity Core Rule Set project
Good day everybody, 

we have nginx + ModSecurity on AWS ECS with 1 vCPU and 2 GB RAM, and we are seeing long processing times with a JSON payload of 10,000 keys. My first guess is that the hardware is not enough. What are you running on?

Thank you

Christian Folini

Sep 8, 2023, 7:36:40 AM
to Jakub Kuchar, ModSecurity Core Rule Set project
I'm inclined to say ModSecurity is the problem, but with 10K keys it's the
payload.

What happens here is that ModSecurity will parse the JSON and end up with
10K ARGS_NAMES and 10K ARGS. All of that is then run through every one of
your rules targeting ARGS and ARGS_NAMES. Assuming CRS, that's a lot of
regular expressions: 2 * 10K variables * ~100 rules, so roughly 2M regular
expression executions for every single request. That is unbearable even on
the best hardware.
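
To illustrate with a tiny, made-up payload: a request body of

  { "user": "alice", "city": "Bern" }

is flattened by the JSON body processor into ARGS_NAMES / ARGS entries
along the lines of json.user and json.city (the exact naming varies
between ModSecurity versions). Each of those entries is then run through
every rule that targets ARGS or ARGS_NAMES; now multiply that by 10K keys.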

ModSecurity could certainly be better optimized, but this is far beyond
what you can expect to be able to process on a live system.

You could turn to a commercial API gateway, but everything I have seen in
this direction speeds things up by not looking at the payloads at all.

What you can do:
* Question whether you really need to send 10K keys around (with every request)
* Ignore request bodies entirely, like the commercial competition does
* Stick to an allow-list checking endpoints, methods and key names
* Create a custom CRS setup that only inspects the ARGS you really want and
need to inspect (see the sketch below)
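
For the last point, a minimal sketch of a rule exclusion (the endpoint
path and the rule ID are made up, adjust them to your service). It strips
ARGS and ARGS_NAMES out of all CRS rules, but only for the one endpoint
that receives the bulk JSON, so nothing there gets the expensive regex
treatment anymore:

# Hypothetical bulk endpoint: remove ARGS / ARGS_NAMES from all
# OWASP_CRS-tagged rules for matching requests only.
SecRule REQUEST_URI "@beginsWith /api/bulk-upload" \
    "id:10100,\
    phase:1,\
    pass,\
    t:none,\
    nolog,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS_NAMES"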

Hope this helps!

Best,

Christian





Jakub Kuchar

Sep 8, 2023, 8:30:12 AM
to ModSecurity Core Rule Set project, Christian Folini
Hello Christian,

thanks for the reply. Yes, before posting here we did a quick test toggling ModSecurity off and on; the results were 18 sec vs. 2 sec.
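
(For the record, we toggled it via the nginx connector directive, roughly
like this in the relevant server block; the rules file path below is just
our layout:

  modsecurity on;   # "modsecurity off;" for the comparison run
  modsecurity_rules_file /etc/nginx/modsec/main.conf;
)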
Thanks also for the list of recommendations you proposed, really appreciate that. But could you please elaborate a little more on:

* Stick to an allow-list checking endpoints, methods and key names?

Many thanks

Christian Folini

Sep 8, 2023, 8:42:43 AM
to Jakub Kuchar, ModSecurity Core Rule Set project
On Fri, Sep 08, 2023 at 05:30:11AM -0700, Jakub Kuchar wrote:
> thanks for the reply. Yes, before posting here we did a quick test
> toggling ModSecurity off and on; the results were 18 sec vs. 2 sec.
> Thanks also for the list of recommendations you proposed, really
> appreciate that. But could you please elaborate a little more on:
>
> * Stick to an allow-list checking endpoints, methods and key names?

Sure. You design an allow list similar to what I have outlined in this
tutorial:

https://www.netnea.com/cms/apache-tutorial-6_embedding-modsecurity/#step_8_writing_simple_allowlist_rules

With the exception of the 114xx and 115xx rule ranges.

There are various options to make this more dynamic or operations-friendly,
but that tutorial shows the basic idea. If you have a Swagger/OpenAPI
definition or something similar, then machine-generating the rules is an
option.
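
To give you the flavor, here is a stripped-down sketch. The rule IDs, the
endpoint and the parameter names are all placeholders, and the json.
prefix assumes the JSON body processor's flattened naming; see the
tutorial for the complete pattern:

# Allow-list: known endpoint, known methods, known parameter names.
# Everything outside the list is denied before any expensive
# inspection happens.
SecRule REQUEST_URI "!@beginsWith /api/items" \
    "id:11100,phase:1,deny,status:403,log,msg:'Unknown endpoint'"
SecRule REQUEST_METHOD "!@within GET POST" \
    "id:11101,phase:1,deny,status:403,log,msg:'Method not allowed'"
SecRule ARGS_NAMES "!@rx ^json\.(id|name|price)$" \
    "id:11102,phase:2,deny,status:403,log,\
    msg:'Parameter name outside allow-list'"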

Admittedly, this is not the same thing as testing arg 7546 against all the
SQLi rules, but that is simply not feasible on a WAF.

Good luck!

Christian

Andrew Howe

Sep 8, 2023, 10:12:22 AM
to Jakub Kuchar, ModSecurity Core Rule Set project
Hi Jakub,

> we have nginx + ModSecurity on AWS ECS with 1 vCPU and 2 GB RAM, and we are seeing long processing times with a JSON payload of 10,000 keys. My first guess is that the hardware is not enough. What are you running on?

For a moment, let's ignore the problem with the massive JSON payloads...

Is this a dev/testing WAF machine? If it is then fine, you can stop
reading here :)

Otherwise, regardless of the JSON question, those specs are
fundamentally insufficient for real-world use as a WAF (unless you
have many small 1 vCPU machines clustered together, which is certainly
an option).

If this machine is, or is going to be, a standalone production WAF,
then you should seriously bump up those specifications. I work for a
ModSecurity + CRS integrator, and for a reliable WAF box in a busy
production environment we advise our customers to use a VM with
4 vCPUs and 16 GB of RAM. You could probably halve those numbers,
though, as long as you keep an eye on CPU and memory load.

Also, is the box handling TLS termination or are you handling HTTP
traffic only? If your machine is stripping the TLS then that will
further increase the load and you'll need to take that into account.

I hope that's useful.

Thanks,
Andrew

--

Andrew Howe
Loadbalancer.org Ltd.
www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064

Christian Folini

Sep 8, 2023, 10:15:00 AM
to Andrew Howe, Jakub Kuchar, ModSecurity Core Rule Set project
I second that.

Jakub Kuchar

Sep 8, 2023, 10:57:06 AM
to ModSecurity Core Rule Set project, Christian Folini
Great! Thanks for the link!

Jakub Kuchar

Sep 8, 2023, 11:14:22 AM
to ModSecurity Core Rule Set project, Christian Folini
Hello Andrew,

thanks for sharing that information. Yes, this is a staging machine: at 75% CPU usage it should scale up a new Docker container, but when testing the big payload a single container was crunching on it for ~16 sec (18 sec minus 2 sec of backend processing).

In production we have a fixed cluster of two instances running to be safe, and since we don't see much memory/CPU usage for now, we assumed this was enough.

To be honest, looking at the CPU/memory usage in AWS, I don't even see a peak during the big payload, but maybe the AWS metrics are misleading.

Thanks again for sharing your specs; we need to do something about that here.

> Also, is the box handling TLS termination or are you handling HTTP
> traffic only? If your machine is stripping the TLS then that will
> further increase the load and you'll need to take that into account.

-> Yes, it is handling TLS termination. That makes sense now; thanks for the insight, I would not have thought of it!

Have a nice weekend