Typosquatting is a type of social engineering attack that targets internet users who mistype a URL into their web browser rather than reaching a site through a search engine. It involves cybercriminals deliberately registering domains whose names are common misspellings of well-known websites in order to lure unsuspecting visitors to malicious look-alike sites. Users may be tricked into entering sensitive details into these fake sites, and for the organizations being impersonated, they can do significant reputational damage.
These sites generally take one of two forms. The attackers may emulate the look and feel of the site they are mimicking, hoping that users will divulge personal information such as credit card or bank details. Or the site may be a well-optimized landing page full of advertising or pornographic content, generating a high revenue stream for its owner.
Typosquatting attacks start with cybercriminals buying and registering a domain name that is a misspelling of a popular website (some go so far as to buy multiple URLs). For example, instead of purchasing example.com, the cybercriminal might buy examplle.com or exmple.com.
A typosquatting domain becomes dangerous when real users start visiting the site. They may have typed the URL by mistake. Or they may have been lured there by a phishing scam, typically over email, which contains a link to the typosquatted website.
As outlined above, the scam website passes itself off as the real thing. For example, if it is emulating a well-known bank, it will adopt the logo, color scheme, and page layout of that bank. The purpose of an imitator site is to host a phishing scam, gathering log-in credentials and personal data.
The fake website purports to sell you something you might have bought at the correct URL. Often, these are digital purchases that are difficult to dispute on a credit card statement. The buyer never receives the item, but still pays for it.
A similar cybercrime to typosquatting is cybersquatting, also known as domain squatting. In this case, a person purchases URLs that have similar spellings to other websites and brands. Typically, the motivation is not to build a website at the address but to sell the URLs to the owners of the authentic websites and brands for maximum profit.
Because companies want to protect their customers and brands, many feel compelled to buy URLs from cybersquatters and are often prepared to pay a premium to do so. This makes cybersquatting a profitable activity since it is often quite cheap for the cybersquatter to register domains for most TLDs.
A variation on typosquatting is called combosquatting. Here, criminals register domains that differ from legitimate ones by the addition of extra words, such as amazon-onlineshop.com, to trick users into thinking they are on a legitimate Amazon website. No typos are involved; the deception relies solely on the additional words.
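One common defensive heuristic against combosquatting is to flag any domain that embeds a protected brand name but is not on the brand's own allowlist. A minimal sketch, where the `OFFICIAL` allowlist and `BRANDS` set are hypothetical placeholders:

```python
OFFICIAL = {"amazon.com", "amazon.co.uk"}  # illustrative allowlist
BRANDS = {"amazon"}                        # brand names to protect

def looks_like_combosquat(domain):
    """Flag domains that embed a brand name but are not official domains."""
    domain = domain.lower().strip(".")
    if domain in OFFICIAL:
        return False
    # Strip the TLD, then split the remaining label on common separators.
    label = domain.rsplit(".", 1)[0]
    tokens = label.replace("-", " ").replace("_", " ").split()
    return any(brand in tokens for brand in BRANDS)

print(looks_like_combosquat("amazon-onlineshop.com"))  # → True
print(looks_like_combosquat("amazon.com"))             # → False
```

Because combosquats contain the brand name verbatim, this token-based check catches them even though no edit-distance comparison would.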
Purchase important and obvious typo-domains and redirect these to your website. In addition, register other country extensions and other relevant top-level domains, alternate spellings, and variants with and without hyphens. Once registered, misspelled domains can easily be rerouted to the actual website with the help of redirects.
SSL certificates are an excellent way to signal that your website is legitimate. They tell the end-user who they are connected with and protect user data during transfer. A missing SSL certificate can be a sign you have been taken to an alternative website.
If you believe someone is impersonating (or preparing to impersonate) your organization, let your customers, staff, or other relevant parties know to look out for suspicious emails or a phishing website.
This presentation was also the first time we had publicly disclosed the details of all the exploits and vulnerabilities that were used in the attack. We discover and analyze new exploits, and attacks using them, on a daily basis, and we have discovered and reported more than thirty in-the-wild zero-days in Adobe, Apple, Google, and Microsoft products, but this is definitely the most sophisticated attack chain we have ever seen.
What we want to discuss is related to the vulnerability that has been mitigated as CVE-2023-38606. Recent iPhone models have additional hardware-based security protection for sensitive regions of the kernel memory. This protection prevents attackers from obtaining full control over the device if they can read and write kernel memory, as achieved in this attack by exploiting CVE-2023-32434. We discovered that to bypass this hardware-based security protection, the attackers used another hardware feature of Apple-designed SoCs.
If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.
We are publishing the technical details, so that other iOS security researchers can confirm our findings and come up with possible explanations of how the attackers learned about this hardware feature.
Various peripheral devices available in the SoC may provide special hardware registers that can be used by the CPU to operate these devices. For this to work, these hardware registers are mapped to the memory accessible by the CPU and are known as memory-mapped I/O (MMIO).
Address ranges for MMIOs of peripheral devices in Apple products (iPhones, Macs, and others) are stored in a special file format: DeviceTree. Device tree files can be extracted from the firmware, and their contents can be viewed with the help of the dt utility.
That prompted me to try something. I checked different device tree files for different devices and different firmware files: no luck. I checked publicly available source code: no luck. I checked the kernel images, kernel extensions, iBoot, and coprocessor firmware in search of a direct reference to these addresses: nothing.
After that, I took a closer look at the exploit and found one more thing that confirmed my theory. The first thing the exploit does during initialization is writing to some other MMIO register, which is located at a different address for each SoC.
This way, I was able to confirm that all these unknown MMIO registers used for the exploitation belonged to the GPU coprocessor. This motivated me to take a deeper look at its firmware, which is also written in ARM and unencrypted, but I could not find anything related to these registers in there.
I was able to match the ml_dbgwrap_halt_cpu function from the pseudocode above to a function with the same name in the dbgwrap.c file of the XNU source code. This file contains code for working with the ARM CoreSight MMIO debug registers of the main CPU. The source code states that there are four CoreSight-related MMIO regions, named ED, CTI, PMU, and UTT. Each occupies 0x10000 bytes, and they are all located next to one another. The ml_dbgwrap_halt_cpu function uses the UTT region, and the source code states that, unlike the other three, it does not come from ARM, but is a proprietary Apple feature that was added just for convenience.
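The layout described in the XNU source, four adjacent 0x10000-byte regions, can be sketched as follows. Note that the ED/CTI/PMU/UTT ordering relative to the base address, and the base address itself, are assumptions made purely for illustration:

```python
# Illustrative layout of the four adjacent CoreSight MMIO regions
# described in XNU's dbgwrap.c. Each spans 0x10000 bytes; the ordering
# used here and the example base address are assumptions.
REGION_SIZE = 0x10000
REGIONS = ["ED", "CTI", "PMU", "UTT"]

def coresight_region_bases(base):
    """Map each region name to its base address, given the first region's base."""
    return {name: base + i * REGION_SIZE for i, name in enumerate(REGIONS)}

for name, addr in coresight_region_bases(0x210000000).items():
    print(f"{name}: {addr:#x}")
```

The point of the sketch is simply that once any one region's base is known, the other three follow immediately from the fixed size and adjacency.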
It is also interesting that the author(s) of this exploit knew how to use the proprietary Apple UTT region to unhalt the CPU: this code is not part of the XNU source code. Perhaps it is fair to say that this could easily be found out through experimentation.
Something that cannot be found that way is what the attackers did with the registers in the second unknown region. I am not sure what blocks of MMIO debug registers are located there, or how the attackers found out how to use them if they were not used by the firmware.
The last register, 0x206150048, is used for storing the data that needs to be written and the upper half of the destination physical address, bundled together with the data hash and another value (possibly a command). This hardware feature writes the data in aligned blocks of 0x40 bytes, and everything should be written to the 0x206150048 register in nine sequential writes.
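The arithmetic of the nine writes works out naturally: a 0x40-byte (64-byte) block is eight 64-bit data words, plus one control word carrying the upper half of the destination address, the hash, and the possible command value. The following sketch only models that packing; the actual field layout of the control word is not public, so `pack_control` is purely hypothetical:

```python
# Sketch of how a 0x40-byte aligned block could be split into the nine
# sequential 64-bit writes to register 0x206150048 described above.
import struct

REG_DATA = 0x206150048
BLOCK_SIZE = 0x40  # data is written in aligned 0x40-byte blocks

def pack_control(dest_upper, data_hash, command):
    # Hypothetical bundling of the destination's upper half, a command
    # value, and the 20-bit hash into one 64-bit control word.
    return (dest_upper << 32) | (command << 20) | (data_hash & 0xFFFFF)

def block_writes(data, dest, data_hash, command=1):
    """Return the (register, value) pairs for writing one block."""
    assert len(data) == BLOCK_SIZE and dest % BLOCK_SIZE == 0
    words = list(struct.unpack("<8Q", data))  # eight 64-bit data words
    words.append(pack_control(dest >> 32, data_hash, command))
    return [(REG_DATA, w) for w in words]     # nine writes in total

writes = block_writes(b"\x00" * BLOCK_SIZE, 0x800000000, 0x12345)
print(len(writes))  # → 9
```

Whatever the real control-word encoding is, the 8-data-words-plus-1-control-word structure accounts for exactly the nine sequential writes observed.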
You may notice that this hash does not look very secure, as it occupies just 20 bits (10+10, as it is calculated twice), but it does its job as long as no one knows how to calculate and use it. It is best summarized with the term security by obscurity.
By an amazing coincidence, both my 37C3 presentation and this post discuss a vulnerability very similar to the one I presented at the 36th Chaos Communication Congress (36C3) in 2019.
I was able to discover and exploit this vulnerability because earlier versions of the firmware used these registers for all DRAM operations, but then Sony stopped using them and started accessing DRAM directly, since all DRAM was also mapped into the CPU address space. Because no one was using these registers anymore and I knew how to use them, I took advantage of them. I did not need to know any secret hash algorithm.
Could something similar have happened in this case? I do not know, but this GPU coprocessor first appeared in recent Apple SoCs. In my personal opinion, based on all the information provided above, I highly doubt that this hardware feature was ever used in retail firmware. Nevertheless, there is a possibility that it was revealed by mistake in some particular firmware or XNU source code release and then removed.