Fawkes Tool


Peppin Kishore

Aug 5, 2024, 12:45:18 AM
to glarylborso
Fawkes is a facial image cloaking software created by the SAND (Security, Algorithms, Networking and Data) Laboratory of the University of Chicago.[1] It is a free tool that is available as a standalone executable.[2] The software makes small, AI-generated alterations to images to prevent them from being recognized and matched by facial recognition software.[3] The goal of the Fawkes program is to enable individuals to protect their own privacy from large-scale data collection. As of May 2022, Fawkes v1.0 had surpassed 840,000 downloads.[4] Eventually, the SAND Laboratory hopes to deploy the software on a larger scale to combat unwarranted facial recognition.[5]

The Fawkes program is named after the Guy Fawkes mask worn by the protagonist of the comic and film V for Vendetta, itself modeled on the historical figure Guy Fawkes.[6] The Fawkes proposal was initially presented at a USENIX Security conference in August 2020, where it received approval and was launched shortly after. The most recent version available for download, Fawkes v1.0, was released in April 2021 and was still being updated in 2022.[4] The founding team is led by Emily Wenger and Shawn Shan, PhD students at the University of Chicago, with additional support from Jiayun Zhang and Huiying Li and faculty advisors Ben Zhao and Heather Zheng.[7] The team cites nonconsensual data collection, specifically by companies such as Clearview AI, as the prime inspiration behind the creation of Fawkes.[8]


The methods Fawkes uses are similar to adversarial machine learning: a facial recognition model ends up trained on altered images, so it cannot match an altered image with the actual one, because it does not recognize them as the same person. Fawkes also uses data poisoning attacks, which change the data set used to train certain deep learning models; it employs two types of data poisoning techniques, clean-label attacks and model corruption attacks. The creators of Fawkes note that using Sybil images can increase the software's effectiveness against recognition products. Sybil images are images that do not match the person they are attributed to; they confuse facial recognition software and cause misidentification, which further improves the efficacy of image cloaking. Privacy-preserving machine learning uses techniques similar to the Fawkes software but opts for differentially private model training, which helps to keep information in the data set private.[3]
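The feature-space idea behind this kind of cloaking can be sketched in a few lines of NumPy. The following is a toy illustration, not the Fawkes implementation: a fixed random linear map stands in for a real face-embedding network, and projected gradient descent nudges an "image" so its features drift toward a decoy identity while the pixel-space change stays within a small budget. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep feature extractor: a fixed random linear map.
# Real cloaking tools backpropagate through a face-embedding network instead.
W = rng.normal(size=(8, 64))

def features(img):
    """Map a flattened 'image' to a feature vector."""
    return W @ img

def cloak(img, target, step=0.002, budget=0.05, iters=300):
    """Nudge img so its features move toward a decoy identity's features,
    while keeping every pixel within a small budget of the original."""
    x = img.copy()
    for _ in range(iters):
        # Gradient of ||features(x) - features(target)||^2 with respect to x
        grad = 2 * W.T @ (features(x) - features(target))
        x = x - step * grad
        # Project back into an L-infinity ball around the original image
        x = np.clip(x, img - budget, img + budget)
    return x

img = rng.uniform(size=64)      # the user's photo (flattened)
decoy = rng.uniform(size=64)    # a different identity's photo

cloaked = cloak(img, decoy)

d_before = np.linalg.norm(features(img) - features(decoy))
d_after = np.linalg.norm(features(cloaked) - features(decoy))

print(np.max(np.abs(cloaked - img)) <= 0.05)  # True: change stays within budget
print(d_after < d_before)                     # True: features moved toward the decoy
```

The key property mirrored here is the trade-off the paragraph describes: the perturbation is small in pixel space (bounded by the budget) but large in feature space, which is what misleads a model trained on the cloaked images.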


Fawkes image cloaking can be used on everyday images and apps. However, the software's efficacy wanes if the facial recognition system has access to both cloaked and uncloaked images of the same person. The image cloaking software has been tested against high-powered facial recognition systems with varied results.[3] LowKey is a similar facial cloaking tool; it also alters images at a visual level, but its alterations are much more noticeable than those made by Fawkes.[2]


The rapid rise of facial recognition systems has placed the technology into many facets of our daily lives, whether we know it or not. What might seem innocuous when Facebook identifies a friend in an uploaded photo grows more ominous in enterprises such as Clearview AI, a private company that trained its facial recognition system on billions of images scraped without consent from social media and the internet.


To use Fawkes, users simply apply the cloaking software to photos before posting them to a public site. Currently, the tool is free and available on the project website for users familiar with using the command line interface on their computer. The team has also made it available as software for Mac and PC operating systems, and hopes that photo-sharing or social media platforms might offer it as an option to their users.


Given the large market for facial recognition software, the team expects that model developers will try to adapt to the cloaking protections provided by Fawkes. But in the long run, the strategy offers promise as a technical hurdle to make facial recognition more difficult and expensive for companies to effectively execute without user consent, putting the choice to participate back in the hands of the public.


"It basically resets the bar for mass surveillance back to the pre-deep learning facial recognition model days. It evens the playing field just a little bit," says Ben Zhao. (Credit: Jason Hargrove/Flickr)




With enough cloaked photos in circulation, a computer observer will be unable to identify a person from even an unaltered image, protecting individual privacy from unauthorized and malicious intrusions. The tool targets unauthorized use of personal images, and has no effect on models built using legitimately obtained images, such as those used by law enforcement.


In a paper to be presented at the USENIX Security symposium next month, the researchers found that the method was nearly 100% effective at blocking recognition by state-of-the-art models from Amazon, Microsoft, and other companies.


In early August, Fawkes was featured in the New York Times. However, the researchers clarified a few points from the piece. As of August 3, the tool had accumulated nearly 100,000 downloads, and the team had updated the software to prevent the significant distortions described by the article, which were in part due to some outlier samples in a public dataset.




I searched for tools like Fawkes and LowKey but couldn't find a proper way to integrate them into a mobile app. Fawkes can be used as an API for private or educational purposes, but I need it for commercial use.


It's a valid concern to want to protect user privacy by preventing AI facial detection tools from using images uploaded to your mobile app for training purposes. While tools like Fawkes and LowKey provide some level of image cloaking, integrating them into a mobile app can be tricky, especially when considering commercial use.


To achieve image cloaking in a mobile app for commercial purposes, you might want to explore alternative methods. One approach is to transform images on the client side before upload: apply image processing techniques directly on the user's device to alter facial features in a reversible manner, making it more challenging for facial detection algorithms to extract meaningful information.


Consider leveraging image processing libraries like OpenCV or TensorFlow Lite for on-device transformations. Implementing a combination of techniques such as blurring, noise addition, or feature distortion can help achieve the desired cloaking effect.
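As a minimal sketch of the blur-plus-noise idea, the snippet below implements a box blur and Gaussian noise in plain NumPy. This is a stand-in for what a real app would do with `cv2.GaussianBlur` or an on-device model; the function names (`box_blur`, `cloak_for_upload`) and the parameter values are illustrative assumptions, not part of any library.

```python
import numpy as np

rng = np.random.default_rng(42)

def box_blur(img, k=3):
    """Simple k x k box blur on a 2D grayscale array (edges padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def cloak_for_upload(img, noise_sigma=4.0, blur_k=3):
    """Client-side transformation applied before upload: add mild noise,
    blur slightly, and clamp back to a valid 8-bit pixel range."""
    noisy = img + rng.normal(0.0, noise_sigma, size=img.shape)
    return np.clip(box_blur(noisy, blur_k), 0, 255).astype(np.uint8)

photo = rng.integers(0, 256, size=(32, 32)).astype(float)  # fake grayscale photo
uploaded = cloak_for_upload(photo)
print(uploaded.shape, uploaded.dtype)  # (32, 32) uint8
```

Note that simple blurring and noise are far weaker than adversarial cloaking: a strong recognition model may still identify faces in such images, so treat this as a usability/privacy trade-off to evaluate, not a guarantee.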


It's crucial to strike a balance between privacy protection and maintaining usability. Additionally, consult legal experts to ensure compliance with privacy regulations when implementing such features in your mobile app.


The potential benefits of automatic recognition from a snapshot or video feed are limited: You can unlock the front door to your house, use your face to pay for your subway ride in a number of cities, and easily find images of yourself in galleries and albums.


Such demonstrations are pitted against some aspects of the establishment and the status quo. Naturally, the establishment pushes back with a mind-boggling array of technological toys such as drones, tanks, tear gas, and of course, facial recognition.


But Facebook was only ever a part of the problem. Clearview AI, the company so beloved by law enforcement agencies across the world, has historically scraped social media and other sites for faces and corroborative information rather than collaborating with the companies hosting the pictures. It is estimated that Clearview AI downloaded over 3 billion photos and used them to create facial recognition models of millions of citizens.


Facial recognition tools can recognize you because they have a vast trove of data to analyze and discover the unique combination of features that make up your face. The fewer images available, the worse these tools will perform.


The software responsible for this trickery is called Fawkes, and it was developed by the SAND Lab (Security, Algorithms, Networking and Data) at the University of Chicago to combat the ubiquity of facial recognition systems. The name is a deliberate pop culture reference to the Guy Fawkes mask worn by the protagonist of V for Vendetta, and ideally the software should afford users some degree of anonymity.


To give you an idea of the results, I generated a convincing face using thispersondoesnotexist.com, then ran it through Fawkes. After around 30 seconds, the cloaking process was complete. These are the results:


To a facial recognition engine that already has a model built on thousands of images of this person, it is close enough to add to the model but different enough to fundamentally alter the basis of the model.
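Why a "close enough but different enough" image corrupts a model can be shown with a toy nearest-centroid recognizer. Everything below is hypothetical: 2-D "embeddings" stand in for the ~512-dimensional vectors real systems use, and the cloak is modeled as a fixed feature-space shift applied to the scraped training photos while the person's real face stays unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D face embeddings: one person's scraped training photos
# cluster around a point in feature space.
alice_photos = rng.normal([0, 0], 0.1, size=(20, 2))
cloak_shift = np.array([4.0, 4.0])          # feature-space offset from cloaking
alice_cloaked = alice_photos + cloak_shift  # what the scraper actually trains on

def enroll(photos):
    """'Train' a recognizer: here, just store the mean embedding."""
    return photos.mean(axis=0)

def matches(probe, centroid, threshold=1.0):
    """Match only if the probe is close to the enrolled centroid."""
    return np.linalg.norm(probe - centroid) < threshold

probe = np.array([0.05, -0.02])  # an unaltered photo of the person, seen later

print(matches(probe, enroll(alice_photos)))   # True: model trained on real photos
print(matches(probe, enroll(alice_cloaked)))  # False: model trained on cloaked photos
```

The cloaked training set drags the model's notion of the person to the wrong region of feature space, so even a genuine, unmodified photo no longer matches, which is the protection the article describes.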
