Hello TrojAI Community,
Some new papers were released to arXiv recently that I thought were interesting and wanted to share!
One Sentence Summary: Outlines a method for finding alternative triggers in poisoned models and argues that backdoor attacks leave neural networks susceptible to a wide range of backdoors beyond the one directly inserted by the adversary.
One Sentence Summary: Proposes a new attack in which the trigger varies from input to input, in an attempt to make backdoor verification more difficult.
Feel free to share any other papers you have found!