This week we have a special guest: Eugene Bagdasaryan, a PhD student at Cornell University and an Apple AI/ML fellow. He will present his work on evaluating privacy-preserving techniques in ML.
Abstract: Modern applications extensively use machine learning to create new services or improve existing ones. These applications frequently require access to sensitive data, such as facial images, typing history, or health records, heightening the need for strong privacy protection. They power safety-critical tasks, such as controlling cars on the road and diagnosing diseases, as well as wide-scale deployments, such as keyboard prediction used by billions of users. In this talk, I will present our research on the tradeoffs of emerging machine-learning privacy tools. We study two privacy-preserving techniques: (A) Federated Learning (FL), a form of distributed ML training across many users that keeps their data on their devices while still producing an accurate aggregate model. Although FL limits an attacker's ability to learn users' data, it exposes users to integrity attacks on the FL model. (B) Differential Privacy (DP), a guarantee that a model's output is essentially unchanged by the inclusion or exclusion of any single individual's contribution. In both federated and centralized scenarios, we show that DP can significantly hurt underrepresented groups by degrading the model's performance on them. Nevertheless, both techniques are important, and we propose a novel way to use them while maintaining the desired properties: high accuracy on long-tail participants, together with privacy or robustness guarantees in a federated setting, achieved by locally adapting the received aggregate model for each participant.
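For readers unfamiliar with federated learning, here is a minimal sketch of the idea the abstract describes: each client trains on its own data, and the server only averages the resulting models, never seeing the raw data. This is an illustrative toy (federated averaging on a least-squares problem), not the speaker's actual system; the function names and the simple linear model are assumptions made for the example.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # One gradient-descent step on this client's private (X, y) data.
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(weights, client_data, rounds=50):
    # Each round, every client trains locally on its own device's data;
    # the server then averages the resulting models. Raw data never
    # leaves the clients -- only model weights are shared.
    for _ in range(rounds):
        updates = [local_update(weights.copy(), d) for d in client_data]
        weights = np.mean(updates, axis=0)
    return weights
```

The integrity attacks mentioned in the abstract arise because the server trusts whatever weights each client sends back: a malicious client can submit a poisoned update and influence the averaged model.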
Due to timezone differences, this week we will meet on Saturday, November 20, at 7pm Yerevan time (10am EST).