Hello,
We are looking for a young researcher interested in federated learning issues within a security framework for Cloud-type networks.
In this context, we aim to design machine learning tools that comply with the GDPR and therefore offer, at the very least, basic guarantees of data protection. Federated learning has several advantages here. For example, it does not require the direct sharing of data, which could be sensitive. Sending data can also be very costly, so only the minimum amount of information necessary is transmitted.
of information necessary. However, securing Cloud-type services,
or protecting against denial of service,
requires access to sensitive data (e.g. IP addresses, identifiers)
necessary for learning and explicability. A first
research objective will be to find a framework capable of using
only useful information (in the GDPR sense).
The theoretical framework of federated learning is fairly generic and can be used with a variety of learning methods. Nevertheless, it is vulnerable to various types of attacks that can render the model biased or even unusable. For example, there are attacks where a malicious site transmits corrupted information while withholding correct information. A second objective will be to strengthen the model so that it is robust to adversarial (or Byzantine) attacks, and to add a network-of-trust feature.
Location: Laboratoire d’Informatique et des Systèmes, Marseille, FRANCE
Supervisors: François-Xavier Dupé (francois-x...@lis-lab.fr), Emmanuel Godard (emmanue...@lis-lab.fr).
Duration of the post-doc: 1 year, with a possible extension to 2 years (to be discussed)
Deadline for application: July 31st, 2025
For more information, please see the attached file (both in French and English).
Best regards,
François-Xavier Dupé