Machine Learning models are vulnerable to privacy attacks, and computer vision (CV) models are no exception. Simply publishing a pretrained image classifier or generator opens the door to attacks such as Membership Inference (MI), which aims to predict whether a target image was part of the training set. Understanding why privacy attacks are possible and how to prevent them is crucial to building Responsible AI tools.
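To make the threat concrete, below is a minimal sketch of a loss-threshold membership inference attack, a common baseline in the literature (Yeom et al., 2018). It assumes a trained PyTorch classifier `model` and a candidate image tensor of shape (C, H, W); the `threshold` is a hypothetical value that would be calibrated on held-out data, and the function is illustrative rather than part of any specific library.

```python
import torch
import torch.nn.functional as F

def mi_loss_attack(model, image, label, threshold):
    """Loss-threshold membership inference: an unusually low loss on
    (image, label) suggests the sample was seen during training."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))      # add a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([label]))
    return loss.item() < threshold              # True -> predicted "member"
```

The attack works because overfitted models assign systematically lower loss to their training samples than to unseen ones; the stronger the overfitting, the easier the inference.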
In this workshop, we will cover the subject of privacy attacks in CV by introducing:
1. Privacy in Machine Learning: what is it?
2. The main Privacy Attacks against Machine Learning models
3. How to defend Machine Learning models from Privacy Attacks
4. Privacy vs Utility: optimizing the privacy-utility trade-off
5. Privacy Auditing: how to estimate privacy risks in advance?
6. Invited speakers
7. Hands-on sessions
We will also show practical examples using open-source libraries. Finally, we will put extra focus on synthetic image generators, which have been gaining momentum recently and which we fear may be misused.
We believe this workshop is relevant because it will:
- make the CV community aware of the privacy-utility trade-off that also exists in the image domain;
- provide CV practitioners with the theoretical and practical tools needed to estimate the privacy-utility trade-off in their use case (see the sketch after this list);
- given the increasing popularity of image generative models, provide model-agnostic fundamentals for robustly assessing the privacy leakage of such models and the images they generate.
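As an illustration of the trade-off mentioned above, here is a minimal sketch of one DP-SGD step (Abadi et al., 2016), the standard defense in which the trade-off is explicit: per-sample gradient clipping bounds each image's influence, and Gaussian noise hides it. The names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are our own illustrative choices; in practice one would use a vetted library such as Opacus rather than this hand-rolled version.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, labels, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: clip each per-sample gradient to `clip_norm`,
    sum, add Gaussian noise, then apply the averaged noisy gradient."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch, labels):             # per-sample gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, (clip_norm / (norm + 1e-6)).item())
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)              # clip to bound sensitivity
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(batch)  # noisy average gradient
```

The `noise_multiplier` is the knob that governs the trade-off: raising it yields a smaller privacy budget ε (less leakage) at the cost of noisier updates and thus lower accuracy.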
This workshop will consist of keynote sessions on the theoretical background, sessions for invited speakers whose papers have been accepted at the workshop, and one session on practical use cases. The workshop will be organized by Euranova's R&D department.