Introduction

Machine Learning models are vulnerable to privacy attacks, and computer vision (CV) models are no exception. Simply publishing a pretrained image classifier or generator opens the door to attacks such as Membership Inference (MI), which aims to predict whether a target image was part of the model's training set. Understanding why privacy attacks are possible and how to prevent them is crucial to building Responsible AI tools.
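To make the idea concrete, here is a minimal sketch of the simplest form of Membership Inference: a confidence-thresholding attack. Everything in it is illustrative (the toy model, the confidence gap, and the threshold are assumptions, not a real attack on a real classifier); it only shows the signal MI attacks exploit, namely that models tend to be more confident on images they were trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: training-set members tend to receive
# higher confidence than non-members. The 0.95 / 0.70 gap is an assumption
# chosen purely for illustration.
def model_confidence(images, is_member):
    base = 0.95 if is_member else 0.70
    return np.clip(base + rng.normal(0, 0.05, size=len(images)), 0.0, 1.0)

members = np.zeros((100, 32, 32))      # placeholder "training" images
non_members = np.zeros((100, 32, 32))  # placeholder held-out images

conf_in = model_confidence(members, is_member=True)
conf_out = model_confidence(non_members, is_member=False)

# Attack rule: predict "member" whenever confidence exceeds a threshold.
threshold = 0.85
tpr = (conf_in > threshold).mean()   # members correctly flagged
fpr = (conf_out > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

The larger the gap between the true-positive and false-positive rates, the more the model leaks about its training data; real attacks refine this idea with shadow models and calibrated per-example thresholds.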

In this workshop, we will cover the subject of privacy attacks in CV by introducing:

1. Privacy in Machine Learning: what is it?

2. The main Privacy Attacks against Machine Learning models

3. How to defend Machine Learning models from Privacy Attacks

4. Privacy vs Utility: optimizing the privacy-utility trade-off

5. Privacy Auditing: how to estimate privacy risks in advance?

6. Invited speakers

7. Hands-on sessions

We will also show practical examples using open-source libraries. Finally, we will put extra focus on synthetic image generators, which have been gaining momentum recently and which we fear may be misused.

We believe this workshop is relevant for the following reasons:

  1. it will make the CV community aware of the privacy-utility trade-off, which also exists in the image domain;
  2. it will provide CV practitioners with the theoretical and practical tools needed to estimate the privacy-utility trade-off in their own use-cases;
  3. given the increasing popularity of image generative models, it will provide model-agnostic fundamentals for robustly assessing the privacy leakage of such models and of the images they generate.

This workshop will consist of keynote sessions on the theoretical background, presentations by invited speakers whose papers have been accepted at this workshop, and one session on practical use-cases. The workshop will be hosted by Euranova's R&D department.

Research Topics

The topics of interest include but are not limited to:

Programme

Here is the workshop's programme:

  • (9:00-9:10) Welcome.
  • (9:10-10:10) Keynote presentation on Privacy Attacks. Speaker: Gregory Scafarto.
  • (10:15-10:45) Paper presentation: "DPGOMI: Differentially Private Data Publishing with Gaussian Optimized Model Inversion".
  • (10:50-11:20) Paper presentation: "An Effective CNN-based method for Camera Model Identification in Privacy Preserving Settings".
  • (11:30-13:00) Hands-on session on Privacy Attacks in Computer Vision.

About the Speaker: Gregory Scafarto is a Data Scientist in Euranova's Research Department. Currently, he contributes to the BISHOP program.

Accepted Papers

  • "DPGOMI: Differentially Private Data Publishing with Gaussian Optimized Model Inversion". Authors: Dongjie Chen, University of California, Davis (cdjchen@ucdavis.edu); Sen-ching S. Cheung, University of Kentucky (sccheung@ieee.org); Chen-Nee Chuah, University of California, Davis (chuah@ucdavis.edu).
  • "An Effective CNN-based method for Camera Model Identification in Privacy Preserving Settings". Authors: Kapil Rana, Indian Institute of Technology Ropar (2018csz0007@iitrpr.ac.in); Vishwas Rathi, Thapar Institute of Engineering and Technology (vishwasrathi3@gmail.com); Puneet Goyal, Indian Institute of Technology Ropar (puneet@iitrpr.ac.in).

Information

IMPORTANT DATES

Workshop
October 8, 2023

PROGRAM CO-CHAIRS

  • Sabri Skhiri
    Euranova, BE
  • Gianmarco Aversano
    Euranova, BE
  • Gregory Scafarto
    Euranova, BE
  • Yixi Xu
    Microsoft

PROGRAM COMMITTEE MEMBERS (TBC)

  • Sabri Skhiri
    Euranova, BE
  • Gianmarco Aversano
    Euranova, BE
  • Yixi Xu
    Microsoft