Adversarial Threats on Real-Life Learning Systems

Workshop on Machine Learning Security

Workshop Overview

Important Information

  • Date: September 17th, 2025
  • Time: 9h00 - 17h30
  • Location: Esclangon building, room to be announced, Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris
  • Language: English
  • Registration: Mandatory (Limited places)
  • Cost: Free

Description

This workshop focuses on adversarial and backdoor attacks targeting real-life machine learning systems. We will explore vulnerabilities in deployed learning systems, examine attack vectors in practical scenarios, and discuss defense mechanisms for robust ML deployment. The workshop is inspired by research from the KINAITICS project, which investigates kinematic indicators for adversarial behavior detection in AI systems. The event brings together researchers, academics, and industry professionals to discuss cutting-edge developments in adversarial machine learning, security implications, and mitigation strategies for production environments.

Organizers

Rafaël Pinot (Sorbonne University) and Cédric Gouy-Pailler (CEA)

Program

Workshop Agenda

September 17th, 2025

9h00 - 9h25    Registration & Welcome Coffee
9h25 - 9h30    Opening Remarks – Rafaël PINOT (Sorbonne University) & Cédric GOUY-PAILLER (CEA)
9h30 - 10h30   Keynote 1: Backdoors in Artificial Intelligence: Stealth Weapon or Structural Weakness? – Kassem KALLAS (INSERM, IMT Atlantique)
10h30 - 11h00  Coffee Break
11h00 - 12h00  Session 1:
  • Deceiving Defect Detection: Backdoor Attacks Against SHM models in the Physical World – Aurélien MAYOUE (CEA)
  • Verifiable Federated Learning with Incremental Zero-Knowledge Proofs – Aleksei KORNEEV (Université de Lille, INRIA)
12h00 - 13h30  Lunch Break
13h30 - 14h30  Keynote 2: Adversarial attacks and mitigations – Benjamin NEGREVERGNE (PSL Paris-Dauphine University)
14h30 - 16h00  Session 2:
  • Topological safeguard for evasion attack interpreting the neural networks’ behavior – Xabier ECHEBERRIA BARRIO (VicomTech, Spain)
  • From Attacks to Answers: Counterfactuals at the Intersection of Robustness and Explainability in AI – Davy PREUVENEERS (KU Leuven, Belgium)
  • Unveiling the Role of Randomization in Multiclass Adversarial Classification: Insights from Graph Theory – Matteo SAMMUT (Université Paris-Dauphine)
16h00 - 16h30  Coffee Break
16h30 - 17h30  Session 3:
  • Towards Byzantine-Resilient Dynamic Gossip Learning – Sonia BEN MOKHTAR (LIRIS, CNRS)
  • Unified Breakdown Analysis for Byzantine Robust Gossip – Renaud GAUCHER (Ecole Polytechnique)
17h30          Closing Remarks – Organizers

Program subject to change

Keynote Speakers

Keynote Speaker 1

Dr. Kassem KALLAS

INSERM, IMT Atlantique

Senior Scientist, HDR

Keynote 1: Backdoors in Artificial Intelligence: Stealth Weapon or Structural Weakness?

Abstract: Backdoor attacks represent a stealthy yet serious threat to the integrity of AI systems, especially in black-box image classification settings. This talk will begin with a general introduction to backdoor threats, outlining the attack assumptions, threat models, and real-world implications. It will then present a series of concrete attack and defense strategies developed in recent research. The keynote will conclude with a forward-looking perspective on either the use of watermarking for model ownership verification or the unique challenges posed by backdoor attacks in federated learning.

Keynote Speaker 2

Dr. Benjamin NEGREVERGNE

LAMSADE, PSL-Paris Dauphine University

Associate Professor

Keynote 2: Adversarial attacks and mitigations

Abstract: Adversarial machine learning no longer centers on imperceptible pixel tweaks. As foundation models become multimodal and instruction-tuned, the attack surface shifts to safety alignment itself and to the prompts that govern multi-turn reasoning. This talk frames these trends—adversarial alignment and prompt-level manipulation—using recent studies on large language and vision-language systems as touchstones, and outlines why future defenses must address both model internals and their interactive context.

Online Registration Form (Attendance only)


Practical Information

Venue Details

Esclangon Building
Campus Pierre et Marie Curie
4 place Jussieu, 75005 Paris
France

Capacity: 40 participants

Getting There

Location on Maps:
  • Google Maps
  • OpenStreetMap

By Public Transport:
  • Metro: Lines 7 and 10 (Jussieu station), direct access
  • RER: Line C (Saint-Michel Notre-Dame), 5 min walk
  • Bus: Lines 63, 67, 86, 87 (Jussieu stop)

By Car:
  • Limited parking in the area
  • Nearby parking: Parking Maubert (5 min walk)
  • Parking locations on Google Maps

Sponsors & Partners

We thank our sponsors and partners for their support:

  • The workshop is supported by the Responsible AI Team.

  • The KINAITICS project is funded under Horizon Europe Grant Agreement n°101070176. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.