Adversarial Threats on Real Life Learning Systems
Workshop on Machine Learning Security
Workshop Overview
Important Information
- Date: September 17th, 2025
- Time: 9:30 – 17:00
- Location: Esclangon building, 1st floor, Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris
- Language: English
- Registration: Mandatory (limited places)
- Cost: Free
Description
This workshop focuses on adversarial and backdoor attacks targeting real-life machine learning systems. We will explore vulnerabilities in deployed learning systems, examine attack vectors in practical scenarios, and discuss defense mechanisms for robust ML deployment. The workshop is inspired by research from the KINAITICS project, which investigates kinematic indicators for adversarial behavior detection in AI systems. The event brings together researchers, academics, and industry professionals to discuss cutting-edge developments in adversarial machine learning, security implications, and mitigation strategies for production environments.
Organizers
Rafaël Pinot (Sorbonne University) and Cédric Gouy-Pailler (CEA)
Program
Workshop Agenda
September 17th, 2025
| Time | Session | Speaker/Activity |
|---|---|---|
| 9:30 – 9:45 | Registration & Welcome Coffee | |
| 9:45 – 10:00 | Opening Remarks | Rafaël Pinot & Cédric Gouy-Pailler |
| 10:00 – 11:00 | Keynote 1: Adversarial attacks and mitigations | Benjamin Negrevergne |
| 11:00 – 11:20 | Coffee Break | |
| 11:20 – 12:15 | Session 1: Real-world Attack Scenarios | [Speaker(s)] |
| 12:15 – 13:45 | Lunch Break | |
| 13:45 – 14:45 | Keynote 2: Backdoors in Artificial Intelligence: Stealth Weapon or Structural Weakness? | Kassem Kallas |
| 14:45 – 15:30 | Session 2: Defense Mechanisms & Mitigation | [Speaker(s)] |
| 15:30 – 15:45 | Coffee Break | |
| 15:45 – 16:30 | Session 3: Industry Case Studies | [Speaker(s)] |
| 16:30 – 17:00 | Closing Remarks & Networking | Organizers |
Program subject to change.
Keynote Speakers

Dr. Benjamin NEGREVERGNE
LAMSADE, PSL-Paris Dauphine University
Associate Professor
Keynote 1: Adversarial attacks and mitigations
Abstract: Adversarial machine learning no longer centers on imperceptible pixel tweaks. As foundation models become multimodal and instruction-tuned, the attack surface shifts to safety alignment itself and to the prompts that govern multi-turn reasoning. This talk frames these trends—adversarial alignment and prompt-level manipulation—using recent studies on large language and vision-language systems as touchstones, and outlines why future defenses must address both model internals and their interactive context.
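For attendees newer to the field, the classical starting point the abstract moves beyond is the pixel-level adversarial example. Below is a minimal sketch of the Fast Gradient Sign Method (FGSM), included purely as background; it is not material from the keynote, and `model`, `x`, `y`, and `epsilon` are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Classic FGSM: shift each input pixel by +/- epsilon in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels y
    loss.backward()                        # gradient of the loss w.r.t. x
    x_adv = x + epsilon * x.grad.sign()    # one signed gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]
```

The talk's point is that for instruction-tuned, multimodal models the analogous "perturbation" increasingly lives in prompts and alignment data rather than in pixels.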

Dr. Kassem KALLAS
INSERM, IMT Atlantique
Senior Scientist, HDR
Keynote 2: Backdoors in Artificial Intelligence: Stealth Weapon or Structural Weakness?
Abstract: Backdoor attacks represent a stealthy yet serious threat to the integrity of AI systems, especially in black-box image classification settings. This talk will begin with a general introduction to backdoor threats, outlining the attack assumptions, threat models, and real-world implications. It will then present a series of concrete attack and defense strategies developed in recent research. The keynote will conclude with a forward-looking perspective on either the use of watermarking for model ownership verification or the unique challenges posed by backdoor attacks in federated learning.
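As background for the threat model the abstract describes, the sketch below shows a schematic BadNets-style poisoning step: a small trigger patch is stamped onto a fraction of the training images, which are relabeled to an attacker-chosen class, so a model trained on the poisoned data behaves normally on clean inputs but predicts that class whenever the trigger appears. This is an illustration under our own assumptions, not material from the talk; `target_class` and `poison_rate` are hypothetical parameters.

```python
import torch

def poison_batch(images, labels, target_class=0, poison_rate=0.1):
    """BadNets-style poisoning: stamp a 3x3 bright patch in the bottom-right
    corner of a random fraction of images and relabel them to target_class.
    images: float tensor [N, C, H, W] in [0, 1]; labels: long tensor [N]."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -3:, -3:] = 1.0  # the trigger: a bright corner patch
    labels[idx] = target_class      # the attacker's chosen label
    return images, labels
```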
Call for Talks/Posters
We invite researchers, academics, and industry professionals to contribute to the workshop by presenting their work on adversarial threats and defenses for real-life learning systems.
Submission Types
Talk Presentations (15-20 minutes)
- Original research on adversarial ML, backdoor attacks, or defense mechanisms
- Case studies from real-world deployments
- Novel theoretical contributions to adversarial robustness
Poster Presentations
- Work-in-progress on adversarial ML security
- Preliminary results and ongoing research
- Demonstrations of tools and frameworks
Submission Guidelines
Priority Criteria
Proposals will be evaluated based on:
- Published Papers: submissions based on peer-reviewed publications will receive priority consideration for talk slots
- Relevance: direct relevance to adversarial threats on real-life systems
- Novelty: original contributions to the field
- Impact: practical implications for ML security
Preferred Topics:
- Adversarial attacks on production ML systems
- Backdoor attacks and detection methods
- Real-world robustness evaluation
- Defense mechanisms for deployed models
- Security implications of ML in critical applications
How to Submit
Option 1: Published Work
- Provide DOI, arXiv link, or full citation of your published paper
Option 2: Abstract Submission
- Submit an abstract (maximum 300 words)
- Include preliminary results if available
- Specify your presentation preference
Important Dates
- Submission Deadline: July 18th, 2025
- Notification: July 25th, 2025
- Final Program: August 1st, 2025
- Workshop Date: September 17th, 2025
Submission Process
Submit your proposal through the registration form below by selecting “Talk proposal” or “Poster proposal” and completing the additional fields. All submissions will be reviewed by the organizing committee.
Registration
Registration Information
- Participation: Free of charge
- Registration: Mandatory (limited places available)
- Confirmation: You will receive a confirmation email
Online Registration Form
Practical Information
Venue Details
Esclangon Building
Campus Pierre et Marie Curie
4 place Jussieu, 75005 Paris
France
Floor: 1st floor
Capacity: 40 participants
Getting There
Location on Maps:
- Google Maps
- OpenStreetMap
By Public Transport:
- Metro: Lines 7 and 10 (Jussieu station), direct access
- RER: Line C (Saint-Michel Notre-Dame), 5 min walk
- Bus: Lines 63, 67, 86, 87 (Jussieu stop)
By Car:
- Limited parking in the area
- Nearby parking: Parking Maubert (5 min walk)
- Parking locations on Google Maps
Sponsors & Partners
We thank our sponsors and partners for their support:
The workshop is supported by the Responsible AI Team.
The KINAITICS project is funded under Horizon Europe Grant Agreement n°101070176. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.