RISC Seminars (Research on Information Security and Cryptology)

(To receive information about upcoming seminars, register for the RISC mailing list.)
Joint RISC / ML-Group Seminar: Can Machine Learning be Hacked?

The seminar is cancelled and will be rescheduled at a later date. We will email a new announcement once a new date is set.

Registration mandatory. See here for more details (speakers, program, registration, etc).

Date: March 27, 2020
Location: Euler Room, Amsterdam Science Park Congress Center (next to CWI)
Schedule:
10:00 - 10:45  Registration and Welcome with Coffee
10:45 - 10:50  Welcome and Introduction
10:50 - 11:40  Audra McMillan (Boston University & Northeastern University):
Online Learning via the Differential Privacy Framework
Abstract: In this talk we discuss the use of differential privacy as a lens to examine online learning in both full- and partial-information settings. The differential privacy framework is, at heart, about algorithmic stability, and thus has found application in domains well beyond those where information security is central. Here we develop an algorithmic property called one-step differential stability which facilitates a more refined regret analysis for online learning methods. We show that tools from the differential privacy literature can yield regret bounds for many interesting online learning problems, including online convex optimization and online linear optimization. Our stability notion is particularly well suited for deriving first-order regret bounds for follow-the-perturbed-leader algorithms, something that all previous analyses have struggled to achieve. We also generalize the standard max-divergence to obtain a broader class called Tsallis max-divergences. These define stronger notions of stability that are useful in deriving bounds in partial-information settings such as multi-armed bandits and bandits with experts.
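The follow-the-perturbed-leader idea mentioned above can be sketched in a few lines: the learner plays the action with the smallest *perturbed* cumulative loss, and the random perturbation makes consecutive choices stable, which is precisely the link to differential privacy. This is a minimal toy sketch over a fixed set of experts, not the talk's analysis; the noise distribution and scale `eta` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_experts(loss_matrix, eta=1.0):
    """Follow-the-Perturbed-Leader over K experts.

    Each round, play the expert whose perturbed cumulative loss is
    smallest; the random perturbation stabilizes the choice between
    rounds (the stability property the talk connects to privacy).
    """
    T, K = loss_matrix.shape
    cum_loss = np.zeros(K)
    total = 0.0
    for t in range(T):
        noise = rng.exponential(scale=1.0 / eta, size=K)  # one-sided perturbation
        choice = int(np.argmin(cum_loss - noise))
        total += loss_matrix[t, choice]
        cum_loss += loss_matrix[t]
    return total

# Toy run: expert 0 is always slightly better, so regret against it stays small.
losses = np.column_stack([np.full(1000, 0.4), np.full(1000, 0.6)])
regret = ftpl_experts(losses) - losses[:, 0].sum()
```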
11:55 - 12:45  Thijs Veugen (TNO & CWI):
Privacy-Preserving Coupling of Vertically-Partitioned Databases and Subsequent Training with Gradient Descent
Abstract: We show how multiple data-owning parties can collaboratively train several machine learning algorithms without jeopardizing the privacy of their sensitive data. In particular, we assume that every party knows specific features of an overlapping set of people. Using a secure implementation of an advanced hidden set intersection protocol and a privacy-preserving gradient descent algorithm, we are able to train a Ridge, LASSO, or SVM model over the intersection of people in their data sets. Both the hidden set intersection protocol and the privacy-preserving LASSO implementation are unprecedented in the literature.
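To make the setting concrete, here is a plain (non-private) sketch of what the protocol computes: two parties hold different features for overlapping IDs, the records are joined on the intersection of IDs, and a ridge model is trained by gradient descent on the joined data. The cryptography is entirely omitted; the IDs, feature values, and hyperparameters below are made up for illustration.

```python
import numpy as np

# Toy stand-in for vertically partitioned data: party A holds features
# x1, x2; party B holds feature x3 and the label y, for overlapping IDs.
party_a = {101: [1.0, 2.0], 102: [0.5, 1.5], 103: [2.0, 0.1]}
party_b = {102: ([3.0], 7.5), 103: ([1.0], 4.1), 104: ([2.0], 9.0)}

# Set intersection on IDs (done under encryption in the real protocol).
shared = sorted(party_a.keys() & party_b.keys())
X = np.array([party_a[i] + party_b[i][0] for i in shared])
y = np.array([party_b[i][1] for i in shared])

# Ridge regression by plain gradient descent on the joined records;
# the secure version computes the same updates on secret-shared data.
lam, lr = 0.1, 0.01
w = np.zeros(X.shape[1])
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y) + lam * w
    w -= lr * grad
```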
12:45 - 13:45  Lunch (served outside the Euler Room)
13:45 - 14:35  Phuong Ha Nguyen & Marten van Dijk (U. of Connecticut; U. of Connecticut & CWI):
Buffer Zones for Defending against Adversarial Examples in Image Classification
Abstract: We propose a novel defense strategy against all existing black-box, gradient-based adversarial attacks on deep neural networks for image classification. Our strategy yields a unique security property which we term buffer zones, and we argue that it offers significant improvements over state-of-the-art defenses. We achieve this improvement even when the adversary has access to the entire original training data set and unlimited query access to the defended image classifier. To compare different defenses, we provide a graphical representation that visualizes the trade-off between accuracy in a non-malicious environment (clean accuracy) and accuracy in a malicious environment (with adversarial examples).
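For readers unfamiliar with the attack model, the following toy sketch shows the kind of gradient-based perturbation such defenses target: nudging an input by a small amount in the direction that increases the classifier's loss (an FGSM-style step against a linear classifier, chosen for brevity). The buffer-zone defense itself is the speakers' contribution and is not reproduced here; all numbers below are illustrative.

```python
import numpy as np

def fgsm_linear(w, b, x, y, eps):
    """Perturb x by eps in the direction increasing the hinge loss of a
    linear classifier sign(w.x + b), for a true label y in {-1, +1}."""
    margin = y * (np.dot(w, x) + b)
    grad = -y * w if margin < 1 else np.zeros_like(w)  # d(hinge)/dx
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.3, -0.2]); y = 1          # correctly classified: w.x + b = 0.7
x_adv = fgsm_linear(w, b, x, y, eps=0.5)  # small, sign-bounded perturbation
clean_pred = np.sign(w @ x + b)
adv_pred = np.sign(w @ x_adv + b)         # the perturbation flips the prediction
```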
14:50 - 15:40  Joaquin Vanschoren (Eindhoven University of Technology):
Automated Machine Learning (a Tutorial)
Abstract: Automated machine learning (AutoML) is the science of building machine learning models in a data-driven, efficient, and objective way. It replaces manual trial-and-error with automated, guided processes. In this tutorial, we will guide you through the current state of the art in hyperparameter optimization, pipeline construction, and neural architecture search. We will discuss model-free black-box optimization methods, Bayesian optimization, as well as evolutionary and other techniques. We will also pay attention to meta-learning, i.e., learning how to build machine learning models based on prior experience. Finally, we will give some practical guidance on how to do AutoML and meta-learning with open source tools.
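The simplest of the model-free black-box methods the tutorial covers is random search: sample hyperparameter configurations at random and keep the one with the best validation score. The miniature example below searches over a ridge penalty on synthetic data; the data, search range, and log-uniform sampling are illustrative choices, not part of the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data, split into train and validation sets.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def fit_ridge(lam):
    # Closed-form ridge solution on the training split.
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_mse(lam):
    w = fit_ridge(lam)
    return float(np.mean((X_va @ w - y_va) ** 2))

# Random search: sample penalties log-uniformly, keep the best validator.
candidates = 10 ** rng.uniform(-4, 2, size=20)
best_lam = min(candidates, key=val_mse)
```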
15:40 - 16:15  Discussion Panel:
Can ML be Hacked?
16:15 - 17:15  Cocktails