Commit

added last event
maurapintor committed Dec 7, 2023
1 parent 0150311 commit 495730d
Showing 1 changed file with 3 additions and 4 deletions.
_events/boenisch.md → _past/boenisch.md (7 changes: 3 additions & 4 deletions)
@@ -1,11 +1,10 @@
 ---
-type: events
-date: 2023-12-07T17:00:00+1:00
+type: past
+date: 2018-09-16T8:00:00+1:00
 speaker: Franziska Boenisch
 affiliation: CISPA
 title: "Can individuals trust privacy mechanisms for machine learning? A case study of federated learning"
 bio: "Franziska is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security, where she co-leads the SprintML lab. Before that, she was a Postdoctoral Fellow at the University of Toronto and the Vector Institute in Toronto, advised by Prof. Nicolas Papernot. Her current research centers around private and trustworthy machine learning with a focus on decentralized applications. Franziska obtained her Ph.D. at the Computer Science Department at Freie University Berlin, where she pioneered the notion of individualized privacy in machine learning. During her Ph.D., Franziska was a research associate at the Fraunhofer Institute for Applied and Integrated Security (AISEC), Germany. She received a Fraunhofer TALENTA grant for outstanding female early career researchers and the German Industrial Research Foundation prize for her research on machine learning privacy."
 abstract: "What is the trusted computing base for privacy? This talk will answer this question from the perspective of individual users. I will first focus on a case study of federated learning (FL). My work shows that vanilla FL currently does not provide meaningful privacy for individual users who cannot trust the central server orchestrating the FL protocol. This is because gradients of the shared model directly leak individual training data points. The resulting leakage can be amplified by a malicious attacker through small, targeted manipulations of the model weights. My work thus shows that the protection that vanilla FL claims to offer is but a thin facade: data may never \"leave\" personal devices explicitly, but it certainly does so implicitly through gradients. Then, I will show that the leakage is still exploitable in what is considered the most private instantiation of FL: a protocol that combines secure aggregation with differential privacy. This highlights that individuals unable to trust the central server should instead rely on verifiable mechanisms to obtain privacy. I will conclude my talk with an outlook on how such verifiable mechanisms can be designed in the future, as well as how my work generally advances the ability to audit privacy mechanisms."
-youtube:
-zoom: https://us02web.zoom.us/meeting/register/tZcqcu6orjIjEtbYTZTIdikT4rCZM1F3zk4h
+youtube: L2ywRARuGTk
 ---
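Note on the abstract's central claim (that per-example gradients shared in federated learning directly leak training data): the following is a minimal, hypothetical sketch, not taken from the talk or this repository, of why this holds for a single softmax layer. The weight gradient is the outer product of the logit gradient and the input, so a server seeing one client's single-example gradient can recover the input exactly by dividing a row of the weight gradient by the corresponding bias gradient. All names and values below are illustrative.

# Sketch only: gradient leakage for logits z = W x + b with cross-entropy loss.
# dL/dW = (dL/dz) x^T and dL/db = dL/dz, so x = (dL/dW)[i] / (dL/db)[i]
# for any class i whose bias gradient is nonzero.
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 5, 8
W = rng.normal(size=(num_classes, dim))
b = rng.normal(size=num_classes)

x = rng.normal(size=dim)            # the "private" training example
y = 2                               # its label

# Forward pass and cross-entropy gradient for this single example.
z = W @ x + b
p = np.exp(z - z.max()); p /= p.sum()
dz = p.copy(); dz[y] -= 1.0         # dL/dz = softmax(z) - onehot(y)
grad_W = np.outer(dz, x)            # dL/dW, what a federated client would upload
grad_b = dz                         # dL/db

# The untrusted server reconstructs x exactly from the shared gradients.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]
print(np.allclose(x, x_reconstructed))   # True

Running the sketch prints True: the training example is recovered exactly from the gradients a client would share, which is the baseline leakage the abstract argues persists, in exploitable form, even under secure aggregation combined with differential privacy.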
