Update references.bib
kristiankersting committed Jan 17, 2024
1 parent 19340c8 commit 20b3f9e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions references.bib
@@ -3,7 +3,7 @@ @inproceedings{delfosse2024raRL
 booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
 title={Adaptive Rational Activations to Boost Deep Reinforcement Learning},
 author={Quentin Delfosse and Patrick Schramowski and Martin Mundt and Alejandro Molina and Kristian Kersting},
-year={2022},
+year={2024},
 Keywords={Neural Plasticity, Deep Reinforcement Learning, Rational Activations},
 Anote={./images/delfosse2024ratRL.png},
 Note={Latest insights from biology show that intelligence not only emerges from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously anticipated. Specifically, neural plasticity should be critical in the context of constantly changing reinforcement learning (RL) environments, yet current approaches still primarily employ static activation functions. In this work, we motivate the use of adaptable activation functions in RL and show that rational activation functions are particularly suitable for augmenting plasticity. Inspired by residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version. The proposed joint-rational activation allows for desirable degrees of flexibility, yet regularises plasticity to an extent that avoids overfitting by leveraging a mutual set of activation function parameters across layers. We demonstrate that equipping popular algorithms with (joint) rational activations leads to consistent improvements on different games from the Atari Learning Environment benchmark, notably making DQN competitive to DDQN and Rainbow.},
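
For context on the entry above: the rational activations it describes are learnable ratios of polynomials, R(x) = P(x)/Q(x), whose coefficients are trained together with the network weights. Below is a minimal PyTorch sketch assuming the common safe-denominator form Q(x) = 1 + |b_1 x + ... + b_n x^n|; the degrees and the class name RationalActivation are illustrative, not the paper's exact parameterisation.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation R(x) = P(x) / Q(x).

    Sketch under assumptions: degrees (5, 4) and the safe denominator
    Q(x) = 1 + |b_1 x + ... + b_n x^n| follow common rational-unit
    formulations, not necessarily the paper's exact setup.
    """

    def __init__(self, num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        # Coefficients are learned jointly with the network weights,
        # so the activation shape itself stays plastic during training.
        self.num_coeffs = nn.Parameter(0.1 * torch.randn(num_degree + 1))
        self.den_coeffs = nn.Parameter(0.1 * torch.randn(den_degree))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Horner evaluation of P(x) = a_0 + a_1 x + ... + a_m x^m.
        p = torch.zeros_like(x)
        for a in self.num_coeffs.flip(0):
            p = p * x + a
        # Horner evaluation of b_1 x + ... + b_n x^n (no constant term).
        q = torch.zeros_like(x)
        for b in self.den_coeffs.flip(0):
            q = (q + b) * x
        # 1 + |.| keeps the denominator strictly positive, so the
        # activation has no poles and trains stably.
        return p / (1.0 + q.abs())
```

The joint-rational variant from the abstract would instantiate this module once and reuse it in every layer, so all layers share a mutual set of activation parameters.
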
@@ -14,7 +14,7 @@ @inproceedings{struppek2024iclr
 booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
 title={Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks},
 author={Lukas Struppek and Dominik Hintersdorf and Kristian Kersting},
-year={2022},
+year={2024},
 Keywords={Label Smoothing, Privacy, Membership Attack, Defense},
 Anote={./images/struppek2024iclr.png},
 Note={Label smoothing – using softened labels instead of hard ones – is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration. Its implications for preserving model privacy, however, have remained unexplored. To fill this gap, we investigate the impact of label smoothing on model inversion attacks (MIAs), which aim to generate class-representative samples by exploiting the knowledge encoded in a classifier, thereby inferring sensitive information about its training data. Through extensive analyses, we uncover that traditional label smoothing fosters MIAs, thereby increasing a model's privacy leakage. Even more, we reveal that smoothing with negative factors counters this trend, impeding the extraction of class-related information and leading to privacy preservation, beating state-of-the-art defenses. This establishes a practical and powerful novel way for enhancing model resilience against MIAs.},
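
For context on the entry above: with smoothing factor alpha and K classes, the soft target is (1 - alpha) * one-hot + alpha / K. Positive alpha is classic label smoothing, which the abstract finds amplifies model inversion attacks; negative alpha sharpens the target instead, the regime reported as a defense. A hypothetical sketch follows; the function name and the default alpha are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor,
                           labels: torch.Tensor,
                           alpha: float = -0.05) -> torch.Tensor:
    """Cross-entropy against soft targets with smoothing factor alpha.

    Hypothetical sketch: alpha > 0 is classic label smoothing,
    alpha < 0 is the negative-factor regime the paper studies.
    """
    num_classes = logits.size(-1)
    one_hot = F.one_hot(labels, num_classes).float()
    # Soft target: (1 - alpha) on the true class plus alpha spread
    # uniformly; with alpha < 0 the off-class targets dip below zero
    # and the true-class target exceeds one.
    targets = (1.0 - alpha) * one_hot + alpha / num_classes
    log_probs = F.log_softmax(logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```
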

