From 93e857d0ac324c3b3a8804fbf7c367f56a346981 Mon Sep 17 00:00:00 2001
From: maurapintor
Date: Thu, 2 May 2024 17:27:31 +0200
Subject: [PATCH] added last event

---
 {_events => _past}/abdelnabi.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
 rename {_events => _past}/abdelnabi.md (90%)

diff --git a/_events/abdelnabi.md b/_past/abdelnabi.md
similarity index 90%
rename from _events/abdelnabi.md
rename to _past/abdelnabi.md
index acc8865..2e231ff 100644
--- a/_events/abdelnabi.md
+++ b/_past/abdelnabi.md
@@ -1,11 +1,11 @@
 ---
-type: event
+type: past
 date: 2024-05-02T16:00:00+2:00
 speaker: Sahar Abdelnabi
 affiliation: Microsoft
 title: "On New Security and Safety Challenges Posed by LLMs and How to Evaluate Them"
 bio: "Sahar Abdelnabi is an AI security researcher at Microsoft Security Response Center (Cambridge). Previously, she was a PhD candidate at CISPA Helmholtz Center for Information Security, advised by Prof. Dr. Mario Fritz and she obtained her MSc degree at Saarland University. Her research interests lie in the broad intersection of machine learning with security, safety, and sociopolitical aspects. This includes the following areas: 1) Understanding and mitigating the failure modes of machine learning models, their biases, and their misuse scenarios. 2) How machine learning models could amplify or help counter existing societal and safety problems (e.g., misinformation, biases, stereotypes, cybersecurity risks, etc.). 3) Emergent challenges posed by new foundation and large language models."
 abstract: "Large Language Models (LLMs) are integrated into many widely used and real-world applications and use-case scenarios. With their capabilities and agentic-like adoption, they open new frontiers to assist in various tasks. However, they also bring new security and safety risks. Unlike previous models with static generation, LLMs’ nature of dynamic, multi-turn, and flexible functionality makes them notoriously hard to robustly evaluate and control. This talk will cover some of these new potential risks imposed by LLMs, how to evaluate them, and the challenges of mitigations. "
-zoom: https://us02web.zoom.us/meeting/register/tZMpceiupz4qH9OpLXTQ4m268hieklVJy1NL
-youtube: https://youtube.com/live/gKsiUi3qMiA?feature=share
+youtube: gKsiUi3qMiA
+
 ---