The code for Kernel Attention Transformer (KAT)
Updated Jun 13, 2024 - Python
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted adversarial attacks such as Projected Gradient Descent (PGD).
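For readers unfamiliar with PGD, the core idea can be sketched in a few lines: repeatedly step along the sign of the loss gradient with respect to the input, projecting back into a small L∞ ball so the perturbation stays imperceptible. The snippet below is a minimal NumPy illustration of the untargeted variant (the study above uses targeted attacks against PLIP); the toy logistic model, `grad_fn`, and all hyperparameters here are illustrative assumptions, not taken from that repository.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=20):
    """Untargeted PGD: ascend the loss via gradient-sign steps,
    projecting each iterate into an L-infinity ball of radius eps
    around the clean input x, and clipping to a valid [0, 1] range."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)        # ascent step (maximize loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Toy stand-in for a classifier: logistic model sigmoid(w.x + b).
# For true label y = 1, cross-entropy gives dL/dx = (sigmoid(w.x + b) - 1) * w.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grad_fn = lambda x: (sigmoid(w @ x + b) - 1.0) * w

x = np.array([0.6, 0.4, 0.5])
x_adv = pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=20)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9)        # True: stays in eps-ball
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))    # True: confidence for y=1 drops
```

Attacking a real vision-language model like PLIP follows the same loop, with `grad_fn` obtained by backpropagating through the network (e.g. via an autodiff framework) rather than computed in closed form.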