354 changes: 354 additions & 0 deletions content/docs/Groups/SHAP_for_credit_risk.md

Large diffs are not rendered by default.

Binary file added public/SHAP_for_credit_risk/5_Summary_plot.png
14 changes: 14 additions & 0 deletions public/SHAP_for_credit_risk/force_plot_custom.html

Large diffs are not rendered by default.

21 changes: 18 additions & 3 deletions public/docs/groups/ai-playing-geoguessr-explained/index.html
@@ -1,6 +1,21 @@
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="AI playing GeoGuessr explained # Author: Pavel Roganin
Prerequisites to read # None
Introduction # Everyone has played GeoGuessr at least once in their life. This is how the game looks like:
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="
AI playing GeoGuessr explained
#

Author: Pavel Roganin

Prerequisites to read
#

None

Introduction
#

Everyone has played
GeoGuessr at least once in their life. This is what the game looks like:


If you do not remember this game, I will briefly explain its rules.
This is a simple browser game that selects a random location from around the world and shows the player interactive Google Street View panoramas of that location."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:title" content><meta property="og:description" content="AI playing GeoGuessr explained # Author: Pavel Roganin
Prerequisites to read # None
22 changes: 17 additions & 5 deletions public/docs/groups/contrastive-grad-cam-consistency/index.html

Large diffs are not rendered by default.

@@ -1,4 +1,21 @@
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines # Authors: Ivan Golov, Roman Makeev
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="
Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines
#

Authors: Ivan Golov, Roman Makeev
To see the implementation, visit our
github project.




Introduction
#

In this work, we introduce an interpretable, end-to-end framework that enhances a Stable Diffusion v1.5 model fine‑tuned via the
DreamBooth method [1] to generate high‑fidelity, subject‑driven images from only a few reference examples.
While DreamBooth effectively personalizes generation by associating a unique rare token with the target concept, the internal process through which textual prompts are transformed into visual representations remains opaque. To bridge this gap, we integrate
Diffusion Lens [2], a visualization technique that decodes the text encoder’s intermediate hidden states into images, producing a layer‑by‑layer sequence that illuminates how semantic concepts emerge and refine over the course of encoding."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:url" content="http://localhost:1313/docs/groups/diffusion-lens-interpreting-text-encoders-in-text-to-image-pipelines-tuned-using-dreambooth/"><meta property="og:site_name" content="XAI"><meta property="og:title" content="Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines"><meta property="og:description" content="Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines # Authors: Ivan Golov, Roman Makeev
To see the implementation, visit our github project.
Introduction # In this work, we introduce an interpretable, end-to-end framework that enhances Stable Diffusion v1.5 model fine‑tuned via the DreamBooth method [1] to generate high‑fidelity, subject‑driven images from as few reference examples.
While DreamBooth effectively personalizes generation by associating a unique rare token with the target concept, the internal process through which textual prompts are transformed into visual representations remains opaque."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:title" content="Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines"><meta property="og:description" content="Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines # Authors: Ivan Golov, Roman Makeev
11 changes: 9 additions & 2 deletions public/docs/groups/dndfs_shap/index.html
@@ -1,6 +1,13 @@
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="Deep Neural Decision Forests (DNDFs) with SHAP Values # Introduction # Deep Neural Decision Forests (DNDFs) combine the interpretability and robustness of decision trees with the power of neural networks to capture complex patterns in data. This integration allows DNDFs to perform well on various tasks, especially in high-dimensional spaces where traditional methods may struggle.
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="
Deep Neural Decision Forests (DNDFs) with SHAP Values
#


Introduction
#

Deep Neural Decision Forests (DNDFs) combine the interpretability and robustness of decision trees with the power of neural networks to capture complex patterns in data. This integration allows DNDFs to perform well on various tasks, especially in high-dimensional spaces where traditional methods may struggle.
The method is different from random forests in that it uses a principled, joint, and global optimization of split and leaf node parameters and from conventional deep networks because a decision forest provides the final predictions."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:url" content="http://localhost:1313/docs/groups/dndfs_shap/"><meta property="og:site_name" content="XAI"><meta property="og:title" content="XAI"><meta property="og:description" content="Deep Neural Decision Forests (DNDFs) with SHAP Values # Introduction # Deep Neural Decision Forests (DNDFs) combine the interpretability and robustness of decision trees with the power of neural networks to capture complex patterns in data. This integration allows DNDFs to perform well on various tasks, especially in high-dimensional spaces where traditional methods may struggle.
The method is different from random forests in that it uses a principled, joint, and global optimization of split and leaf node parameters and from conventional deep networks because a decision forest provides the final predictions."><meta property="og:locale" content="en_us"><meta property="og:type" content="article"><meta property="article:section" content="docs"><title>Dndfs Shap | XAI</title>
<link rel=manifest href=/manifest.json><link rel=icon href=/favicon.png type=image/x-icon><link rel=stylesheet href=/book.min.e832d4e94212199857473bcf13a450d089c3fcd54ccadedcfac84ed0feff83fb.css integrity="sha256-6DLU6UISGZhXRzvPE6RQ0InD/NVMyt7c+shO0P7/g/s=" crossorigin=anonymous><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/contrib/mathtex-script-type.min.js integrity=sha384-jiBVvJ8NGGj5n7kJaiWwWp9AjC+Yh8rhZY3GtAX8yU28azcLgoRo4oukO87g7zDT crossorigin=anonymous></script><script defer src=/flexsearch.min.js></script><script defer src=/en.search.min.ad436edd829ec592525c968cb38b5379be6117b5639c053bb9908a9c0a469c15.js integrity="sha256-rUNu3YKexZJSXJaMs4tTeb5hF7VjnAU7uZCKnApGnBU=" crossorigin=anonymous></script><script defer src=/sw.min.6f6f90fcb8eb1c49ec389838e6b801d0de19430b8e516902f8d75c3c8bd98739.js integrity="sha256-b2+Q/LjrHEnsOJg45rgB0N4ZQwuOUWkC+NdcPIvZhzk=" crossorigin=anonymous></script></head><body dir=ltr><input type=checkbox class="hidden toggle" id=menu-control>
<input type=checkbox class="hidden toggle" id=toc-control><main class="container flex"><aside class=book-menu><div class=book-menu-content><nav><h2 class=book-brand><a class="flex align-center" href=/><img src=/YELLOW_BAR.png alt=Logo><span><b>XAI</b></span></a></h2><div class=book-search><input type=text id=book-search-input placeholder=Search aria-label=Search maxlength=64 data-hotkeys=s/><div class="book-search-spinner hidden"></div><ul id=book-search-results></ul></div><ul><li><a href=/docs/groups/cam_and_secam/>CAM and SeCAM</a></li><li><a href=/docs/groups/counterfactual-explanations-for-credit-risk-models/>Counterfactual Explanations for Credit Risk Models: A Case Study</a></li><li><a href=/docs/groups/diffusion-lens-interpreting-text-encoders-in-text-to-image-pipelines-tuned-using-dreambooth/>Diffusion Lens: Interpreting Text Encoders in Text-to-Image pipelines</a></li><li><a href=/docs/groups/dimensionality-reduction-in-nlp-visualizing-sentence-embeddings-with-umap-and-t-sne/>Dimensionality Reduction in NLP: Visualizing Sentence Embeddings with UMAP and t-SNE</a></li><li><a href=/docs/groups/example/>Example</a></li><li><a href=/docs/groups/ai-playing-geoguessr-explained/>Ai Playing Geo Guessr Explained</a></li><li><a href=/docs/groups/contrastive-grad-cam-consistency/>Contrastive Grad Cam Consistency</a></li><li><a href=/docs/groups/dndfs_shap/ class=active>Dndfs Shap</a></li><li><a href=/docs/groups/gradcam/>Grad Cam</a></li><li><a href=/docs/groups/integrated-gradients/>Integrated Gradients</a></li><li><a href=/docs/groups/kernel-shap/>Kernel Shap</a></li><li><a href=/docs/groups/rag/>Rag</a></li><li><a href=/docs/groups/shap_darya_and_viktoria/>Shap Darya and Viktoria</a></li><li><a href=/docs/groups/sverl_tac_toe/>Sverl Tac Toe</a></li><li><a href=/docs/groups/torchprism/>Torch Prism</a></li><li><a href=/docs/groups/xai_for_transformers/>Xai for Transformers</a></li></ul></nav><script>(function(){var e=document.querySelector("aside 
.book-menu-content");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script></div></aside><div class=book-page><header class=book-header><div class="flex align-center justify-between"><label for=menu-control><img src=/svg/menu.svg class=book-icon alt=Menu>
</label><strong>Dndfs Shap</strong>
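The DNDF description above (soft routing learned jointly with the network, a forest producing the final prediction) can be sketched for a single tree. This is a minimal NumPy illustration, not the paper's implementation: a depth-2 soft tree whose routing functions would, in a real DNDF, be driven by a deep network's outputs; `soft_tree_predict`, `split_w`, and `leaf_dist` are names chosen here for the sketch.

```python
import numpy as np

def soft_tree_predict(x, split_w, split_b, leaf_dist):
    """Prediction of one depth-2 soft decision tree (3 internal nodes, 4 leaves).

    split_w: (3, n_features) split weights; split_b: (3,) biases
    leaf_dist: (4, n_classes) class distribution stored at each leaf
    """
    # Each internal node routes softly: d[i] = P(go left at node i) = sigmoid(w.x + b)
    d = 1.0 / (1.0 + np.exp(-(split_w @ x + split_b)))
    # Probability of reaching each leaf = product of the routing decisions on its path
    mu = np.array([
        d[0] * d[1],                  # root-left,  then left
        d[0] * (1.0 - d[1]),          # root-left,  then right
        (1.0 - d[0]) * d[2],          # root-right, then left
        (1.0 - d[0]) * (1.0 - d[2]),  # root-right, then right
    ])
    # Final prediction: mixture of leaf class distributions, weighted by path probability
    return mu @ leaf_dist
```

Because the path probabilities sum to 1 and each leaf row is a distribution, the output is a valid class distribution; a DNDF averages such trees and backpropagates through the sigmoid routing to optimize split and leaf parameters jointly, as the text describes.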
14 changes: 12 additions & 2 deletions public/docs/groups/gradcam/index.html
@@ -1,6 +1,16 @@
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="This work is made by Andrei Markov and Nikita Bogdankov
Grad-CAM # What is it? # Grad-CAM (Gradient-weighted Class Activation Mapping) is a technique used in deep learning, particularly with convolutional neural networks (CNNs), to understand which regions of an input image are important for the network&rsquo;s prediction of a particular class.
This method can be used for understanding how a CNN has been driven to make a final classification decision."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:url" content="http://localhost:1313/docs/groups/gradcam/"><meta property="og:site_name" content="XAI"><meta property="og:title" content="XAI"><meta property="og:description" content="This work is made by Andrei Markov and Nikita Bogdankov

Grad-CAM
#


What is it?
#

Grad-CAM (Gradient-weighted Class Activation Mapping) is a technique used in deep learning, particularly with convolutional neural networks (CNNs), to understand which regions of an input image are important for the network&rsquo;s prediction of a particular class.


This method can be used to understand how a CNN was driven to its final classification decision. Grad-CAM is class-specific, which means it can produce a separate visualization on the image for each class. If a classification error occurs, Grad-CAM can help you see what the model did wrong, so it makes the algorithm more transparent to developers."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:url" content="http://localhost:1313/docs/groups/gradcam/"><meta property="og:site_name" content="XAI"><meta property="og:title" content="XAI"><meta property="og:description" content="This work is made by Andrei Markov and Nikita Bogdankov
Grad-CAM # What is it? # Grad-CAM (Gradient-weighted Class Activation Mapping) is a technique used in deep learning, particularly with convolutional neural networks (CNNs), to understand which regions of an input image are important for the network’s prediction of a particular class.
This method can be used for understanding how a CNN has been driven to make a final classification decision."><meta property="og:locale" content="en_us"><meta property="og:type" content="article"><meta property="article:section" content="docs"><title>Grad Cam | XAI</title>
<link rel=manifest href=/manifest.json><link rel=icon href=/favicon.png type=image/x-icon><link rel=stylesheet href=/book.min.e832d4e94212199857473bcf13a450d089c3fcd54ccadedcfac84ed0feff83fb.css integrity="sha256-6DLU6UISGZhXRzvPE6RQ0InD/NVMyt7c+shO0P7/g/s=" crossorigin=anonymous><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/contrib/mathtex-script-type.min.js integrity=sha384-jiBVvJ8NGGj5n7kJaiWwWp9AjC+Yh8rhZY3GtAX8yU28azcLgoRo4oukO87g7zDT crossorigin=anonymous></script><script defer src=/flexsearch.min.js></script><script defer src=/en.search.min.ad436edd829ec592525c968cb38b5379be6117b5639c053bb9908a9c0a469c15.js integrity="sha256-rUNu3YKexZJSXJaMs4tTeb5hF7VjnAU7uZCKnApGnBU=" crossorigin=anonymous></script><script defer src=/sw.min.6f6f90fcb8eb1c49ec389838e6b801d0de19430b8e516902f8d75c3c8bd98739.js integrity="sha256-b2+Q/LjrHEnsOJg45rgB0N4ZQwuOUWkC+NdcPIvZhzk=" crossorigin=anonymous></script></head><body dir=ltr><input type=checkbox class="hidden toggle" id=menu-control>
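The Grad-CAM computation described above reduces to a few array operations once the conv-layer activations and the class-score gradients have been captured (e.g. via framework hooks). A minimal NumPy sketch, assuming those two arrays are already extracted; `grad_cam_map` is a name invented for this illustration:

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Grad-CAM heatmap from one convolutional layer.

    activations: (C, H, W) feature maps for the input image
    gradients:   (C, H, W) gradients of the target class score w.r.t. those maps
    """
    # Channel importance weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted sum of the feature maps over channels
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the target class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam
```

The resulting low-resolution map is then upsampled to the input size and overlaid on the image; running it per class gives the class-specific visualizations mentioned above.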
18 changes: 17 additions & 1 deletion public/docs/groups/integrated-gradients/index.html
@@ -1,4 +1,20 @@
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="Integrated Gradients Method for Image Classification # XAI Course Project | Anatoliy Pushkarev
<!doctype html><html lang=en-us dir=ltr><head><script src="/livereload.js?mindelay=10&amp;v=2&amp;port=1313&amp;path=livereload" data-no-instant defer></script><meta charset=UTF-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="
Integrated Gradients Method for Image Classification
#

XAI Course Project | Anatoliy Pushkarev

Goal
#

Develop a robust image classification model and analyze its behavior with the help of the integrated gradients method.

Integrated gradients paper

Integrated Gradients
#

Integrated Gradients is a technique for attributing a classification model&rsquo;s prediction to its input features. It is a model interpretability technique: you can use it to visualize the relationship between input features and model predictions. It finds the importance of each pixel or feature in input data for a particular prediction of the model."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:url" content="http://localhost:1313/docs/groups/integrated-gradients/"><meta property="og:site_name" content="XAI"><meta property="og:title" content="XAI"><meta property="og:description" content="Integrated Gradients Method for Image Classification # XAI Course Project | Anatoliy Pushkarev
Goal # Develop a robust image classification model and analyze its behavior with the help of the integrated gradients method.
Integrated gradients paper
Integrated Gradients # Integrated Gradients is a technique for attributing a classification model&rsquo;s prediction to its input features. It is a model interpretability technique: you can use it to visualize the relationship between input features and model predictions."><meta name=theme-color content="#FFFFFF"><meta name=color-scheme content="light dark"><meta property="og:title" content><meta property="og:description" content="Integrated Gradients Method for Image Classification # XAI Course Project | Anatoliy Pushkarev
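The attribution idea described above (average the gradients along a straight path from a baseline to the input, scale by the input difference) can be shown on a toy function with a closed-form gradient rather than an image model. A minimal NumPy sketch; the function names and the toy model are assumptions for illustration only:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate IG attributions: (x - baseline) * average gradient along the path."""
    # Midpoints of the straight-line path from baseline to x (midpoint quadrature)
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = np.mean([grad_f(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy "model": f(x) = x0^2 + 3*x1, with an analytic gradient
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad_f = lambda x: np.array([2.0 * x[0], 3.0])

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness: attributions sum to f(x) - f(baseline)
print(attr, attr.sum(), f(x) - f(baseline))
```

For an image classifier the same loop runs over interpolated images, with `grad_f` supplied by backpropagation, and each pixel's attribution indicates its importance for the prediction, as the section states.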