A game theoretic approach to explain the output of any machine learning model.
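The game theoretic idea behind this family of tools is the Shapley value: a feature's attribution is its marginal contribution to the model's output, averaged over all orderings in which features could be revealed. A minimal pure-Python sketch of the exact computation on a hypothetical toy model (this is an illustration of the concept, not the SHAP library's API):

```python
# Exact Shapley values for a toy model, by brute force over all
# feature orderings. Exponential in the number of features, so this
# is only feasible for tiny inputs; real libraries use approximations.
from itertools import permutations
from math import factorial

def shapley_values(model, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        z = list(baseline)       # start from the baseline ("absent") input
        prev = model(z)
        for i in order:
            z[i] = x[i]          # reveal feature i in this ordering
            cur = model(z)
            phi[i] += cur - prev # marginal contribution of feature i
            prev = cur
    return [p / factorial(n) for p in phi]  # average over n! orderings

# Hypothetical toy model: f(x) = 2*x0 + 3*x1
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # -> [2.0, 3.0]
```

For a linear model the attributions recover the coefficients, and they always sum to `f(x) - f(baseline)` (the efficiency property that makes Shapley values attractive for attribution).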
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Fit interpretable models. Explain blackbox machine learning.
Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
XAI - An eXplainability toolbox for machine learning
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Power Tools for AI Engineers With Deadlines
Papers about explainability of GNNs
Visualization toolkit for neural networks in PyTorch!
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Shapley Interactions and Shapley Values for Machine Learning
Interactive Diagrams for Code
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward. Reach out @ ms8909@nyu.edu
[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
Official implementation of Score-CAM in PyTorch
Neural network visualization toolkit for tf.keras
This is an open-source version of the representation engineering framework for stopping harmful outputs or hallucinations on the level of activations. 100% free, self-hosted and open-source.
Adversarial attacks on explanations and how to defend them