arXiv
Published April 9, 2026 at 4:00 AM
Non-identifiability of Explanations from Model Behavior in Deep Networks of Image Authenticity Judgments
arXiv:2604.07254v1 Announce Type: cross

Abstract: Deep neural networks can predict human judgments, but this does not imply that they rely on human-like information or reveal the cues underlying those judgments. Prior work has addressed this issue using attribution heatmaps, but their explanatory va…