Radiology is one of the most data-dense, high-stakes environments for visualization. The input is a medical image. The output is a diagnosis. And now, somewhere in between, there is an AI making predictions that the radiologist has to act on, fast.
The problem is not whether the AI is accurate. It is whether the radiologist can trust it without abandoning their own judgment, and without losing time they do not have.
This talk is about resolving that tension through design. Bargava shares his journey building visualization interfaces for radiologists: what failed, what worked, and the specific techniques that made AI feel like a collaborator rather than a black box demanding compliance.
You will see how spatial anchoring grounds abstract AI predictions in familiar anatomy, how non-destructive uncertainty layers let radiologists interrogate model confidence without losing sight of the underlying scan, and how progressive disclosure interfaces reduce cognitive load while keeping the human firmly in control.
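To make the idea of a non-destructive uncertainty layer concrete, here is a rough sketch (not from the talk; all names and parameters are hypothetical): the model's per-pixel confidence lives in a separate array and is blended only into a display copy, so the underlying scan is never altered.

```python
import numpy as np

def blend_uncertainty_overlay(scan, confidence, alpha=0.4, threshold=0.5):
    """Return a display image with low-confidence regions tinted red.

    scan:       2D float array in [0, 1] (the grayscale image)
    confidence: 2D float array in [0, 1] (model certainty per pixel)

    The scan itself is left untouched; blending happens on an RGB copy,
    which is what makes the overlay "non-destructive".
    """
    display = np.stack([scan] * 3, axis=-1)   # grayscale -> RGB copy
    uncertain = confidence < threshold         # mask of low-confidence pixels
    # Tint uncertain pixels toward red, weighted by alpha.
    display[uncertain, 0] = (1 - alpha) * display[uncertain, 0] + alpha
    display[uncertain, 1] *= (1 - alpha)
    display[uncertain, 2] *= (1 - alpha)
    return display

scan = np.random.rand(64, 64)
conf = np.random.rand(64, 64)
original = scan.copy()
view = blend_uncertainty_overlay(scan, conf)
assert np.array_equal(scan, original)  # underlying scan is unchanged
```

The key design choice is that toggling or re-thresholding the overlay only re-renders the copy, so the radiologist can interrogate model confidence and then return to the pristine scan at any time.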
The domain is radiology. The problems are universal: trust, uncertainty, cognitive load, and the question of how much agency to hand to an algorithm. If you design dashboards, data tools, or any interface where AI and human judgment have to coexist, the framework here is directly transferable.
No medical background required.