Research Webzine of the KAIST College of Engineering since 2014
Fall 2025 Vol. 25

Professor Jaesik Choi's team has developed an innovative XAI framework that visualizes, at the level of individual circuits, the process by which deep learning models recognize and judge images, making that process human-interpretable. The achievement is considered a step forward in solving the long-standing AI black-box problem.
Example of Granular Concept Circuits. (Top) Circuit obtained by merging all 20 captured circuits conditioned on the query image in ResNet50. (Bottom) Examples of individual concept circuits.
Even as deep learning-based image recognition technology advances, it remains difficult to explain how an AI internally combines concepts (e.g., cat ears, car wheels) to reach a final conclusion. Existing explanation techniques have focused primarily on the role of single neurons, or have provided unified explanations that fail to capture fine-grained conceptual structure.
Professor Choi's research team focused on the idea that actual deep learning models form various concepts through a circuit structure where multiple neurons cooperate. Based on this, they developed the Granular Concept Circuits (GCC) technology.
A deep learning model is built from fundamental computational units called neurons, loosely analogous to neurons in the human brain. Each neuron detects a small feature in an image, such as the shape of an ear, a specific color, or an outline, computes a value (signal), and passes it to the next stage. A circuit, in turn, is a structure in which several such neurons are interconnected to jointly recognize a single meaning (concept). For example, recognizing the concept of a cat ear requires multiple neurons, such as those detecting the ear's outline, a triangular shape, and fur color patterns, to operate sequentially, forming a single functional unit (circuit).
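The neuron-and-circuit picture described above can be sketched in a few lines of Python. This is purely a toy illustration with made-up weights, not the team's actual model; the `cat_ear_circuit` name and feature layout are illustrative assumptions.

```python
import numpy as np

def neuron(x, w, b):
    """A single neuron: weighted sum of inputs followed by a ReLU."""
    return max(0.0, float(np.dot(w, x) + b))

def cat_ear_circuit(features):
    """A toy 'circuit': three low-level detectors feed one concept neuron.
    All weights here are illustrative, not learned."""
    outline  = neuron(features, np.array([1.0, 0.0, 0.0]), -0.2)
    triangle = neuron(features, np.array([0.0, 1.0, 0.0]), -0.2)
    fur      = neuron(features, np.array([0.0, 0.0, 1.0]), -0.2)
    # A higher-level neuron combines the three signals into one concept score.
    return neuron(np.array([outline, triangle, fur]),
                  np.array([0.5, 0.5, 0.5]), -0.3)

score = cat_ear_circuit(np.array([0.9, 0.8, 0.7]))  # strong ear evidence
```

The point of the sketch is only that a "concept" emerges from several cooperating neurons rather than from any single one.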
Until now, explanation techniques have either taken a single-neuron-centric approach, suggesting that one neuron corresponds to, or "sees," one specific concept, or have offered holistic explanations too coarse to capture fine-grained conceptual structure. Against this backdrop, Professor Choi's team moved beyond these limitations and presented a technology that precisely explores and interprets the fine-grained conceptual structure within a model.
figure 1 Illustration of how neurons in consecutive layers are considered to be connected based on Neuron Sensitivity Score and Semantic Flow Score.
The Granular Concept Circuits (GCC) technology developed by the team is a new method for analyzing and visualizing the process of concept formation inside image classification models at the circuit level. GCC automatically tracks circuits by calculating Neuron Sensitivity and Semantic Flow. Neuron Sensitivity indicates how sensitive a specific neuron is to a particular feature, and Semantic Flow is an indicator showing how strongly that feature is transmitted to the next concept. This allows for the step-by-step visualization of how basic features are assembled into higher-level concepts (figure 1).
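The article does not reproduce the paper's exact formulas for the two scores, but their roles can be sketched with a tiny two-layer toy network. Here `sensitivity` is approximated as how much a class score reacts to perturbing one neuron's activation, and `semantic_flow` as a neuron's activation times its connecting weight to a neuron in the next layer; both definitions and names are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # layer-1 weights: 3 inputs -> 4 neurons
W2 = rng.normal(size=(2, 4))  # layer-2 weights: 4 neurons -> 2 class scores

def forward(x):
    h = np.maximum(0.0, W1 @ x)   # layer-1 neuron activations (ReLU)
    return h, W2 @ h              # activations and class logits

def sensitivity(x, neuron_idx, class_idx, eps=1e-4):
    """How much does the class score react to nudging one neuron?
    (Finite-difference stand-in for a gradient-based score.)"""
    h, _ = forward(x)
    h_plus = h.copy()
    h_plus[neuron_idx] += eps
    return ((W2 @ h_plus)[class_idx] - (W2 @ h)[class_idx]) / eps

def semantic_flow(x, from_idx, to_idx):
    """Contribution of layer-1 neuron `from_idx` to layer-2 output `to_idx`:
    activation times connecting weight (an illustrative definition)."""
    h, _ = forward(x)
    return h[from_idx] * W2[to_idx, from_idx]

x = np.array([1.0, 0.5, -0.3])
h, logits = forward(x)
```

With this toy definition, summing a class's semantic flows over all layer-1 neurons recovers that class's logit, which is the sense in which "flow" traces how a feature's signal is transmitted forward.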
Furthermore, the research team verified the significance of the proposed method by confirming that when an extracted circuit was temporarily deactivated, the AI's prediction changed as the concept handled by that circuit disappeared.
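The deactivation check can be sketched as follows: zero out the activations of a circuit's neurons during the forward pass and see whether the predicted class changes. This is a minimal stand-in for the team's experiment, with hand-picked weights chosen so that neuron 0 carries the evidence for class 0.

```python
import numpy as np

W1 = np.array([[2.0, 0.0],          # hidden neuron 0: class-0 evidence
               [0.0, 2.0],          # hidden neuron 1: class-1 evidence
               [1.0, 1.0]])         # hidden neuron 2: shared evidence
W2 = np.array([[3.0, 0.0, 0.1],     # class-0 score
               [0.0, 3.0, 0.1]])    # class-1 score

def predict(x, ablate=()):
    h = np.maximum(0.0, W1 @ x)
    h[list(ablate)] = 0.0           # deactivate the circuit's neurons
    return int(np.argmax(W2 @ h))

x = np.array([1.0, 0.4])            # class-0 evidence dominates
before = predict(x)                 # circuit intact
after = predict(x, ablate=(0,))     # knock out neuron 0
```

When the "circuit" consisting of neuron 0 is silenced, the prediction flips from class 0 to class 1, mirroring the observation that removing a circuit removes the concept it handles.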
This research is evaluated as the first to reveal the actual structure of concept formation within complex deep learning models at the detailed circuit level. Through this, it suggests practical applicability across the entire spectrum of Explainable AI (XAI), including enhancing the transparency of AI decision-making, analyzing misclassification causes, detecting bias, model debugging and structural improvement, and increasing safety and accountability.
figure 2 Visualization of GCC exemplars for different datasets and models. Each node is labeled with its index and accompanied by four highly activating image patches.
This research, in which Ph.D. candidates Dahee Kwon and Sehyun Lee of the KAIST Kim Jaechul Graduate School of AI participated as co-first authors, was presented on October 21 at the International Conference on Computer Vision (ICCV). More details on the work, including code and experimental results, can be found on the project page: https://github.com/daheekwon/GCC