|
Research
My research focuses on building robust, efficient, and interpretable AI systems, spanning generative
models, reinforcement learning, explainable AI, and causal inference.
Currently, I am working on enhancing the causal reasoning capabilities and interpretability of LLMs.
During my PhD, I worked on causal world models, robustness under distribution shift,
and robust and efficient algorithms for causal inference.
|
|
Publications
(C: conference, W: workshop, P: preprint / * equal contribution, † equal advising)
|
|
[C10]
Towards Spatially Consistent Image Generation: On Incorporating Intrinsic Scene Properties into
Diffusion Models
Hyundo Lee,
Suhyung Choi,
Inwoo Hwang†,
Byoung-Tak Zhang†
- AAAI Conference on Artificial Intelligence (AAAI), 2026   (Oral presentation)
Current image generation models trained on large datasets (e.g., Stable Diffusion) often produce
spatially inconsistent and distorted images.
Our idea is to leverage various intrinsic scene properties, such as depth and segmentation maps,
training the model to co-generate them so that it captures the underlying scene structure more faithfully.
As a result, our method produces more natural scene layouts while maintaining the fidelity and
textual alignment of the base model.
[PDF]
|
|
[C9]
From Black-box to Causal-box: Towards Building More Interpretable Models
Inwoo Hwang,
Yushu Pan,
Elias Bareinboim
- Neural Information Processing Systems (NeurIPS), 2025
- NeurIPS Mechanistic Interpretability Workshop, 2025
Can we understand a model's counterfactual predictions under hypothetical "what if" questions?
Standard black-box and concept-based models cannot answer such counterfactual questions about their own
predictions, a fundamental limitation we prove formally.
We introduce the first causal framework for building interpretable-by-design models, revealing a
trade-off between interpretability and predictive accuracy.
[PDF]
|
|
[C8]
PEER pressure: Model-to-Model Regularization for Single Source Domain Generalization
Dong Kyu Cho,
Inwoo Hwang†,
Sanghack Lee†
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
Data augmentation is a popular tool for improving generalization, but it induces fluctuations in
target-domain accuracy, complicating model selection.
We propose a simple model-to-model regularization framework with parameter averaging that reduces
these mid-training OOD fluctuations and achieves strong generalization in single-source domain
generalization tasks.
[PDF]
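For illustration only: a minimal PyTorch sketch of the general parameter-averaging idea behind this line of work, where an exponentially averaged copy of the model acts as a stable "peer" whose predictions regularize the trained model. The function names, the KL consistency term, and the EMA decay below are assumptions of this sketch, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def update_average_model(avg_model, model, decay=0.999):
    """Exponential moving average of parameters (one common form of parameter averaging)."""
    with torch.no_grad():
        for p_avg, p in zip(avg_model.parameters(), model.parameters()):
            p_avg.mul_(decay).add_(p, alpha=1.0 - decay)

def training_step(model, avg_model, x_aug, y, optimizer, lam=1.0):
    """One step: task loss on augmented inputs plus consistency to the averaged model."""
    logits = model(x_aug)
    with torch.no_grad():
        ref_logits = avg_model(x_aug)                       # stable reference predictions
    task_loss = F.cross_entropy(logits, y)
    consistency = F.kl_div(F.log_softmax(logits, dim=-1),
                           F.softmax(ref_logits, dim=-1),
                           reduction="batchmean")           # model-to-model regularizer
    loss = task_loss + lam * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    update_average_model(avg_model, model)
    return loss.item()

# Usage sketch: avg_model = copy.deepcopy(model); then call training_step once per batch.
```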
|
|
[W3]
Locality-aware Concept Bottleneck Model
Sujin Jeon*,
Inwoo Hwang*,
Sanghack Lee†,
Byoung-Tak Zhang†
- NeurIPS Workshop on Unifying Representations in Neural Models, 2024
[PDF]
|
|
[C7]
On Positivity Condition for Causal Inference
Inwoo Hwang*,
Yesong Choe*,
Yeahoon Kwon,
Sanghack Lee
- International Conference on Machine Learning (ICML), 2024
- UAI Workshop on Causal Inference, 2024
Identifying and estimating causal effects is a fundamental problem in many areas of scientific research.
A conventional assumption is strict positivity of the given distribution, which is often violated in real-world scenarios.
We establish rigorous foundations for licensing the use of identification formulas without strict
positivity, a long-standing, critical assumption in causal inference.
[PDF]
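As a quick, standard illustration of where positivity enters (background, not the paper's result): the back-door adjustment formula conditions on events that must have nonzero probability.

```latex
% Back-door adjustment for the causal effect of X on Y with an admissible set Z:
\[
  P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z).
\]
% The term P(y | x, z) is defined only when P(x, z) > 0, so such formulas are
% conventionally licensed under strict positivity (e.g., P(x, z) > 0 for all
% configurations of x and z); this work studies when identification formulas
% remain valid without that assumption.
```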
|
|
[C6]
Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in
Reinforcement Learning
Inwoo Hwang,
Yunhyeok Kwak,
Suhyung Choi,
Byoung-Tak Zhang†,
Sanghack Lee†
- International Conference on Machine Learning (ICML), 2024
- NeurIPS Workshop on Generalization in Planning, 2023
- Finalist, Qualcomm Innovation Fellowship Korea, 2024
Causal world models are key to robust decision-making. However, in the real world, causal relationships are often non-stationary across contexts.
We propose a fine-grained causal dynamics learning framework in which the agent understands and reasons about context-dependent causal relationships,
enabling robust decision-making.
Our approach is principled and practical, with identifiability guarantees for discovering fine-grained causal relationships.
[PDF]
[Code]
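Purely to make the setup concrete, here is one way a context-dependent, quantized dynamics model could look in PyTorch; the architecture, names, and shapes below are guesses for exposition, not the model proposed in the paper.

```python
import torch
import torch.nn as nn

class ContextDependentDynamics(nn.Module):
    """Illustrative sketch (assumed architecture): a quantized context code selects a
    per-context mask over which state/action inputs each next-state variable depends on."""
    def __init__(self, state_dim, action_dim, num_codes=8, hidden=128):
        super().__init__()
        in_dim = state_dim + action_dim
        self.context_enc = nn.Linear(in_dim, hidden)                   # context features
        self.codebook = nn.Parameter(torch.randn(num_codes, hidden))   # discrete context codes
        # One soft mask per code: candidate local causal structure for that context.
        self.mask_logits = nn.Parameter(torch.zeros(num_codes, state_dim, in_dim))
        self.predictor = nn.Linear(in_dim, state_dim)

    def forward(self, s, a):
        x = torch.cat([s, a], dim=-1)                                  # (B, in_dim)
        h = self.context_enc(x)                                        # (B, hidden)
        # Vector-quantization-style assignment: nearest codebook entry per sample
        # (gradient handling for the discrete choice, e.g. straight-through, omitted).
        code = torch.cdist(h, self.codebook).argmin(dim=-1)            # (B,)
        mask = torch.sigmoid(self.mask_logits[code])                   # (B, state_dim, in_dim)
        masked = mask * x.unsqueeze(1)                                 # per-target masked inputs
        # Each next-state dimension is predicted from its own masked view of (s, a).
        return torch.einsum("bdi,di->bd", masked, self.predictor.weight) + self.predictor.bias
```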
|
|
[C5]
Efficient Monte Carlo Tree Search via On-the-Fly State-Conditioned Action Abstraction
Yunhyeok Kwak*,
Inwoo Hwang*,
Dooyoung Kim,
Sanghack Lee†,
Byoung-Tak Zhang†
- Uncertainty in Artificial Intelligence (UAI), 2024   (Oral presentation, 28/744=3.8%)
MCTS is a powerful tool for solving complex sequential decision-making problems.
However, it suffers from the curse of dimensionality when confronted with a vast combinatorial action space.
We propose a state-conditioned action abstraction built on a latent causal world model, which effectively reduces the search space of
MCTS by discovering and leveraging compositional relationships between states and actions.
[PDF]
[Code]
|
|
[C4]
Causal Discovery with Deductive Reasoning: One Less Problem
Jonghwan Kim,
Inwoo Hwang,
Sanghack Lee
- Uncertainty in Artificial Intelligence (UAI), 2024
We propose a simple yet effective plug-in module that corrects unreliable conditional independence (CI)
statements through deductive reasoning with the graphoid axioms,
thereby improving the robustness of constraint-based causal discovery methods.
[PDF]
[Code]
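To give a flavor of the deductive step (a toy sketch, not the paper's module): conditional independence statements can be closed under the semi-graphoid axioms, so a missing or unreliable CI statement can sometimes be recovered from others. The brute-force closure below is an illustrative assumption; practical implementations restrict the search.

```python
from itertools import product

# A CI statement "X is independent of Y given Z" is a triple of frozensets (X, Y, Z).

def deductive_closure(statements):
    """Close a set of CI statements under symmetry, decomposition, weak union, and contraction."""
    known = set(statements)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, y, z) in known:
            new.add((y, x, z))                               # symmetry
            for v in y:
                w = frozenset({v})
                if y - w:
                    new.add((x, y - w, z))                   # decomposition
                    new.add((x, y - w, z | w))               # weak union
        for (x1, y1, z1), (x2, w2, z2) in product(known, known):
            if x1 == x2 and z2 == z1 | y1 and not (w2 & (x1 | y1 | z1)):
                new.add((x1, y1 | w2, z1))                   # contraction
        if not new <= known:
            known |= new
            changed = True
    return known

# Example: from X ⟂ Y | Z and X ⟂ W | {Z, Y}, contraction deduces X ⟂ {Y, W} | Z.
X, Y, W, Z = map(frozenset, [{"X"}, {"Y"}, {"W"}, {"Z"}])
closure = deductive_closure({(X, Y, Z), (X, W, Z | Y)})
print((X, Y | W, Z) in closure)  # True
```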
|
|
[C3]
Learning Geometry-aware Representations by Sketching
Hyundo Lee,
Inwoo Hwang,
Hyunsung Go,
Won-Seok Choi,
Kibeom Kim,
Byoung-Tak Zhang
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Inspired by how humans depict an image by sketching, we propose a novel representation learning framework
that captures geometric information of the scene, such as distance and shape.
[PDF]
[Code]
|
|
[C2]
On Discovery of Local Independence over Continuous Variables via Neural Contextual
Decomposition
Inwoo Hwang,
Yunhyeok Kwak,
Yeon-Ji Song,
Byoung-Tak Zhang†,
Sanghack Lee†
- Conference on Causal Learning and Reasoning (CLeaR), 2023
- NeurIPS Workshop on Causal Inference Challenges in Sequential Decision Making: Bridging Theory and Practice, 2021
Local independence (e.g., context-specific independence) provides a way to understand fine-grained
causal relationships, but it has mostly been studied for discrete variables.
We define and characterize local independence for continuous variables, establish its fundamental
properties, and propose a differentiable method to discover it.
[PDF]
[Code]
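For readers unfamiliar with the discrete notion being generalized, the textbook form of context-specific independence is shown below; the continuous "local" version is described only loosely and is not the paper's formal definition.

```latex
% Context-specific independence (CSI): X is independent of Y given Z in the
% context C = c iff
\[
  P(x \mid y, z, C = c) \;=\; P(x \mid z, C = c)
  \quad \text{whenever } P(y, z, C = c) > 0 .
\]
% Loosely, a continuous analogue asks such an independence to hold locally,
% i.e., on an event C \in E for a region E of the context's domain, rather
% than at a single value c (informal paraphrase).
```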
|
|
[C1]
SelecMix: Debiased Learning by Contradicting-pair Sampling
Inwoo Hwang, Sangjun Lee,
Yunhyeok Kwak,
Seong Joon Oh,
Damien Teney,
Jin-Hwa Kim†,
Byoung-Tak Zhang†
- Neural Information Processing Systems (NeurIPS), 2022
- ICML Workshop on Spurious Correlations, Invariance, and Stability, 2022
Neural networks trained with empirical risk minimization (ERM) on a biased dataset, where the labels
are strongly correlated with undesirable features, often learn unintended decision rules.
We propose a novel debiasing method that applies mixup to selected pairs of examples, utilizing
a contrastive loss designed to amplify reliance on biased features.
[PDF]
[Code]
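A rough PyTorch sketch of the mixup-on-selected-pairs idea, covering only pairs that share a label but have dissimilar biased features; the `bias_sim` similarity matrix is assumed to come from an auxiliary bias-amplified model (not computed here), and the names and selection rule are simplifications for illustration, not the method as published.

```python
import torch

def mixup_contradicting_pairs(x, y, bias_sim, alpha=1.0):
    """Mix each example with a same-label partner whose biased features are most dissimilar.
    x: (n, ...) inputs, y: (n,) labels, bias_sim: (n, n) similarity of biased features.
    Assumes each class appears at least twice in the batch."""
    n = x.size(0)
    same_label = y.unsqueeze(0) == y.unsqueeze(1)             # (n, n) same-class mask
    score = torch.where(same_label, -bias_sim, torch.full_like(bias_sim, float("-inf")))
    score.fill_diagonal_(float("-inf"))                       # never pair an example with itself
    partner = score.argmax(dim=1)                             # most bias-dissimilar same-class example
    lam = torch.distributions.Beta(alpha, alpha).sample((n,)).to(x.device)
    lam = lam.view(n, *([1] * (x.dim() - 1)))
    x_mix = lam * x + (1 - lam) * x[partner]                  # mixup in input space
    return x_mix, y                                           # labels agree within each pair
```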
|
|
[W2]
On the Importance of Critical Period in Multi-stage Reinforcement Learning
Junseok Park,
Inwoo Hwang,
Min Whoo Lee, Hyunseok Oh, Minsu Lee, Youngki Lee, Byoung-Tak Zhang
- ICML Workshop on Complex Feedback in Online Learning, 2022
[PDF]
|
|
[W1]
Improving Robustness to Texture Bias via Shape-focused Augmentation
Sangjun Lee,
Inwoo Hwang,
Gi-Cheon Kang,
Byoung-Tak Zhang
- CVPR Workshop on Human-centered Intelligent Services: Safety and Trustworthy, 2022
[PDF]
|
Education
- (2019.03 - 2025.02) Ph.D. in Computer Science and Engineering, Seoul National University
- (2016.03 - 2018.02) M.S. in School of Computing, KAIST
- (2010.02 - 2016.02) B.S. in Mathematical Science, KAIST
- (2007.02 - 2010.02) High school, Korea Science Academy of KAIST
|
Work Experience
- (Mar 2025 - present) Postdoctoral research scientist, Columbia University (host: Elias Bareinboim)
- (Oct 2024 - Feb 2025) Visiting scholar, Columbia University (host: Elias Bareinboim)
- (Sep 2021 - May 2022) External collaborator, Naver AI (host: Jin-Hwa Kim)
- (Aug 2012 - May 2014) Military service, Korean Augmentation to the US Army (KATUSA), 8th US Army, 65th Medical Brigade, Camp Humphreys, Republic of Korea
|
Academic Services
- Conference Reviewer: NeurIPS, ICLR, ICML, AAAI, UAI, AISTATS, CLeaR, CVPR, ICCV, ECCV, ICRA, ECAI, WACV
- Journal Reviewer: IEEE Trans. Multimedia
- Workshop Reviewer
- NeurIPS 2024 Workshop on Causality and Large Models (CaLM)
- NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps)
- RLC 2024 Workshop on Reinforcement Learning Beyond Rewards (RLBRew)
- NeurIPS 2023 Workshop on Causal Representation Learning (CRL)
- ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability (SCIS)
|
Invited Talks
- (Jun 2024) IITP Workshop
- (Sep 2023) IITP Workshop
- (May 2023) SNU AIIS Retreat
- (Dec 2022) Korea Software Congress
- (Nov 2022) Kakao Enterprise TechTalk
- (Nov 2022) SNU AIIS Retreat
- (Oct 2022) NAVER TechTalk
|
Honors and Awards
- Finalist, Qualcomm Innovation Fellowship, 2024
- Outstanding Reviewer, ECCV 2024
- Recipient, Youlchon AI Star Scholarship, 2024
- Recipient, UAI Scholarship, 2024
- Recipient, NAVER Ph.D. Fellowship, 2022
- Recipient, NeurIPS Scholarship, 2022
- Recipient, BK21 Plus Scholarship, Republic of Korea
- Recipient, National Science and Technology Scholarship, Korea Student Aid Foundation
- Gold Award, The Korean Mathematical Olympiad (KMO)
|
Teaching Experience
- [CS204] Discrete Mathematics, KAIST, 2016S - 2017F
|
The source for this website is available here.
|
|