Inwoo Hwang

I am a final-year Ph.D. student in Computer Science at Seoul National University, advised by Byoung-Tak Zhang and Sanghack Lee. I am currently a visiting Ph.D. student at Columbia University, hosted by Elias Bareinboim. Prior to joining the Ph.D. program, I completed my master's degree in Computer Science at KAIST and earned my bachelor's degree in Mathematical Sciences, also from KAIST.

CV  /  Email  /  Google Scholar  /  Github  /  Twitter

Research

My research centers on building trustworthy AI systems whose decision making is robust and interpretable, spanning representation learning, reinforcement learning, and causal inference. In particular, my recent work develops robust and efficient algorithms for causal inference and causal discovery, with applications to building reliable machine learning models. I am also interested in discovering and utilizing useful inductive biases to better align model decisions with human reasoning.

Publications

(* equal contribution, equal advising)

Locality-aware Concept Bottleneck Model
Sujin Jeon*, Inwoo Hwang*, Sanghack Lee, Byoung-Tak Zhang
  • NeurIPS Workshop on Unifying Representations in Neural Models, 2024
  • [PDF]

On Positivity Condition for Causal Inference
Inwoo Hwang*, Yesong Choe*, Yeahoon Kwon, Sanghack Lee
  • International Conference on Machine Learning (ICML), 2024
  • UAI Workshop on Causal Inference, 2024
  • We establish rigorous foundations for licensing the use of identification formulas without strict positivity, a long-standing critical assumption in causal inference.
  • [PDF]

Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning
Inwoo Hwang, Yunhyeok Kwak, Suhyung Choi, Byoung-Tak Zhang, Sanghack Lee
  • International Conference on Machine Learning (ICML), 2024
  • NeurIPS Workshop on Generalization in Planning, 2023
  • Finalist, Qualcomm Innovation Fellowship Korea, 2024
  • We propose a principled and practical approach to discovering fine-grained causal relationships with identifiability guarantees for robust decision-making.
  • [PDF] [Code]

Efficient Monte Carlo Tree Search via On-the-Fly State-Conditioned Action Abstraction
Yunhyeok Kwak*, Inwoo Hwang*, Dooyoung Kim, Sanghack Lee, Byoung-Tak Zhang
  • Uncertainty in Artificial Intelligence (UAI), 2024   (Oral, 28/744 = 3.8%)
  • We propose a state-conditioned action abstraction that effectively reduces the search space of MCTS under a vast combinatorial action space by harnessing compositional relationships between the state and sub-actions.
  • [PDF] [Code]

Causal Discovery with Deductive Reasoning: One Less Problem
Jonghwan Kim, Inwoo Hwang, Sanghack Lee
  • Uncertainty in Artificial Intelligence (UAI), 2024
  • We propose a simple yet effective plug-in module that corrects unreliable conditional independence (CI) statements through deductive reasoning with the graphoid axioms, thereby improving the robustness of constraint-based causal discovery methods.
  • [PDF] [Code]

Learning Geometry-aware Representations by Sketching
Hyundo Lee, Inwoo Hwang, Hyunsung Go, Won-Seok Choi, Kibeom Kim, Byoung-Tak Zhang
  • IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
  • Inspired by how humans depict a scene by sketching, we propose a novel representation learning framework that captures geometric information of the scene, such as distance and shape.
  • [PDF] [Code]

On Discovery of Local Independence over Continuous Variables via Neural Contextual Decomposition
Inwoo Hwang, Yunhyeok Kwak, Yeon-Ji Song, Byoung-Tak Zhang, Sanghack Lee
  • Conference on Causal Learning and Reasoning (CLeaR), 2023
  • NeurIPS Workshop on Causal Inference Challenges in Sequential Decision Making: Bridging Theory and Practice, 2021
  • Local independence (e.g., context-specific independence) provides a way to understand fine-grained causal relationships, but it has mostly been studied for discrete variables. We define and characterize local independence over continuous variables, establish its fundamental properties, and propose a differentiable method to discover it.
  • [PDF] [Code]

SelecMix: Debiased Learning by Contradicting-pair Sampling
Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, Byoung-Tak Zhang
  • Neural Information Processing Systems (NeurIPS), 2022
  • ICML Workshop on Spurious Correlations, Invariance, and Stability, 2022
  • Neural networks trained with ERM on a biased dataset, where labels are strongly correlated with undesirable features, often learn unintended decision rules. We propose a novel debiasing method that applies mixup to selected pairs of examples, utilizing a contrastive loss designed to amplify reliance on biased features.
  • [PDF] [Code]

On the Importance of Critical Period in Multi-stage Reinforcement Learning
Junseok Park, Inwoo Hwang, Min Whoo Lee, Hyunseok Oh, Minsu Lee, Youngki Lee, Byoung-Tak Zhang
  • ICML Workshop on Complex Feedback in Online Learning, 2022
  • [PDF]

Improving Robustness to Texture Bias via Shape-focused Augmentation
Sangjun Lee, Inwoo Hwang, Gi-Cheon Kang, Byoung-Tak Zhang
  • CVPR Workshop on Human-centered Intelligent Services: Safety and Trustworthy, 2022
  • [PDF]

Education
  • (2019.03 - current) Ph.D. in Computer Science and Engineering, Seoul National University
  • (2016.03 - 2018.02) M.S. in School of Computing, KAIST
  • (2010.02 - 2016.02) B.S. in Mathematical Sciences, KAIST
  • (2007.02 - 2010.02) High school, Korea Science Academy of KAIST

Work Experience
  • (Oct 2024 - current) Visiting scholar, Columbia University (host: Elias Bareinboim)
  • (Sep 2021 - May 2022) External collaborator, NAVER AI (host: Jin-Hwa Kim)
  • (Aug 2012 - May 2014) Mandatory military service, Korean Augmentation to the United States Army (KATUSA)

Academic Services
  • Conference Reviewer: NeurIPS (2023-2024), ICLR (2024-2025), ICML (2024), AAAI (2025), AISTATS (2024-2025), CLeaR (2024-2025), CVPR (2023-2024), ICCV (2023), ECCV (2024), ICRA (2024-2025)
  • Journal Reviewer: IEEE Transactions on Multimedia
  • Workshop Reviewer:
    • NeurIPS 2024 Workshop on Causality and Large Models (CaLM)
    • NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps)
    • RLC 2024 Workshop on Reinforcement Learning Beyond Rewards (RLBRew)
    • NeurIPS 2023 Workshop on Causal Representation Learning (CRL)
    • ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability (SCIS)

Invited Talks
  • (Jun 2024) IITP Workshop
  • (Sep 2023) IITP Workshop
  • (May 2023) SNU AIIS Retreat
  • (Dec 2022) Korea Software Congress
  • (Nov 2022) Kakao Enterprise TechTalk
  • (Nov 2022) SNU AIIS Retreat
  • (Oct 2022) NAVER TechTalk

Honors and Awards
  • Finalist, Qualcomm Innovation Fellowship Korea, 2024
  • Outstanding Reviewer, ECCV 2024
  • Recipient, Youlchon AI Star Scholarship, 2024
  • Recipient, UAI Scholarship, 2024
  • Recipient, NAVER PhD Fellowship, 2022
  • Recipient, NeurIPS Scholarship, 2022
  • Recipient, BK21 Plus Scholarship, Republic of Korea
  • Recipient, National Science and Technology Scholarship, Korea Student Aid Foundation
  • Gold Award, The Korean Mathematical Olympiad (KMO)

Teaching Experience
  • [CS204] Discrete Mathematics, KAIST, Spring 2016 - Fall 2017

The source code for this website can be found here.