About
Hello, I am a Ph.D. candidate in the Graduate School of Artificial Intelligence at Seoul National University and a member of the IDS Lab. My research focuses on advancing Natural Language Processing (NLP) by developing robust models that recognize their own limits. By equipping LLMs with abstention capabilities, I aim to build systems you can truly rely on. If our research interests overlap, or if you'd like to learn more about my previous work, please don't hesitate to connect with me. 😄 I would be delighted to explore potential collaborations! 🎉
Education
- 2020.09 ~ present : M.S. and Ph.D. (integrated) in the Graduate School of Artificial Intelligence, Seoul National University
- Advisor : Prof. Sang-goo Lee
- 2014.03 ~ 2020.08 : B.S. in Computer Science and Engineering, Soongsil University
Work Experience
- 2020 : Software engineering intern at Tomato System
- 2019 : Software engineering intern at Naver Corp.
Recent Publications (Google Scholar)
- Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs
  Youna Kim, Minjoon Choi, Sungmin Cho, Hyuhng Joon Kim, Sang-goo Lee, Taeuk Kim
  Under review
- When to Speak, When to Abstain: Contrastive Decoding with Abstention
  Hyuhng Joon Kim, Youna Kim, Sang-goo Lee, Taeuk Kim
  The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)
- Aligning Language Models to Explicitly Handle Ambiguity
  Hyuhng Joon Kim, Youna Kim, Cheonbok Park, Junyeob Kim, Choonghyun Park, Kang Min Yoo, Sang-goo Lee, Taeuk Kim
  The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
- Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts
  Youna Kim, Hyuhng Joon Kim, Cheonbok Park, Choonghyun Park, Hyunsoo Cho, Junyeob Kim, Kang Min Yoo, Sang-goo Lee, Taeuk Kim
  Findings of the Association for Computational Linguistics: EMNLP 2024 (Findings of EMNLP 2024)
- Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
  Hyuhng Joon Kim, Hyunsoo Cho, Sang-Woo Lee, Junyeob Kim, Choonghyun Park, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
  Findings of the Association for Computational Linguistics: EMNLP 2023 (Findings of EMNLP 2023)
- Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning
  Hyunsoo Cho, Choonghyun Park, Junyeop Kim, Hyuhng Joon Kim, Kang Min Yoo, Sang-goo Lee
  The 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
- Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners
  Hyunsoo Cho, Hyuhng Joon Kim, Jun Yeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
  Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023)
- Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator
  Hyuhng Joon Kim, Hyunsoo Cho, Jun Yeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee
  Workshop on Large-scale Pre-trained Language Models 2022 (NAACL Workshop)
- Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
  Kang Min Yoo, Jun Yeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim
  The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)