Sung-Yub Kim
Staff Engineer @ AI Center, Samsung Electronics
sungyub.kim@mli.kaist.ac.kr
Seoul, South Korea
I’m a researcher on the LLM team @ AI Center, Samsung Electronics. I received my Ph.D. from KAIST, where I studied machine learning and deep learning theory.
Research Interests
- Post-hoc Analysis & Adaptation of Foundation Models: Developing methods to analyze and adapt pre-trained large language models (LLMs) and vision-language models (VLMs) for downstream tasks without full retraining.
- Explainability and Interpretability: Understanding the internal mechanisms of foundation models to improve transparency and trustworthiness.
- Generalization via Loss Landscape: Investigating the generalization properties of deep neural networks through the lens of loss landscape geometry and optimization dynamics.
- Efficient ML Systems: Exploring distributed training, sequence parallelism, and memory-efficient attention mechanisms for scaling large models.
Professional Timeline
| Period | Role | Organization |
|---|---|---|
| 2024 – Present | Staff Engineer | AI Center, Samsung Electronics |
| 2019 – 2024 | Ph.D. Student (Machine Learning) | KAIST |
Collaboration
I’m always open to research collaborations, especially in the areas of LLM efficiency, interpretability, and distributed training. Feel free to reach out via email or connect on LinkedIn.
For more details, see my CV.
News
| Date | Update |
|---|---|
| Jun 04, 2025 | New blog series on FlashAttention (v1 → v2 → v3) published! Check out the series page. |
| Jun 02, 2025 | Blog series on Sequence Parallelism completed: Ring Attention → Blockwise RingAttention → DeepSpeed Ulysses → USP. Read the series. |
| May 30, 2025 | New blog series on Distributed Training started, covering Tensor Parallelism, Pipeline Parallelism, and Activation Recomputation. Read more. |