Hi, I'm Taegyeong Lee. I'm passionate about novel research at the intersection of multiple modalities, especially generating images or videos from audio and text. I enjoy exploring research that is simple yet effective, leveraging multimodal and generative models to make a strong impact in the real world.
I'm also deeply interested in quantitative trading, and I'm excited about the potential of applying multimodal large language models and generative models to the field of quant finance.
I am currently working as an AI researcher at FnGuide, focusing on LLMs and RAG (Retrieval-Augmented Generation). Previously, I earned my Master's degree from UNIST AIGS, interned at ETRI, and completed the 8th Software Maestro program. I also served as a software developer in the Promotion Data Management Division at the Republic of Korea Army Headquarters. I hold a Bachelor's degree in Computer Engineering from Pukyong National University.
[NEWS] I have been invited to serve as a Reviewer for CVPR 2026 and ICLR 2026 (previously ICPR 2024 and ICLR 2025).
[NEWS] Our paper on prompt guard for LLM safety has been accepted to an ACL 2025 Workshop (selected for a lightning talk presentation).
[NEWS] I have started working as an AI Researcher at FnGuide.
[NEWS] Our paper on knowledge distillation has been accepted to a CVPR 2025 Workshop.
[NEWS] Our paper on sound-to-image generation has been accepted to ICCV 2023.
