
Namgyu Ho
@itsnamgyu
Efficient reasoning in LLMs. Student researcher @GoogleDeepMind, PhD student at osi.kaist.ac.kr @kaist_ai | Previously @LG_AI_Research.
https://scholar.google.com/citations?user=4vOf7N8AAAAJ&hl=en&oi=ao
260 Tweets
1.1K Followers
447 Following

🚨 New Paper co-led with byeongguk jeon 🚨 Q. Can we adapt Language Models, trained to predict the next token, to reason at the sentence level? I think LMs operating at a higher level of abstraction would be a promising path towards advancing their reasoning, and I am excited to share our …
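To make the token-level vs. sentence-level contrast concrete, here is a toy sketch (not the paper's method; all module names, the mean-pooled sentence encoder, and the MSE objective are illustrative assumptions): a standard LM predicts a distribution over the next token, whereas a sentence-level model pools each sentence into a single vector and predicts the next sentence embedding.

```python
# Toy illustration only (assumed setup, not the paper's approach):
# (1) a next-token LM vs. (2) a model that predicts the next *sentence* vector.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, d_model = 100, 32

class NextTokenLM(nn.Module):
    """Standard LM: emits logits over the next token at every position."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):               # (B, T)
        h, _ = self.rnn(self.embed(token_ids))  # (B, T, d)
        return self.head(h)                     # (B, T, vocab) logits

class NextSentenceModel(nn.Module):
    """Sentence-level model: operates on one vector per sentence and
    regresses the embedding of the next sentence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, d_model)

    def encode_sentence(self, token_ids):         # (B, T) -> (B, d)
        # Mean-pooling as a stand-in sentence encoder (illustrative choice).
        return self.embed(token_ids).mean(dim=1)

    def forward(self, sent_vecs):                 # (B, S, d)
        h, _ = self.rnn(sent_vecs)                # (B, S, d)
        return self.head(h)                       # predicted next-sentence vectors

# Usage: 4 random "sentences" of 6 tokens each; predict each next sentence vector.
tokens = torch.randint(0, vocab_size, (1, 4, 6))
model = NextSentenceModel()
sent_vecs = torch.stack([model.encode_sentence(tokens[:, i]) for i in range(4)], dim=1)
pred = model(sent_vecs[:, :-1])                   # predict sentences 2..4 from 1..3
loss = nn.functional.mse_loss(pred, sent_vecs[:, 1:])
print(loss.item())
```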



