Hoyeon Chang (@hoyeon_chang)'s Twitter Profile
Hoyeon Chang

@hoyeon_chang

PhD student at @kaist_ai Language & Knowledge Lab. Passionate about understanding intelligent systems. Also a jazz pianist.

ID: 1456167823114313729

Website: http://duemoo.github.io/about · Joined: 04-11-2021 07:54:08

146 Tweets

760 Followers

1.1K Following

Hoyeon Chang (@hoyeon_chang):

I'll be presenting my work on factual knowledge acquisition at #NeurIPS2024! 📅 Main Poster Session 2 East, Dec 11th 16:30-19:30

Seongyun Lee (@sylee_ai):


🎉 Excited to share that our paper "How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?" has been accepted to #ICLR2025!

🖼 Vision-Language Adaptation empowers LLMs to process visual information, but how does it impact their safety?

🛡 And what about
Luiz Pessoa (@pessoabrain):

๐—–๐—ฎ๐˜‚๐˜€๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐—บ๐—ฎ๐—ป๐˜† ๐˜๐—ต๐—ถ๐—ป๐—ด๐˜€ Always thought causation connected very closely to the notion of modularity in general but specifically in neuroscience. Finally found some discussion by Cartwright!

๐—–๐—ฎ๐˜‚๐˜€๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐—บ๐—ฎ๐—ป๐˜† ๐˜๐—ต๐—ถ๐—ป๐—ด๐˜€ 
Always thought causation connected very closely to the notion of modularity in general but specifically in neuroscience. Finally found some  discussion by Cartwright!
Andreas Tolias Lab @ Stanford University (@atoliaslab):


After 7 years, thrilled to finally share our #MICrONS functional connectomics results!

We recorded activity from ~75K neurons in visual cortex in a single mouse, then mapped its wiring using electron microscopy. To systematically characterize neuron function, we built the first
Dongkeun Yoon (@dongkeun_yoon):

๐Ÿ™ LLMs are overconfident even when they are dead wrong. ๐Ÿง What about reasoning models? Can they actually tell us โ€œMy answer is only 60% likely to be correctโ€? โ—Our paper suggests that they can! Through extensive analysis, we investigate what enables this emergent ability.

๐Ÿ™ LLMs are overconfident even when they are dead wrong.

๐Ÿง What about reasoning models? Can they actually tell us โ€œMy answer is only 60% likely to be correctโ€?

โ—Our paper suggests that they can! Through extensive analysis, we investigate what enables this emergent ability.
Hyeonbin Hwang (@ronalhwang):


🚨 New Paper co-led with byeongguk jeon 🚨

Q. Can we adapt Language Models, trained to predict the next token, to reason at the sentence level?

I think LMs operating at a higher level of abstraction would be a promising path toward advancing their reasoning, and I am excited to share our
Yunjae Won (@yunjae_won_):


[1/6] Ever wondered why Direct Preference Optimization is so effective for aligning LLMs? 🤔
Our new paper dives deep into the theory behind DPO's success, through the lens of information gain.

Paper: "Differential Information: An Information-Theoretic Perspective on Preference
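For context on the thread, the standard DPO objective it analyzes can be sketched as follows. This is a minimal illustration of the widely used DPO loss formulation, not the paper's code; the function and argument names are illustrative.

```python
# DPO loss sketch: -log sigmoid(beta * [(policy log-ratio of chosen vs.
# rejected) - (reference-model log-ratio of chosen vs. rejected)]).
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss from sequence log-probabilities."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)); loss shrinks as the policy prefers
    # the chosen response more than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy and reference agree (margin 0), the loss is log 2; widening the policy's preference for the chosen response lowers it, which is the "information gain" the thread examines.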
Raphaël Millière (@raphaelmilliere):

Transformer-based neural networks achieve impressive performance on coding, math & reasoning tasks that require keeping track of variables and their values. But how can they do that without explicit memory? 📄 Our new ICML paper investigates this in a synthetic setting! 🧵 1/13

Sohee Yang (@soheeyang_):


🚨 New Paper 🧵
How effectively do reasoning models reevaluate their thoughts? We find that:
- Models excel at identifying unhelpful thoughts but struggle to recover from them
- Smaller models can be more robust
- Self-reevaluation ability is far from true meta-cognitive awareness