Jin Zhou (@jinpzhou)'s Twitter Profile
Jin Zhou

@jinpzhou

Computer Science PhD at Cornell

ID: 1584374004017758208

Link: https://www.linkedin.com/in/jinpeng-zhou/ · Joined: 24-10-2022 02:39:43

8 Tweets

96 Followers

88 Following

Albert Jiang (@albertqjiang)'s Twitter Profile Photo

Large language models can write informal proofs, translate them into formal ones, and achieve SoTA performance in proving competition-level maths problems!

LM-generated informal proofs are sometimes more useful than the human ground truth 🤯

Preprint: arxiv.org/abs/2210.12283
🧵
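
The pipeline the tweet describes (draft an informal proof, map it to a formal proof sketch, then close the remaining gaps with an automated prover) can be sketched as follows. This is a hypothetical skeleton with stand-in stub functions, not the paper's actual API; the function names `draft_informal_proof`, `sketch_formal_proof`, and `close_gap` are illustrative placeholders.

```python
# Hypothetical skeleton of the informal-to-formal proving pipeline:
# each stage is a stub standing in for an LLM or prover call.

def draft_informal_proof(statement: str) -> str:
    """Stand-in for an LLM call that writes an informal proof."""
    return f"informal proof of: {statement}"

def sketch_formal_proof(informal: str) -> list[str]:
    """Stand-in for autoformalization: map the informal proof
    to a formal sketch, i.e. a list of formal sub-goals."""
    return [f"subgoal derived from <{informal}>"]

def close_gap(subgoal: str) -> bool:
    """Stand-in for an automated prover attempting one sub-goal."""
    return True

def prove(statement: str) -> bool:
    """The proof succeeds if every sub-goal of the sketch is closed."""
    informal = draft_informal_proof(statement)
    return all(close_gap(g) for g in sketch_formal_proof(informal))

print(prove("a + b = b + a"))  # True with these stubs
```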
Keiran Paster (@keirp1)'s Twitter Profile Photo

Can large language models write prompts…for themselves? Yes, at a human-level (!) if they are given the ability to experiment and see what works. arxiv.org/abs/2211.01910 with Yongchao Zhou, Andrei Muresanu, Ziwen, Silviu Pitis, Harris Chan, and Jimmy Ba (1/7)
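
The "experiment and see what works" loop can be sketched as: propose candidate instructions, score each by how well it makes a model solve held-out examples, and keep the best. Below is a toy, hypothetical version; the rule-based `model` function is a trivial stand-in for an actual LLM, and the candidate prompts and dev set are made up for illustration.

```python
# Toy sketch of prompt search by evaluation: pick the candidate
# instruction that maximizes accuracy on a small labeled dev set.

def model(prompt: str, x: str) -> str:
    # Stand-in "LLM": only behaves correctly when told to reverse.
    return x[::-1] if "reverse" in prompt.lower() else x

candidates = ["Repeat the input.", "Reverse the input string.", "Shout it."]
dev_set = [("abc", "cba"), ("hello", "olleh")]

def score(prompt: str) -> float:
    """Fraction of dev examples the prompted model gets right."""
    return sum(model(prompt, x) == y for x, y in dev_set) / len(dev_set)

best = max(candidates, key=score)
print(best)  # "Reverse the input string." under this toy scorer
```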

Jin Zhou (@jinpzhou)'s Twitter Profile Photo

Strongly agree with the points about synthetic data + verification by formal tools. Check out our recent paper, accepted to ICLR 2024, which takes a small step in that direction! openreview.net/forum?id=V5tdi…

Owen Oertell (@owenoertell)'s Twitter Profile Photo

Tired of over-optimized generations that stray too far from the base distribution?
We present SLCD: Supervised Learning based Controllable Diffusion, which (provably) solves the KL constrained reward maximization problem for diffusion through supervised learning! (1/n)
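
In standard notation (not necessarily the paper's), the KL-constrained reward maximization problem the tweet refers to seeks a generator $\pi$ that maximizes reward while staying close in KL to the base model $\pi_{\mathrm{ref}}$, with trade-off coefficient $\beta$; its optimum has a well-known closed form:

```latex
% KL-regularized reward maximization and its closed-form optimum
\max_{\pi}\;\mathbb{E}_{x \sim \pi}\big[r(x)\big]
  \;-\; \beta\,\mathrm{KL}\big(\pi \,\|\, \pi_{\mathrm{ref}}\big),
\qquad
\pi^{*}(x) \;\propto\; \pi_{\mathrm{ref}}(x)\,\exp\!\big(r(x)/\beta\big)
```
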
Kaiwen Wang (@kaiwenw_ai)'s Twitter Profile Photo

I’m presenting two papers on value-based RL for post-training & reasoning on Friday at the AI for Math Workshop at #ICML2025! 1️⃣ Q#: lays theoretical foundations for value-based RL for post-training LMs; 2️⃣ VGS: practical value-guided search scaled up for long CoT reasoning. 🧵👇
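
Value-guided search, in spirit, is best-first search over partial chains of thought: expand candidate next steps and always continue the chain a value function scores highest. The sketch below is a hypothetical toy, not the VGS paper's actual components; the step proposer and value function are made-up stand-ins.

```python
# Toy best-first search over reasoning steps, ranked by a value function.
import heapq

def propose_steps(chain):
    """Stand-in step generator: extend the chain with one of two tokens."""
    return [chain + [t] for t in ("a", "b")]

def value(chain):
    """Stand-in value function: prefer chains with more 'a' tokens."""
    return chain.count("a")

def value_guided_search(max_len=3):
    # Min-heap over negated values, so the highest-value chain pops first.
    heap = [(0, [])]
    while heap:
        neg_v, chain = heapq.heappop(heap)
        if len(chain) == max_len:
            return chain
        for nxt in propose_steps(chain):
            heapq.heappush(heap, (-value(nxt), nxt))

print(value_guided_search())  # ['a', 'a', 'a'] under this toy value
```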

Jin Zhou (@jinpzhou)'s Twitter Profile Photo

This captures something fundamental we're seeing in AI right now! The shift from just scaling pre-training to scaling test-time compute is huge. Our Q# + VGS work shows how value-based methods can guide models through the vast implicit graphs of reasoning possibilities.