Conference on Parsimony and Learning (CPAL) (@cpalconf)'s Twitter Profile
Conference on Parsimony and Learning (CPAL)

@cpalconf

CPAL is a new annual research conference focused on the parsimonious, low dimensional structures that prevail in ML, signal processing, optimization, and beyond


Link: https://cpal.cc/ · Joined: 18-05-2023 04:52:36

247 Tweets

888 Followers

1.1K Following

Yi Ma (@yimatweets)'s Twitter Profile Photo

I gave a lecture on Pursuing the Nature of Intelligence yesterday and also attended a couple of tutorials. I have already learned more than at any other conference I have attended in recent years… CPAL serves as the score function for the ever more diffuse mega AI conferences…

Conference on Parsimony and Learning (CPAL) (@cpalconf)'s Twitter Profile Photo

DAY 2 starting off with a keynote seminar by Richard Baraniuk talking about Continuous Piecewise Affine Layers or CPAL! "Put that on the CPAL twitter feed" Richard said. Done ✅

Yu-Xiang Wang (@yuxiangw_cs)'s Twitter Profile Photo

Super excited to be at Stanford for #CPAL2025 this week. Very fitting for Richard Baraniuk to kick start the exceptional three-day program by telling us how Continuous Piecewise Affine Layers and splines play a central role in modern deep learning.

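The CPAL pun in the keynote refers to the standard observation that ReLU networks compute continuous piecewise-affine maps (splines). As a rough illustration of that basic fact only, not of the keynote's results, the sketch below extracts the affine piece a tiny random ReLU network uses at a given input; the architecture, sizes, and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network with random weights (purely illustrative).
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((3, 16)), rng.standard_normal(3)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine_piece(x):
    """Return (A, b) such that the network equals A @ x + b on x's activation region."""
    mask = (W1 @ x + b1 > 0).astype(float)   # which ReLUs are active at x
    A = W2 @ (mask[:, None] * W1)            # composed linear part
    b = W2 @ (mask * b1) + b2                # composed offset
    return A, b

x = rng.standard_normal(4)
A, b = local_affine_piece(x)
print(np.allclose(forward(x), A @ x + b))    # True: the network is affine on this region
```
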
Yani Ioannou @ ICLR 2025 ✈️ (@yanii)'s Twitter Profile Photo

Attending Conference on Parsimony and Learning (CPAL) this week on the beautiful Stanford campus! Will be presenting our work with Mohammed Adnan, Rohan Jain, and Ekansh Sharma in today's Recent Spotlight poster session, showing how to train Lottery Tickets from random initialization!

Surya Ganguli (@suryaganguli)'s Twitter Profile Photo

Looking forward to talking today at Stanford's cpal.cc on parsimonious learning. Talk title: "Learning, creativity and reasoning: of monkeys and machines."

Yu-Xiang Wang (@yuxiangw_cs)'s Twitter Profile Photo

Why are diffusion models creative? #CPAL2025 invited speaker Surya Ganguli gives a compelling theory: they are creative because they mix and match local patches from many different training images.

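To make the patch-mosaic intuition concrete, here is a heavily simplified toy illustration (not Ganguli's actual model or analysis): assemble a "new" sample by copying each local patch from a different, randomly chosen training image, so every patch is training-like while the whole image is novel. All shapes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((10, 32, 32))   # ten fake "training images"
patch = 8

new = np.zeros((32, 32))
for i in range(0, 32, patch):
    for j in range(0, 32, patch):
        src = rng.integers(len(train))   # pick a random training image for this patch
        new[i:i + patch, j:j + patch] = train[src, i:i + patch, j:j + patch]

# Each 8x8 patch appears verbatim in some training image, but the composed
# image matches none of them as a whole.
print(any(np.array_equal(new, t) for t in train))   # almost surely False
```
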
Yu-Xiang Wang (@yuxiangw_cs)'s Twitter Profile Photo

#CPAL2025 invited speaker Yuandong Tian shows that the attention map first gets *sparser*, then *denser*, as we train a nonlinear attention mechanism under gradient flow, and the phenomenon shows up in experiments too.

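As a rough, hedged sketch of how one might watch this kind of sparsification (this is not Tian's setting, which analyzes a specific nonlinear attention model under gradient flow), the toy PyTorch loop below trains a single softmax-attention head on a synthetic task and logs the entropy of its attention map, with lower entropy meaning a sparser map; all sizes, the task, and the optimizer are arbitrary assumptions.

```python
import torch

torch.manual_seed(0)
T, d = 8, 16                                   # sequence length, embedding dim
Wq = torch.nn.Parameter(torch.randn(d, d) / d ** 0.5)
Wk = torch.nn.Parameter(torch.randn(d, d) / d ** 0.5)
Wv = torch.nn.Parameter(torch.randn(d, d) / d ** 0.5)
opt = torch.optim.SGD([Wq, Wk, Wv], lr=0.5)

# Synthetic task: every position's target is a copy of token 0, so the head
# is rewarded for concentrating (sparsifying) its attention on that token.
X = torch.randn(256, T, d)
Y = X[:, :1, :].expand(-1, T, -1).clone()

for step in range(201):
    A = torch.softmax((X @ Wq) @ (X @ Wk).transpose(-1, -2) / d ** 0.5, dim=-1)
    out = A @ (X @ Wv)
    loss = ((out - Y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        entropy = -(A * (A + 1e-9).log()).sum(-1).mean()   # sparsity proxy
        print(f"step {step:3d}  loss {loss.item():.4f}  attention entropy {entropy.item():.3f}")
```
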
Yi Ma (@yimatweets)'s Twitter Profile Photo

There are only two types of generic linear/matrix operations that truly scale: one is low-rank and the other is sparse.
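
A back-of-the-envelope sketch of why those two structures scale (all sizes below are arbitrary): a dense n-by-n matrix-vector product costs about n² multiplies, while a rank-r factorization costs about 2nr and a sparse matrix costs about one multiply per stored nonzero.

```python
import numpy as np
import scipy.sparse as sp

n, r = 4096, 16
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Low-rank: never form M = U @ V.T; apply it as U @ (V.T @ x).
U, V = rng.standard_normal((n, r)), rng.standard_normal((n, r))
y_lowrank = U @ (V.T @ x)                       # ~2*n*r multiplies

# Sparse: store and multiply only the nonzero entries.
S = sp.random(n, n, density=1e-3, format="csr", random_state=0)
y_sparse = S @ x                                # ~nnz multiplies

print(f"dense matvec:   ~{n * n:,} multiplies")
print(f"rank-{r} matvec: ~{2 * n * r:,} multiplies")
print(f"sparse matvec:  ~{S.nnz:,} multiplies")
```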

Yi Ma (@yimatweets)'s Twitter Profile Photo

I came to realize that, for studying intelligence, the only "inductive bias" that one is allowed to assume is that the data worth sensing and learning has extremely low intrinsic dimension. This can be precisely verified and quantified by the data's volume, entropy, or coding…
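
The tweet is cut off, so the sketch below is not a completion of it, only one hedged illustration of how such a quantity can be measured: a rate-distortion-style coding rate, R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z Zᵀ), roughly counts the bits needed to encode n samples in ambient dimension d up to precision eps, and it collapses when the data lie near a low-dimensional subspace. The data and the choice of eps here are arbitrary.

```python
import numpy as np

def coding_rate(Z, eps=0.1):
    """Z: (d, n) data matrix; returns 0.5 * logdet(I + d/(n*eps^2) * Z @ Z.T)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T))[1]

rng = np.random.default_rng(0)
d, n, k = 50, 1000, 3

# Data lying (almost) on a 3-dimensional subspace of R^50, plus tiny noise.
low_dim = rng.standard_normal((d, k)) @ rng.standard_normal((k, n)) + 1e-3 * rng.standard_normal((d, n))

# Isotropic data filling all 50 ambient dimensions.
full_dim = rng.standard_normal((d, n))

print(f"coding rate, 3-dim structure : {coding_rate(low_dim):8.1f}")
print(f"coding rate, full 50-dim data: {coding_rate(full_dim):8.1f}")
```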