Richard Baraniuk (@rbaraniuk)'s Twitter Profile
Richard Baraniuk

@rbaraniuk

Professor at Rice University (machine learning, signal processing) and Founder/Director of OpenStax (open education)

ID: 52790283

Link: https://richb.rice.edu · Joined: 01-07-2009 17:29:27

3 Tweets

382 Followers

4 Following

Rice DSP Group (@rice_dsp):

It is not what a deep net can approximate that matters, but how it learns to approximate. Matrix perturbation theory explains why the loss surface of a ResNet or DenseNet is less erratic and eccentric than that of a ConvNet, and hence easier to optimize under SGD: arxiv.org/abs/2203.03099
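One way to picture the architectural difference the tweet points at, as a minimal PyTorch sketch (illustrative only, not code from the paper; the block names and the Jacobian intuition in the comments are assumptions): a ResNet-style block computes y = x + f(x), so its input-output Jacobian is I + J_f, a perturbation of the identity matrix, whereas a plain ConvNet block's Jacobian is J_f alone.

import torch
import torch.nn as nn

class PlainBlock(nn.Module):
    # ConvNet-style block: y = f(x); the block's Jacobian is J_f alone.
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.f(x)

class ResidualBlock(nn.Module):
    # ResNet-style block: y = x + f(x); the identity shortcut makes the
    # Jacobian I + J_f, a perturbation of the identity -- exactly the kind
    # of structure matrix perturbation theory is built to analyze.
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.f(x)

x = torch.randn(1, 8, 16, 16)
y = ResidualBlock(8)(x)  # same shape as x: torch.Size([1, 8, 16, 16])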

Imtiaz Humayun (@imtiazprio):

Does recursive training of generative models only lead to a loss of diversity, or is there more? We show that 'self-consuming' generative models don't just face model collapse; they can also amplify hidden artifacts of the generative model! arxiv.org/abs/2307.01850 🧵(1/N)

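A toy version of the 'self-consuming' loop the thread describes, as a hedged sketch (not the paper's experiment; the Gaussian model and all names here are illustrative assumptions): each generation is trained only on samples drawn from the previous generation's model, and the fitted spread drifts across generations, a miniature version of losing diversity.

import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(data):
    # "Train" the generative model: a maximum-likelihood Gaussian fit.
    return data.mean(), data.std()

def sample(mu, sigma, n):
    # "Generate" synthetic data from the current model.
    return rng.normal(mu, sigma, size=n)

# Generation 0 is fit to real data; every later generation is fit
# only to the previous generation's samples (fully self-consuming).
mu, sigma = fit_gaussian(rng.normal(0.0, 1.0, size=1000))
for generation in range(1, 11):
    synthetic = sample(mu, sigma, n=1000)
    mu, sigma = fit_gaussian(synthetic)
    print(f"generation {generation:2d}: std = {sigma:.3f}")
# The fitted std wanders away from the true value of 1.0 and, run long
# enough, tends toward collapse: the loss-of-diversity half of the story.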