Yuval Dagan (@yuvaldagan3) 's Twitter Profile
Yuval Dagan

@yuvaldagan3

PhD student, MIT EECS

ID: 1258175768368238593

06-05-2020 23:24:23

41 Tweets

453 Followers

311 Following

Constantinos Daskalakis (@konstdaskalakis) 's Twitter Profile Photo

Min-max optimization (used among other applications in GANs, and adversarial training more broadly) is empirically challenging. We show why min-max optimization is hard in the following paper with Stratis Skoulakis and Manolis Zampetakis: arxiv.org/abs/2009.09623

Constantinos Daskalakis (@konstdaskalakis) 's Twitter Profile Photo

Samples from high-dimensional distributions can be scarce or expensive to acquire. Can we meaningfully learn them from *one* sample?!? In new work w/ Yuval Dagan, Nishanth Dikkala, & Vardis Kandiros, we show how to learn Ising models given a single sample arxiv.org/abs/2004.09370
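The single-sample setting can be illustrated with a pseudo-likelihood sketch (a toy under stated assumptions, not the paper's actual estimator): given one configuration x in {-1,+1}^n and a hypothetical known interaction matrix J with zero diagonal, estimate the inverse temperature beta by maximizing the sum of per-node conditional log-likelihoods of that one sample.

import numpy as np

def log_pseudo_likelihood(beta, x, J):
    # Per-node conditionals of the single sample x:
    # log P(x_i | x_-i) = log sigmoid(2 * beta * x_i * sum_j J_ij * x_j).
    margins = x * (J @ x)                      # assumes J has zero diagonal
    return -np.logaddexp(0.0, -2.0 * beta * margins).sum()

def estimate_beta(x, J, grid=np.linspace(0.0, 2.0, 401)):
    # Simple grid search over beta; the paper is about when such single-sample
    # estimators concentrate, not about this particular search procedure.
    scores = [log_pseudo_likelihood(b, x, J) for b in grid]
    return grid[int(np.argmax(scores))]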

Yuval Dagan (@yuvaldagan3) 's Twitter Profile Photo

Have you wondered how to win the game "Twenty questions" with lies? youtu.be/FxKKVvtyLDw Now at ITCS, joint with Yuval Filmus, Daniel Kane and Shay Moran
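For intuition, a toy strategy (not the paper's query-optimal algorithm): if the responder may lie at most k times in total, repeating each comparison 2k+1 times and taking the majority vote makes every majority answer truthful, at the cost of O(k log n) questions. Here answer_query is a hypothetical oracle answering "is the hidden number <= mid?".

def search_with_lies(n, answer_query, k):
    # Binary search over {0, ..., n-1} tolerating at most k lies in total.
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        votes = sum(bool(answer_query(mid)) for _ in range(2 * k + 1))
        if votes > k:          # majority says the hidden number is <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo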

Yuval Dagan (@yuvaldagan3) 's Twitter Profile Photo

The following paper (accepted to STOC 21) studies sampling algorithms on a stream, against an adversary that has access to the internal state of the algorithm. Further, it resolves an open question in online classification. arxiv.org/pdf/2101.09054…
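For context, a minimal sketch of the classic baseline that the adversarial model stresses: reservoir sampling keeps a uniform sample of size k from a stream in O(k) memory, but its guarantee assumes the stream does not depend on the algorithm's internal state, which is exactly the assumption the paper drops.

import random

def reservoir_sample(stream, k, rng=random):
    # Algorithm R: item i replaces a uniformly random slot with probability k/(i+1).
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)   # uniform index in {0, ..., i}
            if j < k:
                reservoir[j] = item
    return reservoir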

Kevin Hartnett (@kshartnett) 's Twitter Profile Photo

A statistics postdoc has solved an important "slicing" problem in convex geometry. Erica Klarreich reports in her terrific new piece for Quanta Magazine. quantamagazine.org/statistics-pos…

ACM SIGecom (@acmsigecom) 's Twitter Profile Photo

The ACM SIGecom Dissertation Award for 2020 goes to Manolis Zampetakis for his thesis "Statistics in High Dimensions without IID Samples: Truncated Statistics and Minimax Optimization" advised by Constantinos Daskalakis at Massachusetts Institute of Technology (MIT). Congratulations! More details: sigecom.org/award-phd.html

Yuval Dagan (@yuvaldagan3) 's Twitter Profile Photo

We've analyzed differential-privacy mechanisms that add bounded noise, and they usually outperform the Gaussian mechanism (if you consider a uniform bound on the noise across > 1000 queries)! New revision: arxiv.org/pdf/2012.03817… (Joint with Gil Kur)
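A rough numerical illustration of the "uniform bound over many queries" point, with placeholder parameters (sigma and B are arbitrary, not the paper's calibrated values): the maximum Gaussian noise over m queries grows roughly like sigma * sqrt(2 ln m), while bounded noise never exceeds its bound regardless of m.

import numpy as np

rng = np.random.default_rng(0)
m, sigma, B = 1000, 1.0, 3.0
gaussian_noise = rng.normal(0.0, sigma, size=m)
bounded_noise = rng.uniform(-B, B, size=m)   # stand-in for a bounded-noise mechanism
print("max |Gaussian| noise over", m, "queries:", np.abs(gaussian_noise).max())
print("max |bounded| noise over", m, "queries:", np.abs(bounded_noise).max())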

Idan Attias (@idanattias) 's Twitter Profile Photo

Happy to share with you a new paper with Steve Hanneke and Yishay Mansour: A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability. arxiv.org/abs/2202.05420

Max Hopkins (@mhop_theory) 's Twitter Profile Photo

Wow! Brukhim, Carmon, Dinur, Moran, and Yehudayoff just resolved a long-standing open question on multi-class learning. They show that learnability is equivalent to the (non)-existence of "pseudo-cubes" over large enough samples (DS-dimension) (1/3)

Giannis Daras (@giannis_daras) 's Twitter Profile Photo

DALLE-2 has a secret language. "Apoploe vesrreaitais" means birds. "Contarra ccetnxniams luryca tanniounons" means bugs or pests. The prompt: "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons" gives images of birds eating bugs. A thread (1/n)🧵

Giannis Daras (@giannis_daras) 's Twitter Profile Photo

New ICML paper: Score-Guided Intermediate Layer Optimization (SGILO). We train diffusion models on the latent space of StyleGAN and we show provable mixing of Langevin Dynamics for random generators. Reconstructions for *extremely sparse* (<1%) measurements. A thread🧵(1/N)
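The sampler whose mixing the paper analyzes can be sketched generically (unadjusted Langevin dynamics with a hypothetical score callable returning grad log p(x); the SGILO pipeline itself, with StyleGAN latents and a learned score, is not reproduced here).

import numpy as np

def langevin_sample(score, x0, step=1e-3, n_steps=1000, rng=None):
    # x <- x + step * grad log p(x) + sqrt(2 * step) * standard Gaussian noise.
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Example: sampling (approximately) from a standard Gaussian, whose score is -x.
sample = langevin_sample(lambda x: -x, x0=np.zeros(2))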

Giannis Daras (@giannis_daras) 's Twitter Profile Photo

Announcing Soft Diffusion: A framework to correctly schedule, learn and sample from general diffusion processes. State-of-the-art results on CelebA, outperforms DDPMs and vanilla score-based models. A 🧵to learn about Soft Score Matching, Momentum Sampling and the role of noise

Idan Attias (@idanattias) 's Twitter Profile Photo

We studied the setting of semi-supervised adversarially robust PAC learning. Perhaps surprisingly, we show that the labeled sample complexity can be arbitrarily smaller than the unlabeled one, and controlled by a different complexity measure. #NeurIPS2022 arxiv.org/abs/2202.05420

Yuval Dagan (@yuvaldagan3) 's Twitter Profile Photo

At COLT 2023! Learning and Testing Latent-Tree Ising Models Efficiently. Given i.i.d. samples from the leaves of an evolutionary tree, we learn it in total variation (TV) distance. Proof relies on a tensorization argument for latent models. w. V Kandiros, C Daskalakis and D Choo. arxiv.org/abs/2211.13291

Giannis Daras (@giannis_daras) 's Twitter Profile Photo

Stable Diffusion and other text-to-image models sometimes blatantly copy from their training images. We introduce Ambient Diffusion, a framework to train/finetune diffusion models given only *corrupted* images as input. This reduces the memorization of the training set. A 🧵

Constantinos Daskalakis (@konstdaskalakis) 's Twitter Profile Photo

Exciting work w/ Yuval Dagan, Maxwell Fishelson, and Noah Golowich on efficient algorithms for no-swap-regret learning and, relatedly, computing correlated equilibria when the number of actions is exponentially large or infinite. While classical works point in the opposite direction, we show that this is actually possible!
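To make the objective concrete, a sketch of the quantity being driven to zero (the standard swap regret of a pure-action play sequence, with hypothetical inputs); the paper's point is achieving low swap regret efficiently when the action set is exponentially large or infinite, which this naive computation does not address.

import numpy as np

def swap_regret(plays, payoffs):
    # plays[t] is the action chosen in round t; payoffs[t, b] is the payoff
    # action b would have earned in round t.  For each action a, compare what
    # was earned on the rounds where a was played with the best fixed swap b.
    n_rounds, n_actions = payoffs.shape
    total = 0.0
    for a in range(n_actions):
        rounds = [t for t in range(n_rounds) if plays[t] == a]
        if not rounds:
            continue
        earned = sum(payoffs[t, a] for t in rounds)
        best_swap = max(sum(payoffs[t, b] for t in rounds) for b in range(n_actions))
        total += best_swap - earned
    return total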