Ihab Ilyas (@ihabilyas) 's Twitter Profile
Ihab Ilyas

@ihabilyas

professor and co-founder

ID: 262387823

Link: http://www.cs.uwaterloo.ca/~ilyas · Joined: 07-03-2011 23:30:54

379 Tweets

1.1K Followers

186 Following

Andrew Ilyas (@andrew_ilyas) 's Twitter Profile Photo

Come hear about our work on datamodels (arxiv.org/abs/2202.00622) at ICML *tomorrow* in the Deep Learning/Optimization track (Rm 309)! The presentation is at 4:50, with a poster session at 6:30. Joint work with Sam Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry.
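The datamodels idea above can be illustrated with a small sketch: fit a linear surrogate that maps a training-subset inclusion mask to a model's output on a target example. Everything here is a toy stand-in (the "model output" is a synthetic linear ground truth, not a real trained network), shown only to make the subset-mask → output regression concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train = 50       # size of the training pool
n_subsets = 2000   # random subsets used to fit the surrogate

# Stand-in "model output": for illustration, the output on a target example
# is a noisy linear function of which training points are included.
true_influence = rng.normal(0, 1, n_train)

def model_output(mask):
    # Hypothetical train-then-evaluate step, collapsed to a linear ground truth.
    return mask @ true_influence + rng.normal(0, 0.1)

# Sample subsets (each point kept independently) and record outputs.
masks = rng.integers(0, 2, size=(n_subsets, n_train)).astype(float)
outputs = np.array([model_output(m) for m in masks])

# Fit the datamodel: least-squares weights mapping inclusion mask -> output.
X = np.hstack([masks, np.ones((n_subsets, 1))])  # append a bias column
weights, *_ = np.linalg.lstsq(X, outputs, rcond=None)

# The learned per-example weights should track the true influences.
corr = np.corrcoef(weights[:n_train], true_influence)[0, 1]
print(f"correlation with true influence: {corr:.3f}")
```

With a real model, each row would require retraining on the sampled subset; the surrogate's per-example weights then serve as data-attribution scores.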

Ihab Ilyas (@ihabilyas) 's Twitter Profile Photo

Was great talking to the amazing Brazilian data management community at SBBD 2022 about our Knowledge Graph work @apple. More details: machinelearning.apple.com/research/conti…

Johannes Gehrke (@johannesgehrke) 's Twitter Profile Photo

Based on the latest groundbreaking #GPT4 model from OpenAI, we had the opportunity to experiment and learn more about its capabilities. I would not have believed just six months ago that I would now be saying there are sparks of #AGI in the model. Amazing times.

Aleksander Madry (@aleks_madry) 's Twitter Profile Photo

As ML models/datasets get bigger + more opaque, we need a *scalable* way to ask: where in the *data* did a prediction come from? Presenting TRAK: data attribution with (significantly) better speed/efficacy tradeoffs. w/ Sam Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc 1/6

Alex Ratner (@ajratner) 's Twitter Profile Photo

The future of AI is many smaller "specialist" models that are faster *and* cheaper on specific datasets & use cases, not one large "generalist" model. Very excited to share our work on doing this via more efficient distillation, with Google AI, led by the amazing Cheng-Yu Hsieh!!

Communications of the ACM (@cacmmag) 's Twitter Profile Photo

"Data Science–A Systematic Treatment," by M. Tamer @Ozsu, University of Waterloo, discusses the scope of #DataScience and highlights aspects that are not part of AI: bit.ly/3JqNp5z. Özsu talks about leveraging data and deploying data science in a #video: bit.ly/3NJaToY

Andrew Ilyas (@andrew_ilyas) 's Twitter Profile Photo

What makes ML models tick? How do we attribute model behavior to the training data, algorithm, architecture, or scale used in training? Papers (or ideas) here? Submit to ATTRIB @ NeurIPS 2023 (attrib-workshop.cc)! Deadline is September 23!

Yunyao Li (@yunyao_li) 's Twitter Profile Photo

🥳 Super excited about multiple #EMNLP2023 acceptances. Two by my summer interns at Apple: Simone Conia on growing multilingual KGs and Farima Fatahi Bayat on factual error detection & correction. One with my former colleagues at IBM Research on a novel labeling tool. Details coming soon.

Ronak Pradeep (@rpradeep42) 's Twitter Profile Photo

ConvKGYarn🧶: Spinning Configurable & Scalable Conversational KGQA datasets with LLMs is accepted at the #EMNLP2024 Industry Track with a top-20% recommendation ⭐️ Work done at Apple with Daniel Lee, Ali, Jeff, Yisi Sang, Jimmy Lin, Ihab Ilyas, Saloni Potdar, Mostafa, and Yunyao Li ✈️ ACL'2025!

Logan Engstrom (@logan_engstrom) 's Twitter Profile Photo

Want state-of-the-art data curation, data poisoning & more? Just do gradient descent! w/ Andrew Ilyas, Ben Chen, Axel Feldmann, Billy Moses, Aleksander Madry: we show how to optimize final model loss w.r.t. any continuous variable. Key idea: metagradients (gradients through model training).

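The metagradient idea can be sketched in miniature: treat the final (validation) loss as a function of per-example data weights, where evaluating that function means running a short training loop. A minimal toy below uses a scalar model and approximates the metagradient with central finite differences rather than reverse-mode differentiation through the unrolled loop (which is what a scalable implementation would use); all names and values are illustrative.

```python
import numpy as np

# Toy setup: scalar parameter `theta`, trained by a few SGD steps on a
# weighted squared loss, then evaluated against a held-out target.
x = np.array([1.0, 2.0, 4.0])  # training points
y_val = 3.0                    # validation target
lr, steps = 0.1, 5

def train(w, theta0=0.0):
    """Run `steps` SGD steps on sum_i w_i * (theta - x_i)^2."""
    theta = theta0
    for _ in range(steps):
        grad = np.sum(2 * w * (theta - x))
        theta -= lr * grad
    return theta

def val_loss(w):
    """Final loss as a function of the data weights: train, then evaluate."""
    return (train(w) - y_val) ** 2

def metagrad(w, eps=1e-5):
    """d(val_loss)/d(w_i) via central finite differences:
    how reweighting each training point moves the final loss."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (val_loss(w + e) - val_loss(w - e)) / (2 * eps)
    return g

w = np.ones_like(x)
g = metagrad(w)
print("metagradient:", g)

# One gradient-descent step on the data weights themselves:
w_new = w - 0.05 * g
print("val loss before/after:", val_loss(w), val_loss(w_new))
```

The same recipe applies to any continuous knob of training (weights, augmentation parameters, learning rates); the finite-difference loop here stands in for differentiating through the training trajectory.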
Andrew Ilyas (@andrew_ilyas) 's Twitter Profile Photo

“How will my model behave if I change the training data?” Recent(-ish) work w/ Logan Engstrom: we nearly *perfectly* predict ML model behavior as a function of training data, saturating benchmarks for this problem (called “data attribution”).
