Jingling Li @ICLR (@jingling_li) 's Twitter Profile
Jingling Li @ICLR

@jingling_li

Research Scientist at Google DeepMind | ex-Bytedancer | CS PhD from UMD | Become a creator | How far can an idealist go?

ID: 3268759112

Joined: 05-07-2015 05:26:16

53 Tweets

387 Followers

212 Following

Keyulu Xu (@keyuluxu) 's Twitter Profile Photo

How do neural networks extrapolate, i.e., predict outside the training distribution? We study MLPs and Graph Neural Networks trained by gradient descent, and show how a good representation and architecture can help extrapolation. arxiv.org/abs/2009.11848

Jingling Li @ICLR (@jingling_li) 's Twitter Profile Photo

#Neurips2021 How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? arxiv.org/abs/2012.12896 (Arxiv version will be updated soon) Joint work with Mozhi Zhang, Keyulu Xu, John P Dickerson, and Jimmy Ba

Jingling Li @ICLR (@jingling_li) 's Twitter Profile Photo

Our work studies how the inductive bias in an architecture could impact its representations learned from noisy labels. We provide both empirical and theoretical support showing well-designed architectures can easily combat label noise even without robust training methods. 😃

AK (@_akhaliq) 's Twitter Profile Photo

VQ-GNN: A Universal Framework to Scale-up Graph Neural Networks using Vector Quantization abs: arxiv.org/abs/2110.14363

Jingling Li @ICLR (@jingling_li) 's Twitter Profile Photo

How does a neural network's architecture impact its robustness to noisy labels? Visit us at #Neurips2021 to learn more! When 🗓: Wed Dec 08, 7:30 - 9:00 PM (EST) Where📍: Slot B1 at eventhosts.gather.town/app/jS4PEO6rvP… UMD Department of Computer Science. Joint work with Mozhi Zhang, Keyulu Xu, John P Dickerson, and Jimmy Ba

Furong Huang (@furongh) 's Twitter Profile Photo

Check out our #NeurIPS2021 paper "VQ-GNN: A Framework to Scale up GNNs using Vector Quantization" Poster session #7 7:30-9:00pm ET today (nips.cc/Conferences/20…) Paper link: arxiv.org/abs/2110.14363

Furong Huang (@furongh) 's Twitter Profile Photo

We propose a universal and theoretically justified way to mini-batch train most Graph Neural Networks on large graphs, by approximating the remaining graph with vector quantization.

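The core idea described in the tweet — mini-batch training a GNN while approximating out-of-batch neighbors with vector quantization — can be sketched in a few lines. This is a hedged toy illustration of that idea, not the paper's actual algorithm or API: the `quantize` and `mean_aggregate` helpers, the codebook size, and the random features are all assumptions made for this sketch.

```python
import numpy as np

def quantize(features, codebook):
    """Assign each node's feature vector to its nearest codeword (L2 distance)."""
    # dists[i, k] = ||features[i] - codebook[k]||^2
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def mean_aggregate(batch_ids, adj, features, codebook):
    """Mean-aggregate neighbor messages for the nodes in the mini-batch.

    In-batch neighbors contribute their exact features; out-of-batch
    neighbors are approximated by their assigned codeword, so the full
    graph never needs to fit in the batch.
    """
    in_batch = set(batch_ids)
    codes = quantize(features, codebook)
    out = np.zeros((len(batch_ids), features.shape[1]))
    for row, v in enumerate(batch_ids):
        msgs = [features[u] if u in in_batch else codebook[codes[u]]
                for u in adj[v]]
        out[row] = np.mean(msgs, axis=0) if msgs else 0.0
    return out

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))   # 6 nodes, 4-dim features (toy data)
codebook = rng.normal(size=(2, 4))   # 2 codewords (toy codebook)
adj = {0: [1, 5], 1: [0], 2: [3], 3: [2], 4: [], 5: [0]}
h = mean_aggregate([0, 1], adj, features, codebook)  # messages for the batch {0, 1}
```

In the real method the codebook would be learned jointly with the GNN; here it is fixed random vectors purely to show where the quantized approximation enters the aggregation.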
Kangwook Lee (@kangwook_lee) 's Twitter Profile Photo

1/10: The summer break is the perfect time to share recent research from my lab. Our first story revolves around a fresh interpretation of diffusion-based generative modeling by my brilliant student Ying Fan. She proposed "diffusion models are solving a control problem".

Ying Fan (@yingfan_bot) 's Twitter Profile Photo

🔥Check out our ICML'23 work on training diffusion models with policy gradient for shortcuts, which, to our knowledge, is the first work to use RL to train diffusion models. Arxiv paper: arxiv.org/abs/2301.13362 & an exciting follow-up work coming soon!
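The gist of "training with policy gradient" is to treat sampling as a stochastic policy and nudge its parameters toward higher-reward outputs via REINFORCE. The following is a deliberately tiny, hypothetical sketch of that mechanism on a 1-D Gaussian "sampler"; the reward, learning rate, and batch size are toy assumptions and bear no relation to the paper's actual diffusion setup.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0    # policy parameter: mean of the Gaussian sampler
sigma = 1.0    # fixed sampling noise
lr = 0.05

def reward(x):
    # Toy terminal reward: prefer samples near 2.0.
    return -(x - 2.0) ** 2

for _ in range(300):
    # Sample a batch from the policy: x ~ N(theta, sigma^2)
    eps = rng.normal(size=32)
    x = theta + sigma * eps
    # REINFORCE: grad_theta log N(x; theta, sigma) = (x - theta) / sigma^2,
    # weighted by the reward and averaged over the batch.
    grad = np.mean(reward(x) * (x - theta)) / sigma**2
    theta += lr * grad   # gradient ascent on expected reward
```

After training, `theta` drifts toward the reward's peak at 2.0. A real diffusion-RL method would apply this score-function trick across the whole multi-step denoising chain rather than a single Gaussian draw.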

Jingling Li @ICLR (@jingling_li) 's Twitter Profile Photo

Can large language models (LLMs) understand human beings (both individually and group-wise) better than humans? Random thought @NeurIPS

Jingling Li @ICLR (@jingling_li) 's Twitter Profile Photo

I recently paid off my undergrad student loan from Bryn Mawr College, which had a 0% interest rate for nearly 11 years. I am deeply grateful for all the educational opportunities I've had over the years, and I still recall one of my CS professors joking that our first million was

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

⚡️ Excited to share that I am starting an AI+Education company called Eureka Labs. The announcement: --- We are Eureka Labs and we are building a new kind of school that is AI native. How can we approach an ideal experience for learning something new? For example, in the case

Ching-An Cheng (Hiring 2025 intern) (@chinganc_rl) 's Twitter Profile Photo

How to provably solve contextual goal-oriented problems (like controlling a robot via language commands) using offline data? We present a very neat solution, CODA, without the need for reward learning! Check out our poster tomorrow #NeurIPS2024 W Ballroom A-D #6206 (12/12, 11am-2pm)

Google AI (@googleai) 's Twitter Profile Photo

Today we introduce an AI co-scientist system, designed to go beyond deep research tools to aid scientists in generating novel hypotheses & research strategies. Learn more, including how to join the Trusted Tester Program, at goo.gle/417wJrA

Martin Klissarov (@martinklissarov) 's Twitter Profile Photo

Our team is hiring! We are working on improving the social intelligence of the smartest models out there. Perks include: working with a diverse team of creative individuals (such as Sian Gooding and Jingling Li, naming those on X). One week left to apply 👇

Sian Gooding (@siangooding) 's Twitter Profile Photo

🚨 I’m hosting a Student Researcher at Google DeepMind! Join us on the Autonomous Assistants team (led by Edward Grefenstette) to explore multi-agent communication—how agents learn to interact, coordinate, and solve tasks together. DM me for details!