Yanlai Yang (@yanlaiyang)'s Twitter Profile
Yanlai Yang

@yanlaiyang

PhD student @nyuniversity @agentic_ai_lab

ID: 1421949179639734276

Link: https://yanlai00.github.io/ · Joined: 01-08-2021 21:41:32

12 Tweets

98 Followers

82 Following

Sergey Levine (@svlevine)'s Twitter Profile Photo

Reusable datasets, such as ImageNet, are a driving force in ML. But how can we reuse data in robotics? In his new blog post, Frederik Ebert talks about "bridge data": multi-domain and multi-task datasets that boost generalization to new tasks: bair.berkeley.edu/blog/2021/11/1… A thread:

Sergey Levine (@svlevine)'s Twitter Profile Photo

How should we pretrain for robotic RL? Turns out the same offline RL methods that learn the skills serve as excellent pretraining. Our latest experiments show that offline RL learns better representations w/ real robots: sites.google.com/view/ptr-robot… arxiv.org/abs/2210.05178 Thread>

Mengye Ren (@mengyer)'s Twitter Profile Photo

Introducing LifelongMemory, an LLM-based personalized AI for egocentric video natural language query (NLQ). This amazing work is led by Ying Wang.

Mengye Ren (@mengyer)'s Twitter Profile Photo

🔍 New LLM Research 🔍 Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged! In our recent paper led by Yanlai Yang, we discovered

Ying Wang (@yingwww_)'s Twitter Profile Photo

A gloomy day in New York couldn't dampen the fun with new friends and new research at NYC CV Day 🥳 Excited to share our updated LifelongMemory framework that leverages LLMs for long-form video understanding, which achieves SOTA on EgoSchema! lifelongmemory.github.io

Yanlai Yang (@yanlaiyang)'s Twitter Profile Photo

I’ll be presenting the poster of this work at #NeurIPS2024 tomorrow from 11-2, at West 5609. Welcome everyone to check it out and happy to chat!

Yanlai Yang (@yanlaiyang)'s Twitter Profile Photo

Just finished my first in-person NeurIPS journey. It was great to meet many friends, old and new. Happy to see that my work was well-received in the poster session!

Yanlai Yang (@yanlaiyang)'s Twitter Profile Photo

Excited to present my work at CoLLAs 2025 (@CoLLAs_Conf)! In our paper arxiv.org/abs/2501.12254, we tackle the challenge of self-supervised learning from scratch with continuous, unlabeled egocentric video streams, where we propose to use temporal segmentation and a two-tier memory.

Jack Lu (@jacklu_me)'s Twitter Profile Photo

ICL is powerful, but only if LLMs actually understand their contexts. Let’s optimize the KV-cache itself for few-shot adaptation!

Introducing Context Tuning:
📎 Initialize prefixes from examples
⚙️ Optimize them via gradient descent
🚀 Unlock strong, efficient adaptation

🧵👇

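The recipe in the thread — initialize a prefix from the few-shot examples, then run gradient descent on the prefix alone while the model stays frozen — can be sketched with a toy linear model. Everything below (`forward`, `W`, the data) is an illustrative stand-in under that assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "model": a fixed linear readout over [prefix ; input] features.
# A toy stand-in for a frozen LLM; `prefix` plays the role of the
# tunable prompt / KV-cache prefix.
d_prefix, d_input = 4, 3
W = rng.normal(size=(d_prefix + d_input,))  # frozen model weights

def forward(prefix, x):
    return np.concatenate([prefix, x]) @ W

# Few-shot examples the prefix should adapt to.
X = rng.normal(size=(8, d_input))
y = X @ rng.normal(size=(d_input,)) + 1.5

prefix = np.zeros(d_prefix)  # step 1: initialize the prefix

def mse(p):
    return float(np.mean([(forward(p, x) - t) ** 2 for x, t in zip(X, y)]))

mse_before = mse(prefix)

# Step 2: gradient descent on the prefix alone; W never changes.
lr = 0.01
for _ in range(1000):
    grad = np.zeros(d_prefix)
    for x, t in zip(X, y):
        err = forward(prefix, x) - t
        grad += 2.0 * err * W[:d_prefix]  # d(err^2)/d(prefix)
    prefix -= lr * grad / len(X)

mse_after = mse(prefix)
print(f"MSE before: {mse_before:.3f}  after prefix tuning: {mse_after:.3f}")
```

The point of the sketch is the division of labor: the few-shot loss only ever updates `prefix`, so adaptation is as cheap as the prefix is small, and the frozen weights are shared across tasks.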
Jack Lu (@jacklu_me)'s Twitter Profile Photo

Wondering how to get the most out of LLM test-time verification? New study: “When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers". 🔍 37 models, 9 datasets 🔥 Self vs intra-family vs cross-family verification Result: verify across families! 🧵👇

agentic learning ai lab (@agentic_ai_lab)'s Twitter Profile Photo

At Agentic Learning AI Lab, the best part of our research is working with the incredible students. They are the ones writing the code, running the experiments, and solving the hard problems in AI today. As we approach the end of the year, we are fundraising to ensure these

agentic learning ai lab (@agentic_ai_lab)'s Twitter Profile Photo

Lab website: agenticlearning.ai Link to donate: bit.ly/48PnznN 100% of your support goes toward student funding and research resources. We are a 501(c)(3) institution, so your donation is tax-deductible. Help us keep science open!