Wei Yu (@gnosisyu)'s Twitter Profile
Wei Yu

@gnosisyu

ID: 970040220904177664

Joined: 03-03-2018 20:56:25

93 Tweets

31 Followers

979 Following

Danijar Hafner (@danijarh)'s Twitter Profile Photo

World models are the future and the future is now! 🌎🚀 Proud to share DreamerV2, the first agent that achieves human-level Atari performance by learning behaviors purely within a separately trained world model. Paper: arxiv.org/pdf/2010.02193… Thread 👇

Animesh Garg (@animesh_garg)'s Twitter Profile Photo

Deep RL is not really using deep networks! We found that dense connections and deeper networks help improve learning performance. Results hold across various manipulation and locomotion tasks, for both proprioceptive and image observations! sites.google.com/view/d2rl/home
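
As a reading aid, here is a minimal sketch of the dense-connection idea referenced above, not the authors' code: a PyTorch-style policy network that concatenates the raw observation to the input of every hidden layer, which is how the D2RL paper realizes "dense connections." The DensePolicy name, layer sizes, and depth are illustrative assumptions.

import torch
import torch.nn as nn

class DensePolicy(nn.Module):
    """Policy MLP with dense (input-to-every-hidden-layer) connections, in the spirit of D2RL."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256, depth: int = 4):
        super().__init__()
        # First layer sees only the observation; later layers see hidden features + observation.
        dims = [obs_dim] + [hidden + obs_dim] * (depth - 1)
        self.hidden_layers = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        x = obs
        for i, layer in enumerate(self.hidden_layers):
            x = torch.relu(layer(x))
            if i < len(self.hidden_layers) - 1:
                # Dense connection: re-inject the raw observation before the next layer.
                x = torch.cat([x, obs], dim=-1)
        return torch.tanh(self.head(x))

# Example: a batch of 8 proprioceptive observations of size 17 -> 6 continuous actions.
policy = DensePolicy(obs_dim=17, act_dim=6)
actions = policy(torch.randn(8, 17))  # shape (8, 6)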

Animesh Garg (@animesh_garg)'s Twitter Profile Photo

How do we bridge the Sim2Real gap? What is necessary and what is sufficient! 🤖🤯 Check out: Dynamics Randomization Revisited: A Case Study for Quadrupedal Locomotion. Paper: arxiv.org/abs/2011.02404 Project: pair.toronto.edu/understanding-… Dennis Da, Xie Zhaoming @Mvandepanne Buck Babich

arXiv Daily (@arxiv_daily)'s Twitter Profile Photo

Action Concept Grounding Network for Semantically-Consistent Video Generation deepai.org/publication/ac… by Wei Yu et al. #ComputerScience #Learning

Brady Neal (@casualbrady)'s Twitter Profile Photo

Yoshua Bengio will give the next talk in causalcourse.com! Topic: Causal Representation Learning Date/time: Tuesday, December 1, at 1 pm EST Watch the course Slack for a join link. #CausalTwitter

Animesh Garg (@animesh_garg)'s Twitter Profile Photo

Long-term reasoning needs an understanding of continuous changes in the world. "What happens if I open the door?" Action Concept Grounding Networks learn these semantics. arxiv.org/abs/2011.11201 iclr-acgn.github.io/ACGN/ Wei Yu, W. Chen, University of Toronto Robotics Institute, Vector Institute

Allen Institute (@alleninstitute)'s Twitter Profile Photo

Fascinating new research suggests an approach to reversing age-related decline by reprogramming some cells to a ‘younger’ state. fiercebiotech.com/research/rever…

Jason Kim (@jason_z_kim)'s Twitter Profile Photo

From logical reasoning to mental simulation, recurrent neural networks (RNNs) possess an incredible capacity for computation. To fully harness this capacity, we developed a programming language for RNNs arxiv.org/abs/2203.05032. Here is a game of pong programmed inside of an RNN.

OpenAI (@openai)'s Twitter Profile Photo

Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. openai.com/sora Prompt: “Beautiful, snowy

AK (@_akhaliq)'s Twitter Profile Photo

Meta announces the Aria Everyday Activities (AEA) Dataset, an egocentric multimodal open dataset recorded using Project Aria glasses. AEA contains 143 daily activity sequences recorded by multiple wearers in five geographically diverse indoor

Jiachen Li (@jiachenli11)'s Twitter Profile Photo

How to become a toxic Reviewer 💀 for #NeurIPS? 🤔 1. List "Lack of technical novelty" as the only weakness. 2. Give the paper a rating of 4 and confidence of 4. 3. During rebuttal, acknowledge the authors' response but never take it into account. Tell the author that after

Ziyi Wu (@dazitu_616)'s Twitter Profile Photo

MinT beats Sora in multi-event generation! One week after the release of MinT, Sora also released a *storyboard* tool that targets the same task (sequential events + time control). Below are a few comparisons, where MinT shows better event transition and timing: (1/N)

Xuanchi Ren (@xuanchi13)'s Twitter Profile Photo

🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.

Wei Yu (@gnosisyu)'s Twitter Profile Photo

Come find me at Hall 2B, booth #182 between 3:00 and 5:30 PM! I'll be presenting EgoSim, our multimodal-conditioned video generation model. Stop by, I'd love to chat!