Yash Kant (@yash2kant)'s Twitter Profile
Yash Kant

@yash2kant

ai phd @uoftcompsci // prev @meta @snap @georgiatech // now on job market! // web: yashkant.github.io

ID: 352228898

Link: http://yashkant.github.io · Joined: 10-08-2011 09:51:46

1.1K Tweets

850 Followers

492 Following

Simo Ryu (@cloneofsimo)'s Twitter Profile Photo

Very interesting: standard attention causes vanishing gradients because most attention probabilities become very small after some training. LASER tackles this by pushing the attention operation into exponential space.

i.e., exp_output = softmax(QK^T) exp(V)

They don't seem to exaggerate the performance
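For reference, here is a minimal PyTorch sketch of attention in exponential space, based only on the formula quoted above (exp_output = softmax(QK^T) exp(V)); the scaling, tensor shapes, and the final log that maps the result back to value space are my assumptions, not the paper's exact implementation:

import torch
import torch.nn.functional as F

def laser_attention(q, k, v):
    # q, k, v: (batch, heads, seq, dim)
    d = q.size(-1)
    attn = F.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)
    # Apply attention to exp(V); subtracting the per-channel max of V first
    # keeps exp() from overflowing (the usual log-sum-exp trick).
    v_max = v.amax(dim=-2, keepdim=True)
    exp_out = attn @ torch.exp(v - v_max)   # softmax(QK^T) @ exp(V)
    return torch.log(exp_out) + v_max       # assumed: log maps back to value space

The log-sum-exp structure is what keeps gradients from vanishing when most attention probabilities are tiny, which is the motivation described in the tweet.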
International Conference on 3D Vision (@3dvconf)'s Twitter Profile Photo

📢 3DV 2026 – Call for Papers is Out!

📝 Paper Deadline: Aug 18
🎥 Supplementary: Aug 21
🔗 3dvconf.github.io/2026/call-for-…

📅 Conference Date: Mar 20–23, 2026
🌆 Location: Vancouver 🇨🇦

🚀 Showcase your latest research to the world!
#3DV2026 #CallForPapers #Vancouver #Canada
Yash Kant (@yash2kant)'s Twitter Profile Photo

I will be at #CVPR25! 🤗

Please come chat with me and Ethan Weber during our poster session for Pippo, Sat 5–7pm (Hall D)! 👋
Shunsuke Saito (@psyth91)'s Twitter Profile Photo

I'm giving a keynote talk at a very interesting workshop on virtual humans and humanoid robots today, 2:25–2:50pm, in 101B #CVPR2025!

Ning Yu (@realningyu)'s Twitter Profile Photo

Grateful to everyone who stopped by our oral presentation and posters during the whirlwind of #CVPR2025 – we know you had plenty of options!

I'm at the Eyeline Studios Booth (1209) from now–3pm today. Come say hi – I'd love to chat about our research philosophy and how it ties
fal (@fal)'s Twitter Profile Photo

🚨🎬 MiniMax Hailuo AI (MiniMax)'s latest video model, Hailuo 02, is now available exclusively on fal's API platform. For just 28 cents per video, you can generate these beautiful shots with our API. fal.ai/models/fal-ai/… Links and examples below 👇👇
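As a reference for what a call against fal's platform looks like, here is a minimal sketch using the official fal_client Python package; the endpoint id is a placeholder (the model path in the tweet is truncated) and the prompt is made up, so check fal.ai/models for the real Hailuo 02 id:

import fal_client

# NOTE: "fal-ai/<hailuo-02-endpoint>" is a placeholder, not the real id;
# the actual model path is truncated in the tweet above.
result = fal_client.subscribe(
    "fal-ai/<hailuo-02-endpoint>",
    arguments={"prompt": "a drone shot over a foggy coastline at sunrise"},
)
print(result)  # response payload; the exact schema depends on the model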

GeekyRakshit (e/mad) (@soumikrakshit96)'s Twitter Profile Photo

✨ CVPR 2025 highlight #3 --- MatAnyone: Stable Video Matting with Consistent Memory Propagation

MatAnyone is a robust framework for video matting that pairs image-level matting quality with video-level semantic consistency, and it does so while asking the user for just a single

Homanga Bharadhwaj (@mangahomanga)'s Twitter Profile Photo

I'm attending Robotics: Science and Systems (#RSS2025) in LA for the first time! Excited to be giving two invited talks, at the Continual Learning and EgoAct workshops on Sat, June 21. I'll share the latest on 2D/3D motion prediction from human videos for manipulation! Do drop by and say hi :)

Homanga Bharadhwaj (@mangahomanga)'s Twitter Profile Photo

Presenting DemoDiffusion: an extremely simple approach enabling a pre-trained 'generalist' diffusion policy to follow a human demonstration for a novel task during inference. One-shot human imitation *without* requiring any paired human-robot data or online RL 🙂 1/n

Eyeline Studios (@eyelinestudios)'s Twitter Profile Photo

The latest research paper from Eyeline Studios, FlashDepth, has been accepted to the International Conference on Computer Vision (#ICCV2025). Our model produces accurate and high-resolution depth maps from streaming videos in real time and is completely built on open-source

Ning Yu (@realningyu)'s Twitter Profile Photo

🔊 #ICCV2025 acceptance: ⚡FlashDepth⚡ estimates accurate and consistent depth for 2K-resolution videos in a real-time (24 FPS) streaming fashion on a single A100 GPU.
✊ Kudos to the teamwork led by our intern Gene Chou at Eyeline Studios.

👉 Join us to be the next one
Michael Zollhoefer (@mzollhoefer)'s Twitter Profile Photo

My research group in the Codec Avatars lab at Meta contributed by developing the technology required to display the outputs of FAIR SeamlessNext's dyadic motion models as 3D Full-body Codec Avatars.

Towaki Takikawa / η€§ε·ζ°Έι εΈŒ (@yongyuanxi) 's Twitter Profile Photo

Adding horizontal lines to images improves VLM (vision language model) performance on tasks like counting, visual search, spatial understanding, scene understanding, and more
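The tweet describes a pure preprocessing trick, so a sketch is easy to write down. Assuming PIL, and with line count, color, and thickness as my own choices (the tweet does not specify them):

from PIL import Image, ImageDraw

def add_horizontal_lines(img, n_lines=8, color=(255, 0, 0), width=2):
    # Overlay evenly spaced horizontal lines on a copy of the image before
    # sending it to the VLM; spacing, color, and thickness are assumptions.
    out = img.copy()
    draw = ImageDraw.Draw(out)
    for i in range(1, n_lines + 1):
        y = round(i * out.height / (n_lines + 1))
        draw.line([(0, y), (out.width, y)], fill=color, width=width)
    return out

# Usage: vlm_input = add_horizontal_lines(Image.open("scene.jpg"))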
Michael Albergo (@msalbergo)'s Twitter Profile Photo

Dear NeurIPS Conference -- it seems OpenReview is down entirely, and we cannot submit reviews for the upcoming review deadline tonight. Please share if you are having a similar issue. #neurips2025

Devi Parikh (@deviparikh)'s Twitter Profile Photo

Some samples of (positive) feedback from Scouts users this past week! 

We're continuing to let people in every day. Join the waitlist at yutori.com.