Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision)'s Twitter Profile
Kyoto University Computer Vision Lab (Nishino Lab)

@kyoto_vision

Computer vision research lab at Kyoto University led by Ko Nishino, Ken Sakurada, and Ryo Kawahara. #ComputerVision #ComputationalPhotography #MachineLearning #AI

ID: 1288654605077766145

Link: https://vision.ist.i.kyoto-u.ac.jp/ · Joined: 30-07-2020 01:57:31

76 Tweets

501 Followers

91 Following

Jitendra MALIK (@jitendramalikcv):

I delivered the 110th Annual Martin Meyerson UC Berkeley Faculty Research Lecture on March 20, 2023. youtube.com/watch?v=f6fDpK…

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

Congratulations to my graduating students and thanks all for a wonderful academic year! A huge shout-out to Mai Nishimura denkiwakame (front row 4th from right) for finishing a super strong PhD all the while working full-time as an engineer. YOU ROCK!

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

DeePoint: Visual Pointing Recognition and Direction Estimation #ICCV2023 We point out that we can tell when you are pointing and which 3D direction you are pointing at. Is my point clear? vision.ist.i.kyoto-u.ac.jp/research/deepo…

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

Kyoto University Graduate School of Informatics 2023 Open Lecture: "From 'Seeing' with a Camera to 'Perceiving' with AI." Speaker: Ko Nishino. youtu.be/hvDiJCaF0kg

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance #CVPR2024 [Highlight] We reformulate inverse rendering as recursive diffusion on the reflectance map to recover lost illumination frequencies. -> SOTA! vision.ist.i.kyoto-u.ac.jp/research/drm/

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

Diffusion Reflectance Map #CVPR2024 [Highlight] Single-Image Stochastic Inverse Rendering of Illumination and Reflectance: recursive diffusion on the reflectance map recovers lost illumination frequencies. vision.ist.i.kyoto-u.ac.jp/research/drm/ youtu.be/UqRqp_UZRLs

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

SPIDeRS: Structured Polarization for Invisible Depth and Reflectance Sensing #CVPR2024 The first structured polarization method for recovering both depth and reflectance while remaining invisible to the naked eye! vision.ist.i.kyoto-u.ac.jp/research/spide… youtu.be/F5wt3vazHRw

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

HeatFormer: A Neural Optimizer for Multiview Human Mesh Recovery Yuto Matsubara and Ko Nishino (Kyoto University) Occlusion-aware, view-flexible multiview human shape and pose recovery as learned optimization. vision.ist.i.kyoto-u.ac.jp/research/heatf…

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars Shota Sasaki, Jane Wu, Ko Nishino Human avatar with movement- (not pose-)dependent clothing as 3D GS simulated with PBD attached to SMPL, all recovered from multiview video. vision.ist.i.kyoto-u.ac.jp/research/pbdyg/

Zhenjun Zhao (@zhenjun_zhao):

MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views Antoine Guédon, Tomoki Ichikawa, Kohei Yamashita, Ko Nishino tl;dr: underlying scene geometry mesh->an Atlas of Charts->render with 2D Gaussian surfels arxiv.org/abs/2412.06767

Kyoto University Computer Vision Lab (Nishino Lab) (@kyoto_vision):

Multistable Shape from Shading Emerges from Patch Diffusion #NeurIPS2024 Spotlight X. Nicole Han, T. Zickler and K. Nishino (Harvard+Kyoto) Diffusion-based SFS lets you sample multistable shape perception! Come see Nicole Han Th 12/12 11am East A-C 1308 vision.ist.i.kyoto-u.ac.jp/research/mssfs/

MrNeRF (@janusch_patas):

MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views Abstract (excerpt): We present a novel appearance model that simultaneously achieves explicit high-quality 3D surface mesh recovery and photorealistic novel view synthesis from sparse…

Antoine Guédon (@antoine_guedon):

⚠️Reconstructing sharp 3D meshes from a few unposed images is a hard and ambiguous problem. ☑️With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views including both foreground and background, within mins!🧵 🌐Webpage: anttwo.github.io/matcha/

Antoine Guédon (@antoine_guedon):

💻We've released the code for our #CVPR2025 paper MAtCha! 🍵MAtCha reconstructs sharp, accurate and scalable meshes of both foreground AND background from just a few unposed images (eg 3 to 10 images)... ...While also working with dense-view datasets (hundreds of images)!

Matthias Niessner (@mattniessner):

📢 SHeaP: Self-Supervised Head Predictor Learned via 2D Gaussians 📢 Given a single input image, we predict accurate 3D head geometry, pose, and expression. Previous works (e.g. DECA, EMOCA) use differentiable mesh rasterization to learn a self-supervised head geometry…

Imagine-ENPC (@imagineenpc):

#CVPR2025 Fri June 13 (PM) ✨ Highlight 🍵 MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views Antoine Guédon Kyoto University Computer Vision Lab (Nishino Lab) 📄 pdf: arxiv.org/abs/2412.06767 🌐 webpage: anttwo.github.io/matcha/

sonson (@sonson_twit):

At this year's MIRU, Prof. Nishino's talk was by far the most interesting. Partly because I've been working on exactly that kind of problem lately, though. #MIRU2025