Seungwhan Shane Moon (@shane_moon)'s Twitter Profile
Seungwhan Shane Moon

@shane_moon

| Research Scientist @ Facebook |
| PhD @ LTI SCS, CMU |

ID: 127754506

Website: https://shanemoon.com | Joined: 30-03-2010 04:10:46

25 Tweets

713 Followers

191 Following

AI at Meta (@aiatmeta)

We’ve released SIMMC, a data set on situated and interactive multimodal conversations, to help conversational AI researchers ground conversations in a co-observed and evolving multimodal context.
A challenge track at DSTC9 around SIMMC is currently live.
ai.facebook.com/blog/simmc-a-d…
Seungwhan Shane Moon (@shane_moon)

We are running a challenge track at DSTC9 around multimodal conversational AI! To participate:
- Paper: arxiv.org/pdf/2006.01460…
- Code & challenge website: github.com/facebookresear…

Seungwhan Shane Moon (@shane_moon)

The call for track proposals for the next Dialogue System Technology Challenge (DSTC10) is out! More info: dstc9.dstc.community/calls/call-for…

Seungwhan Shane Moon (@shane_moon)

Two papers from our group were accepted at #NAACL2021 🎉
* Adding chit-chat to enhance task-oriented dialogues: github.com/facebookresear… w/ Kai Sun
* A new SOTA for zero-shot cross-domain DST: manuscript 📑 to be released soon! w/ Zhaojiang Lin
Kudos to our amazing interns! 😀

Seungwhan Shane Moon (@shane_moon)

4 papers accepted at #EMNLP2021 🎉 #NLProc
- ToD Dataset for Immersive Multimodal Conversation; Satwik Kottur et al.
- Continual Learning in ToD Systems; Andrea Madotto et al.
- Zero-Shot DST via Cross-Task Transfer; Zhaojiang Lin et al.
- Annotation for Nuanced Conversation; Chen et al.

Seungwhan Shane Moon (@shane_moon)

We are hiring research interns to work on various multimodal & NLP-related projects at Reality Labs. See the JDs below, or reach out to me directly via email!
- facebookcareers.com/v2/jobs/442756…
- facebookcareers.com/v2/jobs/183398…

Seungwhan Shane Moon (@shane_moon)

We are hiring PhD Research Interns to work on various multimodal & NLP-related projects at Reality Labs for 2023. See the JDs below; apply directly, or reach out to me via email!
- metacareers.com/jobs/665911908…
- metacareers.com/jobs/854010465…

Seungwhan Shane Moon (@shane_moon)

Meta Reality Labs is organizing the "Ambient AI Workshop", focusing on multimodal understanding with wearable sensors, combining NLP + Vision + Sensor Signals. For more details and the call for papers (now due Mar 26): sites.google.com/view/ambientai… We look forward to your participation!

Seungwhan Shane Moon (@shane_moon)

Excited to share our recent work, AnyMAL -- a unified Multimodal LLM built on LLaMA-2 that can reason over various inputs, e.g. images, audio, motion sensors.

Check out our paper for more information on the model training, evaluation, safety and more!
➡️ arxiv.org/abs/2309.16058
AK (@_akhaliq)

Meta introduces AnyMAL

- a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor), and generates textual responses

- best model achieves strong zero-shot performance in both automatic and human evaluation on diverse …
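The recipe behind this, per the paper's abstract, is to align frozen pre-trained modality encoders (for images, audio, IMU signals, etc.) to the LLaMA-2 text space with lightweight trainable projection layers, so each modality arrives as a short prefix of "soft tokens". Below is a minimal, hypothetical PyTorch sketch of that pattern; all class and parameter names are illustrative, not taken from the AnyMAL codebase.

```python
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    """Maps pooled features from a frozen encoder into the LLM's embedding space."""
    def __init__(self, enc_dim: int, llm_dim: int, num_tokens: int = 32):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(enc_dim, llm_dim * num_tokens)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, enc_dim) -> (batch, num_tokens, llm_dim) "soft tokens"
        return self.proj(feats).view(feats.size(0), self.num_tokens, -1)

class AnyModalLM(nn.Module):
    """Prefixes projected modality tokens to the text embeddings of a causal LM."""
    def __init__(self, llm: nn.Module, encoders: dict, enc_dims: dict, llm_dim: int):
        super().__init__()
        self.llm = llm                           # e.g. a (mostly) frozen LLaMA-2
        self.encoders = nn.ModuleDict(encoders)  # e.g. {"image": ..., "audio": ..., "imu": ...}
        self.projectors = nn.ModuleDict(
            {name: ModalityProjector(enc_dims[name], llm_dim) for name in encoders}
        )

    def forward(self, modality_inputs: dict, text_embeds: torch.Tensor):
        # Encode and project each modality, then prepend to the text embeddings.
        prefix = [self.projectors[m](self.encoders[m](x)) for m, x in modality_inputs.items()]
        inputs = torch.cat(prefix + [text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)    # HF-style causal LM call
```

The appeal of this design is that only the projection layers need gradients during alignment; the modality encoders and the LLM backbone can stay frozen.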
Seungwhan Shane Moon (@shane_moon)

We are hiring PhD AI Research Interns to work on various projects around Multimodal LLMs for Summer 2024 (Reality Labs). Please reach out to me directly via email with your resume!

Seungwhan Shane Moon (@shane_moon)

We're hiring exceptional AI Research Scientists to join our team at Meta Reality Labs, where you'll work on cutting-edge projects in Vision LLMs. Please reach out to me directly via email with your resume! (Check minimum qualifications) metacareers.com/jobs/388290281…

Seungwhan Shane Moon (@shane_moon)

We're organizing a visual Q&A benchmark challenge at KDD, focusing on the Multimodal RAG task. Join the CRAG-MM Challenge! More details here: aicrowd.com/challenges/met…
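For a sense of the task shape: a Multimodal RAG system takes an (image, question) pair, retrieves supporting evidence, and conditions a vision-capable generator on what it found. A minimal, hypothetical Python sketch follows; embed_query, vector_index, and generate_answer are placeholder hooks, not part of the challenge starter kit.

```python
from dataclasses import dataclass

@dataclass
class Retrieved:
    text: str
    score: float

def answer(image_bytes: bytes, question: str,
           embed_query, vector_index, generate_answer, k: int = 5) -> str:
    # 1) Embed the joint image+text query (e.g. with a CLIP-style encoder).
    query_vec = embed_query(image_bytes, question)
    # 2) Retrieve the k most relevant passages from a pre-built index.
    hits: list[Retrieved] = vector_index.search(query_vec, k=k)
    context = "\n".join(h.text for h in hits)
    # 3) Generate an answer grounded in both the image and the retrieved text.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate_answer(image_bytes, prompt)
```

CRAG-style benchmarks typically reward grounded answers and penalize hallucination, so abstaining ("I don't know") when retrieval comes back empty is often part of a sensible baseline.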