msr_knowledgenlp (@ms_knowledgenlp) 's Twitter Profile
msr_knowledgenlp

@ms_knowledgenlp

Microsoft Cognitive Services Research, Knowledge and Language team, microsoft.com/en-us/research…

ID: 1587137295974797315

Joined: 31-10-2022 17:40:04

17 Tweets

109 Followers

6 Following

KnowledgeNLP Workshop @NAACL 2025 (@knowledgenlp) 's Twitter Profile Photo

Call for papers! The first workshop on Knowledge Augmented Methods for NLP (#NLProc) at #AAAI2023 is welcoming submissions🙌! Papers are due on Nov. 4! Papers will be non-archival, so already-published papers (e.g. #EMNLP2022) can also be presented at our workshop! Details👉 knowledge-nlp.github.io/aaai2023

Wenhao Yu (@wyu_nd) 's Twitter Profile Photo

๐—š๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐˜๐—ฒ ๐—ฅ๐—ฎ๐˜๐—ต๐—ฒ๐—ฟ ๐—ง๐—ต๐—ฎ๐—ป ๐—ฅ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฒ๐˜ƒ๐—ฒ is now ๐—ฎ๐—ฐ๐—ฐ๐—ฒ๐—ฝ๐˜๐—ฒ๐—ฑ to #๐—œ๐—–๐—Ÿ๐—ฅ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฏ ๐ŸŽ‰๐ŸŽ‰ Without using DPR/Google, it achieved SoTA on multiple open-domain QA and knowledge-intensive benchmarks! Work done @ms_knowledgenlp! Code and paper: arxiv.org/abs/2209.10063

๐—š๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐˜๐—ฒ ๐—ฅ๐—ฎ๐˜๐—ต๐—ฒ๐—ฟ ๐—ง๐—ต๐—ฎ๐—ป ๐—ฅ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฒ๐˜ƒ๐—ฒ is now ๐—ฎ๐—ฐ๐—ฐ๐—ฒ๐—ฝ๐˜๐—ฒ๐—ฑ to #๐—œ๐—–๐—Ÿ๐—ฅ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฏ ๐ŸŽ‰๐ŸŽ‰ Without using DPR/Google, it achieved SoTA on multiple open-domain QA and knowledge-intensive benchmarks! Work done 
@ms_knowledgenlp!
Code and paper: arxiv.org/abs/2209.10063
Wenhao Yu (@wyu_nd) 's Twitter Profile Photo

Combining Retrieval AND Generation (in step 1) can further improve model performance, as shown in Figure 3. The choice between retrieval and generation is interesting, and their complementarity is worth exploring: use the retriever or the generator only where each one helps.
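The step-1 combination described above can be sketched as a simple union of retrieved and LLM-generated passages fed to the reader. This is a minimal illustration only: `retrieve` and `generate_docs` are hypothetical stand-ins for a DPR-style retriever and an LLM prompted to write context documents, not the paper's actual API.

```python
# Hedged sketch: merge retrieved and generated context documents
# before the reader, deduplicating while preserving order.
def build_context(question, retrieve, generate_docs, k=2):
    """Union of retrieved and generated passages for a reader model.

    retrieve(question, k)      -> list of passages (e.g. BM25/DPR)
    generate_docs(question, k) -> list of passages written by an LLM
    Both callables are illustrative assumptions.
    """
    seen, merged = set(), []
    for doc in retrieve(question, k) + generate_docs(question, k):
        if doc not in seen:
            seen.add(doc)
            merged.append(doc)
    return merged

# Toy stand-ins so the sketch is runnable without any model.
docs = build_context(
    "who wrote hamlet",
    lambda q, k: ["Hamlet is a tragedy by William Shakespeare."][:k],
    lambda q, k: ["William Shakespeare wrote Hamlet around 1600.",
                  "Hamlet is a tragedy by William Shakespeare."][:k],
)
```

Here the overlapping passage appears only once, so the reader sees two distinct documents; a real system would score and truncate the merged list rather than take a plain union.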

msr_knowledgenlp (@ms_knowledgenlp) 's Twitter Profile Photo

Thank you for the interest in our team! Sorry we are not able to reply to everyone. We will keep hiring interns quarter by quarter. Feel free to message us your personal page or CV via Twitter whenever you are available. Our researchers will talk with you if there is a match : )

KnowledgeNLP Workshop @NAACL 2025 (@knowledgenlp) 's Twitter Profile Photo

The second workshop on Knowledge Augmented Methods for NLP at #KDD2023 is welcoming submissions📷! Papers are due by May 23rd! Accepted papers will be non-archival! Details are available 📷 knowledge-nlp.github.io/kdd2023

Meng Jiang (@meng_cs) 's Twitter Profile Photo

THE BATCH selected four pieces of news in today's Andrew Ng newsletter, from Hinton leaving Google to Wenhao Yu's GenRead paper at ICLR. The work is open-sourced. Wenhao is on the job market! Grab him ASAP! deeplearning.ai/the-batch/issu… github.com/wyu97/GenRead

Canwen Xu (@xucanwen) 's Twitter Profile Photo

Did you know small models can be LLM plugins? 🥳Introducing SuperICL, it's ICL but super dope! 🚀SuperICL combines small models with LLMs like GPT-3.5 and can improve accuracy, multilinguality, and interpretability! 📄Paper: arxiv.org/abs/2305.08848 🔧Code: aka.ms/SuperICL

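The plugin idea can be sketched roughly as follows: a small fine-tuned model annotates each in-context example, and the test input, with its own prediction and confidence, and the augmented prompt is sent to the LLM, which can agree with or override the plugin. The prompt format and the `plugin_predict` interface below are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of the SuperICL prompt construction.
def build_supericl_prompt(examples, plugin_predict, test_input):
    """Augment in-context examples with a small model's predictions.

    plugin_predict(text) -> (label, confidence) is assumed to wrap a
    fine-tuned small model (e.g. a RoBERTa classifier) -- hypothetical.
    """
    lines = []
    for text, gold in examples:
        pred, conf = plugin_predict(text)
        lines.append(
            f"Input: {text}\n"
            f"Small model prediction: {pred} (confidence {conf:.2f})\n"
            f"Label: {gold}"
        )
    pred, conf = plugin_predict(test_input)
    lines.append(
        f"Input: {test_input}\n"
        f"Small model prediction: {pred} (confidence {conf:.2f})\n"
        f"Label:"  # the LLM completes this line
    )
    return "\n\n".join(lines)

# Toy plugin standing in for a fine-tuned classifier.
def toy_plugin(text):
    return ("positive" if "good" in text else "negative"), 0.9

prompt = build_supericl_prompt(
    [("a good movie", "positive"), ("a dull plot", "negative")],
    toy_plugin,
    "really good acting",
)
```

Exposing the plugin's confidence is what lets the LLM learn, from the demonstrations, when to trust the small model and when to deviate, which is also where the interpretability gain comes from.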
AK (@_akhaliq) 's Twitter Profile Photo

Small Models are Valuable Plug-ins for Large Language Models abs: arxiv.org/abs/2305.08848 paper page: huggingface.co/papers/2305.08…

AK (@_akhaliq) 's Twitter Profile Photo

Any-to-Any Generation via Composable Diffusion presents Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. paper page: