Manaal Faruqui (@manaalfar)'s Twitter Profile
Manaal Faruqui

@manaalfar

Senior Staff Research Scientist @Google Bard. Love eating, movies, travel and politics. Spread love, not war.

ID: 123595497

Joined: 16-03-2010 15:57:44

3.3K Tweets

3.3K Followers

630 Following

Yu Su @#ICLR2025 (@ysu_nlp)

Hi @emilymbender.bsky.social, I'm one of the lead authors of MMMU. I can certify that 1) Google didn't fund this work, and 2) Google didn't have early access. They really liked the benchmark after our release and worked very hard to get the results. It doesn't take that long to eval on a

Yu Su @#ICLR2025 (@ysu_nlp)

@emilymbender.bsky.social (this will be the last response, just for the record; this type of engagement is not why I use this app) 1. The dataset was released along with the paper. Again, eval on a dataset of this scale really doesn't take long, especially for Google. 2. This was a one-off project

Jack Krawczyk (@jackk)

Bard with Gemini Pro is now available in over 40 languages and 230+ countries and territories, bringing its top-2 most-preferred status across the world (👀 LMSYS Org). AND you can now bring your imagination to life with image generation. It's optimized for speed and is

Aran Komatsuzaki (@arankomatsuzaki)

Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. A 1B model fine-tuned on passkey instances of up to 5K sequence length solves the 1M-length problem. arxiv.org/abs/2404.07143

AK (@_akhaliq)

Google announces Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key

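The tweets above compress the method into a sentence, so here is a minimal single-head NumPy sketch of the segment recurrence as the paper describes it: a fixed-size linear-attention memory is read, then updated with the current segment, and its output is mixed with ordinary local causal attention through a learned gate. The ELU+1 feature map and the sigmoid gate follow my reading of the arXiv paper; the variable names are mine, and the paper's "delta rule" update variant is omitted.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, which keeps features positive for the linear-attention memory
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def infini_attention_segment(Q, K, V, M, z, beta):
    """One Infini-attention step for a single head on one segment.

    Q, K:  (N, d_k); V: (N, d_v)
    M:     (d_k, d_v) compressive memory carried across segments
    z:     (d_k,)     normalization term carried across segments
    beta:  scalar learned gate trading off memory vs. local attention
    """
    d_k = Q.shape[-1]

    # 1) Read from the compressive memory (linear-attention retrieval).
    sq = elu_plus_one(Q)                                  # (N, d_k)
    A_mem = (sq @ M) / (sq @ z + 1e-6)[:, None]           # (N, d_v)

    # 2) Write this segment's keys/values into the memory; its size never grows.
    sk = elu_plus_one(K)
    M_new = M + sk.T @ V                                  # (d_k, d_v)
    z_new = z + sk.sum(axis=0)                            # (d_k,)

    # 3) Ordinary causal dot-product attention within the segment.
    scores = Q @ K.T / np.sqrt(d_k)                       # (N, N)
    scores = np.where(np.tril(np.ones_like(scores)) > 0, scores, -np.inf)
    P = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A_local = (P / P.sum(axis=-1, keepdims=True)) @ V     # (N, d_v)

    # 4) Mix long-range (memory) and local context with a learned sigmoid gate.
    g = 1.0 / (1.0 + np.exp(-beta))
    return g * A_mem + (1.0 - g) * A_local, M_new, z_new
```

Iterating this over consecutive segments gives unbounded context while the carried state stays a constant (d_k, d_v) matrix plus a (d_k,) vector per head, which is the bounded-memory claim in the abstract.
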
Vinod Khosla (@vkhosla)

Hard for me to support someone with no values, who lies, cheats, rapes, demeans women, and hates immigrants like me. He may cut my taxes or reduce some regulation, but that is no reason to accept depravity in his personal values. Do you want a President who will set back climate by a

Kamala Harris (@kamalaharris)

On behalf of the American people, I thank Joe Biden for his extraordinary leadership as President of the United States and for his decades of service to our country. I am honored to have the President’s endorsement and my intention is to earn and win this nomination.

Manaal Faruqui (@manaalfar)

It's delightful that we are #1 here, but... we can be #3 in the next few weeks, and then become #1 again sometime. What's really exciting about working on Gemini is that we will bring the best user experience to our products through these models; these metrics are by-products. 😉

Acyn (@acyn)

Kaling: The real reason I'm here is that deep down, I truly believe that as a woman of color and a single mother of three, it is incredibly important that I be appointed ambassador to Italy.

Prateek Yadav (@prateeky2806)

Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but weren't sure if it generalizes well? cc: Google AI Google DeepMind UNC NLP 🧵👇 Excited to announce my

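For readers unfamiliar with the technique the thread is probing: model merging combines several checkpoints fine-tuned from the same base into a single set of weights without retraining. Below is a minimal PyTorch sketch of one common scheme, task arithmetic (average the fine-tuned deltas and add them back to the base); it illustrates the general idea only, and the paper announced here may study different or additional merging methods.

```python
import torch

def task_arithmetic_merge(base, experts, alpha=0.5):
    """Merge fine-tuned checkpoints via task arithmetic -- one common
    merging scheme among several (simple weight averaging and TIES-style
    sign-resolved merging are other well-known variants).

    base:    state dict of the shared pretrained model
    experts: state dicts fine-tuned from `base` on different tasks
    alpha:   scale applied to the averaged task vectors
    """
    merged = {}
    for name, w in base.items():
        # A "task vector" is what fine-tuning added to the base weights.
        deltas = torch.stack([e[name] - w for e in experts])
        merged[name] = w + alpha * deltas.mean(dim=0)
    return merged

# Toy usage: three "experts" fine-tuned from the same 2-parameter base.
base = {"w": torch.zeros(2)}
experts = [{"w": torch.tensor([1.0, 0.0])},
           {"w": torch.tensor([0.0, 1.0])},
           {"w": torch.tensor([1.0, 1.0])}]
print(task_arithmetic_merge(base, experts))  # {'w': tensor([0.3333, 0.3333])}
```
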
Satyapriya Krishna (@satyascribbles)

🚀 Excited to share the research I worked on during my summer internship at Google AI! We developed FRAMES (Factuality, Retrieval, And reasoning MEasurement Set), a challenging high-quality benchmark for evaluating retrieval-augmented large language models. FRAMES tests LLMs on

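The tweet is cut off before it lists what FRAMES tests, but the named axes (factuality, retrieval, reasoning) describe a retrieval-augmented QA evaluation. Here is a deliberately toy sketch of such a harness; every name in it is illustrative, and none of it is the benchmark's actual API or official scoring.

```python
def evaluate_rag(answer_fn, retrieve_fn, examples):
    """Exact-match accuracy of a retrieval-augmented QA system on
    (question, gold answer) pairs. Illustrative only: the real
    benchmark's data format and metric may differ."""
    correct = 0
    for ex in examples:
        docs = retrieve_fn(ex["question"])        # retrieval step
        pred = answer_fn(ex["question"], docs)    # reasoning over the docs
        correct += pred.strip().lower() == ex["answer"].strip().lower()
    return correct / len(examples)

# Toy usage with stubs standing in for a retriever and an LLM.
examples = [{"question": "Who wrote Hamlet?", "answer": "William Shakespeare"}]
retrieve = lambda q: ["Hamlet is a tragedy written by William Shakespeare."]
answer = lambda q, docs: "William Shakespeare"
print(evaluate_rag(answer, retrieve, examples))  # 1.0
```
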
Manaal Faruqui (@manaalfar)

While taking a domestic flight in India, I had a genuine question of whether I can use my OCI (overseas citizen of India) card as a valid proof of ID, and so I googled it and got the following conflicting results. Factuality remains a core problem to fix.
