Aritra R G (@arig23498) 's Twitter Profile
Aritra R G

@arig23498

MLE @ 🤗 Hugging Face |
@GoogleDevExpert in ML |
ex-@PyImageSearch & @weights_biases |
ex-@huggingface fellow |
ex-contractor MLE @ Keras |

ID: 767383106408091648

Link: https://arig23498.github.io/ | Joined: 21-08-2016 15:29:24

6.6K Tweets

3.3K Followers

730 Following

merve (@mervenoyann) 's Twitter Profile Photo

Fine-tune Gemma3n on videos with their audio on a Colab A100 🔥

Just dropped a notebook that walks through fine-tuning Gemma3n on images, audio, and text at the same time!
GeekyRakshit (e/mad) (@soumikrakshit96) 's Twitter Profile Photo

Supervision 0.26.0 is out 🔥🔥🔥

Our biggest update in months, packed with new features, support for VLM object detection, smarter and optimized annotators, and a fresh docs revamp! 🚀

pip install -U supervision

🔗 release notes → github.com/roboflow/super…
🔗 docs →
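A minimal sketch of the annotator API the release highlights, assuming an ultralytics YOLO checkpoint and a local image; the VLM detection path mentioned in the release notes is not shown here.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # any detector; placeholder weights
image = cv2.imread("image.jpg")   # placeholder input image

# Convert the detector output into supervision's Detections container.
detections = sv.Detections.from_ultralytics(model(image)[0])

# Draw boxes and labels with the annotators.
annotated = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
annotated = sv.LabelAnnotator().annotate(scene=annotated, detections=detections)
cv2.imwrite("annotated.jpg", annotated)
```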

Lysandre (@lysandrejik) 's Twitter Profile Photo

BOOOM! Both VLMs and LLMs now have a baked-in HTTP server with an OpenAI-compatible API in transformers.

Launch it with `transformers serve` and connect your favorite apps. Here I'm running Open WebUI with local transformers.

LLM, VLM, and tool calling are in; STT & TTS coming soon!
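Since the server speaks the OpenAI spec, any OpenAI client can talk to it. A minimal sketch, assuming the server runs locally on port 8000 (check `transformers serve --help` for the actual defaults) and that the model id below is one the server can load:

```python
# Start the server in another terminal first (assumed default port 8000):
#   transformers serve
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # local server; key is not checked

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumption: any chat model the backend can load
    messages=[{"role": "user", "content": "Say hello from transformers serve."}],
)
print(response.choices[0].message.content)
```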

Aritra R G (@arig23498) 's Twitter Profile Photo

> Follow a researcher on hf.co
> Go to the papers tab on your feed
> Find the most recent paper authored by them
> Read the paper
Aritra R G (@arig23498) 's Twitter Profile Photo

We need to spell out either International Math Olympiad or In My Opinion in full. The tweets are getting out of hand now.

Qwen (@alibaba_qwen) 's Twitter Profile Photo

Bye Qwen3-235B-A22B, hello Qwen3-235B-A22B-2507!

After talking with the community and thinking it through, we decided to stop using hybrid thinking mode. Instead, we’ll train Instruct and Thinking models separately so we can get the best quality possible. Today, we’re releasing
Aritra R G (@arig23498) 's Twitter Profile Photo

With Vision Language support enabled, you can now use a wide variety of VLMs with the transformers backend for vLLM

Read about the backends here: hf.co/docs/transform…
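A minimal sketch of selecting the transformers backend from vLLM, assuming the `model_impl="transformers"` switch and the checkpoint id below; the linked docs are the source of truth for which VLMs are supported.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # assumption: a VLM checkpoint the backend supports
    model_impl="transformers",            # route the model through the transformers backend
)

outputs = llm.generate(
    ["Describe in one sentence what a vision-language model does."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```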
Gradio (@gradio) 's Twitter Profile Photo

🎉 Big news! Google Colab now comes with Gradio pre-installed (v5.38)!

No more pip install gradio needed - just import and start building AI apps instantly.

Thanks to the Colaboratory team and Chris Perry for making Gradio more accessible to millions of developers worldwide! 🙏
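The "just import and start building" claim, as a minimal sketch you could paste into a Colab cell with no pip install, assuming Gradio ≥ 5.38 is pre-installed as announced:

```python
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

# Launches an interactive demo; in Colab the UI renders inline.
gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```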
Aritra R G (@arig23498) 's Twitter Profile Photo

> Scrape discuss.pytorch.org
> For any question
> Look for ptrblck's comment (mark it as the correct, verified answer)

Align any model on this data and you get a humble model that guides you in PyTorch.

Anyone doing it? I would be down to help.
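A rough sketch of the scraping step only, not a full pipeline: discuss.pytorch.org runs Discourse, which serves JSON when `.json` is appended to a topic URL. The topic id, the stability of the field names, and the "ptrblck comment = verified answer" heuristic are all assumptions here.

```python
import requests

BASE = "https://discuss.pytorch.org"

def ptrblck_pairs(topic_id: int) -> list[dict]:
    """Return (question, ptrblck answer) pairs from a single forum topic."""
    topic = requests.get(f"{BASE}/t/{topic_id}.json", timeout=30).json()
    posts = topic["post_stream"]["posts"]
    question = posts[0]["cooked"]  # the first post is the question (HTML)
    return [
        {"question": question, "answer": post["cooked"]}
        for post in posts[1:]
        if post.get("username") == "ptrblck"
    ]

# Hypothetical usage (topic id is a placeholder):
# pairs = ptrblck_pairs(12345)
```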

JetBrains (@jetbrains) 's Twitter Profile Photo

Not every developer task requires a general-purpose LLM.

We’re betting on specialized focal LLMs – smaller, faster, and focused.

Join JetBrains and Hugging Face for a livestream on how focal models like Mellum will shape the industry.

📅 July 29, 6 pm CET

👉 Save your spot:
Aritra R G (@arig23498) 's Twitter Profile Photo

Research is tiring and exhausting; it is not the shiny job that I used to think was easy. I have newfound respect for all the researchers in my TL. Thank you for doing what you do.

Suvaditya Mukherjee (@halcyonrayes) 's Twitter Profile Photo

After using FFmpeg for a while now, I thought I'd write a quick blog post detailing everything you can do with it. Consider this a quick read that will try to convince you why you should be learning how to use FFmpeg as an ML Engineer and why it's the most important toolbox for
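Two of the bread-and-butter tasks a post like this usually covers, wrapped in subprocess to stay consistent with the other Python snippets here; file names are placeholders and ffmpeg is assumed to be on PATH.

```python
import subprocess

# Extract one frame per second from a video into numbered PNGs.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vf", "fps=1", "frames/out_%04d.png"],
    check=True,
)

# Pull the audio track out as 16 kHz mono WAV (a common ASR preprocessing step).
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vn", "-ac", "1", "-ar", "16000", "audio.wav"],
    check=True,
)
```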

Aritra R G (@arig23498) 's Twitter Profile Photo

I swear there was a time when I could look at a paper, read the author list, and smile because I knew all of them from another paper, of course. It was a good get-together with the authors. Either I am not reading enough papers, or the research field has huge entropy.