Sang Truong (@sangttruong) 's Twitter Profile
Sang Truong

@sangttruong

PhD student @stanfordailab with @sanmikoyejo and @nickhaber, interning @msftresearch, prev: @virtueai_co @emergence_ai

ID: 1301378865487589376

Link: http://ai.stanford.edu/~sttruong | Joined: 03-09-2020 04:38:01

19 Tweets

198 Followers

362 Following

Jia-Bin Huang (@jbhuang0604) 's Twitter Profile Photo

How to intern? Research internships are great opportunities to do cool research in industrial labs! But it's definitely not easy to complete solid work in 3 months. Some tips for having a successful internship ... 🧵

Jia-Bin Huang (@jbhuang0604) 's Twitter Profile Photo

How to work with your senior advisor(s)? Many students find it challenging to navigate grad school when working with senior professors as they are often extremely busy and hands-off in research. Check out below for some tips. 🧵

Tim Rocktäschel (@_rockt) 's Twitter Profile Photo

Finally, I can again include tikZ figures in my lectures, talks and papers without wasting the precious time given to me on this earth 😅

Rob Nowak (@rdnowak) 's Twitter Profile Photo

New paper on label-efficient supervised finetuning of LLMs. We address the expensive prompt annotation cost by humans/proprietary LLMs, saving as much as 50% on FLAN V2. Paper: arxiv.org/abs/2401.06692 Work led by: Jifan Zhang Yifang Chen Gantavya Bhatt Arnav Das 1/

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

How will we broaden LLM access beyond the English-speaking world? Stanford HAI Faculty Affiliate Sanmi Koyejo and CS PhD candidate Sang Truong have explored how to create and evaluate performance of LLMs that serve underrepresented languages. stanford.io/3UNZzLS

alphaXiv (@askalphaxiv) 's Twitter Profile Photo

Author Sang Truong is on alphaXiv this week to answer questions on his latest paper “Crossing Linguistic Horizons”: alphaxiv.org/abs/2403.02715… This paper focuses on expanding LLMs beyond English-speaking communities by introducing the first large-scale open-source Vietnamese

Cohere Labs (@cohere_labs) 's Twitter Profile Photo

"In A.I., the industry is often focused on the latest model and how it performs, but in this particular topic [multilingual AI], it's also reshaping the ecosystem as a whole," comments head of C4AI Sara Hooker in this article by Sara Ruberg for The New York Times. nytimes.com/2024/07/26/tec…

Stanford Trustworthy AI Research (STAIR) Lab (@stai_research) 's Twitter Profile Photo

Language models keep improving, but not for everyone. Sang Truong and Sanmi Koyejo are featured in The New York Times article nytimes.com/2024/07/26/tec…, discussing our collaboration with the Unlimited Research Group of AI on language models for Vietnamese and our thoughts on the global language gap and interventions

Stanford Trustworthy AI Research (STAIR) Lab (@stai_research) 's Twitter Profile Photo

Sanmi Koyejo, Sang Truong, and Angelina Wang (@angelinawang.bsky.social) are excited to present at the National Council on Measurement in Education on Feb 26, 2025 (4–5 PM ET)! We will dive into #algorithmic #fairness with a fresh look at group difference awareness and 1/2

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

📢 New white paper: Scholars from Stanford HAI, The Asia Foundation, and University of Pretoria map the current landscape of technical approaches to developing LLMs that better perform for and represent low-resource languages. (1/4) ↘️ hai.stanford.edu/policy/mind-th…

Tianyu Hua (@tianyu_hua) 's Twitter Profile Photo

🚨 New benchmark alert! 🚨 Can today’s LLMs implement tomorrow’s research ideas? We put them to the test. Introducing #ResearchCodeBench: 212 tasks from 2024–25 ML papers and code, most released after any model’s training cutoff. 🔗 researchcodebench.github.io 🧵

SynthLabs (@synth_labs) 's Twitter Profile Photo

Our new method (ALP) monitors solve rates across RL rollouts and applies inverse difficulty penalties during RL training. Result? Models learn an implicit difficulty estimator—allocating 5x more tokens to hard vs easy problems, cutting overall usage by 50% 🧵👇1/10
