
Sang Truong
@sangttruong
phding @stanfordailab with @sanmikoyejo and @nickhaber, interning @msftresearch, prev: @virtueai_co @emergence_ai
ID: 1301378865487589376
http://ai.stanford.edu/~sttruong 03-09-2020 04:38:01
19 Tweets
198 Followers
362 Following

New paper on label-efficient supervised finetuning of LLMs. We address the expensive prompt annotation cost by humans/proprietary LLMs, saving as much as 50% on FLAN V2. Paper: arxiv.org/abs/2401.06692 Work led by: Jifan Zhang Yifang Chen Gantavya Bhatt Arnav Das 1/

How will we broaden LLM access beyond the English-speaking world? Stanford HAI Faculty Affiliate Sanmi Koyejo and CS PhD candidate Sang Truong have explored how to create and evaluate performance of LLMs that serve underrepresented languages. stanford.io/3UNZzLS

Author Sang Truong is on alphaXiv this week to answer questions on his latest paper “Crossing Linguistic Horizons”: alphaxiv.org/abs/2403.02715… This paper focuses on expanding LLMs beyond English-speaking communities by introducing the first large-scale open-source Vietnamese

"In A.I. the industry is often focused on the latest model and how it performs, “but in this particular topic [multilingual AI], it’s also reshaping the ecosystem as a whole,”" comments head of C4AI, Sara Hooker in this article by Sara Ruberg for The New York Times. nytimes.com/2024/07/26/tec…

Language models keep improving, but not for everyone. Sang Truong and Sanmi Koyejo are featured in The New York Times article nytimes.com/2024/07/26/tec…, discussing our collaboration with the Unlimited Research Group of AI on language models for Vietnamese and our thoughts on the global language gap and interventions


Sanmi Koyejo, Sang Truong, and Angelina Wang @ FAccT @angelinawang.bsky.social are excited to present at the National Council on Measurement in Education on Feb 26, 2025 (4–5 PM ET)! We will dive into #algorithmic #fairness with a fresh look at group difference awareness and 1/2

📢 New white paper: Scholars from Stanford HAI, The Asia Foundation, and University of Pretoria map the current landscape of technical approaches to developing LLMs that better perform for and represent low-resource languages. (1/4) ↘️ hai.stanford.edu/policy/mind-th…
