Joon Sung Park (@joon_s_pk)'s Twitter Profile
Joon Sung Park

@joon_s_pk

CS Ph.D. student @StanfordHCI + @StanfordNLP. Previously @MSFTResearch, @IllinoisCS & @Swarthmore. Oil painter. HCI, NLP, generative agents, human-centered AI

ID: 175004978

Link: https://www.joonsungpark.com/ · Joined: 05-08-2010 11:14:46

510 Tweets

7.7K Followers

1.1K Following

Percy Liang (@percyliang)'s Twitter Profile Photo

How closely can LM agents simulate people? We interview person P for two hours and prompt an LM with the transcript, yielding an agent P'. We find that P and P' behave similarly on a number of surveys and experiments. Very excited about the applications; this also forces us to think…

Michael Bernstein (@msbernst)'s Twitter Profile Photo

In a follow-up to the generative agents paper, Joon Sung Park+team demonstrate that anchoring agents in rich qualitative information about an individual enables simulations to replicate an individual's attitudes 85% as well as the individual replicates themselves, + reduces bias

carolyn zou (@cqzou)'s Twitter Profile Photo

So thrilled to share our work on this~! Grounding generative agents in real, verifiable behavior is the step that takes this method from producing simulacra to creating simulations that capture the richness of human experiences.

Meredith Ringel Morris (@merrierm)'s Twitter Profile Photo

Excited to share this new work introducing a method for creating realistic generative agents that can be used for synthetic social science. Such a pleasure to work with lead author Joon Sung Park

Ethan Mollick (@emollick)'s Twitter Profile Photo

Crazy interesting paper in many ways: 1) Voice-enabled GPT-4o conducted 2-hour interviews of 1,052 people 2) GPT-4o agents were given the transcripts & prompted to simulate the people 3) The agents were given surveys & tasks. They achieved 85% accuracy in simulating interviewees

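The pipeline described above can be sketched as follows: condition a language model on a participant's interview transcript, then ask it to answer each survey item as that person. This is a minimal illustrative sketch, not the paper's actual implementation; the function names and prompt wording are hypothetical, and `llm` stands in for any text-to-text model call.

```python
# Illustrative sketch of interview-conditioned simulation: ground the
# model in one person's transcript, then answer survey items in their voice.
# Prompt wording and function names are hypothetical assumptions.

def build_agent_prompt(transcript: str, survey_question: str) -> str:
    """Compose a prompt that grounds the model in one person's interview."""
    return (
        "Below is a transcript of a two-hour interview with a participant.\n"
        "Answer the survey question the way this participant would.\n\n"
        f"--- INTERVIEW TRANSCRIPT ---\n{transcript}\n\n"
        f"--- SURVEY QUESTION ---\n{survey_question}\n"
        "Answer:"
    )

def simulate_person(llm, transcript: str, questions: list[str]) -> list[str]:
    """Query the model once per survey item; `llm` is any prompt -> answer callable."""
    return [llm(build_agent_prompt(transcript, q)) for q in questions]
```

In practice `llm` would wrap a chat-completion API call; the same transcript is reused across every survey and experimental task given to the agent.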
Tiziano Piccardi (@tizianopiccardi)'s Twitter Profile Photo

New paper: Do social media algorithms shape affective polarization? We ran a field experiment on X/Twitter (N=1,256) using LLMs to rerank content in real-time, adjusting exposure to polarizing posts. Result: Algorithmic ranking impacts feelings toward the political outgroup!🧵⬇️

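The reranking idea in the experiment above can be sketched as: score each post for polarizing content, then stably reorder the feed to adjust exposure. The keyword heuristic below is a crude stand-in for the study's LLM classifier; all names and the scoring rule are illustrative assumptions.

```python
# Sketch of feed reranking to reduce exposure to polarizing posts.
# The keyword score is a stand-in for an LLM-based classifier.

POLARIZING_TERMS = {"enemy", "traitor", "destroy", "hate"}

def polarization_score(post: str) -> int:
    """Crude proxy score: count polarizing terms in the post."""
    return sum(w.strip(".,!?") in POLARIZING_TERMS for w in post.lower().split())

def rerank_feed(posts: list[str]) -> list[str]:
    """Stable sort: least polarizing posts first; ties keep original order,
    as a real-time reranker sitting between the platform and the user might."""
    return sorted(posts, key=polarization_score)
```

Downranking rather than deleting preserves the full feed while shifting what users see first, which is what lets the experiment measure the causal effect of ranking on outgroup feelings.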
Michael Bernstein (@msbernst)'s Twitter Profile Photo

Book announcement: with Melissa Valentine, we are publishing "Flash Teams: Leading the Future of AI-Enhanced, On-Demand Work." It's a leadership book synthesizing a decade of Stanford University research on how computing, online platforms, and AI reshape teamwork. Coming October from The MIT Press @mitpress.bsky.social!

Dora Zhao (@dorazhao9)'s Twitter Profile Photo

To-do lists, docs, email style – if you've got individual or team knowledge you want ChatGPT/Claude to have access to, Knoll (knollapp.com) is a personal RAG store from Stanford University that you can add any knowledge to. Instead of copy-pasting into your prompt every time,
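The personal-RAG pattern described above can be sketched as: keep a store of your own documents, retrieve the most relevant ones for each query, and prepend them to the prompt instead of copy-pasting by hand. The word-overlap retriever below is an illustrative stand-in for a real embedding index; none of this is Knoll's actual code.

```python
# Minimal sketch of a personal RAG store: add documents once, retrieve
# the most relevant ones per query, and build the prompt automatically.
# The overlap scorer stands in for an embedding-based retriever.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,:;!?") for w in text.lower().split()}

class KnowledgeStore:
    def __init__(self):
        self.docs: list[str] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Rank stored docs by word overlap with the query; return top k."""
        q = tokenize(query)
        ranked = sorted(self.docs, key=lambda d: len(q & tokenize(d)), reverse=True)
        return ranked[:k]

def build_prompt(store: KnowledgeStore, query: str) -> str:
    """Prepend retrieved knowledge to the user's question."""
    context = "\n".join(store.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the pattern is that knowledge is added once to the store and injected per query, so the user never has to re-paste context into the prompt.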

Stanford HAI (@stanfordhai)'s Twitter Profile Photo

📢 New policy brief: AI agents that can simulate human behaviors and attitudes can help test ideas in social science. Our latest brief introduces a generative AI agent architecture that simulates the attitudes of 1,000+ real people. Learn more: hai.stanford.edu/policy/simulat…

Omar Shaikh (@oshaikh13)'s Twitter Profile Photo

What if LLMs could learn your habits and preferences well enough (across any context!) to anticipate your needs? In a new paper, we present the General User Model (GUM): a model of you built from just your everyday computer use. 🧵