Edgar Dobriban (@edgardobriban)'s Twitter Profile
Edgar Dobriban

@edgardobriban

Associate prof @Wharton @Penn. #Stats #ML #AI. PhD @Stanford. BA @Princeton. Recruiting students & postdocs.

ID: 2790700645

Link: http://statistics.wharton.upenn.edu/profile/dobriban/ · Joined: 04-09-2014 21:40:41

495 Tweets

1.1K Followers

128 Following

Kaixuan Huang (@kaixuanhuang1)

When I tested the performance of o3-mini on MATH-Perturb, I found that it performed significantly worse than o1-mini.

After inspecting the raw outputs, I discovered that o3-mini used a lot of Unicode characters, and my previous parser failed to process them. So I hand-crafted a
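
Not the thread's actual fix, but a minimal sketch of the kind of Unicode normalization such an answer parser might apply before string matching; the specific character map is illustrative, not taken from MATH-Perturb:

```python
import unicodedata

# Illustrative map of math-style Unicode characters that an ASCII-only
# parser would otherwise miss (an assumption, not the MATH-Perturb code).
MATH_CHAR_MAP = {
    "\u2212": "-",    # minus sign
    "\u00d7": "*",    # multiplication sign
    "\u22c5": "*",    # dot operator
    "\u2044": "/",    # fraction slash
    "\u03c0": "pi",   # Greek small letter pi
}

def normalize_model_output(text: str) -> str:
    """Normalize Unicode before handing the answer string to a parser."""
    # NFKC folds compatibility characters (superscripts, full-width digits,
    # vulgar fractions) into their plain equivalents.
    text = unicodedata.normalize("NFKC", text)
    for src, dst in MATH_CHAR_MAP.items():
        text = text.replace(src, dst)
    return text

print(normalize_model_output("x\u00b2 \u2212 \u00bd\u03c0"))  # -> x2 - 1/2pi
```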
prof-g (@robertghrist)

workflow of the past 24 hours...
* start a convo w/GPT-o3 about math research idea [X]
* it gives 7 good potential ideas; pick one & ask to develop
* feed -o3 output to gemini-2.5-pro; it finds errors & writes feedback
* paste feedback into -o3 and say assess & respond
* paste

Aryeh Kontorovich (@aryehazan)

1/ I'd like to share a bit of a math saga and also open the floor to my fellow mathematicians

we've been hearing about how AI is starting to solve research-level math problems, and I was skeptical until I saw it first-hand

cutting to the chase, I posed this problem to
Jessica Hullman (@jessicahullman)

📢If you're interested in conformal prediction, algorithms w/predictions, robust stats & connections between them from a theory perspective, join us for a workshop at #COLT2025 in Lyon 🇫🇷 June 30! Submit a poster description by May 25, more here: vaidehi8913.github.io/predictions-an…
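
For readers new to the first topic, a minimal sketch of split conformal prediction (an illustration only, not connected to the workshop materials; the toy data and polynomial point predictor are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (placeholder for a real dataset).
x = rng.uniform(-3, 3, size=2000)
y = np.sin(x) + 0.3 * rng.standard_normal(2000)

# Split the data: fit a point predictor on one half, calibrate on the other.
x_fit, y_fit = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

coefs = np.polyfit(x_fit, y_fit, deg=5)

def predict(t):
    return np.polyval(coefs, t)

# Conformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Split-conformal quantile for 90% marginal coverage.
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# The interval [f(x) - q_hat, f(x) + q_hat] covers the true y of an
# exchangeable test point with probability at least 1 - alpha.
x_new = 1.5
print(predict(x_new) - q_hat, predict(x_new) + q_hat)
```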

Nicușor Dan (@nicusordanro)

It was an unprecedented mobilization, and that is why the victory belongs to each and every one of you. To every Romanian who came out to vote, made their voice heard, and in doing so fought for what they believe in, for the country they want and in which they wish to live. Starting tomorrow, we begin

Tymofiy Mylovanov (@mylovanov)

Romania voted. Pro-European forces won. Nicușor Dan — 54.3%. Simion — 45.7%. Anti-EU, anti-Ukraine, pro-Russia. Putin’s worldview lost. It will keep losing. Trust this. Work for it.

Shashwat Goel (@shashwatgoel7)

Confused about recent LLM RL results where models improve without any ground-truth signal? We were too. Until we looked at the reported numbers of the Pre-RL models and realized they were severely underreported across papers. We compiled discrepancies in a blog below🧵👇

Aaron Roth (@aaroth)

The United States has had a tremendous advantage in science and technology because it has been the consensus gathering point: the best students worldwide want to study and work in the US because that is where the best students are studying and working. 1/

Shubhendu Trivedi (@_onionesque)

Also wanted to share another paper of ours (w Brian Nord) that articulates a POV drawn from, and inspired by, the frequent misalignment between the intent of UQ and how it is actually implemented in ML for science. arxiv.org/abs/2506.03037

Rohan Paul (@rohanpaul_ai)

It’s a hefty 206-page research paper, and the findings are concerning.

"LLM users consistently underperformed at neural, linguistic, and behavioral levels"

This study finds LLM dependence weakens the writer’s own neural and linguistic fingerprints. 🤔🤔

Relying only on EEG,
Peyman Milanfar (@docmilanfar)

Iranians, check on your Israeli friends and colleagues. Israelis, check on your Iranian friends and colleagues. Because what will bring lasting peace starts and ends with the personal connections, the love and affection many of us already have for each other.

David Holzmüller (@dholzmueller)

For good probability predictions, you should use post-hoc calibration. With Eugène Berta, Michael Jordan, and Francis Bach we argue that early stopping and tuning should account for this! Using the loss after post-hoc calibration often avoids premature stopping. 🧵1/

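A minimal sketch of that idea (an illustration, not the paper's code), assuming temperature scaling as the post-hoc calibration step: the early-stopping criterion monitors the validation loss after fitting a temperature, rather than the raw validation loss:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def nll(logits, labels, temperature=1.0):
    """Mean negative log-likelihood of labels under softmax(logits / T)."""
    z = logits / temperature
    log_probs = z - logsumexp(z, axis=1, keepdims=True)
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def calibrated_val_loss(val_logits, val_labels):
    """Validation NLL after post-hoc calibration with a single temperature."""
    res = minimize_scalar(lambda t: nll(val_logits, val_labels, t),
                          bounds=(0.05, 20.0), method="bounded")
    return nll(val_logits, val_labels, res.x)

def early_stop(get_val_logits, val_labels, max_epochs=100, patience=5):
    """Stop when the *calibrated* validation loss stops improving.

    `get_val_logits(epoch)` is a hypothetical callback that trains one more
    epoch and returns the validation-set logits.
    """
    best, best_epoch, since_best = np.inf, 0, 0
    for epoch in range(max_epochs):
        loss = calibrated_val_loss(get_val_logits(epoch), val_labels)
        if loss < best:
            best, best_epoch, since_best = loss, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_epoch
```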
Eugène Berta (@eugene_berta)

I’ll be presenting our paper at COLT in Lyon this Monday at the Predictions and Uncertainty workshop — come say hi if you're around! 👋 Check out David Holzmüller's thread below 👇 #COLT2025