Konstantin Schürholt (@k_schuerholt)'s Twitter Profile
Konstantin Schürholt

@k_schuerholt

AI Researcher at @ndea. Previously postdoc on weight space learning @ University of St.Gallen, Switzerland.

ID: 1151388211752636418

Link: http://kschuerholt.github.io · Joined: 17-07-2019 07:08:36

751 Tweets

189 Followers

328 Following

Haggai Maron (@haggaimaron)'s Twitter Profile Photo

🔬 Excited to announce our ICLR 2025 Workshop on *Weight Space Learning*! Applying deep learning to model weights could revolutionize how we develop, analyze & manipulate NNs.
Submit by Feb 10
In-person @ Singapore + remote options 
ICLR 2025 #ICLR2025
weight-space-learning.github.io
Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz)'s Twitter Profile Photo

⏳Final hours to submit to the 1st workshop on Weight Space Learning at #ICLR2025 @ICLR_conf❗

🚀Explore learning directly from model weights!  

🌐 weight-space-learning.github.io 
⏰ Submit by Feb 10 2025 AoE
François Chollet (@fchollet)'s Twitter Profile Photo

A crucial point that everyone should be internalizing: in the age of test-time search, it's pretty much always possible to reach any level of capability by simply expending more compute. So it's not just about "can you do it?"; the key is how efficiently you can do it.

ARC Prize (@arcprize)'s Twitter Profile Photo

Today we are announcing ARC-AGI-2, an unsaturated frontier AGI benchmark that challenges AI reasoning systems (same relative ease for humans).

Grand Prize: 85%, ~$0.42/task efficiency

Current Performance:
* Base LLMs: 0%
* Reasoning Systems: <4%
Mike Knoop (@mikeknoop)'s Twitter Profile Photo

The $1,000,000 ARC Prize 2025 competition is back! And introducing ARC-AGI-2, the only unbeaten benchmark (we're aware of) that remains easy for humans but now even harder for AI. New ideas are still needed to reach AGI. We've got lots of great updates for 2025 --

Mike Knoop (@mikeknoop)'s Twitter Profile Photo

The chart below is IMO the most important thing we published today (h/t Bryan Landers). It shows that "scaling up" existing ideas, even the latest AI reasoning systems with log-linear accuracy/compute characteristics, is insufficient for AGI. We still need some architectural or

Ndea (@ndea)'s Twitter Profile Photo

ARC-AGI-2 is here — the new leading AI program synthesis benchmark.

It challenges models to reason, abstract, and generalize — not just memorize.

Design led by François Chollet with key contributions from the Ndea team.
ARC Prize (@arcprize)'s Twitter Profile Photo

Are You Smarter Than A.I.?

An interactive article by The New York Times covers ARC Prize and François Chollet

"Some experts predict that A.I. will surpass human intelligence within the next few years.

Play this puzzle to see how far the machines have to go."
Matt Turck (@mattturck)'s Twitter Profile Photo

Beyond Brute Force: my conversation with star AI researcher François Chollet and Zapier’s Mike Knoop

* ARC AGI 2 and why it breaks base LLMs
* test-time adaptation
* the search for true machine intelligence

All links in reply to this post below

Ndea (@ndea)'s Twitter Profile Photo

Ndea co-founder François Chollet featured in The Atlantic. "The Rise of Fluid Intelligence — François Chollet is on a quest to make AI a bit more human." A story about the increasing relevance of ARC-AGI by Matteo Wong.

Cohere Labs (@cohere_labs)'s Twitter Profile Photo

Join our ML Theory group next week on April 17th as they welcome Konstantin Schürholt for a session on "Weight Space Learning: Treating Neural Network Weights as Data"

Thanks to Anier Velasco Sotomayor, Thang Chu and Andrej Jovanović for organizing this event 🥳
Cohere Labs (@cohere_labs)'s Twitter Profile Photo

Join us tomorrow, April 17th as we host Konstantin Schürholt for a session on "Weight Space Learning: Treating Neural Network Weights as Data" Learn more: cohere.com/events/cohere-…
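The session's premise can be illustrated with a toy probe. In the sketch below (all tasks, shapes, and numbers are made up for illustration), each simulated model's flattened weights are treated as a single data point, and a nearest-centroid classifier in weight space recovers which task a model was trained on:

```python
# Toy illustration of weight space learning: neural network weights as data.
import random

random.seed(0)
DIM = 16  # length of each flattened weight vector

def make_weights(center, spread=0.1):
    """Simulate the flattened weights of one trained model."""
    return [c + random.gauss(0, spread) for c in center]

# Stand-ins for "typical weights of models trained on task A / task B".
task_a = [0.5] * DIM
task_b = [-0.5] * DIM

train = [(make_weights(task_a), "A") for _ in range(20)] + \
        [(make_weights(task_b), "B") for _ in range(20)]

def centroid(points):
    return [sum(xs) / len(xs) for xs in zip(*points)]

centroids = {label: centroid([w for w, l in train if l == label])
             for label in ("A", "B")}

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def predict_task(weights):
    """Probe: classify a model by its nearest centroid in weight space."""
    return min(centroids, key=lambda label: dist2(weights, centroids[label]))

print(predict_task(make_weights(task_a)))  # → A
```

Real weight space learning must also handle permutation symmetries and vastly higher dimensionality; the point here is only the framing of trained weights as a dataset.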

Konstantin Schürholt (@k_schuerholt)'s Twitter Profile Photo

Our inaugural workshop on weight space learning at ICLR in Singapore has started! Join us in Topaz 220-225 to talk all things NN weights, and check out weight-space-learning.github.io!

TuringPost (@theturingpost)'s Twitter Profile Photo

Text-to-LoRA (T2L) — a new hypernetwork by @SakanaAI, that:

- Can compress many existing LoRAs into itself.
- Quickly generates new LoRA adapters for a big LM, using just a text description of a task.

Once trained, it can create new LoRAs on the fly.

Here's how T2L works:
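The mechanism can be caricatured in a few lines. In this sketch, a hypernetwork maps a task description directly to LoRA factors A and B, which adapt a base weight as W + AB; the hash-based text encoder, the tiny shapes, and the untrained linear hypernetwork are all illustrative stand-ins, not Sakana AI's implementation:

```python
# Toy Text-to-LoRA: a hypernetwork generates LoRA factors from a description,
# instead of training the adapter by gradient descent.
import hashlib
import random

D, R, E = 4, 2, 8  # base weight dim, LoRA rank, text-embedding dim

def embed(text):
    """Tiny deterministic 'text encoder': hash bytes mapped to floats."""
    h = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in h[:E]]

random.seed(0)
# Hypernetwork: one linear row per generated LoRA parameter (untrained here).
W_hyper = [[random.uniform(-1, 1) for _ in range(E)]
           for _ in range(D * R + R * D)]

def text_to_lora(text):
    """Generate LoRA factors A (D x R) and B (R x D) from a task description."""
    z = embed(text)
    out = [sum(w * x for w, x in zip(row, z)) for row in W_hyper]
    A = [out[i * R:(i + 1) * R] for i in range(D)]
    B = [out[D * R + i * D:D * R + (i + 1) * D] for i in range(R)]
    return A, B

def apply_lora(W, A, B):
    """Adapted weight: W + A @ B."""
    return [[W[i][j] + sum(A[i][r] * B[r][j] for r in range(R))
             for j in range(D)] for i in range(D)]

W_base = [[1.0 if i == j else 0.0 for j in range(D)] for i in range(D)]
A, B = text_to_lora("summarize legal documents")
W_task = apply_lora(W_base, A, B)
```

The design point is that, once the hypernetwork is trained, adapting to a new task costs one forward pass over its description rather than a fine-tuning run.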
Ndea (@ndea)'s Twitter Profile Photo

François Chollet spoke at Y Combinator AI Startup School about going from AI to AGI.

Why did pretraining scaling hit a wall?
Does Test-Time Adaptation scale to AGI?
What's next for AI?

Plus Ndea's goal to build a lifelong-learning, self-improving, DL-guided program synthesis engine.
Victor.Kai Wang (@victorkaiwang1)'s Twitter Profile Photo

Customizing your LLMs in seconds using prompts 🥳! Excited to share our latest work with HPC-AI Lab, VITA Group, Konstantin Schürholt, Yang You, Michael Bronstein, Damian Borth: Drag-and-Drop LLMs (DnD). Two features: tuning-free, and comparable or even better than full-shot tuning. (🧵1/8)

Ndea (@ndea)'s Twitter Profile Photo

LLMs can synthesize many programs, but how should we search among them? New from Sakana AI: AB-MCTS frames code generation as an adaptive tree search, guided by external feedback. It beats baselines on synthesis benchmarks including ARC-AGI.
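A minimal sketch of that search pattern: candidate programs are scored by external feedback (here, unit-test pass rate) and the best-scoring node is expanded next. The expression-mutating generator stands in for an LLM sampler, and this greedy best-first version omits AB-MCTS's adaptive balancing between widening and deepening the tree:

```python
# Toy feedback-guided tree search over candidate programs.
import heapq
import itertools

# External feedback: the candidate expression must match f(x) = 2*x + 1.
TESTS = [(0, 1), (1, 3), (2, 5)]

def score(expr):
    """Fraction of test cases a candidate expression passes."""
    try:
        return sum(eval(expr, {"x": x}) == y for x, y in TESTS) / len(TESTS)
    except Exception:
        return 0.0

def children(expr):
    """Toy generator: 'refine' a candidate with small edits.
    A real system would sample refinements from an LLM instead."""
    return [f"({expr}) + 1", f"({expr}) - 1", f"({expr}) * 2", f"({expr}) + x"]

def search(root="x", budget=200):
    tie = itertools.count()  # tie-breaker so heap entries always compare
    frontier = [(-score(root), next(tie), root)]
    best = (score(root), root)
    for _ in range(budget):
        if not frontier:
            break
        _, _, expr = heapq.heappop(frontier)  # expand best-scoring node first
        for child in children(expr):
            s = score(child)
            if s > best[0]:
                best = (s, child)
            if best[0] == 1.0:
                return best  # feedback says all tests pass: stop searching
            heapq.heappush(frontier, (-s, next(tie), child))
    return best

best_score, program = search()
print(best_score, program)  # → 1.0 ((x) + 1) + x
```

Partial credit from the tests is what steers the search: "(x) + 1" passes one case, so it gets expanded before worse siblings and quickly yields a full solution.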