Higher Order Company (@higherordercomp)'s Twitter Profile
Higher Order Company

@higherordercomp

Getting to the very core of what makes computers capable of reasoning.

discord.gg/higherorderco

ID: 1620416072825409536

Website: http://higherorderco.com · Joined: 31-01-2023 13:38:00

70 Tweets

7.7K Followers

0 Following

Taelin (@victortaelin):

HOC - Complete Overview, no BS (Ask questions here!)

Vision:
- Interaction Nets are a powerful tech

Mission:
- Build powerful products with Interaction Nets

Reasoning:
- We believe this theory can result in major breakthroughs in many areas of CS, including programming

Taelin (@victortaelin):

HVM3 is turning out *really* good, in ways that I didn't even anticipate. The move to Haskell+C was the best decision I ever made. Haskell gives me the expressivity I need, C gives me the performance. Extremely refreshing to watch everything aligning after a bunch of missteps

Taelin (@victortaelin):

How to use HVM3's collapser to solve a functional equation!

1. Implement a function in HVM3's syntax (Bend soon)
2. Enumerate all terms of X using superpositions
3. Write your equation in the main function
4. Run it with: 'hvml run main.hvml -C'

And that's it. HVM will
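The enumerate-and-collapse idea above can be sketched in plain Python (illustrative only, not HVM syntax; `solve`, `equation`, and `candidates` are made-up names for this sketch — in HVM a superposition holds all candidates at once and the collapser extracts the satisfying ones, rather than looping):

```python
# Illustrative analogue of "enumerate terms of X with superpositions,
# then collapse on the equation" — done here as an explicit search.

def solve(equation, candidates):
    """Return every candidate X for which equation(X) holds."""
    return [x for x in candidates if equation(x)]

# Example functional equation: find X with X * X == 36 over small naturals.
solutions = solve(lambda x: x * x == 36, range(100))
print(solutions)  # [6]
```

The point of the HVM version is that the shared structure between candidates is evaluated once, not re-run per candidate as in this loop.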

Taelin (@victortaelin):


Experiment complete, here are the results 🥳

tl;dr: still exponential, 10x speedup
(replacing loops by superpositions)

Q: What is this chart?
A: The evaluation time (log scale) of the dumbest factorization algorithm (brute-force) on both Bend (HVM) and Haskell (GHC).

Q: How
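The "dumbest factorization algorithm (brute-force)" benchmarked above can be sketched as follows (illustrative; this is not the actual benchmark code from the chart):

```python
# Brute-force factorization: try every pair (a, b) until a * b == n.
# This is exponential in the bit-length of n — the benchmark's point is
# not the algorithm, but how fast each runtime grinds through the search.

def factorize(n):
    """Return (a, b) with 1 < a <= b and a * b == n, or None if n is prime."""
    for a in range(2, n):
        for b in range(a, n):
            if a * b == n:
                return (a, b)
    return None

print(factorize(35))  # (5, 7)
```

In the Bend/HVM version, replacing the loops with superpositions lets the runtime share work across candidate pairs, which is where the reported 10x speedup comes from.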

Taelin (@victortaelin):

hey guys - so, today I can confirm my hypothesis worked, and the main milestone I sought has been hit. the synthesizer implemented algorithms and solved problems that even o1 couldn't. using it is somewhat magical - it *feels* intelligent, even though I know it isn't. it is just a

Taelin (@victortaelin):

btw, the best way to describe what I'm doing is: I'm maximizing the bitter lesson. that is, if ingenious approaches failed, and what made GPT so good was, ultimately, the combination of a dumb simple algorithm with a fuckton of compute; then, what happens if we take this lesson by

Taelin (@victortaelin):

despite making Bend as simple as I possibly could, 99.9% of people following me still have no idea what HVM is useful for. to most, it is no different than some pseudoscience bs. I eventually came to the acceptance that I shouldn't be trying to make people use interaction nets,

Taelin (@victortaelin):


HVM's Program Synthesis - Initial Results

AI startups successfully developed systems capable of *learning* arbitrary skills. Yet, as impressive as LLMs are, they eventually hit a wall: the dataset they learn from. To break that wall, models were recently augmented with "search"

Taelin (@victortaelin):

DEMO TIME SupGen is a generative coding AI... except it isn't an AI. There is no model, there is no pre-training. You just give it some examples, and it gives you a program. It runs locally, in a single-core CPU. Oh, and it can also prove theorems. Here's a demo, including a
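A toy illustration of "give it examples, get back a program" (purely hypothetical; SupGen's actual search is HVM-based and far more sophisticated than this linear scan — `synthesize` and `primitives` are invented names for the sketch):

```python
# Tiny enumerative program synthesis by example: search a space of
# candidate programs for one consistent with every (input, output) pair.

def synthesize(examples, primitives):
    """Return the name of the first primitive matching all examples, else None."""
    for name, fn in primitives:
        if all(fn(i) == o for i, o in examples):
            return name
    return None

primitives = [
    ("double", lambda x: 2 * x),
    ("square", lambda x: x * x),
    ("succ",   lambda x: x + 1),
]

# The examples pin down the behavior; the search finds a matching program.
print(synthesize([(2, 4), (3, 9)], primitives))  # square
```

Real synthesizers search a combinatorial space of program terms rather than a fixed list, which is exactly where an optimal-sharing runtime like HVM pays off.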

Taelin (@victortaelin):

HOC is now a small research lab that is laser focused on building the first Symbolic Transformer. We had many pivots through the year, but we're finally stable in a direction, with promising results and a clear business path. It also makes the most logical sense: use HVM to build

Higher Order Company (@higherordercomp):

HOC is doing a post-seed funding round to turn SupGen, an HVM-powered program synthesizer, into a Symbolic Transformer: a logic-based AI architecture, and validate it on the ARC Prize. More details on: wefunder.com/higher.order.c…

Taelin (@victortaelin):

AGI apart, what can HOC bring to the world? ✨The child of Haskell & Rust, born with CUDA superpowers.✨ That's the best way to describe what a mature Bend would be. It would: - Be significantly faster than Haskell even on a single core (HVM3 is evidence of that), and

Taelin (@victortaelin):

morning! today I'll start reaching out to VCs. I want to close this round ASAP, start getting things done, beat ARC-AGI by EOY or fail for good. that's all, lg

Taelin (@victortaelin):

I received an interesting question about Interaction Nets:

> Asperti and Guerrini themselves had certain reservations about the tractability of bringing Interaction Nets to life, do you have a path towards solving the issues they have outlined?

This is not the first time and

ARC Prize (@arcprize):


AGI is reached when the capability gap between humans and computers is zero

ARC Prize Foundation measures this to inspire progress

Today we preview the unbeaten ARC-AGI-2 + open public donations to fund ARC-AGI-3

TY Schmidt Sciences (Eric Schmidt) for $50k to kick us off!

Taelin (@victortaelin):

btw, just to elaborate on the significance of this: SupGen is a tool that finds programs (and proofs) by example. it can discover stuff that LLMs can not, like new algorithms, new science, because it is a generalizer. basically, it is the opposite of GPTs: no memory, no model,