Vincent Abbott (@vtabbott_)'s Twitter Profile
Vincent Abbott

@vtabbott_

Maker of *those* diagrams for deep learning algorithms | 🇦🇺 in 🇬🇧

ID: 1549633222689992706

Website: http://www.vtabbott.io · Joined: 20-07-2022 05:53:00

463 Tweets

6.6K Followers

307 Following

Vincent Abbott (@vtabbott_)'s Twitter Profile Photo

Recently posted w/ Gioele Zardini and SyntheticGestalt JP. Diagrams indicate exponents are attention's bottleneck. We use the fusion theorems to show that any normalizer works for fusion, so we replace SoftMax with L2 normalization, implemented thanks to Gerard Glowacki! Even w/o warp shuffling TC
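Since the tweet compresses the idea, here is a rough NumPy sketch of the contrast it draws: the row-wise SoftMax in standard attention spends one exponential per score, while an L2-style normalizer needs no transcendentals at all. `l2_attention` below is a hypothetical stand-in to illustrate the point, not the paper's exact construction.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: the row-wise SoftMax costs one exp per
    # score, which is the exponent bottleneck the diagrams expose.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    W = np.exp(S - S.max(axis=-1, keepdims=True))
    return (W / W.sum(axis=-1, keepdims=True)) @ V

def l2_attention(Q, K, V, eps=1e-6):
    # Hypothetical variant: normalize each score row by its L2 norm
    # instead, so no exponentials are needed at all.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    return (S / (np.linalg.norm(S, axis=-1, keepdims=True) + eps)) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
```

Both variants map the same inputs to the same output shape; only the normalizer changes, which is what makes the fusion argument portable.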

Making progress with automatically generating diagrams of deep learning models (here's multi-head attention). Next up, automated performance modelling + conversion from PyTorch to a data structure that allows for diagram generation + performance modelling.

The implementations I'm working on are based on novel algebraic/categorical constructs that can, at last, properly represent broadcasting. This will allow deep learning models to be symbolically expressed, from which Torch implementations, diagrams, etc. follow. Here's a sneak peek!
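For readers unfamiliar with why broadcasting resists clean representation: even the standard NumPy rule is a small algebra on shape tuples. A minimal sketch of that ordinary rule (not the categorical construct described above) shows what any symbolic treatment has to capture:

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    # NumPy's broadcasting rule as a pure function on shape tuples:
    # right-align the two shapes, then combine axis by axis; axes
    # must be equal, or one of them must be 1 (which stretches).
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"cannot broadcast {a} with {b}")
        out.append(max(x, y))
    return tuple(reversed(out))

broadcast_shape((3, 1, 5), (4, 5))  # (3, 4, 5)
```

A symbolic model representation needs this computation to be a first-class citizen, rather than something checked implicitly at runtime.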

I'm working on symbolically expressed deep learning models. Built on standard definitions, we can provide a web of features from different modules: one module produces a model, another converts it to PyTorch, another exports it to JSON, and another loads it into TypeScript and renders it.
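One hedged sketch of what such a module web could look like, with every name here hypothetical rather than the actual API: if the model is plain symbolic data, the JSON export (and, in principle, a PyTorch or TypeScript backend) is just another walk over the same structure.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Op:
    # A symbolic operation: pure data, no framework attached.
    name: str
    inputs: list
    attrs: dict = field(default_factory=dict)

@dataclass
class Model:
    ops: list

    def to_json(self) -> str:
        # The JSON module needs nothing beyond the definitions above;
        # a PyTorch or rendering module would walk the same `ops` list.
        return json.dumps(asdict(self), indent=2)

mha = Model(ops=[
    Op("Linear", ["x"], {"features": 64}),
    Op("MultiHeadAttention", ["q", "k", "v"], {"heads": 8}),
])
```

Because each backend consumes the same data structure, adding a new feature (say, performance modelling) means adding one module, not touching the others.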

Working on making automatically generated diagrams *aesthetic*. Here is attention, generated from a mathematical definition. Note how there are multiple k and m values, as the code found that these two values can be independently set.
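One plausible reading of "independently set", checkable in a few lines of NumPy (this is just the shape arithmetic, not the generated code): the key length m and the key feature dim d_k are internal to attention, so neither appears in the output and both can vary freely of the query length and value dim.

```python
import numpy as np

def attention(Q, K, V):
    # Plain single-head attention for the shape check.
    S = Q @ K.T / np.sqrt(K.shape[-1])
    W = np.exp(S - S.max(axis=-1, keepdims=True))
    return (W / W.sum(axis=-1, keepdims=True)) @ V

n, m = 5, 7        # query length and key length
d_k, d_v = 16, 3   # key dim and value dim, chosen independently
rng = np.random.default_rng(1)
out = attention(rng.standard_normal((n, d_k)),
                rng.standard_normal((m, d_k)),
                rng.standard_normal((m, d_v)))
out.shape  # (5, 3): m and d_k are contracted away inside
```

This is the kind of fact a diagram generator can discover mechanically from the definition, which is why the rendered diagram carries several independent axis labels.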

@SzymonOzog_ I'll be refactoring the code to allow for texture packs at some point. This is actually a good resource for style choices. The wires are drawn between anchors (shown below), so it should be straightforward to just change the "drawCurves" function.
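For a flavor of what swapping the wire style would involve, here is a hedged Python sketch of the geometry a drawCurves-style routine computes; the actual function lives in the TypeScript renderer, so every name below is illustrative only.

```python
def wire_points(p0, p1, steps=20):
    # Illustrative stand-in for a drawCurves-style routine: connect
    # two anchor points with a cubic Bezier whose control points
    # extend horizontally, giving a smooth S-shaped wire.
    (x0, y0), (x1, y1) = p0, p1
    c0 = (x0 + (x1 - x0) / 2, y0)  # control point leaving p0 flat
    c1 = (x1 - (x1 - x0) / 2, y1)  # control point entering p1 flat
    pts = []
    for i in range(steps + 1):
        t = i / steps
        u = 1 - t
        x = u**3 * x0 + 3*u**2*t * c0[0] + 3*u*t**2 * c1[0] + t**3 * x1
        y = u**3 * y0 + 3*u**2*t * c0[1] + 3*u*t**2 * c1[1] + t**3 * y1
        pts.append((x, y))
    return pts

curve = wire_points((0.0, 0.0), (4.0, 2.0))
```

A texture pack could then be little more than a different interpolation or stroke style applied between the same anchors.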
