Luca Moschella (@moschella_luca)'s Twitter Profile
Luca Moschella

@moschella_luca

Machine Learning Researcher @ Apple | Previously NVIDIA, NNAISENSE & ELLIS PhD

ID: 3467186115

Link: https://luca.moschella.dev · Joined: 28-08-2015 09:57:24

209 Tweets

679 Followers

638 Following

UniReps (@unireps)'s Twitter Profile Photo

📢 We're thrilled to announce that the UniReps workshop will return to NeurIPS 2024 for its second edition! Stay tuned for the speaker lineup, program, call for papers, and more!🔵🔴 Want to learn more? Join our community at: discord.com/invite/NZg2QDX… See you in Vancouver!🇨🇦

Nathan Cloos (@nacloos)'s Twitter Profile Photo

Can LLMs play the game Baba Is You?🧩 In our new ICML workshop paper, we show GPT-4o and Gemini-1.5-Pro fail dramatically in environments where both objects and rules must be manipulated! Here is an example of correct gameplay: (1/n)

UniReps (@unireps)'s Twitter Profile Photo

🔵🔴Excited to announce the 2nd #UniReps Workshop at #NeurIPS2024! 🎉 We're seeking researchers to join our Program Committee. If you're interested, fill out this form: forms.gle/mnYGWzZ76VhcpU… We are working on prizes for the best papers and reviewers, so stay updated!🏆

UniReps (@unireps)'s Twitter Profile Photo

📢We're excited to announce that UniReps: the Workshop on Unifying Representations in Neural Models will come back to NeurIPS for its 2nd edition!🔵🔴 SUBMISSION DEADLINE: 20 September Check out our Call for Papers, speaker lineup, and schedule at: unireps.org/2024/

UniReps (@unireps)'s Twitter Profile Photo

🔵🔴When do distinct learning processes learn similar representations? Detecting the patterns and conditions under which this happens is an open research direction: a thread🧵 Working on this topic? Submit at: openreview.net/group?id=NeurI… DEADLINE: 20 Sept See you at NeurIPS! 🔵🔴 [1/N]

UniReps (@unireps)'s Twitter Profile Photo

📢📢Deadline extension for the Proceedings and Extended Abstract tracks of the UniReps Workshop at NeurIPS 2024🔵🔴 NEW DEADLINE: 23 September AoE ⌛️ Submit your paper at: openreview.net/group?id=NeurI… Join our community at: discord.gg/ET5NVCWwnA 👈

UniReps (@unireps)'s Twitter Profile Photo

📣Exciting News for #NeurIPS2024! 🔵🔴The UniReps Workshop is introducing a Blogpost Track to share research in a new, creative format. Whether it's your own work, new early-stage results, or a tutorial-style summary of key methods, we want to hear from you! 📝 👇🧵 1/N

Irene Cannistraci (@ire_cannistraci)'s Twitter Profile Photo

(1/6) I am thrilled to share my latest research from my visit to Helmholtz Munich (@HelmholtzMunich) in the #AIDOS lab led by Bastian Grossenbacher-Rieck! 🚀 We introduced Redundant Blocks Approximation (RBA), a straightforward method to reduce model size & complexity while maintaining good performance📈🤖

UniReps (@unireps)'s Twitter Profile Photo

📢 Exciting news! The UniReps Blogpost Track is officially open for submissions! 🔵🔴 Submit your blog post year-round at unireps.org/blog 👈 Want to present at the workshop poster session at NeurIPS 2024? Submit by **Nov 25 AoE** 📅 See you in Vancouver!🇨🇦

UniReps (@unireps)'s Twitter Profile Photo

🔵🔴📢UniReps 24 decisions are out! Check all accepted submissions and orals at: openreview.net/group?id=NeurI… A special thanks to all the authors and reviewers for their amazing work and contributions! 🙏 See you in Vancouver!🇨🇦

Donato Crisostomi (@donatocrisosto1)'s Twitter Profile Photo

I know you're probably thinking, "Yeah, these neuron-permutation-based model merging methods are cool... but are they cycle-consistent (CC)?" Say no more! It just so happens that our new #NeurIPS24 paper covers exactly this! Huh? No idea what I am talking about? Read on (1/6)

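In case "cycle consistency" is unfamiliar: for permutation-based merging it means that composing the estimated alignment A→B with B→C should reproduce the direct alignment A→C. Here is a minimal illustrative sketch; the random permutation matrices are stand-ins for what a matching method would estimate, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hidden units in one layer

def random_perm(n):
    """Return a random n x n permutation matrix."""
    return np.eye(n)[rng.permutation(n)]

# Hypothetical layer-wise permutations aligning model A -> B and B -> C.
P_ab = random_perm(n)
P_bc = random_perm(n)

# Cycle consistency: the composed map A -> B -> C should equal the
# directly estimated A -> C map, i.e. P_ac == P_bc @ P_ab.
P_ac_composed = P_bc @ P_ab

# An independently estimated A -> C map (random here) generally breaks the cycle.
P_ac_direct = random_perm(n)
print("cycle-consistent:", np.array_equal(P_ac_direct, P_ac_composed))
```
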
UniReps (@unireps)'s Twitter Profile Photo

🚨 Conference-to-Workshop track 🚨 We’re inviting a selection of relevant papers from the #NeurIPS2024 main track to be presented in our poster session🎉 Only 15 papers will be accepted, on a first-come, first-served basis, provided they align with the workshop’s topics. 🔗

Marco Fumero@ICLR25 (@marco_fumero)'s Twitter Profile Photo

Excited to present "Latent Functional Maps" at #NeurIPS! We show how neural models can be aligned by matching function spaces on representation manifolds, providing a unified framework for model comparison, matching, and information transfer. 📜: arxiv.org/abs/2406.14183 👇🧵
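
As a rough, illustrative sketch of the functional-map idea: instead of comparing raw coordinates, compare two representation spaces through the spectral coefficients of functions defined on their sample graphs. The graph construction, basis size, and anchor-distance descriptors below are assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
n, d = 200, 16
X_a = rng.normal(size=(n, d))                  # model A's representations
R = np.linalg.qr(rng.normal(size=(d, d)))[0]
X_b = X_a @ R                                  # model B: a rotated copy

def laplacian_basis(X, k=10, m=20):
    """First m eigenvectors of the k-NN graph Laplacian of X."""
    W = kneighbors_graph(X, k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                     # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = eigh(L)
    return vecs[:, :m]

Phi_a, Phi_b = laplacian_basis(X_a), laplacian_basis(X_b)

# Descriptors: functions on the shared samples; here, distances to a few
# random anchor points, computed separately in each space.
anchors = rng.choice(n, size=5, replace=False)
G_a = np.linalg.norm(X_a[:, None, :] - X_a[anchors][None, :, :], axis=-1)
G_b = np.linalg.norm(X_b[:, None, :] - X_b[anchors][None, :, :], axis=-1)

# Functional map C: least-squares linear map between spectral coefficients,
# so that C @ (Phi_a.T @ g) approximates Phi_b.T @ g for each descriptor g.
coef_a, coef_b = Phi_a.T @ G_a, Phi_b.T @ G_b
C = np.linalg.lstsq(coef_a.T, coef_b.T, rcond=None)[0].T
print("functional map shape:", C.shape)        # (m, m)
```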

UniReps (@unireps)'s Twitter Profile Photo

🎤 Excited to introduce our incredible panelists for the Panel on Unifying Representations in Neural Models 🔵🔴 📌 Saturday 14th Dec 11:45 am, West Exhibition Hall C 🌍🤖 ✨Erin Grant: Senior Research Fellow, UCL Gatsby Unit & Sainsbury Wellcome Centre. Her…

UniReps (@unireps)'s Twitter Profile Photo

🚀 Did you attend the 2nd UniReps workshop on Unifying Representations in Neural Models? We'd love your feedback! Take 2 minutes to fill out our form & help shape future workshops in ML, Neuroscience, & Cognitive Science. 📋 docs.google.com/forms/d/e/1FAI…

UniReps (@unireps)'s Twitter Profile Photo

🔵🔴Better late than never! We're excited to finally share pictures from the Best Paper Awards Ceremony at the last edition of #UniReps at #NeurIPS! 📸🏆 Check them out in our latest blog post: 👉unireps.org/blog/2025/unir… Congratulations again to all the authors!🎉 Keep in…

Aleksei Petrenko (@petrenko_ai)'s Twitter Profile Photo

Excited to share our new pre-print arxiv.org/pdf/2502.01600 We train a digital agent that solves diverse day-to-day tasks from the AppWorld benchmark by interacting with its stateful environment using API calls. AppWorld is hard! The previous best open-weight agent (Llama 3 70B)…

Dingling Yao (@dingling_yao)'s Twitter Profile Photo

Excited to present our paper "Unifying Causal Representation Learning with the Invariance Principle" at #ICLR2025 in Singapore! Joint work with Dario Rancati, Riccardo Cadei, Marco Fumero, and Francesco Locatello.

jack morris (@jxmnop)'s Twitter Profile Photo

excited to finally share on arxiv what we've known for a while now: All Embedding Models Learn The Same Thing. embeddings from different models are SO similar that we can map between them based on structure alone, without *any* paired data. feels like magic, but it's real:🧵
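
For intuition, here is a minimal sketch of mapping one embedding space onto another. This uses orthogonal Procrustes, which needs *paired* samples; the thread's claim is the stronger unpaired setting, which is not reproduced here. The synthetic embeddings are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 32

# Hypothetical embeddings of the same n items from two "models":
# model B is a rotated, slightly noisy copy of model A.
A = rng.normal(size=(n, d))
Q_true = np.linalg.qr(rng.normal(size=(d, d)))[0]
B = A @ Q_true + 0.01 * rng.normal(size=(n, d))

# Orthogonal Procrustes: the rotation W minimizing ||A @ W - B||_F is
# W = U @ Vt, where U, S, Vt is the SVD of A.T @ B.
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt

err = np.linalg.norm(A @ W - B) / np.linalg.norm(B)
print(f"relative alignment error: {err:.4f}")  # near 0: the spaces align
```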

Marco Fumero@ICLR25 (@marco_fumero)'s Twitter Profile Photo

Neural networks implicitly define a latent vector field on the data manifold, via autoencoding iterations🌀 This representation retains properties of the model, revealing memorization and generalization regimes, and characterizing distribution shifts 📜: arxiv.org/abs/2505.22785

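A minimal sketch of the construction the tweet describes: iterating the encode-decode map f(z) = E(D(z)) induces a displacement field v(z) = f(z) - z on latent space, whose flow heads toward the map's fixed points. The tiny untrained autoencoder below is only a stand-in; the paper studies trained models:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_data, d_latent = 16, 4

encoder = nn.Sequential(nn.Linear(d_data, 32), nn.Tanh(), nn.Linear(32, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 32), nn.Tanh(), nn.Linear(32, d_data))

@torch.no_grad()
def latent_step(z):
    """One autoencoding iteration in latent space: z -> E(D(z))."""
    return encoder(decoder(z))

z = torch.randn(8, d_latent)      # a batch of latent points
for _ in range(50):
    z_next = latent_step(z)
    v = z_next - z                # the latent vector field evaluated at z
    z = z_next

# A small residual displacement indicates z is near a fixed point of f.
print("mean |v| after 50 iterations:", v.norm(dim=-1).mean().item())
```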