Matthew Finlayson ✈️ NeurIPS (@mattf1n)'s Twitter Profile
Matthew Finlayson ✈️ NeurIPS

@mattf1n

First year PhD at @nlp_usc | Former predoc at @allen_ai on @ai2_aristo | Harvard 2021 CS & Linguistics

ID: 2149800655

Website: http://mattf1n.github.io · Joined: 22-10-2013 22:15:16

139 Tweets

954 Followers

907 Following

Sean Ren (@xiangrennlp):

Congratulations to the Google DeepMind team on their best paper award at #ICML2024, and appreciate @afedercooper's shout-out to our concurrent paper 🙌 If you are into the topic of recovering model info through just its output logits, check out our paper led by Matthew Finlayson too!

Matthew Finlayson ✈️ NeurIPS (@mattf1n):

Just landed in Philly for the Conference on Language Modeling, where I'll be presenting my work on extracting secrets from LLM APIs at the Wednesday afternoon poster sesh. Please reach out if you wanna hang and talk about sneaky LLM API hacks, accountability, and the geometry of LLM representations!

Harsh Trivedi (@harsh3vedi):

I had a fantastic time visiting USC and talking about 🌎AppWorld (appworld.dev) last Friday!! Thank you, Swabha Swayamdipta, Matthew Finlayson, & Brihi Joshi, for inviting and hosting me. Also thank you, Robin Jia, Jesse Thomason, & many others, for insightful discussions and meetings!

Sean Ren (@xiangrennlp):

Arrived in Philadelphia for the very first Conference on Language Modeling! Excited to catch up w/ everyone & happy to chat about faculty/PhD positions at USC Viterbi School 🙂 Plz meet our amazing PhD students (Huihan Li, Matthew Finlayson, Sahana Ramnath) for their work on model safety and cultural bias analysis 👇

Sean Welleck (@wellecks):

Excited to give a NeurIPS tutorial on LLM inference strategies, inference-time scaling laws & more with Matthew Finlayson and Hailey Schoelkopf: "Beyond Decoding: Meta-Generation Algorithms for Large Language Models". More details soon; check out arxiv.org/abs/2406.16838 in the meantime!

Sean Welleck (@wellecks):

Curious about inference-time scaling, the #1 trending topic in LLMs? Come to our NeurIPS tutorial: Beyond Decoding: Meta-Generation Algorithms for LLMs (Tue. @ 1:30)! cmu-l3.github.io/neurips2024-in…

Sean Welleck (@wellecks):

We're incredibly honored to have an amazing group of panelists: Rishabh Agarwal, Noam Brown, Beidi Chen, Nouha Dziri, and Jakob Foerster, with Ilia Kulikov moderating. We'll close with a panel discussion about scaling, inference-time strategies, the future of LLMs, and more!

Robert Lange (@roberttlange):

Loving the #NeurIPS2024 'Beyond Decoding: Meta-Generation Algorithms for LLMs' tutorial ❤️ by Sean Welleck, Matthew Finlayson, and Hailey Schoelkopf:

1. Primitive generators: optimization vs. sampling
2. Meta-generators: chain, parallel, tree, refinement
3. Efficiency: quantization, FlashAttention, sparse MoEs, KV caching

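As a rough, hypothetical illustration of the "parallel" meta-generator category from the tutorial outline above (not code from the tutorial itself), here is a minimal best-of-n sampling sketch; `generate` and `score` are placeholder stand-ins for an LLM sampler and a scoring function such as a reward model.

```python
# Minimal sketch of a parallel meta-generator (best-of-n sampling).
# `generate` and `score` are hypothetical placeholders, not any specific API.
import random
from typing import List

def generate(prompt: str) -> str:
    # Placeholder primitive generator: sample one candidate continuation.
    return prompt + " candidate-" + str(random.randint(0, 9999))

def score(text: str) -> float:
    # Placeholder scorer; in practice a reward model, verifier, or log-likelihood.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Parallel meta-generation: draw n independent samples, keep the highest-scoring one.
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("Prove that the sum of two even numbers is even.", n=4))
```

Chain, tree, and refinement meta-generators follow the same pattern but compose the primitive generator sequentially, over a search tree, or by revising earlier outputs rather than sampling independently.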