Nikolas McNeal (@nikolasmcneal)'s Twitter Profile
Nikolas McNeal

@nikolasmcneal

ML PhD student @GeorgiaTech

ID: 841119304531746817

Link: https://nikolasmcneal.github.io/ · Joined: 13-03-2017 02:50:44

36 Tweets

169 Followers

307 Following

Lingbo Mo (@lingbomo)'s Twitter Profile Photo

[1/6] Happy to share our new #ACL2022 Findings paper: "Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction".

Bernal Jiménez (@bernaaaljg)'s Twitter Profile Photo

Thinking about using GPT-3 in-context learning for biomedical information extraction? Think again 🧠 Our work suggests that small PLM fine-tuning might be a better option and points to some general limitations of GPT-3 in-context learning. (arxiv.org/abs/2203.08410) #NLProc [1/6]

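For readers unfamiliar with the two paradigms being compared, here is a minimal sketch of the contrast. Everything in it (model names, labels, prompt format) is illustrative, not the paper's actual setup; see arxiv.org/abs/2203.08410 for that.

# Toy contrast: GPT-3-style in-context learning vs. fine-tuning a small PLM.
# Model names, labels, and the prompt format below are illustrative
# assumptions, not the paper's setup.

# --- Option 1: in-context learning (no gradient updates) ---
few_shot_prompt = """Extract the drug-disease relation from the sentence.

Sentence: Aspirin reduces the risk of myocardial infarction.
Relation: (aspirin, treats, myocardial infarction)

Sentence: Metformin is first-line therapy for type 2 diabetes.
Relation: (metformin, treats, type 2 diabetes)

Sentence: {sentence}
Relation:"""
# completion = some_gpt3_style_api(few_shot_prompt.format(sentence=s))

# --- Option 2: fine-tuning a small pretrained LM ---
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g., treats / causes / no_relation
)
# Train this on a modest set of labeled examples with a standard
# classification loss; the tweet's claim is that this often beats
# in-context learning on biomedical extraction tasks.
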
Colin Flanagan (@flanagancol)'s Twitter Profile Photo

Awesome work from my good friend Nikolas McNeal. There are some great insights here on signal processing and cognition. Definitely worth a read.

Yu Su @ICLR2025 (@ysu_nlp)'s Twitter Profile Photo

We have seen plenty of evidence for chain-of-thought reasoning with large language models. A new #EMNLP22 paper from the OSU NLP Group shows that small PLMs can also develop a chain of thought if we train a helper to prompt them iteratively 👇👇

Huan Sun (OSU) (@hhsun1)'s Twitter Profile Photo

Chain-of-thought prompting has been shown to be very effective for large LMs like GPT-3 and PaLM on tasks that require multi-step reasoning. Our #EMNLP2022 work investigates how to iteratively prompt much smaller pre-trained LMs (e.g., BART-large) to develop a chain of thought. 👇👇
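
To make the idea concrete, here is a minimal sketch of an iterative prompting loop around a small seq2seq model. The fixed prompt text and stop condition are assumptions for illustration; the paper trains a separate prompter model rather than using hand-written prompt text.

# Toy sketch of iterative prompting: feed intermediate steps back into the
# input and ask for the next step until the model signals it is done.
# Prompt format and stop signal are assumptions, not the paper's method.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def iterative_prompt(question: str, max_steps: int = 5) -> list[str]:
    steps = []
    for _ in range(max_steps):
        context = question + " Steps so far: " + " ".join(steps)
        inputs = tokenizer(context, return_tensors="pt", truncation=True)
        output = model.generate(**inputs, max_new_tokens=64)
        step = tokenizer.decode(output[0], skip_special_tokens=True)
        steps.append(step)
        if "final answer" in step.lower():  # assumed stop signal
            break
    return steps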

Sara A Solla (@saraasolla)'s Twitter Profile Photo

Dana Ballard was a brilliant scientist and a delightful colleague. He was kind, thoughtful, and funny. He was a creative and original thinker who made foundational contributions to computer vision. His insights about predictive coding paved the way for current theories.

Nikolas McNeal (@nikolasmcneal)'s Twitter Profile Photo

I am happy to share that I will be starting my Ph.D. in Machine Learning at Georgia Tech in the fall! I am immensely grateful for the endless support I have received from my research advisors and mentors throughout my undergraduate education.

Nikolas McNeal (@nikolasmcneal)'s Twitter Profile Photo

This is terrible news. I owe so much to Professor Braumoeller. He introduced me to research and offered invaluable guidance and resources to a clueless undergrad. I’ll forever be grateful.

Sam Stevens (@samstevens6860)'s Twitter Profile Photo

Did you know language models like GPT can also be used to encrypt and protect your data? I'm excited to finally present SELM, a Symmetric Encryption algorithm with autoregressive Language Models, with Yu Su and the OSU NLP Group! Website: samuelstevens.me/research/encry…
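
SELM's actual construction pairs an autoregressive LM with arithmetic coding (details on the project page). The toy sketch below is NOT the paper's algorithm; it only illustrates the broader idea that a shared secret key can govern how a model's ranked predictions map tokens to ciphertext.

# Toy illustration only: a shared key seeds a pseudorandom permutation of
# the model's ranked next-token predictions, so only a key holder can map
# ciphertext ranks back to tokens. rank_fn(prefix) must return the full
# vocabulary as a fresh list, ordered most to least likely. The names here
# are hypothetical; this is not SELM's arithmetic-coding construction.
import random

def keyed_rank_encrypt(tokens, rank_fn, key: int):
    rng = random.Random(key)
    prefix, cipher = [], []
    for tok in tokens:
        ranked = rank_fn(prefix)
        rng.shuffle(ranked)               # keyed permutation of the ranking
        cipher.append(ranked.index(tok))  # emit the token's position under the key
        prefix.append(tok)
    return cipher

def keyed_rank_decrypt(cipher, rank_fn, key: int):
    rng = random.Random(key)
    prefix = []
    for idx in cipher:
        ranked = rank_fn(prefix)
        rng.shuffle(ranked)               # same key -> same permutation stream
        prefix.append(ranked[idx])
    return prefix

# e.g., with a trivial "model" that ranks a four-token vocabulary alphabetically:
vocab_rank = lambda prefix: sorted(["a", "b", "c", "d"])
msg = ["b", "a", "d"]
assert keyed_rank_decrypt(keyed_rank_encrypt(msg, vocab_rank, key=42),
                          vocab_rank, key=42) == msg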

KrajbichLab (@krajbichlab)'s Twitter Profile Photo

Excited to share a new paper in Science Advances with colleagues from Bocconi. We offer a new way to estimate DDMs when the drift rate is based on differences in stimulus magnitude, e.g., value. The method is better than logit and much faster than other DDM estimation approaches. science.org/doi/full/10.11…
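
For context, the models being estimated look like the following simulation: evidence accumulates with a drift rate proportional to the value difference between two options until it hits a decision threshold. The parameter values here are arbitrary illustrations, not the paper's estimates.

# Minimal drift-diffusion simulation where the drift rate scales with the
# value difference between two options. Parameters are illustrative only.
import numpy as np

def simulate_ddm(value_left, value_right, scaling=0.1, threshold=1.0,
                 noise=1.0, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    drift = scaling * (value_left - value_right)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # Euler-Maruyama step: deterministic drift plus Gaussian noise
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("left" if evidence > 0 else "right"), t  # choice, response time

choice, rt = simulate_ddm(value_left=3.0, value_right=1.0)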

Apurva Ratan Murty (@apurvaratan)'s Twitter Profile Photo

Check out our @PNAS Commentary on the exciting new findings on face familiarity by Ben Deen and colleagues. It was super fun to write this with my awesome grad students Alish Dipani and Nikolas McNeal. pnas.org/doi/10.1073/pn…

Mariya Toneva (@mtoneva1)'s Twitter Profile Photo

Brain activity is useful for improving language models! We show that even small amounts of brain activity recorded while people listen to engaging stories (less than 0.7% of the pretraining data) substantially improve a speech LM's language understanding when incorporated as a training signal.
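
One common way to phrase "brain activity as a training signal" is an auxiliary loss added to the usual LM objective. The sketch below is a generic version of that recipe, assuming a linear projection and an MSE alignment term; the paper's actual architecture and loss may differ.

# Generic sketch: mix the usual LM loss with an auxiliary loss that pulls
# model representations toward recorded brain responses. The projection
# head, MSE term, and weight alpha are assumptions for illustration.
import torch
import torch.nn as nn

class BrainAlignedLoss(nn.Module):
    def __init__(self, hidden_dim: int, brain_dim: int, alpha: float = 0.1):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, brain_dim)  # map LM states to brain space
        self.alpha = alpha

    def forward(self, lm_loss, hidden_states, brain_responses):
        align_loss = nn.functional.mse_loss(self.proj(hidden_states),
                                            brain_responses)
        return lm_loss + self.alpha * align_loss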

Haider Al-Tahan (@haidertahan)'s Twitter Profile Photo

🚀 Excited to share our work at #NeurIPS2024! We show how billion-parameter VLMs lose to a two-layer MLP on MNIST. Come by our poster presentation at West Ballroom A-D #5211, today from 4:30–7:30 PM PST. A 🧵:

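For scale, the baseline in question is tiny. A standard two-layer MLP for 28x28 MNIST digits looks like this (hyperparameters are textbook defaults, not necessarily the paper's):

# A two-layer MLP classifier for MNIST. Hidden size and other choices are
# common defaults, assumed for illustration.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Flatten(),          # 28x28 image -> 784-dim vector
    nn.Linear(784, 256),   # layer 1
    nn.ReLU(),
    nn.Linear(256, 10),    # layer 2 -> 10 digit classes
)
# Trained with cross-entropy for a few epochs, a model like this typically
# reaches roughly 98% test accuracy on MNIST, which is the bar the
# billion-parameter VLMs fail to clear.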