Eric Lin, MD (@emailelin) 's Twitter Profile
Eric Lin, MD

@emailelin

VA Medical Informatics Fellow +
McLean Hospital Technology in Psychiatry Institute;
Formerly: Yale Psych, UCLA Med, UCLA
Interests: NLP/ML for psychiatry

ID: 94168352

Link: https://www.linkedin.com/in/ericlinmd/
Joined: 02-12-2009 20:26:12

257 Tweets

80 Followers

238 Following

Together AI (@togethercompute) 's Twitter Profile Photo

First, an instruction-tuned large language model, fine-tuned for chat from EleutherAI’s GPT-NeoX-20B with over 43 million instructions, on 100% carbon-negative compute, available under the Apache-2.0 license on Hugging Face. huggingface.co/togethercomput…

Niels Rogge (@nielsrogge) 's Twitter Profile Photo

Informer, from "Beyond Efficient Transformer for Long Sequence Time-Series Forecasting" (AAAI'21 Best Paper), is now available in Hugging Face Transformers! 🙌 We created a tutorial for multivariate forecasting, check it out: huggingface.co/blog/informer Colab with @ESimhayev (1/2)
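
As a quick illustration of the integration, a randomly initialized Informer can be instantiated directly from `transformers`; the dimensions below are tiny hypothetical values for a sketch, not the tutorial's settings (which are derived from the dataset):

```python
from transformers import InformerConfig, InformerForPrediction

# Tiny, hypothetical configuration -- real forecasting settings come
# from the dataset preprocessing shown in the linked tutorial.
config = InformerConfig(
    prediction_length=12,  # forecast horizon (time steps to predict)
    context_length=24,     # history the model conditions on
    input_size=7,          # number of variates (multivariate series)
    d_model=32,
)
model = InformerForPrediction(config)

# Randomly initialized and ready for training on a time-series dataset.
print(sum(p.numel() for p in model.parameters()))
```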

AI Breakfast (@aibreakfast) 's Twitter Profile Photo

OpenAI's speech-to-text API "Whisper" just got supercharged:

This tool transcribes audio 70x faster than Whisper

A 2-hour podcast can now be transcribed in ~30 seconds using Whisper JAX: The Fastest Whisper API

Try it here using your mic: huggingface.co/spaces/sanchit…
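
Much of the Whisper JAX speedup comes from splitting long audio into fixed 30-second windows and transcribing them as one parallel batch instead of sequentially. A minimal sketch of that chunking step (pure Python, hypothetical helper name; the real implementation also merges overlapping tokens at chunk boundaries):

```python
def chunk_audio(samples, sr=16000, chunk_s=30.0, stride_s=5.0):
    """Split audio into overlapping fixed-size windows.

    Each window is chunk_s seconds long; consecutive windows overlap by
    stride_s seconds so boundary words can be stitched back together
    after transcription.
    """
    chunk = int(chunk_s * sr)
    step = chunk - int(stride_s * sr)  # assumes chunk_s > stride_s
    chunks = []
    for start in range(0, len(samples), step):
        chunks.append(samples[start:start + chunk])
        if start + chunk >= len(samples):
            break
    return chunks

# 10 minutes of 16 kHz audio becomes a batch of parallel work items.
ten_minutes = bytearray(10 * 60 * 16000)
print(len(chunk_audio(ten_minutes)))  # -> 24
```

The overlap (stride) is what lets neighbouring chunks be merged without dropping words at the seams.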
Sebastian Raschka (@rasbt) 's Twitter Profile Photo

Want to adopt the latest LLMs for a specific target domain (e.g., finance data) or a target task like document classification? Finetuning becomes more feasible as more and more pretrained LLMs become available under open-source licenses. magazine.sebastianraschka.com/p/finetuning-l… Here's an
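
One of the cheaper recipes the article discusses — keep the pretrained backbone frozen and train only a small task-specific head (e.g., for document classification) — can be sketched with a toy PyTorch model (hypothetical layer sizes standing in for a real pretrained LLM):

```python
import torch.nn as nn

# Toy stand-in for a pretrained backbone (hypothetical dimensions).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 2)  # new classification head, trained from scratch

for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained weights

model = nn.Sequential(backbone, head)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # only a tiny fraction of weights gets updated
```

With a real LLM the ratio is far more extreme, which is exactly why head-only (and parameter-efficient) finetuning is attractive.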

Jared Palmer (@jaredpalmer) 's Twitter Profile Photo

Added more OSS language models to the Vercel AI Playground via Hugging Face: ◆ OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 ◆ EleutherAI/gpt-neox-20b ◆ bigcode/santacoder ◆ bigscience/bloom → play.vercel.ai/r/MsqSVto

Mike Nolivos (@myconull) 's Twitter Profile Photo

I had no idea about Hugging Face. People can create and share interfaces to AI apps, called Spaces. This one can: take any YouTube link, transcribe the linked video using Whisper, separate speakers, and output the transcript huggingface.co/spaces/vumichi…
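
The "separates speakers" step usually means aligning diarization output (who spoke when) with Whisper's timestamped segments. A minimal sketch of that alignment by overlap voting (hypothetical function, not the Space's actual code):

```python
def assign_speakers(asr_segments, speaker_turns):
    """Give each transcribed segment the speaker whose diarization
    turn overlaps it the most (measured in seconds)."""
    labeled = []
    for seg in asr_segments:
        best, best_overlap = "unknown", 0.0
        for turn in speaker_turns:
            overlap = min(seg["end"], turn["end"]) - max(seg["start"], turn["start"])
            if overlap > best_overlap:
                best, best_overlap = turn["speaker"], overlap
        labeled.append({**seg, "speaker": best})
    return labeled

segments = [{"start": 0.0, "end": 4.0, "text": "Hi there."},
            {"start": 4.5, "end": 9.0, "text": "Welcome back."}]
turns = [{"speaker": "SPEAKER_00", "start": 0.0, "end": 4.2},
         {"speaker": "SPEAKER_01", "start": 4.2, "end": 10.0}]
for line in assign_speakers(segments, turns):
    print(line["speaker"], line["text"])
```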

Thomas Wolf (@thom_wolf) 's Twitter Profile Photo

What was going on with the Open LLM Leaderboard? Its numbers didn't match the ones reported in the LLaMA paper! We decided to dive into this rabbit hole with friends from the LLaMA & Falcon teams and came back with a blog post of learnings & surprises: huggingface.co/blog/evaluatin…

Yura Gorishniy (@yurafivetwo) 's Twitter Profile Photo

In our current project (coming soon, stay tuned!), we (with @irubachev) have evaluated our tabular DL models on the datasets from this paper by Léo Grinsztajn Gael Varoquaux 🦋 et al. Let's discuss the results! 🧵 1/7

Science of Science (@mishateplitskiy) 's Twitter Profile Photo

The longer people are in academia, the more they realize that when reading papers it's best to ignore Intro, Discussion etc. and just look at Methods and Results journals.plos.org/plosone/articl…

Philipp Schmid (@_philschmid) 's Twitter Profile Photo

Code Llama with Hugging Face🤗 Yesterday, @MetaAI released Code Llama, a family of open-access code LLMs! Today, we release the integration in the Hugging Face ecosystem🔥 Models: 👉 huggingface.co/codellama blog post: 👉 hf.co/blog/codellama Blog post covers how to use it!
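
One feature the blog post walks through is code infilling, where the model fills in the middle of a file given the surrounding code, using special sentinel tokens. A sketch of the prompt construction (hypothetical helper; in practice the Code Llama tokenizer can build this for you):

```python
def infill_prompt(prefix: str, suffix: str) -> str:
    """Build a fill-in-the-middle prompt in Code Llama's format:
    the model generates the span between <MID> and the end-of-infill,
    i.e. the code that belongs between prefix and suffix."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = infill_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(prompt)
```

The completed prompt is then passed to a base Code Llama model (the infilling-capable variants) for generation.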

Jeremy Howard (@jeremyphoward) 's Twitter Profile Photo

I just uploaded a 90 minute tutorial, which is designed to be the one place I point coders at when they ask "hey, tell me everything I need to know about LLMs!" It starts at the basics: the 3-step pre-training / fine-tuning / classifier ULMFiT approach used in all modern LLMs.

Roy Perlis (@royperlis) 's Twitter Profile Photo

Re LLM in medicine: “We encourage efforts to understand how and when models report wrong results, with examples of such responses.” jamanetwork.com/journals/jaman…

Eric Lin, MD (@emailelin) 's Twitter Profile Photo

Important to watch out for AI regulations that effectively bolster the position of big tech incumbents by shutting out open-source researchers and developers. More of the right oversight is good. Less of the FTX-esque crypto regulation lobbying businessinsider.com/andrew-ng-goog…

Roy Perlis (@royperlis) 's Twitter Profile Photo

We showed the problem with the math in 2009 (pubmed.ncbi.nlm.nih.gov/19494805/): the conditions under which personalized selection among standard antidepressants is particularly useful are often unrealistic, in terms of differences in effectiveness (because much of the benefit is placebo-like)

Eiko Fried (@eikofried) 's Twitter Profile Photo

Neuroimaging research in psychology has led to so many false-positive, non-replicable findings due to questionable research practices & publication bias that Science Advances now publishes null findings to clean up the literature (this is from 2022). science.org/doi/10.1126/sc…

Aaditya Ura (@aadityaura) 's Twitter Profile Photo

Introducing OpenBioLLM-Llama3-70B & 8B: The most capable openly available Medical-domain LLMs to date! 🩺💊🧬 Outperforms industry giants like GPT-4, Gemini, Meditron-70B, Med-PaLM-1, and Med-PaLM-2 in the biomedical domain. 🏥📈 🌟 OpenBioLLM-70B delivers SOTA performance,

Thomas Wolf (@thom_wolf) 's Twitter Profile Photo

The new interpretability paper from Anthropic is totally based. Feels like analyzing an alien life form. If you only read one 90-min-read paper today, it has to be this one transformer-circuits.pub/2024/scaling-m…

Isaac Kohane (@zakkohane) 's Twitter Profile Photo

How well can AI (GPT4+RAG) identify patients for clinical trials relative to experts using human-defined queries? NEJM AI ai.nejm.org/doi/full/10.10… (spoiler: Very well) CODA: Which one of the authors is featured in nomenomen.us ?
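
The "RAG" half of the pipeline boils down to retrieving the most relevant note text before asking the model about trial eligibility. A toy bag-of-words retriever (stdlib only; the study's actual system would use proper embeddings — this just shows the shape of the step):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query: str, notes: list, k: int = 2) -> list:
    """Return the k notes most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(notes, key=lambda n: cosine(qv, Counter(n.lower().split())),
                    reverse=True)
    return ranked[:k]

notes = ["patient with heart failure on furosemide",
         "fractured wrist after a fall",
         "no cardiac history documented"]
print(retrieve("history of heart failure", notes, k=1))
# -> ['patient with heart failure on furosemide']
```

The retrieved passages are then placed in the LLM's context before the eligibility question is asked.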

Joel Jang (@jang_yoel) 's Twitter Profile Photo

Excited to introduce 𝐋𝐀𝐏𝐀: the first unsupervised pretraining method for Vision-Language-Action models. Outperforms SOTA models trained with ground-truth actions; 30x more efficient than conventional VLA pretraining. 📝: arxiv.org/abs/2410.11758 🧵 1/9