Jaivardhan Kapoor (@_jaivardhan_)'s Twitter Profile
Jaivardhan Kapoor

@_jaivardhan_

PhD student @mackelab and affiliated with @MPI_IS, previously @IITKanpur, @MPI_IS, @AaltoPML
I develop diffusion models for medical images and neural recordings

ID: 731230526778970112

Link: http://jkapoor.me · Joined: 13-05-2016 21:11:58

421 Tweets

328 Followers

669 Following

Richard Gao (@_rdgao) 's Twitter Profile Photo


before Lisbon, I gave a Machine Learning in Science teatime talk about surfing as a beginner to prepare folks for the frustration

but it was actually about the PhD: in SD, I realized surfing as a beginner is a lot like starting a PhD, when you, invariably, also suck

this was my guide for both:
RE-OPEN THE SIZZLERS (@saladbarfan)'s Twitter Profile Photo

B***Sky is popular enough now that I think there’s a strong possibility the left intelligentsia memes themselves back into their 2016-2020 hysteria

hinterlander (@yoltartar) 's Twitter Profile Photo

this is why i think no llm can ever be skilled in the physical world: the whole technology is "based on a (based on a (based on a true story) story) story"

Machine Learning in Science (@mackelab) 's Twitter Profile Photo

Thrilled to announce we have three #NeurIPS2024 papers! Interested in simulating realistic neural data with diffusion models or recurrent neural networks, or in source distribution sorcery? Have a look 👇 1/4

Demis Hassabis (@demishassabis) 's Twitter Profile Photo

It’s been an amazing last couple of weeks, hope you enjoyed our end of year extravaganza as much as we did! Just some of the things we shipped: state-of-the-art image, video, and interactive world models (Imagen 3, Veo 2 & Genie 2); Gemini 2.0 Flash (a highly performant and

Neel Nanda (@neelnanda5) 's Twitter Profile Photo

This is cool work! One of the annoying things about using sparse autoencoders to interpret models is that substituting them in substantially lowers model performance - this suggests they're missing something important. Turns out training a LoRA on the next layer can reduce this!