
McAuley Lab UCSD
@mcauleylabucsd
We're the McAuley lab @ucsd_cse with PI Prof. Julian McAuley!
We work and tweet about cool #MachineLearning and #NLProc applications 🧠🤖
ID: 1458563571810390021
https://cseweb.ucsd.edu/~jmcauley/ 10-11-2021 22:34:03
16 Tweets
273 Followers
26 Following

Happy to share that our paper "Deep Performer: Score-to-Audio Music Performance Synthesis" has been accepted to IEEE ICASSP 2022! 🥳 This joint work with Cong Zhou, Taylor Berg-Kirkpatrick and Julian McAuley (McAuley Lab UCSD) is based on my internship work at Dolby last summer. 🎶


Announcing with shaky hands and much delight: Our "conversational critiquing" paper is selected for "Highlights of ACM RecSys '22". 🎉 Didn't know before what it is like to be among the best of a conf ~ ACM RecSys Paper: bit.ly/3TMX891 Shuyang Li McAuley Lab UCSD 🤩


Researchers at UC San Diego developed algorithms to rid speech generated by online bots of offensive language, on social media and elsewhere. For more stories about the #UCEngineer impact on #CyberSecurity, visit ucal.us/engineersweek #Eweek2023 #UCEngineer




Highly grateful! Definitely recommend the streamlined publication experience at Transactions on Machine Learning Research. For people interested in data distillation, do check out our survey - it is designed to be to-the-point, and does not require a lot of prerequisite knowledge. Any feedback is highly appreciated!



🤖️ Are LLMs good Conversational Recommender Systems (CRS)? We (McAuley Lab UCSD and Netflix Research) let LLMs generate movie names directly in response to natural-language user requests. Key observations in the experiments:


Lead sheets concisely describe music, but can we improve their compressive ability w.r.t. the original score? Check out our new work - Unsupervised Lead Sheet Generation via Semantic Compression 📖 arxiv.org/abs/2310.10772 w/n i k i t a Taylor Berg-Kirkpatrick McAuley Lab UCSD 1/n


Fine-grained control/editing in text-to-music diffusion models w/NO TRAINING? Presenting DITTO: Diffusion Inference-Time T-Optimization for Music Generation 📖:arxiv.org/abs/2401.12179 🎹:ditto-music.github.io/web/ w/McAuley Lab UCSD Taylor Berg-Kirkpatrick Nicholas J. Bryan🧵


Q: Can we pre-train LLMs efficiently (and better?) via data pruning? A: Yes! Q: How? A: (secret) Prompt LLMs for data quality 🤫 Check out our latest work with Google DeepMind - "How to Train Data-Efficient LLMs" 📖 arxiv.org/abs/2402.09668 An expensive thread 🧵(RTs appreciated!)

