Thy Thy (@thy2512)'s Twitter Profile

Thy Thy

@thy2512

ID: 2289658004

Joined: 13-01-2014 13:09:22

39 Tweets

85 Followers

562 Following

Jeremy Howard (@jeremyphoward)

I completed my 1st data science project ~30 years ago. Since then I've been continuously developing a questionnaire I use for all new data projects, to ensure the right info is available from the start. I'm sharing it publicly today for the first time. fast.ai/2020/01/07/dat…

Julien Chaumond (@julien_c)

🔥 Introducing Tokenizers: ultra-fast, extensible tokenization for state-of-the-art NLP 🔥 ➡️github.com/huggingface/to…

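As a rough illustration of what the library does, here is a minimal sketch of training and applying a BPE tokenizer with the pip-installable `tokenizers` package; the tiny in-memory corpus and vocabulary size are illustrative assumptions, and the exact API may vary slightly between versions.

```python
# Minimal sketch: train and use a BPE tokenizer with the `tokenizers`
# package (pip install tokenizers). Corpus and vocab size are illustrative.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Build an (untrained) BPE tokenizer with whitespace pre-tokenization.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train on an in-memory corpus (a list of files works the same way via .train()).
trainer = BpeTrainer(vocab_size=1000, special_tokens=["[UNK]", "[CLS]", "[SEP]"])
corpus = [
    "Introducing Tokenizers: ultra-fast, extensible tokenization.",
    "Tokenization is the first step of most NLP pipelines.",
]
tokenizer.train_from_iterator(corpus, trainer)

# Encode a sentence: subword tokens and their vocabulary ids.
encoding = tokenizer.encode("Ultra-fast tokenization for NLP")
print(encoding.tokens)
print(encoding.ids)
```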
Graham Neubig (@gneubig)

The 2020 edition of CMU CS11-747 "Neural Networks for NLP" is starting tomorrow! We (co-teacher Pengfei Liu and 6 wonderful TAs) restructured it a bit to be more focused on "core concepts" used across a wide variety of applications. phontron.com/class/nn4nlp20… 1/2

Julia Turc (@juliarturc)

Efficient BERT models from Google Research, now available at github.com/google-researc…! We hope our 24 BERT models with fewer layers and/or hidden sizes will enable research in resource-constrained institutions and encourage building more compact models. arxiv.org/abs/1908.08962
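For anyone who wants to try one of these compact checkpoints, here is a hedged sketch of loading one through the `transformers` library; the hub identifier below (BERT-Tiny: 2 layers, hidden size 128) is an assumption and may differ from the names in the GitHub release.

```python
# Hedged sketch: load a compact BERT checkpoint with `transformers`.
# The model name is an assumed Hugging Face hub id for BERT-Tiny.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "google/bert_uncased_L-2_H-128_A-2"  # assumed id; check the release
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Compact BERT models for constrained hardware.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [batch, seq_len, hidden] -- hidden size is 128 for this checkpoint.
print(outputs.last_hidden_state.shape)
```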

Jose Camacho-Collados (@camachocollados)

Given the current situation, Taher Pilehvar and I have decided to openly release the first draft of our book “Embeddings in Natural Language Processing”. We also thank Morgan & Claypool Publishers for agreeing to this early draft release. Link: josecamachocollados.com/book_embNLP_dr…

Tal Linzen (@tallinzen)

As we near the ACL camera-ready deadline, here's a checklist that will help you make sure the paper looks nice and the repo is maintained even after you've graduated and left to pursue a professional surfing career in the Philippines. Did I miss anything?

Tal Linzen (@tallinzen)

Is there a formal definition of what it means for a language model to "know" something? E.g. which of the following scenarios counts as knowing that Paris is the capital of France?

Lilian Weng (@lilianweng)

Exploration strategies in deep RL are such a critical topic. I almost immediately regretted it when I started writing on this big subject because it has so much more content than I expected. But here it comes, phew: lilianweng.github.io/lil-log/2020/0…
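As the simplest point of reference for the topic (not anything specific to the linked post), here is a minimal epsilon-greedy sketch: explore with probability epsilon, otherwise act greedily on the current value estimates.

```python
# Epsilon-greedy action selection: the most basic exploration strategy.
# Purely illustrative; the linked post surveys far richer methods.
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Return an action index given per-action value estimates."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                     # explore
    return max(range(len(q_values)), key=q_values.__getitem__)     # exploit

action = epsilon_greedy([0.2, 0.5, 0.1], epsilon=0.2)
print(action)
```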

Robert Lange (@roberttlange)

🥳Really excited to be attending #MLSS2020. Great set of talks by Bernhard Schölkopf & Stefan Bauer starting from 101 causality to Representation Learning for Disentanglement 💯! Re-watch them here: 📺 (Part I): youtu.be/btmJtThWmhA 📺 (Part II): youtu.be/9DJWJpn0DmU

Andrew Gordon Wilson (@andrewgwils)

Happy to be giving an #ICML2020 tutorial on Bayesian Deep Learning and Probabilistic Model Construction. This area has made astounding progress in the last year. I'm grateful for the opportunity and thank the organizers for their efforts! icml.cc/Conferences/20…

ACL Anthology (@aclanthology)

The ACL Anthology is looking for a (paid) assistant to help with routine operations. There will also be time during slow periods to help with the implementation of new features and with future planning. Please share! github.com/acl-org/acl-an…

Sasha Rush (@srush_nlp)

Such an important formula, such an ambiguous mess in matrix notation... What if there was a better way? namedtensor.github.io
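The linked proposal advocates referring to tensor axes by name instead of position. As a rough analogue (using PyTorch's prototype named-tensor support, not the namedtensor library itself), here is a small sketch; the dimension names are illustrative.

```python
# Named dimensions make reductions explicit instead of relying on axis order.
import torch

# A batch of 3 embeddings with 4 features each, with named dims.
x = torch.randn(3, 4, names=("batch", "feature"))

# Reductions refer to dimensions by name, so there is no ambiguity
# about which axis the sum runs over.
per_example_norm = (x * x).sum("feature").sqrt()
print(per_example_norm.names)   # ('batch',)

# Renaming makes intent explicit when shapes would otherwise silently broadcast.
y = x.rename(feature="hidden")
print(y.names)                  # ('batch', 'hidden')
```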

David Sontag (@david_sontag)

Just one week till the start of MIT's edX course on Machine Learning for Healthcare - open to the whole world and free to audit! edx.org/course/machine…

Arthur Spirling (@arthur_spirling)

🚨 New Paper! 🚨 Want to measure how different groups (e.g. GOP vs. Dems) "understand" words differently (e.g. "immigration")? Check out our "Embedding Regression" paper (w/ Pedro L. Rodríguez and Brandon Stewart). Inference framework + software. Comments welcome! (1/4) github.com/prodriguezsosa…

(((ل()(ل() 'yoav))))👾 (@yoavgo)

day1: i have an idea!
day2: i implemented my idea and added it to the NN and it improved 10 points!
day3: oops, i had a bug and "my idea" was turned off when i achieved this gain; it was just the hyper-params.

how many times did this happen to you? how many times did you not reach day3?

Alexis Conneau (@alex_conneau)

New work: "Unsupervised speech recognition" TL;DR: it's possible for a neural network to transcribe speech into text with very strong performance, without being given any labeled data. Paper: ai.facebook.com/research/publi… Blog: ai.facebook.com/blog/wav2vec-u… Code: github.com/pytorch/fairse…

Lilian Weng (@lilianweng)

Contrastive learning aims to learn representations such that similar samples stay close while dissimilar ones are far apart. It can be applied to supervised or unsupervised data and has been shown to achieve good results on various tasks. 📚 A long read: lilianweng.github.io/lil-log/2021/0…
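To make the idea concrete, here is a minimal sketch of an InfoNCE-style contrastive loss in PyTorch, where each anchor's matching row is its positive and the rest of the batch serves as negatives; the shapes and temperature are illustrative assumptions, not the setup from the linked post.

```python
# InfoNCE-style contrastive loss: pull each (anchor, positive) pair together,
# push the anchor away from all other items in the batch.
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """anchors, positives: [batch, dim] embeddings; row i of each is a positive pair."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    # Cosine-similarity logits between every anchor and every candidate.
    logits = a @ p.t() / temperature                      # [batch, batch]
    # The matching index is the positive; everything else is a negative.
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```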

arXiv CS-CL (@arxiv_cscl)

Simulated Chats for Building Dialog Systems: Learning to Generate Conversations from Instructions arxiv.org/abs/2010.10216

SQuARE Project (@ukp_square)

We are excited to announce the public beta of the UKP Lab SQuARE platform for Question Answering research square.ukp-lab.de. Run, deploy, and compare QA Skills online without writing code! 🚀 Check out our ACL 2022 paper 📜 arxiv.org/abs/2203.13693 for more details!

Percy Liang (@percyliang)

As capabilities of foundation models are waxing, *transparency* is waning. How do we quantify transparency? We introduce the Foundation Models Transparency Index (FMTI), evaluating 10 foundation model developers on 100 indicators. crfm.stanford.edu/fmti/
