Da-Cheng Juan (@dachengjuan1) 's Twitter Profile
Da-Cheng Juan

@dachengjuan1

Research, Technology, Products.
Tech Lead Manager @GoogleAI & Adjunct Faculty @NTsingHuaU
All opinions are my own.

ID: 1253016284767203328

Joined: 22-04-2020 17:42:22

14 Tweets

65 Followers

42 Following

Sebastian Ruder (@seb_ruder) 's Twitter Profile Photo

10 Tips for Research and a PhD I've been asked in the past to provide advice on doing research. Here are 10 tips that worked well for me and will hopefully also be useful to others. ruder.io/10-tips-for-re…

Yi Tay (@yitayml) 's Twitter Profile Photo

Our paper "Sparse Sinkhorn Attention" is accepted to #ICML2020! 😃 In this paper, we propose efficient self-attention via differentiable sorting. Joint work with Dara Bahri @liuyangumass Don Metzler Da-Cheng Juan Preprint: arxiv.org/abs/2002.11296
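The tweet's one-line summary, efficient self-attention via differentiable sorting, rests on Sinkhorn normalization: relaxing a hard permutation matrix into a doubly-stochastic one by alternating row and column normalization. A minimal NumPy sketch of that normalization step alone (names are illustrative, not the paper's code):

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Sinkhorn normalization: alternately normalize rows and columns
    (in log space, for stability) until the matrix is approximately
    doubly stochastic -- a soft, differentiable relaxation of a
    permutation matrix."""
    log_p = scores.astype(float)
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))  # rows -> 1
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))  # cols -> 1
    return np.exp(log_p)

P = sinkhorn(np.random.default_rng(0).normal(size=(4, 4)))
```

In the paper this soft permutation is learned from block-level representations and used to re-sort the sequence before local attention; the sketch covers only the normalization itself.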

Ines Chami (@chamii22) 's Twitter Profile Photo

Excited to share a video describing our approach to learn hyperbolic Knowledge Graph embeddings! youtube.com/watch?v=Yf03-C… Thanks to my amazing collaborators! Adva Wolf Da-Cheng Juan Fred Sala Sujith Ravi hazyresearch

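Hyperbolic KG embeddings score entity pairs by distances in the Poincaré ball rather than Euclidean space, which suits hierarchical graphs. A sketch of the standard Poincaré distance (illustrative only; models in this line also use learned curvatures and relation-specific transforms, which this omits):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit ball,
    under the Poincaré ball model of hyperbolic space."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq_dist / (denom + eps))

# distance grows much faster than Euclidean near the boundary
d = poincare_distance(np.zeros(2), np.array([0.5, 0.0]))
```

For a point at radius 0.5 from the origin this gives arccosh(5/3) = ln 3, already larger than the Euclidean 0.5, illustrating how the ball "expands" toward its boundary.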
Arjun Gopalan (@gopalanarjun) 's Twitter Profile Photo

Join us at KDD on Tuesday, 9 AM - 12 PM Pacific time. We’re organizing a hands-on tutorial on Neural Structured Learning. Tutorial outline at github.com/tensorflow/neu…. Find us on whova or Vfairs. #KDD2020 Da-Cheng Juan SIGKDD 2025

Yi Tay (@yitayml) 's Twitter Profile Photo

Inspired by the dizzying number of efficient Transformer ("x-former") models that have been coming out lately, we wrote a survey paper to organize all this information. Check it out at arxiv.org/abs/2009.06732. Joint work with Mostafa Dehghani Dara Bahri and Don Metzler. Google AI 😀😃

TensorFlow (@tensorflow) 's Twitter Profile Photo

📊 Neural Structured Learning in TFX. In Arjun Gopalan's new blog, you'll learn how graph regularization can be implemented using custom TFX components and how adversarial learning can be deployed seamlessly with TFX. Read more ↓ blog.tensorflow.org/2020/10/neural…
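Graph regularization, in a nutshell: train on the usual supervised loss plus a penalty that pulls the embeddings of graph-neighboring examples together. A toy NumPy sketch of that combined objective (illustrative only; the actual TFX/NSL APIs described in the blog post look nothing like this):

```python
import numpy as np

def graph_regularized_loss(embeddings, labels, logits, edges, alpha=0.1):
    """Supervised cross-entropy plus a graph regularizer that
    penalizes distance between embeddings of neighboring examples."""
    # softmax cross-entropy on labeled examples
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    supervised = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # squared-distance penalty over graph edges (i, j)
    reg = np.mean([np.sum((embeddings[i] - embeddings[j]) ** 2) for i, j in edges])
    return supervised + alpha * reg
```

Neighbors with identical embeddings incur zero penalty, so the regularizer nudges the model toward predictions that are smooth over the graph.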

Yi Tay (@yitayml) 's Twitter Profile Photo

As a companion to our recent efficient Transformer survey, we designed "Long Range Arena", a new challenging benchmark to help understand and analyze trade-offs between recent efficient Transformer models. Check out our paper at arxiv.org/abs/2011.04006. Google AI @DeepMind

Crossminds.ai (@crossmindsai) 's Twitter Profile Photo

[#NeurIPS2020 Talk] Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization by researchers from NTHU Google AI An-Chieh Cheng Da-Cheng Juan #paperwithvideo #machinelearning #neuralnetwork #deeplearning #convolution bit.ly/3nE3veY

Yi Tay (@yitayml) 's Twitter Profile Photo

Excited to share that our Synthesizer paper was recently accepted to #ICML2021. Random Synthesizers are actually "All-MLP" architectures and MLP-Mixers are a form of Random Synthesizers (or vice versa)! preprint: arxiv.org/abs/2005.00743 code: github.com/tensorflow/mesh

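The "All-MLP" observation is easy to see in code: a Random Synthesizer replaces the input-dependent query-key dot product with a learned logit matrix, so the mixing step is just softmax(R) @ V. A toy single-head sketch with no projections (illustrative, not the paper's implementation):

```python
import numpy as np

def random_synthesizer(values, attn_logits):
    """Random Synthesizer attention: the attention matrix comes from a
    learned parameter (attn_logits), independent of the input tokens,
    so token mixing is a fixed (but trainable) linear blend of values."""
    shifted = attn_logits - attn_logits.max(axis=-1, keepdims=True)  # stable softmax
    weights = np.exp(shifted)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ values

V = np.ones((3, 4))                                  # 3 tokens, dim 4
R = np.random.default_rng(1).normal(size=(3, 3))     # learned logits (here random)
out = random_synthesizer(V, R)
```

Because each row of softmax(R) is a convex combination, constant value vectors pass through unchanged, and the whole layer is input-independent mixing, which is why it composes with MLP-Mixer-style architectures.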
Google AI (@googleai) 's Twitter Profile Photo

#Pathways is a new #ML architecture that will let us train a single model to accomplish thousands or millions of tasks, across modalities - while consuming much less energy than typical models do today. Learn more at goo.gle/3jHFST5, and in the TED Talk below.