Akiyo Fukatsu (@akiyohukat_u)'s Twitter Profile
Akiyo Fukatsu

@akiyohukat_u

language acquisition and morphological inflection

ID: 1635993151390498816

Joined: 15-03-2023 13:15:51

83 Tweets

181 Followers

169 Following

Masato Mita (@chemical_tree)'s Twitter Profile Photo

My lead-authored paper has been accepted to the ACL 2024 main conference 🎉 It is a further development of the Japanese ad text generation benchmark (CAMERA) work that won the Committee Special Award at NLP2023!

Tatsuki Kuribayashi (@ttk_kuribayashi)'s Twitter Profile Photo

Our paper will be presented by the first author, Yukiko, on 5/22 18:50- in LREC COLING 2024! arxiv.org/abs/2404.11315 Humans often omit words in pro-drop languages, e.g., Japanese. We create a large dataset on natural omission and analyze factors behind such human ellipsis decisions.

Masato Mita (@chemical_tree)'s Twitter Profile Photo

(A bit belatedly...) A paper co-authored with Ueda-san of Tokyo Metropolitan University has been accepted to LREC-COLING 2024 👍 We show that language models' acceptability judgments on minimal-pair paradigm (MPP) datasets such as BLiMP are affected by a token-length bias, that this bias cannot be mitigated by simple normalization, and we propose a countermeasure with a debiasing effect. aclanthology.org/2024.lrec-main…

Ryo Yoshida (@yoshiryo0617)'s Twitter Profile Photo

"Tree-Planted Transformers: Unidirectional Transformer Language Models with Implicit Syntactic Supervision" (preprint: arxiv.org/abs/2402.12691) has been accepted to ACL 2024 (Findings). This is joint work with Someya-san (Taiga Someya) and Prof. Oseki.

Masato Mita (@chemical_tree)'s Twitter Profile Photo

Two of my co-authored papers appear in the latest issue (Vol. 31, No. 2) of the journal Natural Language Processing. Please take a look. ・Cloze Quality Estimation for Language Assessment (jstage.jst.go.jp/article/jnlp/3…) ・NLP2024 Theme Session "Evaluating Language by Humans and Computers" (jstage.jst.go.jp/article/jnlp/3…)

Masato Mita (@chemical_tree)'s Twitter Profile Photo

I'm attending #ACL2024 to present our work on standardizing and building the first benchmark for the task of generating attractive text beyond accuracy, i.e., ad text generation. Looking forward to seeing you all in Bangkok!🇹🇭

Akari Haga (@_akari000)'s Twitter Profile Photo

I will present "Modeling Overregularization in Children with Small Language Models" at #ACL2024NLP. I am excited to meet you tomorrow or the day after! Date: 12:15-13:15, 14th (Findings Poster Session 4) and 11:00 - 12:20, 15th (Workshop on CMCL) Paper: aclanthology.org/2024.findings-…

Momoka Furuhashi (@tohoku_nlp_mmk)'s Twitter Profile Photo

I will give the following presentation on Thursday, September 5, 17:50–18:50! [S3-P24] Comprehensive evaluation of generation tasks using checklists. This is the research on automatic evaluation that I am working on as an RA at NII. I hope to discuss it with many of you ☺️ #YANS2024 Tohoku NLP Group

Miyu Oba (@rodamille)'s Twitter Profile Photo

🎉Thrilled to announce that our paper is accepted at EMNLP 2024 Main! We investigate whether LMs induce grammatical knowledge from indirect evidence, considered to contribute to efficient human language acquisition. arXiv coming soon… #EMNLP2024

Miyu Oba (@rodamille)'s Twitter Profile Photo

📢EMNLP paper alert The paper will be presented in the oral session on Nov 11 at 4pm 🤔Can language models induce grammatical knowledge from indirect evidence? This is considered one of the inductive biases contributing to efficient human language acquisition.

Akari Haga (@_akari000)'s Twitter Profile Photo

I will present "BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency" at #CoNLL2024 Date: Sat 16 Nov 16:00–17:20 at BabyLM Challenge 2024 poster session Paper: arxiv.org/abs/2411.09587

Akari Haga (@_akari000)'s Twitter Profile Photo

👶I am happy to announce that our paper "BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency" received the ✨Outstanding Paper Award✨ at BabyLM!! #CoNLL2024 #EMNLP2024

Kohei Kajikawa (@qjqw1223)'s Twitter Profile Photo

Our paper won the Best Paper Award at CoNLL 2024!! I am deeply honoured to receive such a wonderful award. Thank you for selecting us! Paper link: aclanthology.org/2024.conll-1.2…