🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile
🇺🇦 Taras Kucherenko @svitozar@masto.nu

@svitozartaras

ML Research Scientist @SEED, @EA. My research is on using #machinelearning for #animation of virtual agents. Effective altruist. Opinions are mine. 🇺🇦

ID: 703298860668604417

Link: https://svito-zar.github.io/ · Joined: 26-02-2016 19:21:29

329 Tweets

433 Followers

240 Following

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

It was my pleasure to give a tutorial on gesture generation at the WASP Research Summer School 2022. Students made a full story using ML: text using GPT-3, then text2speech and speech2gestures. Here are some examples: youtube.com/playlist?list=… @WASPresearch

SEED (@seed)'s Twitter Profile Photo

Do gestures really affect how people perceive animated characters? And if so, how do you measure it? SEED’s own 🇺🇦 Taras Kucherenko @svitozar@masto.nu co-authored a paper for The IVA Conference that shows how to test people's reactions to gesturing characters. Check it out! #IVA22 ea.com/seed/news/eval…

SEED (@seed)'s Twitter Profile Photo

When a game character gestures as it talks, does it feel lifelike? And how do you benchmark the realism of co-speech gestures? This was the purpose of the GENEA Challenge 2022. Check out the paper and video. It's presented this week at ICMI 2022. ea.com/seed/news/gene…

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

Show-off alert! My first published work, from IVA 2019, has now reached 100 citations 🥳 Of those, 14 were self-citations, i.e., at least one of the paper's authors was also an author of the citing work. What is your self-citation rate? :) #animation #MachineLearning

Prof. Asli Ozyurek @ozyurek_a (@ozyurek_a)'s Twitter Profile Photo

We are very happy to announce the last NGC (Nijmegen Gesture Center) talk of 2022, on 14 Dec 16:00 CET, by Dr. Taras Kucherenko (🇺🇦 @svitozar@masto.nu), working at Electronic Arts. This is a hybrid talk, given in person at MaxPlanck-Psycholinguistics, but it can also be watched on Zoom. To sign up: sites.google.com/view/nijmegeng…

Prof. Asli Ozyurek @ozyurek_a (@ozyurek_a)'s Twitter Profile Photo

The Multimodal Language Dept @MPI_NL is HIRING at MaxPlanck-Psycholinguistics!! Looking for a software engineer / machine learning specialist who will help develop tools to automatically recognise speech and gestures in big corpora!! In collaboration with Peter Uhrig, @esam__ghaleb and others. mpi.nl/career-educati…

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

I have a tradition to make my CO2 emission public after the New Year. Not fully sure yet why I do it 😃 Last year my CO2 emission was much higher than I wished, mainly due to a work trip to India. This year I plan to mainly stay in Europe and hope to have much lower #emission.

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

It is hard for me to listen to a discussion about sustainability and climate change that does not advocate for nuclear energy. Is it just me? 🤔 #Sustainability #NuclearEnergy

SEED (@seed)'s Twitter Profile Photo

Lifelike and emotive gestures are a huge part of making believable human game characters. SEED’s own 🇺🇦 Taras Kucherenko @svitozar@masto.nu has co-authored this terrific paper for #Eurographics2023 that surveys the landscape of data-driven gesture generation. Check it out! ea.com/seed/news/seed…

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

Last week at #EuroGraphics2023 I gave my longest conference presentation yet (45 min). Together with Michael Neff, we presented our review paper "A Comprehensive Review of Data-Driven Co-Speech Gesture Generation": diglib.eg.org/xmlui/handle/1… #animation #machinelearning #hci

🇺🇦 Taras Kucherenko @svitozar@masto.nu (@svitozartaras)'s Twitter Profile Photo

Pretty impressive #SIGGRAPH2023 dance synthesis and gesture generation model from my ex-colleagues at KTH: Listen, denoise, action! Audio-driven motion synthesis with diffusion models youtu.be/Qfd2EpzWgok #gesture #animation