TISMIR journal (@tismirj) 's Twitter Profile
TISMIR journal

@tismirj

Transactions of the International Society for Music Information Retrieval (TISMIR). Official Twitter account.

ID: 1053906088343547905

Website: http://transactions.ismir.net · Joined: 21-10-2018 07:09:26

327 Tweets

960 Followers

116 Following

Deezer Research (@researchdeezer) 's Twitter Profile Photo

Congratulations to Kristina Matrosova, Manuel Moussallam, Thomas Louail, and Olivier Bodini for their paper "Depict or Discern? Fingerprinting Musical Taste from Explicit Preferences" published in TISMIR journal! 👏 Paper link: transactions.ismir.net/articles/10.53…

TISMIR journal (@tismirj) 's Twitter Profile Photo

Call for Papers for a special collection on Multi-Modal Music Information Retrieval, with a deadline of August 1st. We particularly encourage under-explored repertoire, new connections between fields, and novel research areas. account.transactions.ismir.net/index.php/up-j…

TISMIR journal (@tismirj) 's Twitter Profile Photo

NEW: Dataset paper by Simon Schwär, Michael Krause, Michael Fast, Sebastian Rosenzweig, Frank Scherbaum, and Meinard Müller: "A Dataset of Larynx Microphone Recordings for Singing Voice Reconstruction": 4h of pop recordings with guitar + a differentiable source/filter signal-processing method. transactions.ismir.net/articles/10.53…

TISMIR journal (@tismirj) 's Twitter Profile Photo

NEW: Overview article by Stefan Uhlich and 16 co-authors (Gio Fabbro, Jonathan Le Roux, Dipam Chakraborty, Sharada Mohanty, Kai Li, Yuki Mitsufuji, …): "The Sound Demixing Challenge 2023 – Cinematic Demixing Track", covering the setup and structure of the competition, the datasets… transactions.ismir.net/articles/10.53…

Sound Demixing Challenge (@sounddemix) 's Twitter Profile Photo

We are happy to announce that our two papers summarizing the Sound Demixing Challenge 2023 have been published in the TISMIR journal! Thank you to everyone for their hard work! Music Track: transactions.ismir.net/articles/10.53… Cinematic Track: transactions.ismir.net/articles/10.53…

Yuki Mitsufuji (@mittu1204) 's Twitter Profile Photo

If you wish to familiarize yourself with recent strong models for sound separation, look into our TISMIR journal articles that just came out this week. These are reports on the Sound Demixing Challenge 2023, where I served as general chair. Music Track: transactions.ismir.net/articles/10.53…

TISMIR journal (@tismirj) 's Twitter Profile Photo

NEW: "Introducing the TISMIR Education Track: What, Why, How?" The Education Track focuses on tutorial-style delivery and emphasizes existing MIR research methods, techniques, principles, and practical matters relevant to the MIR community. transactions.ismir.net/articles/10.53…

TISMIR journal (@tismirj) 's Twitter Profile Photo

NEW: Research paper by Lucas Maia, Martín Rocamora, Luiz W. P. Biscainho, and Magdalena Fuentes: "Selective Annotation of Few Data for Beat Tracking of Latin American Music Using Rhythmic Features", based on rhythmic features and constrained selection methods. transactions.ismir.net/articles/10.53…

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

🎉 Exciting news! Our latest article on beat tracking with few data has been published in the TISMIR journal! 📜 article: transactions.ismir.net/articles/10.53… 💻 code: github.com/maia-ls/tismir…

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

We explore the problem of annotation in culture-specific datasets, building upon our previous research: 📜 ISMIR 2022: zenodo.org/records/7385261, which showed that SOTA beat trackers yield good results with few training points, provided that the dataset is homogeneous.

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

We propose a methodology for selectively annotating a small subset of meaningful samples with the objective of training a state-of-the-art beat tracker.

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

Our framework consists of extracting a rhythmic feature from each track and applying selection methods that exploit the internal distribution of the data while leveraging representativeness and diversity. The selection process is subject to a user-informed annotation budget.
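(Illustrative aside, not from this thread or the authors' code: the kind of budget-constrained selection described above can be sketched in a few lines of Python. The sketch assumes each track is already summarized by a fixed-length rhythmic feature vector and uses k-means so that one representative track per cluster covers the feature space within the annotation budget; the function name and feature dimensions are placeholders.)

```python
# Minimal sketch (not the paper's exact method): budget-constrained selection
# of tracks to annotate, based on per-track rhythmic feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min


def select_tracks_for_annotation(features: np.ndarray, budget: int, seed: int = 0):
    """Pick `budget` track indices that cover the feature space.

    Clustering into `budget` groups encourages diversity (one pick per
    cluster); taking the track nearest each centroid encourages
    representativeness.
    """
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=seed)
    kmeans.fit(features)
    # Index of the track closest to each cluster centre.
    selected, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, features)
    return np.unique(selected)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for rhythmic features of 500 tracks (e.g., 40-bin tempogram means).
    feats = rng.normal(size=(500, 40))
    picks = select_tracks_for_annotation(feats, budget=20)
    print(f"Annotate these {len(picks)} tracks first:", picks)
```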

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

Our experiments highlight the importance of carefully selecting the training data for deep-learning-based beat-tracking models in few-data scenarios, and our proposed framework shows better results 📈 than random data selection 📉

Lucas Maia (@lsimoesmaia) 's Twitter Profile Photo

We hope our study can help alleviate data annotation bottlenecks, especially in the case of culture-specific datasets, and ultimately help build a more culturally diverse perspective in the field of Music Information Retrieval.

TISMIR journal (@tismirj) 's Twitter Profile Photo

NEW: Research article by Isabella Czedik-Eysenberg, Oliver Wieczorek, Arthur Flexer, and Christoph Reuter: "Charting the Universe of Metal Music Lyrics and Analyzing Their Relation to Perceived Audio Hardness", via a topic model based on 124,288 song lyrics. transactions.ismir.net/articles/10.53…
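(Illustrative aside, not the authors' pipeline: a topic model over song lyrics can be prototyped with scikit-learn as below. The tiny lyrics list, vocabulary cap, and topic count are placeholder assumptions; the paper's actual corpus and modeling choices are described in the article.)

```python
# Generic illustration (not the paper's method): fit a topic model to a
# small placeholder corpus of lyrics and print the top words per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

lyrics = [
    "fire and steel ride into battle",
    "darkness falls upon the frozen north",
    "broken hearts and whiskey on the road",
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(lyrics)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = terms[weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```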