Yusuke Ijima
2020 – today
- 2024
- [j8] Takanori Ashihara, Marc Delcroix, Yusuke Ijima, Makio Kashino: Unveiling the Linguistic Capabilities of a Self-Supervised Speech Model Through Cross-Lingual Benchmark and Layer-Wise Similarity Analysis. IEEE Access 12: 98835-98855 (2024)
- [j7] Kenichi Fujita, Atsushi Ando, Yusuke Ijima: Speech Rhythm-Based Speaker Embeddings Extraction from Phonemes and Phoneme Duration for Multi-Speaker Speech Synthesis. IEICE Trans. Inf. Syst. 107(1): 93-104 (2024)
- [c46] Takanori Ashihara, Marc Delcroix, Takafumi Moriya, Kohei Matsuura, Taichi Asami, Yusuke Ijima: What Do Self-Supervised Speech and Speaker Models Learn? New Findings from a Cross Model Layer-Wise Analysis. ICASSP 2024: 10166-10170
- [c45] Kazuki Yamauchi, Yusuke Ijima, Yuki Saito: STYLECAP: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-Supervised Learning Models. ICASSP 2024: 11261-11265
- [c44] Kenichi Fujita, Hiroshi Sato, Takanori Ashihara, Hiroki Kanagawa, Marc Delcroix, Takafumi Moriya, Yusuke Ijima: Noise-Robust Zero-Shot Text-to-Speech Synthesis Conditioned on Self-Supervised Speech-Representation Model with Adapters. ICASSP 2024: 11471-11475
- [i9] Kenichi Fujita, Hiroshi Sato, Takanori Ashihara, Hiroki Kanagawa, Marc Delcroix, Takafumi Moriya, Yusuke Ijima: Noise-robust zero-shot text-to-speech synthesis conditioned on self-supervised speech-representation model with adapters. CoRR abs/2401.05111 (2024)
- [i8] Takanori Ashihara, Marc Delcroix, Takafumi Moriya, Kohei Matsuura, Taichi Asami, Yusuke Ijima: What Do Self-Supervised Speech and Speaker Models Learn? New Findings From a Cross Model Layer-Wise Analysis. CoRR abs/2401.17632 (2024)
- [i7] Kenichi Fujita, Atsushi Ando, Yusuke Ijima: Speech Rhythm-Based Speaker Embeddings Extraction from Phonemes and Phoneme Duration for Multi-Speaker Speech Synthesis. CoRR abs/2402.07085 (2024)
- [i6] Kenichi Fujita, Takanori Ashihara, Marc Delcroix, Yusuke Ijima: Lightweight Zero-shot Text-to-Speech with Mixture of Adapters. CoRR abs/2407.01291 (2024)
- 2023
- [c43] Kenichi Fujita, Takanori Ashihara, Hiroki Kanagawa, Takafumi Moriya, Yusuke Ijima: Zero-Shot Text-to-Speech Synthesis Conditioned Using Self-Supervised Speech Representation Model. ICASSP Workshops 2023: 1-5
- [c42] Hiroki Kanagawa, Yusuke Ijima: Enhancement of Text-Predicting Style Token With Generative Adversarial Network for Expressive Speech Synthesis. ICASSP 2023: 1-5
- [c41] Hiroki Kanagawa, Takafumi Moriya, Yusuke Ijima: VC-T: Streaming Voice Conversion Based on Neural Transducer. INTERSPEECH 2023: 2088-2092
- [c40] Takanori Ashihara, Takafumi Moriya, Kohei Matsuura, Tomohiro Tanaka, Yusuke Ijima, Taichi Asami, Marc Delcroix, Yukinori Honma: SpeechGLUE: How Well Can Self-Supervised Speech Models Capture Linguistic Knowledge? INTERSPEECH 2023: 2888-2892
- [c39] Mizuki Nagano, Yusuke Ijima, Sadao Hiroya: A stimulus-organism-response model of willingness to buy from advertising speech using voice quality. INTERSPEECH 2023: 5202-5206
- [c38] Hikaru Yanagida, Yusuke Ijima, Naohiro Tawara: Influence of Personal Traits on Impressions of One's Own Voice. INTERSPEECH 2023: 5212-5216
- [i5] Kenichi Fujita, Takanori Ashihara, Hiroki Kanagawa, Takafumi Moriya, Yusuke Ijima: Zero-shot text-to-speech synthesis conditioned using self-supervised speech representation model. CoRR abs/2304.11976 (2023)
- [i4] Takanori Ashihara, Takafumi Moriya, Kohei Matsuura, Tomohiro Tanaka, Yusuke Ijima, Taichi Asami, Marc Delcroix, Yukinori Honma: SpeechGLUE: How Well Can Self-Supervised Speech Models Capture Linguistic Knowledge? CoRR abs/2306.08374 (2023)
- [i3] Kazuki Yamauchi, Yusuke Ijima, Yuki Saito: StyleCap: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-supervised Learning Models. CoRR abs/2311.16509 (2023)
- 2022
- [c37] Hiroki Kanagawa, Yusuke Ijima: Multi-Sample Subband WaveRNN Via Multivariate Gaussian. ICASSP 2022: 8427-8431
- [c36] Hiroki Kanagawa, Yusuke Ijima, Hiroyuki Toda: Joint Modeling of Multi-Sample and Subband Signals for Fast Neural Vocoding on CPU. INTERSPEECH 2022: 1626-1630
- [c35] Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, Yuki Saito, Yusuke Ijima, Ryo Masumura, Hiroshi Saruwatari: Predicting VQVAE-based Character Acting Style from Quotation-Annotated Text for Audiobook Speech Synthesis. INTERSPEECH 2022: 4551-4555
- [c34] Yusuke Ijima, Yuta Furudate, Kaori Chiba, Yuji Ishida, Sadayoshi Mikami: Automated Recognition of Off Phenomenon in Parkinson's Disease During Walking: - Measurement in Daily Life with Wearable Device -. LifeTech 2022: 273-275
- [c33] Hiroki Kanagawa, Yusuke Ijima: SIMD-Size Aware Weight Regularization for Fast Neural Vocoding on CPU. SLT 2022: 955-961
- [i2] Hiroki Kanagawa, Yusuke Ijima: SIMD-size aware weight regularization for fast neural vocoding on CPU. CoRR abs/2211.00898 (2022)
- 2021
- [j6] Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, Yusuke Ijima: Model architectures to extrapolate emotional expressions in DNN-based text-to-speech. Speech Commun. 126: 35-43 (2021)
- [c32] Naohiro Tawara, Atsunori Ogawa, Yuki Kitagishi, Hosana Kamiyama, Yusuke Ijima: Robust Speech-Age Estimation Using Local Maximum Mean Discrepancy Under Mismatched Recording Conditions. ASRU 2021: 114-121
- [c31] Takafumi Moriya, Takanori Ashihara, Tomohiro Tanaka, Tsubasa Ochiai, Hiroshi Sato, Atsushi Ando, Yusuke Ijima, Ryo Masumura, Yusuke Shinohara: Simpleflat: A Simple Whole-Network Pre-Training Approach for RNN Transducer-Based End-to-End Speech Recognition. ICASSP 2021: 5664-5668
- [c30] Atsushi Ando, Ryo Masumura, Hiroshi Sato, Takafumi Moriya, Takanori Ashihara, Yusuke Ijima, Tomoki Toda: Speech Emotion Recognition Based on Listener Adaptive Models. ICASSP 2021: 6274-6278
- [c29] Naoto Kakegawa, Sunao Hara, Masanobu Abe, Yusuke Ijima: Phonetic and Prosodic Information Estimation from Texts for Genuine Japanese End-to-End Text-to-Speech. Interspeech 2021: 126-130
- [c28] Mizuki Nagano, Yusuke Ijima, Sadao Hiroya: Impact of Emotional State on Estimation of Willingness to Buy from Advertising Speech. Interspeech 2021: 2486-2490
- [c27] Kenichi Fujita, Atsushi Ando, Yusuke Ijima: Phoneme Duration Modeling Using Speech Rhythm-Based Speaker Embeddings for Multi-Speaker Speech Synthesis. Interspeech 2021: 3141-3145
- [c26] Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, Naoko Tanji, Yusuke Ijima, Ryo Masumura, Hiroshi Saruwatari: Audiobook Speech Synthesis Conditioned by Cross-Sentence Context-Aware Word Embeddings. SSW 2021: 211-215
- 2020
- [c25] Hiroki Kanagawa, Yusuke Ijima: Lightweight LPCNet-Based Neural Vocoder with Tensor Decomposition. INTERSPEECH 2020: 205-209
- [c24] Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, Hiroshi Saruwatari: Investigating Effective Additional Contextual Factors in DNN-Based Spontaneous Speech Synthesis. INTERSPEECH 2020: 3201-3205
- [c23] Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, Hiroshi Saruwatari: DNN-based Speech Synthesis Using Abundant Tags of Spontaneous Speech Corpus. LREC 2020: 6438-6443
2010 – 2019
- 2019
- [c22] Ryo Masumura, Yusuke Ijima, Satoshi Kobashikawa, Takanobu Oba, Yushi Aono: Can We Simulate Generative Process of Acoustic Modeling Data? Towards Data Restoration for Acoustic Modeling. APSIPA 2019: 655-661
- [c21] Ryo Masumura, Hiroshi Sato, Tomohiro Tanaka, Takafumi Moriya, Yusuke Ijima, Takanobu Oba: End-to-End Automatic Speech Recognition with a Reconstruction Criterion Using Speech-to-Text and Text-to-Speech Encoder-Decoders. INTERSPEECH 2019: 1606-1610
- [c20] Hiroki Kanagawa, Yusuke Ijima: Multi-Speaker Modeling for DNN-based Speech Synthesis Incorporating Generative Adversarial Networks. SSW 2019: 40-44
- [c19] Taiki Nakamura, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Hiroshi Saruwatari: V2S attack: building DNN-based voice conversion from automatic speaker verification. SSW 2019: 161-165
- [i1] Taiki Nakamura, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Hiroshi Saruwatari: V2S attack: building DNN-based voice conversion from automatic speaker verification. CoRR abs/1908.01454 (2019)
- 2018
- [j5] Nobukatsu Hojo, Yusuke Ijima, Hideyuki Mizuno: DNN-Based Speech Synthesis Using Speaker Codes. IEICE Trans. Inf. Syst. 101-D(2): 462-472 (2018)
- [c18] Atsushi Ando, Satoshi Kobashikawa, Hosana Kamiyama, Ryo Masumura, Yusuke Ijima, Yushi Aono: Soft-Target Training with Ambiguous Emotional Utterances for DNN-Based Speech Emotion Classification. ICASSP 2018: 4964-4968
- [c17] Yuki Saito, Yusuke Ijima, Kyosuke Nishida, Shinnosuke Takamichi: Non-Parallel Voice Conversion Using Variational Autoencoders Conditioned by Phonetic Posteriorgrams and D-Vectors. ICASSP 2018: 5274-5278
- [c16] Ryo Masumura, Yusuke Ijima, Taichi Asami, Hirokazu Masataki, Ryuichiro Higashinaka: Neural Confnet Classification: Fully Neural Network Based Spoken Utterance Classification Using Word Confusion Networks. ICASSP 2018: 6039-6043
- 2017
- [c15] Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, Yusuke Ijima: An investigation to transplant emotional expressions in DNN-based TTS synthesis. APSIPA 2017: 1253-1258
- [c14] Takuhiro Kaneko, Hirokazu Kameoka, Nobukatsu Hojo, Yusuke Ijima, Kaoru Hiramatsu, Kunio Kashino: Generative adversarial network-based postfilter for statistical parametric speech synthesis. ICASSP 2017: 4910-4914
- [c13] Yusuke Ijima, Nobukatsu Hojo, Ryo Masumura, Taichi Asami: Prosody Aware Word-Level Encoder Based on BLSTM-RNNs for DNN-Based Speech Synthesis. INTERSPEECH 2017: 764-768
- [c12] Nobukatsu Hojo, Yasuhito Ohsugi, Yusuke Ijima, Hirokazu Kameoka: DNN-SPACE: DNN-HMM-Based Generative Model of Voice F0 Contours for Statistical Phrase/Accent Command Estimation. INTERSPEECH 2017: 1074-1078
- 2016
- [c11] Yusuke Ijima, Taichi Asami, Hideyuki Mizuno: Objective Evaluation Using Association Between Dimensions Within Spectral Features for Statistical Parametric Speech Synthesis. INTERSPEECH 2016: 337-341
- [c10] Nobukatsu Hojo, Yusuke Ijima, Hideyuki Mizuno: An Investigation of DNN-Based Speech Synthesis Using Speaker Codes. INTERSPEECH 2016: 2278-2282
- 2015
- [j4] Yusuke Ijima, Hideyuki Mizuno: Similar Speaker Selection Technique Based on Distance Metric Learning Using Highly Correlated Acoustic Features with Perceptual Voice Quality Similarity. IEICE Trans. Inf. Syst. 98-D(1): 157-165 (2015)
- [j3] Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno, Sumitaka Sakauchi: Statistical model training technique based on speaker clustering approach for HMM-based speech synthesis. Speech Commun. 71: 50-61 (2015)
- [c9] Tadashi Inai, Sunao Hara, Masanobu Abe, Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno: Sub-band text-to-speech combining sample-based spectrum with statistically generated spectrum. INTERSPEECH 2015: 264-268
- 2014
- [j2] Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: Prosodic variation enhancement using unsupervised context labeling for HMM-based expressive speech synthesis. Speech Commun. 57: 144-154 (2014)
- 2013
- [c8] Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: HMM-based expressive speech synthesis based on phrase-level F0 context labeling. ICASSP 2013: 7859-7863
- [c7] Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno: Statistical model training technique for speech synthesis based on speaker class. SSW 2013: 141-145
- 2012
- [c6] Yusuke Ijima, Mitsuaki Isogai, Hideyuki Mizuno: Similar Speaker Selection Technique Based on Distance Metric Learning with Perceptual Voice Quality Similarity. INTERSPEECH 2012: 1997-2000
- 2011
- [c5] Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka: HMM-Based Emphatic Speech Synthesis Using Unsupervised Context Labeling. INTERSPEECH 2011: 1849-1852
- [c4] Yusuke Ijima, Mitsuaki Isogai, Hideyuki Mizuno: Correlation Analysis of Acoustic Features with Perceptual Voice Quality Similarity for Similar Speaker Selection. INTERSPEECH 2011: 2237-2240
- 2010
- [j1] Yusuke Ijima, Takashi Nose, Makoto Tachibana, Takao Kobayashi: A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM. IEICE Trans. Inf. Syst. 93-D(1): 107-115 (2010)
2000 – 2009
- 2009
- [c3] Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi: Emotional speech recognition based on style estimation and adaptation with multiple-regression HMM. ICASSP 2009: 4157-4160
- [c2] Yusuke Ijima, Takeshi Matsubara, Takashi Nose, Takao Kobayashi: Speaking style adaptation for spontaneous speech recognition using multiple-regression HMM. INTERSPEECH 2009: 552-555
- 2008
- [c1] Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi: An on-line adaptation technique for emotional speech recognition using style estimation with multiple-regression HMM. INTERSPEECH 2008: 1297-1300