Masafumi Nishida
2020 – today
- 2024
- [c58] Satoshi Naito, Masafumi Nishimura, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Food Recognition Using Large-scale Pre-trained Speech Models. GCCE 2024: 119-120
- [c57] Yusei Suzuki, Takashi Tsunakawa, Masafumi Nishida: Pseudo-error Generation for Error Detection and Correction in Japanese Automatic Speech Recognition. GCCE 2024: 129-132
- [c56] Toshihiro Tsukagoshi, Masafumi Nishida, Masafumi Nishimura: Simultaneous Speech and Eating Behavior Recognition Using Multitask Learning. GCCE 2024: 138-140
- [c55] Takumi Uehara, Shingo Kuroiwa, Yasuo Horiuchi, Masafumi Nishida, Satoru Tsuge: Template-Based Speech Recognition Using Pre-trained Large Speech Models for Voice-Activated Shower Control. GCCE 2024: 141-143
- [c54] Hibiki Takayama, Masafumi Nishida, Satoru Tsuge, Shingo Kuroiwa: Emotion-Dependent Speaker Verification Based on Score Integration. GCCE 2024: 805-807
- [c53] Kentaro Kameda, Satoru Tsuge, Shingo Kuroiwa, Yasuo Horiuchi, Masafumi Nishida: Text-Dependent Speaker Verification Using SSI-DNN Trained on Short Utterance. GCCE 2024: 808-810
- [c52] Mai Takeuchi, Masafumi Nishida, Masafumi Nishimura: Chewing and Swallowing Pattern Recognition Using Sound Information. GCCE 2024: 968-970
- 2023
- [c51] Kohta Masuda, Jun Ogata, Masafumi Nishida, Masafumi Nishimura: Multi-Self-Supervised Learning Model-Based Throat Microphone Speech Recognition. APSIPA ASC 2023: 1766-1770
- [c50] Amit Karmakar, Masafumi Nishida, Masafumi Nishimura: Eating and Drinking Behavior Recognition Using Multimodal Fusion. GCCE 2023: 210-213
- [c49] Ryotaro Sano, Masafumi Nishida, Satoru Tsuge, Shingo Kuroiwa, Hiroyuki Yoshimura: Cross-Lingual Speaker Identification for Japanese-English Bilinguals. GCCE 2023: 237-239
- [c48] Hibiki Takayama, Masafumi Nishida, Satoru Tsuge, Shingo Kuroiwa, Masafumi Nishimura: Utterance-style-dependent Speaker Verification by Utilizing Emotions. GCCE 2023: 773-775
- 2022
- [c47] Kohta Masuda, Jun Ogata, Masafumi Nishida, Masafumi Nishimura: Throat microphone speech recognition using wav2vec 2.0 and feature mapping. GCCE 2022: 395-397
- [c46] Aoi Sugita, Masafumi Nishida, Masafumi Nishimura, Yasuo Horiuchi, Shingo Kuroiwa: Identification of vocal tract state before and after swallowing using acoustic features. GCCE 2022: 752-753
- [c45] Takuya Suzuki, Ryoga Murate, Masafumi Nishida: Development and Evaluation of UniTalker, an Application for Simultaneous Presentation of Subtitles from Multiple Speakers. HCI (37) 2022: 612-619
- 2021
- [c44] Kosuke Aigo, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura: Question Generation using Knowledge Graphs with the T5 Language Model and Masked Self-Attention. GCCE 2021: 85-87
- 2020
- [c43] Haruki Fukuda, Takashi Tsunakawa, Jun Oshima, Ritsuko Oshima, Masafumi Nishida, Masafumi Nishimura: BERT-based Automatic Text Scoring for Collaborative Learning. GCCE 2020: 917-920
2010 – 2019
- 2019
- [c42] Takahito Suzuki, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura, Jun Ogata: Effects of Mounting Position on Throat Microphone Speech Recognition. GCCE 2019: 873-874
- [c41] Takahito Suzuki, Jun Ogata, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura: Knowledge Distillation for Throat Microphone Speech Recognition. INTERSPEECH 2019: 461-465
- 2018
- [j8] Tomoki Hayashi, Masafumi Nishida, Norihide Kitaoka, Tomoki Toda, Kazuya Takeda: Daily Activity Recognition with Large-Scaled Real-Life Recording Datasets Based on Deep Neural Network Using Multi-Modal Signals. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 101-A(1): 199-210 (2018)
- [c40] Takahito Suzuki, Jun Ogata, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura: Bottleneck feature-mediated DNN-based feature mapping for throat microphone speech recognition. APSIPA 2018: 1738-1741
- [c39] Motoki Abe, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura: Dialogue Breakdown Detection Based on Nonlinguistic Acoustic Information. GCCE 2018: 689-690
- 2017
- [c38] Shengke Lin, Takashi Tsunakawa, Masafumi Nishida, Masafumi Nishimura: DNN-based feature transformation for speech recognition using throat microphone. APSIPA 2017: 596-599
- [c37] Masafumi Nishida, Seiichi Yamamoto: Speaker Clustering Based on Non-Negative Matrix Factorization Using Gaussian Mixture Model in Complementary Subspace. CBMI 2017: 7:1-7:5
- 2015
- [j7] Xiaoyun Wang, Jinsong Zhang, Masafumi Nishida, Seiichi Yamamoto: Phoneme Set Design for Speech Recognition of English by Japanese. IEICE Trans. Inf. Syst. 98-D(1): 148-156 (2015)
- [j6] Seiichi Yamamoto, Keiko Taguchi, Koki Ijuin, Ichiro Umata, Masafumi Nishida: Multimodal corpus of multiparty conversations in L1 and L2 languages and findings obtained from it. Lang. Resour. Evaluation 49(4): 857-882 (2015)
- [c36] Masafumi Nishida, Norihide Kitaoka, Kazuya Takeda: Daily activity recognition based on acoustic signals and acceleration signals estimated with Gaussian process. APSIPA 2015: 279-282
- [c35] Tomoki Hayashi, Masafumi Nishida, Norihide Kitaoka, Kazuya Takeda: Daily activity recognition based on DNN using environmental sound and acceleration signals. EUSIPCO 2015: 2306-2310
- 2014
- [c34] Xiaoyun Wang, Jinsong Zhang, Masafumi Nishida, Seiichi Yamamoto: Phoneme Set Design Using English Speech Database by Japanese for Dialogue-Based English CALL Systems. LREC 2014: 3948-3951
- 2013
- [j5] Yutaka Fukuoka, Kenji Miyazawa, Hiroki Mori, Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa, Hiroshi Hoshino, Makoto Noshiro, Akinori Ueno: Development of a Compact Wireless Laplacian Electrode Module for Electromyograms and Its Human Interface Applications. Sensors 13(2): 2368-2383 (2013)
- [j4] Kristiina Jokinen, Hirohisa Furukawa, Masafumi Nishida, Seiichi Yamamoto: Gaze and turn-taking behavior in casual conversational interactions. ACM Trans. Interact. Intell. Syst. 3(2): 12:1-12:30 (2013)
- [c33] Ichiro Umata, Seiichi Yamamoto, Kosuke Kabashima, Masafumi Nishida: Differences in Interactional Attitudes in Second Language Conversations: From the Perspective of Expertise. CogSci 2013
- [c32] Seiichi Yamamoto, Keiko Taguchi, Ichiro Umata, Kosuke Kabashima, Masafumi Nishida: Differences in Interactional Attitudes in Native and Second Language Conversations: Quantitative Analyses of Multimodal Three-Party Corpus. CogSci 2013
- [c31] Wei Li, Jinsong Zhang, Yanlu Xie, Xiaoyun Wang, Masafumi Nishida, Seiichi Yamamoto: Using Mutual Information Criterion to Design an Effective Lexicon for Chinese Pinyin-to-Character Conversion. IALP 2013: 269-272
- [c30] Ichiro Umata, Seiichi Yamamoto, Koki Ijuin, Masafumi Nishida: Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration. ICMI 2013: 413-420
- [c29] Jinsong Zhang, Xiaoyun Wang, Yue Sun, Masafumi Nishida, Ting Zou, Seiichi Yamamoto: Improve Japanese C2L learners' capability to distinguish Chinese tone 2 and tone 3 through perceptual training. O-COCOSDA/CASLRE 2013: 1-6
- 2012
- [c28] Kosuke Kabashima, Kristiina Jokinen, Masafumi Nishida, Seiichi Yamamoto: Multimodal corpus of conversations in mother tongue and second language by same interlocutors. GazeIn@ICMI 2012: 9:1-9:5
- [c27] Shota Yamasaki, Hirohisa Furukawa, Masafumi Nishida, Kristiina Jokinen, Seiichi Yamamoto: Multimodal Corpus of Multi-party Conversations in Second Language. LREC 2012: 416-421
- [c26] Yu Nagai, Tomohisa Senzai, Seiichi Yamamoto, Masafumi Nishida: Sentence Classification with Grammatical Errors and Those Out of Scope of Grammar Assumption for Dialogue-Based CALL Systems. TSD 2012: 616-623
- 2011
- [c25] Masafumi Nishida, Seiichi Yamamoto: Speaker Clustering Based on Non-Negative Matrix Factorization. INTERSPEECH 2011: 949-952
- 2010
- [j3] Haoze Lu, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Text-Independent speaker identification in phoneme-independent subspace using PCA transformation. Int. J. Biom. 2(4): 379-390 (2010)
- [c24] Kristiina Jokinen, Kazuaki Harada, Masafumi Nishida, Seiichi Yamamoto: Turn-alignment using eye-gaze and speech in conversational interaction. INTERSPEECH 2010: 2018-2021
- [c23] Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa, Akira Ichikawa: Automatic Speech Recognition Based on Multiple Level Units in Spoken Dialogue System for In-Vehicle Appliances. TSD 2010: 539-546
2000 – 2009
- 2009
- [c22] Haruka Okamoto, Satoru Tsuge, Amira Abdelwahab, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Text-independent speaker verification using rank threshold in large number of speaker models. INTERSPEECH 2009: 2367-2370
- [c21] Kristiina Jokinen, Masafumi Nishida, Seiichi Yamamoto: Eye-gaze experiments for conversation monitoring. IUCS 2009: 303-308
- [c20] Amira Abdelwahab, Hiroo Sekiya, Ikuo Matsuba, Yasuo Horiuchi, Shingo Kuroiwa, Masafumi Nishida: An efficient collaborative filtering algorithm using SVD-free latent Semantic indexing and particle swarm optimization. NLPKE 2009: 1-4
- 2008
- [j2] Saori Tanaka, Kaoru Nakazono, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Evaluating Interpreter's Skill by Measurement of Prosody Recognition. Inf. Media Technol. 3(2): 375-384 (2008)
- [c19] Shota Sato, Taro Kimura, Yasuo Horiuchi, Masafumi Nishida, Shingo Kuroiwa, Akira Ichikawa: A method for automatically estimating F0 model parameters and a speech re-synthesis tool using F0 model and STRAIGHT. INTERSPEECH 2008: 545-548
- [c18] Masaru Maebatake, Iori Suzuki, Masafumi Nishida, Yasuo Horiuchi, Shingo Kuroiwa: Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM. ISUC 2008: 478-481
- 2007
- [c17] Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Unsupervised training of adaptation rate using q-learning in large vocabulary continuous speech recognition. INTERSPEECH 2007: 278-281
- 2006
- [c16] Satoshi Tojo, Yoshinori Oka, Masafumi Nishida: Analysis of Chord Progression by HPSG. Artificial Intelligence and Applications 2006: 305-310
- [c15] Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Analysis of Prosody in Finger Braille Using Electromyography. EMBC 2006: 4901-4904
- [c14] Manabi Miyagi, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Investigation on Effect of Prosody in Finger Braille. ICCHP 2006: 863-869
- 2005
- [j1] Masafumi Nishida, Tatsuya Kawahara: Speaker model selection based on the Bayesian information criterion applied to unsupervised speaker indexing. IEEE Trans. Speech Audio Process. 13(4): 583-592 (2005)
- [c13] Tomoko Ohsuga, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Investigation of the relationship between turn-taking and prosodic features in spontaneous dialogue. INTERSPEECH 2005: 33-36
- [c12] Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Automatic speech recognition based on adaptation and clustering using temporal-difference learning. INTERSPEECH 2005: 285-288
- [c11] Saori Tanaka, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Production of prominence in Japanese sign language. INTERSPEECH 2005: 2421-2424
- 2004
- [c10] Masafumi Nishida, Tatsuya Kawahara: Speaker indexing and adaptation using speaker clustering based on statistical model selection. ICASSP (1) 2004: 353-356
- [c9] Masafumi Nishida, Yoshitaka Mamiya, Yasuo Horiuchi, Akira Ichikawa: On-line incremental adaptation based on reinforcement learning for robust speech recognition. INTERSPEECH 2004: 1985-1988
- [c8] Tomoko Ohsuga, Masafumi Nishida, Yasuo Horiuchi, Akira Ichikawa: Estimating syntactic structure from prosodic features in Japanese speech. INTERSPEECH 2004: 3041-3044
- 2003
- [c7] Masafumi Nishida, Tatsuya Kawahara: Unsupervised speaker indexing using speaker model selection based on Bayesian information criterion. ICASSP (1) 2003: 172-175
- [c6] Masafumi Nishida, Tatsuya Kawahara: Speaker model selection using Bayesian information criterion for speaker indexing and speaker adaptation. INTERSPEECH 2003: 1849-1852
- 2001
- [c5] Masafumi Nishida, Yasuo Ariki: Speaker recognition by separating phonetic space and speaker space. INTERSPEECH 2001: 1381-1384
- 2000
- [c4] Masafumi Nishida, Yasuo Ariki: Speaker verification by integrating dynamic and static features using subspace method. INTERSPEECH 2000: 1013-1016
1990 – 1999
- 1999
- [c3] Masafumi Nishida, Yasuo Ariki: Speaker Indexing for News Articles, Debates and Drama in Broadcasted TV Programs. ICMCS, Vol. 2 1999: 466-471
- 1998
- [c2] Yasuo Ariki, Jun Ogata, Masafumi Nishida: News Dictation and Article Classification Using Automatically Extracted Announcer Utterance. AMCP 1998: 75-86
- [c1] Masafumi Nishida, Yasuo Ariki: Real time speaker indexing based on subspace method - application to TV news articles and debate. ICSLP 1998