Abstract
In this paper, we propose an evaluation of a Transformer-based punctuation restoration model for the Italian language. Experimenting with a BERT-base model, we perform several fine-tuning runs with different training data and sizes and test the resulting models in in-domain and cross-domain scenarios. Moreover, we conduct an error analysis of the model's main weaknesses with respect to specific punctuation marks. Finally, we evaluate our system both quantitatively and qualitatively, combining a standard task-oriented evaluation with a perception-based acceptability evaluation.
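Punctuation restoration with BERT-style models is typically framed as token classification: each word in the unpunctuated input is labeled with the punctuation mark (if any) that should follow it. A minimal sketch of the preprocessing step that turns punctuated training text into (word, label) pairs is shown below; the label inventory and word-level scheme here are illustrative assumptions, not necessarily the exact setup used in the paper.

```python
# Illustrative mapping from trailing punctuation marks to labels;
# the paper's actual label set may differ.
PUNCT_LABELS = {",": "COMMA", ".": "PERIOD", "?": "QUESTION"}

def make_examples(punctuated_text):
    """Turn punctuated text into (word, label) pairs for token classification.

    A word followed by a punctuation mark receives that mark's label;
    all other words receive the 'O' (no punctuation) label.
    """
    examples = []
    for word in punctuated_text.split():
        # Strip trailing punctuation marks to recover the bare word.
        stripped = word.rstrip("".join(PUNCT_LABELS))
        trailing = word[len(stripped):]
        label = PUNCT_LABELS.get(trailing[:1], "O") if trailing else "O"
        examples.append((stripped.lower(), label))
    return examples

pairs = make_examples("Andiamo a mangiare, nonna. Tutto bene?")
# [('andiamo', 'O'), ('a', 'O'), ('mangiare', 'COMMA'),
#  ('nonna', 'PERIOD'), ('tutto', 'O'), ('bene', 'QUESTION')]
```

At inference time, the fine-tuned model predicts one such label per token of the raw transcript, and the predicted marks are re-inserted after the corresponding words.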
Notes
- 1.
Obviously, the speech must have a close-to-standard accent and avoid dialectal or slang words.
- 2.
Besides being difficult to read, unpunctuated text can also be ambiguous. Here is an amusing example of two completely different letters, with the same words but different punctuation: https://www.nationalpunctuationday.com/dearjohn.html.
- 6.
This paper is a slightly revised version of our previous contribution [21] to the NL4AI Workshop at the AI*IA 2021 conference, updated and integrated with an extensive evaluation.
- 11.
The complete collection of comparable corpora in 17 languages is available at: https://www.clarin.si/repository/xmlui/handle/11356/1432.
- 12.
The full dataset is available at: http://www.openslr.org/100/.
- 13.
We recruited 10 volunteer linguists from the staff of our Institute, ILC-CNR.
- 16.
We inserted a deliberately unreadable text (with periods between auxiliary and main verbs, commas in the middle of multiword expressions, and so on) in order to identify unreliable raters and exclude them from the evaluation.
References
Alam, T., Khan, A., Alam, F.: Punctuation restoration using transformer models for high-and low-resource languages. In: Proceedings of the Sixth Workshop on Noisy User-Generated Text (W-NUT 2020), pp. 132–142. Association for Computational Linguistics (2020). https://doi.org/10.18653/v1/2020.wnut-1.18. https://aclanthology.org/2020.wnut-1.18
Baroni, M., et al.: Introducing the La Repubblica Corpus: a large, annotated, TEI (XML)-compliant Corpus of Newspaper Italian. In: LREC (2004)
Baroni, M., Bernardini, S., Ferraresi, A., Zanchetta, E.: The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Lang. Resour. Eval. 43(3), 209–226 (2009). https://doi.org/10.1007/s10579-009-9081-4
Bosco, C., Montemagni, S., Simi, M.: Converting Italian treebanks: towards an Italian Stanford Dependency Treebank. In: 7th Linguistic Annotation Workshop and Interoperability with Discourse, pp. 61–69. The Association for Computational Linguistics (2013)
Bosco, C., Montemagni, S., Simi, M., et al.: Harmonization and merging of two Italian dependency treebanks. In: LREC 2012 Workshop on Language Resource Merging, pp. 23–30. ELRA (2012)
Che, X., Luo, S., Yang, H., Meinel, C.: Sentence boundary detection based on parallel lexical and acoustic models. In: INTERSPEECH, pp. 2528–2532 (2016)
Christensen, H., Gotoh, Y., Renals, S.: Punctuation annotation using statistical prosody models (2001)
De Mauro, T.: Il Nuovo vocabolario di base della lingua italiana. In: Guida all’uso delle parole. Editori Riuniti (1980)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota (Long and Short Papers), vol. 1, pp. 4171–4186. Association for Computational Linguistics, June 2019. https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Erjavec, T., et al.: Multilingual comparable corpora of parliamentary debates ParlaMint 2.1 (2021). http://hdl.handle.net/11356/1432, Slovenian language resource repository CLARIN.SI
Fang, M., Zhao, H., Song, X., Wang, X., Huang, S.: Using bidirectional LSTM with BERT for Chinese punctuation prediction. In: 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), pp. 1–5 (2019). https://doi.org/10.1109/ICSIDP47821.2019.9172986
Gotoh, Y., Renals, S.: Sentence boundary detection in broadcast speech transcripts. In: Automatic Speech Recognition: Challenges for the New Millenium ISCA Tutorial and Research Workshop (ITRW), ASR 2000 (2000)
Gravano, A., Jansche, M., Bacchiani, M.: Restoring punctuation and capitalization in transcribed speech. In: 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4741–4744. IEEE (2009)
Kim, J.H., Woodland, P.C.: A combined punctuation generation and speech recognition system and its performance enhancement using prosody. Speech Commun. 41(4), 563–577 (2003)
Kim, S.: Deep recurrent neural networks with layer-wise multi-head attentions for punctuation restoration. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), ICASSP 2019, pp. 7280–7284. IEEE (2019)
Klejch, O., Bell, P., Renals, S.: Punctuated transcription of multi-genre broadcasts using acoustic and lexical approaches. In: 2016 IEEE Spoken Language Technology Workshop (SLT), pp. 433–440. IEEE (2016)
Klejch, O., Bell, P., Renals, S.: Sequence-to-sequence models for punctuated transcription combining lexical and acoustic features. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5700–5704. IEEE (2017)
Levy, T., Silber-Varod, V., Moyal, A.: The effect of pitch, intensity and pause duration in punctuation detection. In: 2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel, pp. 1–4. IEEE (2012)
Lison, P., Tiedemann, J.: OpenSubtitles 2016: extracting large parallel corpora from movie and TV subtitles (2016)
Makhija, K., Ho, T.N., Chng, E.S.: Transfer learning for punctuation prediction. In: 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 268–273. IEEE (2019)
Miaschi, A., Ravelli, A.A., Dell’Orletta, F.: Evaluating transformer models for punctuation restoration in Italian. In: Proceedings of the Fifth Workshop on Natural Language for Artificial Intelligence (NL4AI 2021). CEUR Workshop Proceedings, vol. 3015. CEUR-WS.org (2021)
Nagy, A., Bial, B., Ács, J.: Automatic punctuation restoration with BERT models. arXiv preprint arXiv:2101.07343 (2021)
Nencioni, G.: Parlato-parlato, parlato-scritto, parlato-recitato. Strumenti critici 29 (1976)
Qi, P., Zhang, Y., Zhang, Y., Bolton, J., Manning, C.D.: Stanza: a python natural language processing toolkit for many human languages. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 101–108 (2020)
Sabatini, F.: La comunicazione orale, scritta e trasmessa. In: Boccafurni, A.M., Serromani, S. (eds.) Educazione linguistica nella scuola superiore: sei argomenti per un curricolo, pp. 105–127 (1982)
Salesky, E., et al.: Multilingual TEDx corpus for speech recognition and translation. In: Proceedings of INTERSPEECH (2021)
Salesky, E., et al.: The multilingual TEDx corpus for speech recognition and translation. arXiv:2102.01757 (2021)
Schneider, S., Baevski, A., Collobert, R., Auli, M.: wav2vec: unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862 (2019)
Stolcke, A., et al.: Automatic detection of sentence boundaries and disfluencies based on recognized words. In: ICSLP, vol. 2, pp. 2247–2250. Citeseer (1998)
Suárez, P.J.O., Sagot, B., Romary, L.: Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. In: Challenges in the Management of Large Corpora (CMLC-7), p. 9 (2019)
Tiedemann, J., Nygaard, L.: The OPUS corpus-parallel and free. Citeseer (2004). http://logos.uio.no/opus
Tilk, O., Alumäe, T.: LSTM for punctuation restoration in speech transcripts. In: Sixteenth Annual Conference of the International Speech Communication Association (2015)
Tilk, O., Alumäe, T.: Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In: INTERSPEECH, pp. 3047–3051 (2016)
Wolf, T., et al.: Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, October 2020. https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://www.aclweb.org/anthology/2020.emnlp-demos.6
Yi, J., Tao, J.: Self-attention based model for punctuation prediction using word and speech embeddings. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), ICASSP 2019, pp. 7270–7274. IEEE (2019)
Yi, J., Tao, J., Bai, Y., Tian, Z., Fan, C.: Adversarial transfer learning for punctuation restoration. arXiv preprint arXiv:2004.00248 (2020)
Yi, J., Tao, J., Wen, Z., Li, Y., et al.: Distilling knowledge from an ensemble of models for punctuation prediction. In: INTERSPEECH, pp. 2779–2783 (2017)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Miaschi, A., Ravelli, A.A., Dell’Orletta, F. (2022). Punctuation Restoration in Spoken Italian Transcripts with Transformers. In: Bandini, S., Gasparini, F., Mascardi, V., Palmonari, M., Vizzari, G. (eds) AIxIA 2021 – Advances in Artificial Intelligence. AIxIA 2021. Lecture Notes in Computer Science(), vol 13196. Springer, Cham. https://doi.org/10.1007/978-3-031-08421-8_17
Print ISBN: 978-3-031-08420-1
Online ISBN: 978-3-031-08421-8