News text continuation based on Topic Similarity Evaluation Module

Published: 29 July 2024

Abstract

Text continuation automatically generates new text from an input passage so that the result is thematically relevant and semantically coherent, ultimately forming a complete story or article. Pre-trained language models have been shown to perform well on a range of Chinese text-generation tasks, and the Pangu-Alpha model is particularly effective at producing coherent text. This paper therefore fine-tunes the Pangu-Alpha pre-trained model to perform news text continuation, with the goal of using AI to write news automatically and generate logically and linguistically coherent content. In the implementation, Pangu-Alpha is first fine-tuned on the input data. In a realistic news setting, the input to continuation does not strictly follow a single theme and may mix several themes. This paper therefore proposes a topic similarity evaluation task that partitions the input into topics according to the similarity between input segments, enabling adaptive input processing. To evaluate the topic similarity measurement method, the paper also proposes a length regularization loss. Experimental results verify the effectiveness of the proposed method.
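
The abstract describes the similarity module only at a high level; the sketch below illustrates the general idea of adaptive input partitioning it implies, and is not the authors' implementation. The `embed` function is a toy hashed bag-of-words placeholder (a real system would use a pre-trained encoder such as BERT or Pangu-Alpha hidden states), and `split_by_topic` with its 0.3 threshold is an assumed illustration of similarity-based segmentation.

```python
# Minimal sketch (not the paper's implementation) of topic-similarity
# partitioning: walk through the input sentence by sentence and start a
# new topic segment whenever a sentence's similarity to the running
# centroid of the current segment drops below a threshold. Each
# single-topic segment can then be continued separately.
import re
import numpy as np

DIM = 4096  # size of the toy hashed bag-of-words space

def embed(sentence: str) -> np.ndarray:
    """Toy stand-in for a real sentence encoder: a hashed bag-of-words
    vector. Replace with a pre-trained encoder in practice."""
    vec = np.zeros(DIM, dtype=np.float32)
    for word in re.findall(r"\w+", sentence.lower()):
        vec[hash(word) % DIM] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.dot(a, b)) / denom if denom > 0.0 else 0.0

def split_by_topic(sentences, threshold=0.3):
    """Partition sentences into topic segments. The threshold is an
    assumed hyperparameter, not a value from the paper."""
    segments, current, vectors = [], [], []
    for sent in sentences:
        v = embed(sent)
        if current and cosine(v, np.mean(vectors, axis=0)) < threshold:
            segments.append(current)  # similarity dropped: close segment
            current, vectors = [], []
        current.append(sent)
        vectors.append(v)
    if current:
        segments.append(current)
    return segments

if __name__ == "__main__":
    mixed = [
        "The central bank raised interest rates today.",
        "The bank said rates may rise again if inflation persists.",
        "Meanwhile, the national football team won its opening match.",
        "The team coach praised the players after the match.",
    ]
    for i, segment in enumerate(split_by_topic(mixed)):
        print(f"topic {i}: " + " ".join(segment))
```

Under this scheme the generator is never conditioned on a mixture of themes, which matches the adaptive input processing goal stated in the abstract. The paper's length regularization loss, used to evaluate the similarity measurement, is not reproduced here since its exact form is not given in the abstract.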

Published In

CNIOT '24: Proceedings of the 2024 5th International Conference on Computing, Networks and Internet of Things
May 2024
668 pages
ISBN:9798400716751
DOI:10.1145/3670105
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Continuation of news text
  2. Multi-topic text
  3. Natural language generation
  4. Pangu-Alpha
  5. Similarity evaluation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • State Grid Shandong Electric Power Company Technology Project

Conference

CNIOT 2024

Acceptance Rates

Overall Acceptance Rate 39 of 82 submissions, 48%
