Sun et al., 2022 - Google Patents
MRGAN: Multi-Criteria Relational GAN for Lyrics-Conditional Melody Generation
- Document ID
- 10062880906096726764
- Author
- Sun F
- Tao Q
- Yan J
- Hu J
- Yang Z
- Publication year
- 2022
- Publication venue
- 2022 International Joint Conference on Neural Networks (IJCNN)
Snippet
Music generation, as a creativity problem, attracts growing attention from artificial intelligence researchers. Among the challenging tasks, lyrics-conditional melody generation aims to leverage natural language processing (NLP) techniques to generate music from …
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0033—Recording/reproducing or transmission of music for electrophonic musical instruments
- G10H1/0041—Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
- G10H1/0058—Transmission between separate instruments or between individual components of a musical system
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/38—Chord
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/151—Music Composition or musical creation; Tools or processes therefor using templates, i.e. incomplete musical sections, as a basis for composing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
- G10H7/00—Instruments in which the tones are synthesised from a data store, e.g. computer organs
- G10H7/08—Instruments in which the tones are synthesised from a data store, e.g. computer organs by calculating functions or polynomial approximations to evaluate amplitudes at successive sample points of a tone waveform
- G10H7/10—Instruments in which the tones are synthesised from a data store, e.g. computer organs by calculating functions or polynomial approximations to evaluate amplitudes at successive sample points of a tone waveform using coefficients or parameters stored in a memory, e.g. Fourier coefficients
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
Similar Documents
Publication | Title |
---|---|
Yu et al. | Conditional LSTM-GAN for melody generation from lyrics |
Zhang | Learning adversarial transformer for symbolic music generation |
Hadjeres et al. | DeepBach: a steerable model for Bach chorales generation |
Cancino-Chacón et al. | Computational models of expressive music performance: A comprehensive and critical review |
Lopez-Rincon et al. | Algoritmic music composition based on artificial intelligence: A survey |
Bao et al. | Generating music with emotions |
Benetatos et al. | BachDuet: A deep learning system for human-machine counterpoint improvisation |
Sturm et al. | Folk the algorithms: (Mis)applying artificial intelligence to folk music |
Hernandez-Olivan et al. | A survey on artificial intelligence for music generation: Agents, domains and perspectives |
Jen et al. | Positioning left-hand movement in violin performance: A system and user study of fingering pattern generation |
Maduskar et al. | Music generation using deep generative modelling |
Yanchenko et al. | Classical music composition using state space models |
Kritsis et al. | On the adaptability of recurrent neural networks for real-time jazz improvisation accompaniment |
Trochidis et al. | CAMeL: Carnatic percussion music generation using n-gram models |
Sun et al. | MRGAN: Multi-Criteria Relational GAN for Lyrics-Conditional Melody Generation |
Yanchenko | Classical music composition using hidden Markov models |
Wang et al. | Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder |
Jagannathan et al. | Original music generation using recurrent neural networks with self-attention |
Tang et al. | Music Generation with AI Technology: Is It Possible? |
Madhumani et al. | Automatic neural lyrics and melody composition |
Dias et al. | Komposer: automated musical note generation based on lyrics with recurrent neural networks |
Simões et al. | Deep learning for expressive music generation |
Ma et al. | Coarse-to-fine framework for music generation via generative adversarial networks |
Xu et al. | Equipping Pretrained Unconditional Music Transformers with Instrument and Genre Controls |
유승연 | Improving Conditional Generation of Musical Components: Focusing on Chord and Expression |