WO2011115210A1 - Mixing data distribution server - Google Patents
Mixing data distribution server
- Publication number
- WO2011115210A1 (PCT/JP2011/056395)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- mixing
- singing
- sound
- song
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/02—Synthesis of acoustic waves
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
- G10H1/365—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/04—Sound-producing devices
Definitions
- The present invention relates to a server that distributes audio data obtained by mixing voice and musical sound.
- Patent Document 1 describes a karaoke contest in which the voice data distributed by each user is played back and the singing sound and musical sound are listened to and scored.
- In such a system, the singing sound and the musical sound are stored in the server as a single piece of voice data that has already been mixed, so the mixing balance between the singing sound and the musical sound cannot be adjusted later. The balance could be adjusted later if the singing sound and the musical sound were uploaded as separate pieces of audio data, but this raises the problem of increased communication time.
- Accordingly, an object of the present invention is to provide a server capable of storing voice and musical sound separately without increasing communication time.
- The mixing data distribution server includes receiving means, storage means, audio data generation means, and distribution means.
- The receiving means receives the singer's voice data and synchronization information that relates the voice data to a karaoke performance.
- The storage means stores the received voice data and synchronization information of the singer.
- The storage means also stores music data for performing the karaoke performance.
- The audio data generation means reproduces the voice data read from the storage means, reads the music data based on the synchronization information, and performs an automatic performance. The audio data generation means then mixes the voice based on the reproduced voice data with the musical sound of the automatic performance to generate mixing data.
- The generated mixing data is distributed to each terminal so that users can listen to it.
- In this way, the singer's singing sound is uploaded to the server as voice data together with synchronization information for the karaoke performance. On the server side, the musical sound is generated by automatically playing the music data based on the synchronization information and is mixed with the voice data to generate mixing data (complete data consisting of the singing sound and the musical sound). Therefore, only the voice data of the singing sound needs to be uploaded; the voice and the musical sound are stored separately on the server side, and the time required for the upload is the same as before.
- The synchronization information may describe the tempo and volume of the karaoke song that was played while the voice data was recorded. In that case, when the voice data of the singing sound is reproduced later, a karaoke performance synchronized with the singing sound can be performed.
- The audio data may consist of a plurality of pieces of audio data, and the synchronization information may include information indicating the reproduction timing of each piece.
- For example, when the singer sings only one part of a duet and the time zones for singing are limited within the song, the singing file can be divided into a plurality of files.
- A karaoke performance synchronized with the singing sound can then be performed by recording, in the synchronization information, the elapsed time from the start of the performance (or a delta time) for each file, and by reproducing each singing file with reference to this information during playback, as in the sketch below.
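As a minimal illustration (not taken from the patent), scheduling several singing-file segments against the karaoke performance clock using their stored start offsets could look like the following Python sketch; the function and file names are hypothetical.

```python
import time

# Hypothetical sketch: play several singing-file segments at the offsets
# (seconds from the start of the performance) stored in the synchronization
# information. `start_playback` stands in for the actual audio output.

def play_segments(segments, start_playback):
    """segments: list of (start_offset_sec, file_path) pairs."""
    performance_start = time.monotonic()
    for start_offset, file_path in sorted(segments):
        # Wait until the performance clock reaches this segment's offset.
        delay = start_offset - (time.monotonic() - performance_start)
        if delay > 0:
            time.sleep(delay)
        start_playback(file_path)  # begin outputting this segment

# Example: a duet in which the singer recorded only two sections of the song.
segments = [(12.0, "verse1.mp3"), (95.5, "chorus2.mp3")]
play_segments(segments, start_playback=lambda path: print("play", path))
```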
- The generated mixing data may also be stored in the storage means. In that case, the data can be distributed immediately even when many distribution requests arrive at the same time. Since the voice data of the singing sound is still held in the storage means, the mixing balance can be changed later even in this case.
- The synchronization information may include effect parameters, and the audio data generation means may apply the effect parameter settings when mixing in the musical sound of the automatic performance.
- According to the present invention, voice and musical sound can be stored separately in the server without increasing the communication time.
- FIG. 1 is a diagram showing a configuration of a mixing data distribution system.
- The mixing data distribution system includes a center (server, mixing data distribution server) 1, a plurality of karaoke stores 3, and a plurality of user PCs 4, which are connected via a network 2 such as the Internet.
- Each karaoke store 3 is provided with a relay device 5, such as a router, connected to the network 2, and a plurality of karaoke devices 7 connected to the network 2 via the relay device 5.
- The relay device 5 is installed in the management room of the karaoke store.
- The karaoke devices 7 are installed in the respective private rooms (karaoke boxes).
- The user PC 4 is a general home personal computer.
- In the mixing data distribution system of the present embodiment, the voice sung by a singer at a karaoke device 7 is recorded in advance and uploaded to the center 1 as a singing file (compressed voice data). The center 1 then reproduces the singing file in response to a request from a user PC 4, performs a karaoke performance in synchronization with the reproduced singing file, and generates mixing data (compressed audio data) in which the singing sound and the musical sound are mixed.
- FIG. 2 is a block diagram showing the configuration of the karaoke apparatus.
- The karaoke device 7 includes a CPU 11 that controls the operation of the entire device and various components connected to the CPU 11: a RAM 12, an HDD 13, a network interface (I/F) 14, an operation unit 15, an A/D converter 17, a sound source 18, a mixer (effector) 19, a decoder 22 such as an MPEG decoder, and a display processing unit 23.
- The HDD 13 stores music data for playing karaoke songs, video data for displaying background video on the monitor 24, and the like.
- The video data includes both moving images and still images.
- In the RAM 12, which is a work memory, an area into which the operation program of the CPU 11 is read and an area into which music data for playing a karaoke song is read are allocated.
- The CPU 11 has a built-in sequencer.
- The sequencer is a program that reads music data stored in the HDD 13 and executes a karaoke performance.
- The music data consists of a header in which the song number and the like are written, a musical tone track in which MIDI data for the performance is written, a guide melody track in which MIDI data for the guide melody is written, a lyric track in which MIDI data for the lyrics is written, a chorus track in which the playback timing of the backing chorus and the audio data to be played back are written, and the like.
- The sequencer controls the sound source 18 based on the data of the musical tone track and the guide melody track, and generates the musical tones of the karaoke song.
- The sequencer also reproduces the backing chorus audio data (compressed audio data such as MP3 attached to the music data) at the timing designated by the chorus track. Further, the sequencer generates the character patterns of the lyrics in synchronization with the progress of the song based on the lyric track, converts the character patterns into a video signal, and inputs it to the display processing unit 23.
- The sound source 18 forms a musical tone signal (digital audio signal) according to the data (note event data) input from the CPU 11 under control of the sequencer.
- The formed musical tone signal is input to the mixer 19.
- The mixer 19 applies effects such as echo to the musical tone signal and chorus sound formed by the sound source 18 and to the singing voice signal of the singer input from the microphone 16 via the A/D converter 17, and mixes these signals.
- The mixed digital audio signals are input to the sound system (SS) 20.
- The sound system 20 incorporates a D/A converter and a power amplifier; it converts the input digital signal into an analog signal, amplifies it, and emits the sound from the speaker 21.
- The effects that the mixer 19 applies to each audio signal and the mixing balance are controlled by the CPU 11.
- The CPU 11 also reads the video data stored in the HDD 13 and reproduces the background video and the like in synchronization with the generation of musical tones and the lyrics telop by the sequencer.
- The video data of the moving images is encoded in the MPEG format.
- The CPU 11 inputs the read video data to the MPEG decoder 22.
- The MPEG decoder 22 converts the input MPEG data into a video signal and inputs it to the display processing unit 23.
- The character pattern of the lyrics telop is also input to the display processing unit 23.
- The display processing unit 23 superimposes the lyrics telop and the like on the video signal of the background video by OSD (On Screen Display) and outputs the result to the monitor 24.
- The monitor 24 displays the video signal input from the display processing unit 23.
- The operation unit 15 includes various key switches provided on the operation panel of the karaoke device 7 and a remote controller connected via infrared communication or the like.
- The operation unit 15 accepts various user operations and inputs operation information corresponding to the operation mode to the CPU 11.
- The operation unit 15 accepts song requests, recording of the singing sound (registration operation), and the like.
- When the registration operation is accepted, a singing file is generated based on the singer's singing voice signal input from the microphone 16 via the A/D converter 17, and is uploaded to the center 1 via the network I/F 14.
- The singing file is generated as compressed audio data such as MP3.
- The CPU 11 also generates synchronization information corresponding to the singing file.
- FIG. 3B is a diagram showing an example of the synchronization information.
- FIG. 3C is a diagram illustrating an example of a song file.
- The synchronization information includes a header, tempo information, volume information (Vol.), and timing information (tempo change amounts).
- The header includes the song number, the song name, the file name of the associated singing file, and the like.
- The song number is data in the same format as the song number assigned to the music data of each karaoke song (information consisting of alphanumeric characters); the song number designated by the singer during the registration operation is copied into it.
- The tempo information indicates the performance tempo of the song designated by the singer during the registration operation, and specifies the step speed of the sequencer.
- The volume information indicates the volume of the song designated by the singer during the registration operation (the volume of the musical tone track).
- The timing information indicates the timing of tempo changes (elapsed time from the start of the performance) when the singer changes the tempo during singing. By referring to this timing information, the performance tempo can be changed partway through the song during later reproduction.
- As shown in FIG. 3(C), the singing file is composed of a header and singing voice data. At least the file name is described in the header, which associates the file with the header of the synchronization information. If the encoding format is MP3, the header may be recorded as an ID3 tag.
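As a rough, non-authoritative illustration of the records described above, the synchronization information and singing-file header could be represented as follows; all field names are hypothetical and are not defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TempoChange:
    elapsed_sec: float    # elapsed time from the start of the performance
    new_tempo_bpm: float  # tempo to switch to at that point

@dataclass
class SyncInfo:
    song_number: str            # same format as the karaoke music data's song number
    song_name: str
    singing_file_name: str      # associates this record with the uploaded singing file
    tempo_bpm: float            # tempo used while the singing was recorded
    volume: int                 # volume of the musical tone track during recording
    tempo_changes: list[TempoChange] = field(default_factory=list)

@dataclass
class SingingFileHeader:
    file_name: str              # at minimum, the file name (e.g. stored as an ID3 tag for MP3)

sync = SyncInfo("0123", "Example Song", "singer42_0123.mp3",
                tempo_bpm=120.0, volume=90,
                tempo_changes=[TempoChange(45.0, 126.0)])
```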
- The above synchronization information and singing file are uploaded to the center 1 and stored there.
- The singer can also enter a profile, a message, and the like using the operation unit 15 and upload them as singer information.
- At playback time, the music data of the designated song number is read with reference to the synchronization information of each singing file, and the karaoke performance is performed at the tempo and volume described in the synchronization information.
- At this time, the performance tempo is changed partway through the song according to the timing information.
- The singing file shown in FIG. 3(C) is a recording of all the sound (the sound picked up by the microphone 16) from the start to the end of the performance of the karaoke song.
- Synchronized playback can therefore be performed simply by starting to output an audio signal based on the singing file at the start of the karaoke performance.
- As shown in the figures, it is also possible to divide the recording into a plurality of singing files. In this case, the synchronization information records the playback timing of each file, as described above.
- The synchronization information may also be configured as MIDI data (an extension track of the music data) in a format readable by the sequencer, in order to unify its handling with the music data.
- By dividing the recording in this way, the data size of the singing file can be reduced and the upload time can be further shortened. This is particularly useful when the time zones for singing are limited within a song (for example, when a singer sings only one part of a duet).
- FIG. 4 is a block diagram showing the configuration of the center 1.
- The center 1 includes a CPU 31 that controls the operation of the entire center and various components connected to the CPU 31.
- A RAM 32, an HDD 33, a network interface (I/F) 34, a sound source 38, and a mixer (effector) 39 are connected to the CPU 31.
- In addition to the singing files, synchronization information, and singer information uploaded from each karaoke device 7, the HDD 33 stores the same music data as the karaoke devices 7. Mixing data generated in the past is also stored.
- The HDD 33 also stores an operation program for the CPU 31.
- The CPU 31 loads the operation program into the RAM 32 and performs various processes.
- The CPU 31 performs received-data processing in which the singing file, synchronization information, and singer information received from each karaoke device 7 via the network I/F 34 are recorded in the HDD 33. The CPU 31 also has a functionally built-in sequencer and, like the karaoke device 7, reads music data from the HDD 33, performs the karaoke performance, and controls the sound source 38 to generate a musical tone signal. Moreover, the CPU 31 generates a WEB page listing the registered singing files.
- FIG. 5 is a diagram showing an example of a list of song files displayed as a WEB page.
- Each singing file is displayed in the list on the WEB page with items such as the file name (or song number), the song name, and the profile and message entered by the singer during the registration operation.
- The popularity (download count) of each singing file is also displayed.
- The number of downloads of each singing file is recorded in the HDD 33 and is counted up each time the corresponding mixing data is downloaded.
- By accessing the WEB page, the user PC 4 can refer to this list and select the singing file of a singer the user wants to listen to. By selecting a column heading on the user PC 4, the list can be sorted in ascending or descending order.
- FIG. 6 is a block diagram showing the configuration of the user PC 4.
- The user PC 4 is a general home personal computer and includes a CPU 41 that controls the overall operation and various components connected to the CPU 41: a RAM 42, an HDD 43, a network I/F 44, an operation unit 45, a sound system (SS) 46, and a display processing unit 48.
- The CPU 41 loads the operation program recorded in the HDD 43 into the RAM 42 and performs various processes.
- When the user requests display of the singing file list, the CPU 41 transmits a display request to the center 1.
- The CPU 31 of the center 1, having received the display request, transfers an HTML file to the user PC 4 (or notifies the user PC 4 of the URL and has it access the page).
- A WEB page based on the HTML file transferred from the center 1 is displayed on the monitor 49 via the display processing unit 48. In this way, the list of singing files shown in FIG. 5 is displayed.
- When the user selects a singing file from the list, the CPU 41 makes a mixing data distribution request.
- The request is made, for example, by transmitting information indicating the singing file name to the center 1.
- The CPU 31 of the center 1 searches the HDD 33 for the received singing file name and reads out the corresponding singing file and synchronization information.
- The CPU 31 reproduces the read singing file to generate a singing voice signal, reads the music data of the song number described in the synchronization information, and has the sequencer perform the karaoke song according to the tempo and volume information described in the synchronization information. A musical tone signal is thereby generated.
- The generated musical tone signal and singing voice signal are output to the mixer 39 and mixed.
- The mixed audio signal is input back to the CPU 31 and encoded as a single piece of compressed audio data (mixing data); a rough sketch of this mixing step is given below.
- The CPU 31 distributes the generated mixing data to the user PC 4 that made the request.
- The distributed mixing data is reproduced by the CPU 41 of the user PC 4, converted into an analog audio signal by the SS 46, and emitted from the speaker 47.
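The following is a minimal, non-authoritative sketch of that mixing step only, assuming the singing voice and the rendered accompaniment have already been decoded into mono float arrays at the same sample rate (decoding and the sequencer/sound source 38 are outside the sketch); NumPy is used for the arithmetic and all names are illustrative.

```python
import numpy as np

def mix_signals(vocals: np.ndarray, accompaniment: np.ndarray,
                vocal_gain: float = 1.0, accomp_gain: float = 1.0) -> np.ndarray:
    # Align the two signals in length, apply the mixing balance, and clip to full scale.
    n = max(len(vocals), len(accompaniment))
    vocals = np.pad(vocals, (0, n - len(vocals)))
    accompaniment = np.pad(accompaniment, (0, n - len(accompaniment)))
    mix = vocal_gain * vocals + accomp_gain * accompaniment
    return np.clip(mix, -1.0, 1.0)

# Example with two synthetic one-second tones at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
vocals = 0.3 * np.sin(2 * np.pi * 440 * t)
accomp = 0.3 * np.sin(2 * np.pi * 220 * t)
mixing_data = mix_signals(vocals, accomp, vocal_gain=1.2, accomp_gain=0.8)
```

The mixed array would then be encoded into compressed audio data before distribution.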
- The center 1 may perform a charging process in conjunction with a predetermined billing system: after charging a predetermined amount to the user who made the distribution request, it lets the user PC 4 download the mixing data. If the singer receives a reward each time the singing file is downloaded, the billing system can also provide an incentive to singers.
- A karaoke contest can also be realized by having each user score the singing sounds they listen to.
- In the distribution system shown in this embodiment, only the singing file needs to be uploaded to the center 1, so the singing sound and the musical sound can be stored separately in the server without increasing the upload time. Since the singing sound data (singing file) is stored in the HDD 33 of the center 1 separately from the musical sound data (music data), changing the mixing balance later or changing effects individually can be realized easily.
- In the distribution system of this embodiment, it is also possible to perform multi-track recording in which the singing sounds of a plurality of singers (or multiple takes by the same singer) are synthesized later.
- Conventionally, because the singing sound and the musical sound were stored in the server as a single, already mixed piece of audio data, adding another singing sound later required decoding that audio data into an audio signal, mixing in the audio signal of the additional singing sound, and re-encoding, which degraded the sound quality.
- In this embodiment, the singing sound and the musical sound are held as separate data, so the singing files to be layered only need to be decoded and synthesized at reproduction time. Sound quality degradation due to multi-track recording therefore does not occur.
- FIG. 7 is a flowchart showing an operation during a registration operation.
- FIG. 8 is a flowchart showing an operation at the time of mixing data distribution.
- The CPU 11 accepts the registration operation (s11). At this time, the CPU 11 also accepts input of the singer's profile, message, and the like from the operation unit 15.
- When the CPU 11 accepts the registration operation, it reads out the designated music data and performs a karaoke performance (s12), and generates a singing file based on the singer's singing voice signal input from the microphone 16 via the A/D converter 17 (s13). It also generates synchronization information based on the song number, tempo, volume, and so on of the performed song (s14).
- The CPU 11 then uploads the generated singing file and synchronization information to the center 1 (s15).
- The center 1 records the uploaded singing file and synchronization information in the HDD 33 (s16). In this way, the singer's singing file is registered in the center 1. One possible shape of the upload in s15 is sketched below.
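As a purely hypothetical sketch (the patent does not specify the transport), the upload in s15 could be a plain HTTP POST carrying the singing file and its synchronization information; the URL and field names below are illustrative, and the `requests` library is assumed to be available.

```python
import json
import requests

def upload_singing_file(singing_file_path, sync_info: dict,
                        center_url="http://center.example/upload"):
    # Send the compressed singing file plus its synchronization information (s15).
    with open(singing_file_path, "rb") as f:
        response = requests.post(
            center_url,
            files={"singing_file": f},
            data={"sync_info": json.dumps(sync_info)},
            timeout=30,
        )
    response.raise_for_status()  # the center records both in the HDD 33 (s16)
    return response

# upload_singing_file("singer42_0123.mp3", {"song_number": "0123", "tempo_bpm": 120})
```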
- A user who wants mixing data distributed first requests display of the singing file list in order to browse the list of singers (s21).
- This display request is transmitted to the center 1, and the center 1 accepts the WEB display request (s22).
- The CPU 31 of the center 1 performs WEB display processing to transfer the HTML file to the user PC 4 (s23).
- A WEB page based on the HTML file transferred from the center 1 is displayed on the monitor 49 of the user PC 4 (s24). In this way, the list of singing files shown in FIG. 5 is displayed.
- The user refers to the list of singing files displayed on the monitor 49, selects a singer to listen to, and makes a distribution request for mixing data (s25).
- The CPU 41 extracts the file name of the singing file selected by the user from the HTML file and notifies the center 1, which thereby receives the distribution request (s26).
- The charging process is then performed between the center 1 (or a billing server or the like) and the user PC 4 (s27, s28). When the charging process is completed, the CPU 31 of the center 1 reads the corresponding singing file and synchronization information from the HDD 33 and generates a singing voice signal based on the singing file.
- The billing process is not essential, and the processes of s27 and s28 may be omitted.
- At the same time, the music data of the song number described in the synchronization information is read from the HDD 33, sequenced at the tempo and volume described in the synchronization information, and used to control the sound source 38.
- In this way, the karaoke performance is reproduced at the same tempo and volume as when the singing sound was recorded, and the singer's singing sound is output at the same time, achieving synchronized reproduction (s29).
- If the same singing file has been reproduced in the past and is already stored in the HDD 33 as mixing data, the mixing data is simply read from the HDD 33, and there is no need to perform synchronized reproduction again.
- The mixing data for each singing file may also be generated in advance during idle time at the center 1. In this case, the mixing data can be distributed immediately even when many distribution requests arrive at the same time (a cache-style sketch follows below).
- Even then, since the singing file itself is still held in the HDD 33, the mixing balance can be changed later.
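A minimal sketch of this caching behaviour, with hypothetical paths and a caller-supplied rendering function, might look as follows.

```python
import os

def get_mixing_data(singing_file_name, render_mixing_data, cache_dir="mixing_cache"):
    # Reuse mixing data already stored on disk; otherwise render and store it.
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, singing_file_name + ".mix")
    if os.path.exists(cache_path):                 # already rendered in the past
        with open(cache_path, "rb") as f:
            return f.read()
    data = render_mixing_data(singing_file_name)   # synchronized reproduction + mixing
    with open(cache_path, "wb") as f:              # keep it for later requests
        f.write(data)
    return data
```

The same function covers both cases described above: pre-generating during idle time simply means calling it before any request arrives.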
- The synchronously reproduced singing voice signal and the musical tone signal of the karaoke song are mixed to generate mixing data (s30), which is then downloaded to the user PC 4 (s31).
- The CPU 31 of the center 1 counts up the number of downloads of the singing file (s32).
- The CPU 41 of the user PC 4 downloads the mixing data from the center 1 (s33) and stores it in the HDD 43 (or RAM 42). The CPU 41 then decodes the mixing data and reproduces the singing sound and the musical sound (s34).
- The mixing data may be generated and distributed one whole song at a time, or downloaded sequentially as streaming data. Distribution may also be free at a low bit rate and charged at a high bit rate (the bit rate used during recording).
- When the recording is divided into a plurality of singing files (that is, when the synchronization information describes each file's time from the start of the performance), the CPU 31 of the center 1 outputs an audio signal based on each singing file in step with the music data sequence.
- When the synchronization information is configured as MIDI data (an extension track of the music data), the sequencer can read that MIDI data and output an audio signal based on each singing file.
- When the user instructs a change of the mixing balance, the CPU 41 sends a change request to the center 1 (s36).
- The change request includes information specifying the mixing balance between the singing sound and the musical sound.
- The CPU 31 of the center 1 changes the mixing balance of the mixer 39 according to the information included in the change request and regenerates the mixing data (s38). The CPU 31 then delivers the regenerated mixing data to the user PC 4 that made the change request (s39).
- The user PC 4 reproduces the redistributed mixing data (s40), and the above processing is repeated until reproduction is completed (s41).
- The redistributed mixing data may start from the point in the song at which the change request was made, or may be redistributed from the beginning of the song with the new mixing balance.
- Since the singing sound and the musical sound are held separately, the volume of each can be controlled independently, and the mixing balance can be changed easily.
- The user can also include an effect change instruction or a tempo change instruction in the change request.
- When the tempo is changed, the CPU 31 of the center 1 sequences the music data at the changed tempo and also changes the playback speed of the singing file (a sketch of handling such requests follows below).
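A rough, hypothetical sketch of how the center might dispatch such a change request (s36–s39) is shown below; the request fields and callback functions are illustrative, not part of the patent.

```python
def handle_change_request(request: dict, render, deliver):
    """request example: {"singing_file": "singer42_0123.mp3",
    "vocal_gain": 1.2, "accomp_gain": 0.8, "tempo_bpm": 126};
    render/deliver are callbacks for regeneration and redelivery."""
    mixing_data = render(
        request["singing_file"],
        vocal_gain=request.get("vocal_gain", 1.0),   # new mixing balance
        accomp_gain=request.get("accomp_gain", 1.0),
        tempo_bpm=request.get("tempo_bpm"),          # None means keep the recorded tempo
        effects=request.get("effects", {}),          # e.g. {"echo": 0.3}
    )
    deliver(mixing_data)                             # redistribute to the requesting user PC
```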
- The CPU 31 (or a DSP, not shown) performs processing that expands or contracts the audio signal on the time axis while maintaining the pitch of the singing sound.
- This time-axis expansion and contraction with pitch preserved is performed, for example, as follows.
- The CPU 31 cuts the singing voice signal based on the singing file into time-axis waveforms of a fixed sampling period, and generates a new time-axis waveform (intermediate waveform) by combining several of these waveforms.
- The intermediate waveform is generated by cross-fading the preceding and following time-axis waveforms.
- By inserting an intermediate waveform between the original time-axis waveforms, the signal can be extended on the time axis while maintaining the pitch of the singing sound.
- When the audio data is to be compressed, a process of replacing the original time-axis waveforms with the intermediate waveform is performed instead.
- If an insertion is performed after every segment, the signal can be stretched to twice its length (playback speed 1/2); if the replacement process is performed, it can be compressed (playback speed doubled). If an insertion is performed after every two segments, the signal is stretched by a factor of 1.5, and after every three segments, by a factor of about 1.33. A sketch of the insertion case is given below.
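The following is a minimal, non-authoritative sketch of the insertion technique described above, assuming a mono float signal, a fixed segment length, and a linear cross-fade; it illustrates the idea rather than the patent's exact processing.

```python
import numpy as np

def stretch_by_insertion(signal: np.ndarray, segment_len: int = 1024,
                         insert_every: int = 1) -> np.ndarray:
    """Insert a cross-faded intermediate segment after every `insert_every`
    segments: insert_every=1 roughly doubles the length (playback speed 1/2),
    2 gives about 1.5x, 3 gives about 1.33x."""
    fade_out = np.linspace(1.0, 0.0, segment_len)
    fade_in = 1.0 - fade_out
    segments = [signal[i:i + segment_len]
                for i in range(0, len(signal) - segment_len + 1, segment_len)]
    out = []
    for idx, seg in enumerate(segments):
        out.append(seg)
        if idx + 1 < len(segments) and (idx + 1) % insert_every == 0:
            nxt = segments[idx + 1]
            intermediate = seg * fade_out + nxt * fade_in   # cross-faded intermediate waveform
            out.append(intermediate)
    return np.concatenate(out) if out else signal.copy()

# Example: stretch a one-second 440 Hz tone to roughly double its length.
sr = 44100
t = np.arange(sr) / sr
tone = 0.3 * np.sin(2 * np.pi * 440 * t)
stretched = stretch_by_insertion(tone, insert_every=1)
```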
- When the key is changed, the CPU 31 changes the key of the music data (shifts the note numbers) and also changes the pitch of the singing voice signal.
- The pitch of the singing voice signal can be changed, for example, by resampling the audio signal; the frequency characteristics of the audio signal may also be changed. A sketch of resampling follows below.
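A minimal sketch of pitch shifting by resampling (linear interpolation; the helper name is hypothetical) is shown below. Note that resampling alone also changes the duration, so keeping the original length would combine this with a time-stretch such as the sketch above.

```python
import numpy as np

def pitch_shift_by_resampling(signal: np.ndarray, semitones: float) -> np.ndarray:
    ratio = 2 ** (semitones / 12.0)                 # frequency ratio for the key change
    old_idx = np.arange(len(signal))
    new_len = int(len(signal) / ratio)              # shorter for an upward shift
    new_idx = np.linspace(0, len(signal) - 1, new_len)
    # Playing the resampled signal back at the original rate raises the pitch by `ratio`.
    return np.interp(new_idx, old_idx, signal)

# Shift the singing voice up by two semitones to follow a key change of +2.
sr = 44100
t = np.arange(sr) / sr
voice = 0.3 * np.sin(2 * np.pi * 440 * t)
shifted = pitch_shift_by_resampling(voice, semitones=2)
```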
- In the above description, the singer registers the singing file using the karaoke device 7, but a microphone and a recording function may instead be added to the center 1 so that the singing file can be registered using the center 1 itself. More simply, this can also be realized with a user PC 4 that implements the functions of the karaoke device 7.
- In that case, software on the user PC 4 implements components such as the sequencer and sound source, turning the PC into a karaoke performance terminal.
- In the example above, the synchronization information includes a header, tempo information, volume information (Vol.), and timing information (tempo change amounts).
- In addition to this information, the synchronization information may include effect parameters such as microphone echo, reverb, compressor, and voice change.
- Synchronization information including effect parameters is registered at the time of the singing sound registration operation.
- That is, the effect parameters set in the karaoke device 7 at that time are written into the synchronization information when it is generated in s14 of FIG. 7.
- When the center 1 later generates the mixing data, these effect parameters are used to apply microphone echo and the like to the audio signal, as in the sketch below.
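A minimal sketch of applying one such effect parameter during mixing — a simple feedback echo standing in for microphone echo — is shown below; the parameter names are illustrative.

```python
import numpy as np

def apply_echo(signal: np.ndarray, sample_rate: int,
               delay_sec: float = 0.25, feedback: float = 0.4) -> np.ndarray:
    delay = int(delay_sec * sample_rate)
    out = signal.astype(float).copy()
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]      # feed back the delayed sample
    return np.clip(out, -1.0, 1.0)

# Applied to the singing voice before mixing, using values taken from the
# synchronization information, e.g.:
# voice = apply_echo(voice, 44100, delay_sec=params["echo_delay"], feedback=params["echo_feedback"])
```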
- In the example above, the charging process is performed when the mixing data is distributed.
- Alternatively, the charging process may be performed when the singer uploads the singing file; that is, the singer is charged at the time of the registration operation (s11 in FIG. 7) or when the singing file and synchronization information are uploaded (s15).
- In this way, a system can be realized in which an entry fee is collected from each singer when uploading their own singing file.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
Description
The audio data generation means reproduces the audio data read from the storage means, reads the music data based on the synchronization information, and performs an automatic performance. The audio data generation means then mixes the voice based on the reproduced audio data with the musical sound of the automatic performance to generate mixing data. The generated mixing data is distributed to each terminal so that users can listen to it.
2 … Network
3 … Karaoke store
4 … User PC
5 … Relay device
7 … Karaoke device
Claims (5)
- 1. A mixing data distribution server comprising:
receiving means for receiving a singer's voice data and synchronization information relating the singer's voice data to a karaoke performance;
storage means for storing the singer's voice data, the synchronization information, and music data for performing a karaoke performance;
audio data generation means for reproducing the voice data, reading the music data based on the synchronization information, performing an automatic performance, and mixing the voice based on the reproduced voice data with the musical sound of the automatic performance to generate mixing data; and
distribution means for distributing the mixing data generated by the audio data generation means.
- 2. The mixing data distribution server according to claim 1, wherein the synchronization information describes tempo and volume information of the karaoke song that was played while the voice data was recorded.
- 3. The mixing data distribution server according to claim 1 or 2, wherein the voice data consists of a plurality of pieces of voice data, and the synchronization information includes information indicating the reproduction timing of each of the plurality of pieces of voice data.
- 4. The mixing data distribution server according to any one of claims 1 to 3, wherein the storage means further stores the mixing data generated by the audio data generation means, and the distribution means reads the mixing data from the storage means and distributes it.
- 5. The mixing data distribution server according to any one of claims 1 to 4, wherein the synchronization information includes effect parameters, and the audio data generation means applies the settings of the effect parameters when mixing in the musical sound of the automatic performance.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180014836.1A CN102822887B (zh) | 2010-03-19 | 2011-03-17 | 混频数据递送服务器 |
KR1020127024457A KR101453177B1 (ko) | 2010-03-19 | 2011-03-17 | 믹싱 데이터 배신 서버 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010063427A JP5544961B2 (ja) | 2010-03-19 | 2010-03-19 | サーバ |
JP2010-063427 | 2010-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011115210A1 true WO2011115210A1 (ja) | 2011-09-22 |
Family
ID=44649293
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/056395 WO2011115210A1 (ja) | 2010-03-19 | 2011-03-17 | ミキシングデータ配信サーバ |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP5544961B2 (ja) |
KR (1) | KR101453177B1 (ja) |
CN (1) | CN102822887B (ja) |
WO (1) | WO2011115210A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104185867A (zh) * | 2012-04-02 | 2014-12-03 | 雅马哈株式会社 | 歌唱支援装置 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6413828B2 (ja) * | 2015-02-20 | 2018-10-31 | ブラザー工業株式会社 | 情報処理方法、情報処理装置、及びプログラム |
CN105095461A (zh) * | 2015-07-29 | 2015-11-25 | 张阳 | 家庭唱k排序方法及系统 |
CN105791937A (zh) * | 2016-03-04 | 2016-07-20 | 华为技术有限公司 | 一种音视频处理方法以及相关设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1152966A (ja) * | 1997-08-01 | 1999-02-26 | Ricoh Co Ltd | 音楽演奏システム |
JP2004053736A (ja) * | 2002-07-17 | 2004-02-19 | Daiichikosho Co Ltd | 通信カラオケシステムの使用方法 |
JP2005352330A (ja) * | 2004-06-14 | 2005-12-22 | Heartful Wing:Kk | 音声分割記録装置 |
JP2006215460A (ja) * | 2005-02-07 | 2006-08-17 | Faith Inc | カラオケ音声送受信システムおよびその方法 |
JP2007225934A (ja) * | 2006-02-23 | 2007-09-06 | Xing Inc | カラオケシステム及びそのホスト装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0129964B1 (ko) * | 1994-07-26 | 1998-04-18 | 김광호 | 악기선택 가능한 영상노래반주장치 |
JP4042601B2 (ja) * | 2003-03-25 | 2008-02-06 | ブラザー工業株式会社 | 録音再生装置 |
JP2006184684A (ja) * | 2004-12-28 | 2006-07-13 | Xing Inc | 音楽再生装置 |
-
2010
- 2010-03-19 JP JP2010063427A patent/JP5544961B2/ja not_active Expired - Fee Related
-
2011
- 2011-03-17 WO PCT/JP2011/056395 patent/WO2011115210A1/ja active Application Filing
- 2011-03-17 KR KR1020127024457A patent/KR101453177B1/ko not_active IP Right Cessation
- 2011-03-17 CN CN201180014836.1A patent/CN102822887B/zh not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1152966A (ja) * | 1997-08-01 | 1999-02-26 | Ricoh Co Ltd | 音楽演奏システム |
JP2004053736A (ja) * | 2002-07-17 | 2004-02-19 | Daiichikosho Co Ltd | 通信カラオケシステムの使用方法 |
JP2005352330A (ja) * | 2004-06-14 | 2005-12-22 | Heartful Wing:Kk | 音声分割記録装置 |
JP2006215460A (ja) * | 2005-02-07 | 2006-08-17 | Faith Inc | カラオケ音声送受信システムおよびその方法 |
JP2007225934A (ja) * | 2006-02-23 | 2007-09-06 | Xing Inc | カラオケシステム及びそのホスト装置 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104185867A (zh) * | 2012-04-02 | 2014-12-03 | 雅马哈株式会社 | 歌唱支援装置 |
Also Published As
Publication number | Publication date |
---|---|
CN102822887B (zh) | 2015-09-16 |
CN102822887A (zh) | 2012-12-12 |
JP5544961B2 (ja) | 2014-07-09 |
KR101453177B1 (ko) | 2014-10-22 |
KR20120128142A (ko) | 2012-11-26 |
JP2011197344A (ja) | 2011-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5331494B2 (ja) | カラオケサービスシステム、端末装置 | |
JP4423790B2 (ja) | 実演システム、ネットワークを介した実演方法 | |
KR0152677B1 (ko) | 자동효과기 제어부를 구비하는 노래반주기 | |
JP2004538496A (ja) | ネットワーク基盤の音楽演奏/歌の伴奏サービスシステム及びその方法 | |
JP5544961B2 (ja) | サーバ | |
KR100819775B1 (ko) | 네트워크 기반의 음악연주/노래반주 서비스 장치, 시스템, 방법 및 기록매체 | |
WO2011111825A1 (ja) | カラオケシステム及びカラオケ演奏端末 | |
JP2004233698A (ja) | 音楽支援装置、音楽支援サーバ、音楽支援方法およびプログラム | |
JP4475269B2 (ja) | カラオケ装置、カラオケシステム及びライブ曲再生プログラム | |
JP2008089849A (ja) | リモート演奏システム | |
WO2014142288A1 (ja) | 楽曲編集装置および楽曲編集システム | |
JP2008304821A (ja) | 楽曲合奏公開システム | |
JP4311485B2 (ja) | カラオケ装置 | |
JP4169034B2 (ja) | カラオケ装置および端末装置 | |
JP7468111B2 (ja) | 再生制御方法、制御システムおよびプログラム | |
JP3900576B2 (ja) | 音楽情報再生装置 | |
JP2022114309A (ja) | オンラインセッションサーバ装置 | |
JP3551441B2 (ja) | カラオケ装置 | |
JP2006154777A (ja) | 音楽生成システム | |
JP6783065B2 (ja) | 通信端末装置、サーバ装置及びプログラム | |
JP6453696B2 (ja) | カラオケシステム、プログラム及びカラオケ通信システム | |
JP2003015657A (ja) | カラオケ店で収録したカラオケ歌唱者の歌声をもとに音楽ソフトを編集してインターネット上で公開する音楽工房装置 | |
JP2014048471A (ja) | サーバ、音楽再生システム | |
JP2004191515A (ja) | 配信システム、再生機器およびコンテンツ再生方法 | |
JP2008009452A (ja) | カラオケシステム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180014836.1 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11756394 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20127024457 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11756394 Country of ref document: EP Kind code of ref document: A1 |