
WO2018092286A1 - Sound processing device, sound processing method and program - Google Patents

Sound processing device, sound processing method and program

Info

Publication number
WO2018092286A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
unit
performance
indirect
component
Prior art date
Application number
PCT/JP2016/084368
Other languages
French (fr)
Japanese (ja)
Inventor
良太郎 青木
友明 平井
紀幸 大橋
加納 真弥
Original Assignee
Yamaha Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yamaha Corporation
Priority to PCT/JP2016/084368
Publication of WO2018092286A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/02 Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback

Definitions

  • The present invention relates to a sound processing device, a sound processing method, and a program.
  • Some devices for enjoying content such as music and movies have a function of reproducing the sound field of a hall by adding indirect sound components (such as reverberation components) to the sound signals of the content and emitting the sound from a speaker (Patent Document 1).
  • If a pseudo indirect sound component can be added to the sound of an acoustic instrument played by the user, or to the user's singing sound, and the resulting sound can be emitted from a speaker, the user can enjoy the feeling of playing the acoustic instrument or singing in a hall or the like.
  • However, when both the instrument sound emitted from the acoustic instrument itself (or the singing sound emitted from the user himself) and the instrument sound or singing sound emitted from the speaker reach the user, the sound becomes unnatural for the user, which may give the user a sense of discomfort.
  • The present invention has been made in view of the above problems, and its purpose is to provide a sound processing device, a sound processing method, and a program that allow the user to enjoy the feeling of playing an acoustic instrument or singing a song in a hall or the like while avoiding a sense of discomfort.
  • A sound processing apparatus according to the present invention includes input means for receiving an input of a user's performance sound, generation means for generating an indirect sound component corresponding to the performance sound, and output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
  • A sound processing method according to the present invention includes a generation step of generating an indirect sound component corresponding to a user's performance sound, and an output control step of outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
  • A program according to the present invention causes a computer to function as generation means for generating an indirect sound component corresponding to a user's performance sound, and as output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
  • An information storage medium according to the present invention is a computer-readable information storage medium recording the above program.
  • Here, "performance" indicates an act of producing a sound, and includes not only the act of playing a musical instrument but also the act of singing a song. That is, the "performance sound" includes not only the performance sound of a musical instrument but also the singing sound.
  • According to the present invention, it is possible to allow a user to enjoy the feeling of playing an acoustic instrument or singing a song in a hall or the like without making the user feel uncomfortable.
  • FIG. 1 shows the configuration of a system including a sound processing apparatus according to the first embodiment of the present invention.
  • the system includes a sound processing device 1, a content reproduction device 2, a microphone 3, an electronic musical instrument 4, an electric musical instrument 5, a speaker 6 (an example of sound emission means), and a display device 7.
  • The content playback apparatus 2 may, for example, play back content (such as music or video) stored on an optical storage medium, or play back content distributed via a network.
  • the sound processing device 1 is an AV receiver, for example.
  • the sound processing device 1 includes a CPU 11, a memory 12, an input unit 13, an output unit 14, an acoustic signal processing unit 15, and a video signal processing unit 16.
  • the CPU 11 controls the input unit 13, the output unit 14, the acoustic signal processing unit 15, and the video signal processing unit 16 based on a program stored in the memory 12 and executes information processing.
  • For example, the sound processing apparatus 1 may be provided with a network interface for performing data communication via a network, and the program may be downloaded via the network and stored in the memory 12.
  • Alternatively, the sound processing apparatus 1 may include a component for reading a program from an information storage medium such as a memory card, and the program may be read from the information storage medium and stored in the memory 12.
  • The input unit 13 can accept input of an acoustic signal and a video signal based on content data from the content reproduction device 2; it supplies the acoustic signal to the acoustic signal processing unit 15 and the video signal to the video signal processing unit 16.
  • the input unit 13 can accept input of performance sound of the user.
  • “Performance” indicates an act of producing a sound
  • “performance” includes not only an act of playing an instrument but also an act of singing a song.
  • the “performance sound” includes not only the performance sound of the musical instrument but also the singing sound.
  • the performance sound of an instrument is referred to as “instrument sound” for convenience.
  • the input unit 13 is connected to the microphone 3 and can receive an input of an acoustic signal output from the microphone 3, and supplies the acoustic signal to the acoustic signal processing unit 15.
  • the microphone 3 collects sound and outputs the collected sound as an acoustic signal.
  • the microphone 3 is used to input an instrumental sound of an acoustic instrument played by a user and a user's singing sound to the sound processing device 1.
  • The input unit 13 is also connected to the electronic musical instrument 4 or the electric musical instrument 5 played by the user and can receive an input of the acoustic signal output from the electronic musical instrument 4 or the electric musical instrument 5; this acoustic signal is likewise supplied to the acoustic signal processing unit 15.
  • The input unit 13 may include a wireless network interface, and acoustic signals may be input to the input unit 13 via wireless communication. That is, content sounds and performance sounds may be input to the sound processing device 1 via wireless communication.
  • the acoustic signal processing unit 15 is, for example, a DSP (Digital Signal Processor), and executes processing related to the acoustic signal in accordance with control from the CPU 11.
  • the acoustic signal output from the acoustic signal processing unit 15 is emitted from the speaker 6 via the output unit 14.
  • the video signal processing unit 16 is, for example, a DSP (Digital Signal Processor), and executes processing related to the video signal in accordance with control from the CPU 11.
  • the video signal output from the video signal processing unit 16 is displayed on the display device 7 via the output unit 14.
  • In the sound processing apparatus 1 according to the first embodiment, a user who plays an acoustic instrument or sings a song at home or the like can enjoy the feeling of performing in a hall or the like.
  • Although the sound processing device 1 has a function of receiving input of instrument sounds from the electronic musical instrument 4 or the electric musical instrument 5 and a function of outputting the content reproduced by the content reproduction device 2 through the speaker 6 or the display device 7, these functions are not essential in the first embodiment.
  • FIG. 2 shows an example of the performance environment of the user.
  • the microphone 3 is installed in front of the user U.
  • the microphone 3 is used to collect the performance sound of the user. For example, when the user is playing an acoustic instrument, the instrument sound is collected by the microphone 3 and input to the input unit 13. For example, when the user sings a song, the singing sound is collected by the microphone 3 and input to the input unit 13.
  • a plurality of speakers 6A, 6B, 6C, 6D, and 6E are installed.
  • a speaker 6A is installed in front of the user U.
  • Speakers 6B and 6C are installed on the left front and right front as viewed from the user U, and speakers 6D and 6E are installed on the left rear and right rear as viewed from the user U, respectively.
  • In FIG. 2, five speakers 6A to 6E are installed, but four or fewer speakers 6 may be installed, or six or more speakers 6 may be installed.
  • only the speakers 6B and 6C may be installed.
  • FIG. 3 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the first embodiment.
  • The sound processing apparatus 1 according to the first embodiment includes a performance sound adjustment unit 101, a preprocessing unit 102 (an example of first processing means), an indirect sound component generation unit 103, a post processing unit 104 (an example of second processing means), and an output control unit 105.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 4 is a flowchart showing processing executed by the sound processing apparatus 1 according to the first embodiment.
  • the function of each functional block will be described with reference to FIG.
  • The performance sound adjusting unit 101 adjusts the performance sound by performing predetermined processing on the performance sound input from the microphone 3 (S10). For example, the performance sound adjustment unit 101 performs howling reduction processing on the performance sound to reduce howling in the microphone 3. Further, for example, the performance sound adjustment unit 101 may perform effect processing (for example, processing for removing unnecessary frequency bands or adjusting the sound pressure level before the indirect sound is generated) on the performance sound.
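As one illustration of the howling reduction step, the sketch below suppresses a narrowband feedback peak with a notch filter. This is not the patent's prescribed method; the sample rate, the howling frequency (assumed to be detected upstream), and all names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

RATE = 48000        # sample rate in Hz (assumed)
HOWL_FREQ = 2000.0  # feedback frequency in Hz (assumed to be detected upstream)

def reduce_howling(performance: np.ndarray, freq: float = HOWL_FREQ,
                   q: float = 30.0, fs: int = RATE) -> np.ndarray:
    """Attenuate a narrowband howling peak in the microphone signal.

    A real howling reducer would first estimate the feedback frequency;
    here it is taken as given.
    """
    b, a = iirnotch(freq, q, fs=fs)
    return lfilter(b, a, performance)
```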
  • the performance sound processed by the performance sound adjustment unit 101 is supplied to the preprocessing unit 102.
  • the preprocessing unit 102 performs preprocessing on the supplied sound (performance sound here) (S11). For example, the preprocessing unit 102 performs sound adjustment processing using an equalizer on the supplied sound.
  • the performance sound that has been processed by the preprocessing unit 102 is supplied to the indirect sound component generation unit 103.
  • the performance sound adjustment unit 101 and the preprocessing unit 102 are shown as separate functional blocks, but they may be configured integrally.
  • the indirect sound component generation unit 103 generates a pseudo indirect sound component corresponding to the performance sound (S12). That is, the indirect sound component generation unit 103 assumes a case where a performance sound is emitted in an acoustic space such as a hall, and generates an indirect sound component (such as a reverberation component) generated in the acoustic space in that case.
  • Various known methods can be employed as a method of generating the pseudo indirect sound component.
  • For example, the indirect sound component generation unit 103 generates a pseudo indirect sound component corresponding to the performance sound based on information such as the position of the indirect sound (reverberation sound) in the assumed acoustic space, the delay time of the indirect sound relative to the direct sound, and the ratio of the level of the indirect sound to the sound pressure level of the direct sound.
  • The indirect sound component generation unit 103 includes an indirect sound component adding unit that adds, to the supplied sound, an indirect sound component corresponding to that sound, and the performance sound is supplied to this indirect sound component adding unit.
  • FIG. 5 is a diagram for explaining an example of a method for generating an indirect sound component.
  • FIG. 5A shows an example of a performance sound. This performance sound corresponds to a direct sound component.
  • the performance sound (direct sound component) shown in FIG. 5A is stored in each of the first buffer and the second buffer.
  • the indirect sound component adding unit adds an indirect sound component corresponding to the performance sound to the performance sound (direct sound component) stored in the first buffer.
  • various known methods can be adopted as a method of adding the indirect sound component.
  • the direct sound component and the indirect sound component of the performance sound are stored in the first buffer.
  • The indirect sound component generation unit 103 then acquires only the indirect sound component, as shown in FIG. 5(C), by subtracting the direct sound component of the performance sound stored in the second buffer (FIG. 5(A)) from the direct sound component and indirect sound component of the performance sound stored in the first buffer (FIG. 5(B)).
  • the method of generating the indirect sound component is not limited to the above example.
  • For example, the performance sound (direct sound component) shown in FIG. 5(A) may be stored in the first buffer, and the indirect sound component corresponding to that performance sound may be generated directly in the second buffer.
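As a concrete reading of the FIG. 5 method, the sketch below adds a synthetic indirect component to a buffered copy of the performance sound and then subtracts the unprocessed copy, leaving only the indirect component. The exponentially decaying noise impulse response is a toy stand-in for a hall; all names and parameters are illustrative.

```python
import numpy as np

RATE = 48000  # sample rate in Hz (assumed)

def toy_impulse_response(seconds: float = 1.2) -> np.ndarray:
    """Unit direct tap followed by decaying noise standing in for reflections."""
    n = int(seconds * RATE)
    rng = np.random.default_rng(0)
    tail = 0.3 * rng.standard_normal(n) * np.exp(-6.0 * np.arange(n) / n)
    return np.concatenate(([1.0], tail))

def isolate_indirect(performance: np.ndarray, ir: np.ndarray) -> np.ndarray:
    # First buffer: performance sound with the indirect component added (FIG. 5(B)).
    first_buffer = np.convolve(performance, ir)
    # Second buffer: the unprocessed direct sound component (FIG. 5(A)).
    second_buffer = np.zeros_like(first_buffer)
    second_buffer[:len(performance)] = performance
    # Subtracting (A) from (B) leaves only the indirect component (FIG. 5(C)).
    return first_buffer - second_buffer
```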
  • the indirect sound component generated by the indirect sound component generation unit 103 is supplied to the post processing unit 104.
  • the post-processing unit 104 performs post-processing on the supplied sound (indirect sound component here) (S13). For example, the post processing unit 104 performs processing for adjusting the supplied sound in accordance with the characteristics of the speaker 6.
  • the indirect sound component that has been processed by the post-processing unit 104 is supplied to the output control unit 105.
  • the output control unit 105 outputs the supplied indirect sound component to the output unit 14 (an example of output means) (S14). That is, the output control unit 105 outputs the indirect sound component to the output unit 14 while restricting the performance sound (instrumental instrument sound or singing sound of the acoustic musical instrument) input from the microphone 3 from being output to the output unit 14.
  • the indirect sound component output to the output unit 14 is emitted by the speaker 6.
  • “restricting the output of the performance sound to the output unit 14” means, for example, that the performance sound is not output to the output unit 14. That is, the output control unit 105 outputs only the indirect sound component to the output unit 14 without outputting the performance sound (direct sound component) input from the microphone 3 to the output unit 14. In other words, the output control unit 105 prevents the performance sound (direct sound component) input from the microphone 3 from being emitted from the speaker 6 and causes only the indirect sound component to be emitted from the speaker 6.
  • “Restricting the output of the performance sound to the output unit 14” means, for example, outputting the performance sound to the output unit 14 so that the performance sound is emitted at a considerably lower volume than the indirect sound component.
  • That is, the output control unit 105 may output the performance sound (direct sound component) input from the microphone 3 to the output unit 14 at a considerably lower volume than the normal volume (a volume that is difficult for the user to hear), while outputting the indirect sound component to the output unit 14 at the normal volume. In this case, the output control unit 105 causes the performance sound (direct sound component) input from the microphone 3 to be emitted from the speaker 6 at a considerably lower volume than the normal volume, while the indirect sound component is emitted from the speaker 6 at the normal volume.
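A minimal sketch of the output control described above, assuming the two components arrive as separate signals; the gain values are illustrative, with 0.0 corresponding to not outputting the performance sound at all and a small positive value to the barely audible variant.

```python
import numpy as np

DIRECT_GAIN = 0.0    # 0.0: performance sound not output; e.g. 0.05: barely audible
INDIRECT_GAIN = 1.0  # indirect component at the normal volume

def output_control(performance: np.ndarray, indirect: np.ndarray) -> np.ndarray:
    """Mix for the output unit 14: restrict the direct performance sound,
    pass the indirect component through at normal volume."""
    n = max(len(performance), len(indirect))
    out = np.zeros(n)
    out[:len(performance)] += DIRECT_GAIN * performance
    out[:len(indirect)] += INDIRECT_GAIN * indirect
    return out
```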
  • When the speaker 6 is built into the sound processing device 1, the output control unit 105 outputs the supplied indirect sound component to the speaker 6 (another example of output means).
  • With the sound processing device 1 according to the first embodiment described above, a pseudo indirect sound component (such as a reverberation component) corresponding to the user's performance sound (the instrument sound of an acoustic instrument or the singing sound) is emitted from the speaker 6, so the user can enjoy the feeling of playing an acoustic instrument or singing a song in a hall or church.
  • Further, with the sound processing device 1 according to the first embodiment, since the user's performance sound (the instrument sound of an acoustic instrument or the singing sound) is restricted from being emitted from the speaker 6, it is possible to prevent the discomfort that would arise if the user could hear the same performance sound emitted from two different positions.
  • the hardware configuration of the sound processing apparatus 1 according to the second embodiment is the same as that of the first embodiment.
  • the user's performance environment is basically the same as in the first embodiment.
  • the microphone 3 is unnecessary.
  • In the sound processing apparatus 1 according to the second embodiment, a user who is playing the electronic musical instrument 4 or the electric musical instrument 5 at home or the like can enjoy the feeling of playing in a hall or the like.
  • Although the sound processing device 1 has a function of outputting the content reproduced by the content reproduction device 2 through the speaker 6 and the display device 7, these functions are not essential in the second embodiment.
  • FIG. 6 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the second embodiment.
  • the sound processing apparatus 1 according to the second embodiment includes a performance sound adjustment unit 111, a preprocessing unit 112, an indirect sound component generation unit 113, a post processing unit 114, and an output control unit 115.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 7 is a flowchart showing processing executed by the sound processing apparatus 1 according to the second embodiment.
  • the function of each functional block will be described with reference to FIG.
  • the performance sound adjusting unit 111 adjusts the performance sound by performing predetermined processing on the performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 (S20). For example, the performance sound adjustment unit 111 performs effect processing (for example, distortion processing on the guitar sound) on the performance sound. Note that the performance sound adjusting unit 111 does not execute a process that generates a large delay, and only performs a process with a small delay. The performance sound that has been processed by the performance sound adjustment unit 111 is supplied to the preprocessing unit 112.
  • the preprocessing unit 112 performs preprocessing on the supplied sound (performance sound here) (S21). Further, the indirect sound component generation unit 113 generates a pseudo indirect sound component corresponding to the performance sound (S22). Then, the post processing unit 114 performs post processing on the supplied sound (indirect sound component here) (S23). Steps S21 to S23 are basically the same as steps S11 to S13 of the first embodiment, and the preprocessing unit 112, the indirect sound component generation unit 113, and the postprocessing unit 114 are the preprocessing unit 102 of the first embodiment, Since it is basically the same as the indirect sound component generation unit 103 and the post processing unit 104, description thereof is omitted here.
  • the performance sound that has been processed by the performance sound adjustment unit 111 is also supplied to the output control unit 115 via the path 119.
  • a path 119 is a path that reaches the output control unit 115 without going through the preprocessing unit 112, the indirect sound component generation unit 113, and the postprocessing unit 114.
  • the path 119 is a path with less delay compared to the path reaching the output control unit 115 via the preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114.
  • The preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114 execute processing based on the performance sound stored in a buffer, whereas on the path 119 the performance sound is supplied to the output control unit 115 without being stored in a buffer.
  • the output control unit 115 mixes the performance sound (direct sound component) supplied via the path 119 and the indirect sound component generated by the indirect sound component generation unit 113 and outputs the mixed sound to the output unit 14. (S24).
  • the mixed sound output to the output unit 14 is emitted by the speaker 6.
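A block-based sketch of the two paths in the second embodiment, assuming the indirect component becomes available later than the direct sound it corresponds to; the buffering scheme and all names are illustrative, not the patent's implementation.

```python
import numpy as np

class LowLatencyMixer:
    """Path 119: the performance sound goes straight to the output.
    The indirect component from the buffered pipeline is overlap-added
    as it becomes available, i.e. with some delay."""

    def __init__(self, ir_tail: np.ndarray):
        self.ir_tail = ir_tail                 # impulse response with no direct tap
        self.pending = np.zeros(len(ir_tail))  # indirect sound not yet emitted

    def process_block(self, block: np.ndarray) -> np.ndarray:
        # Slow path: generate the indirect component for this block.
        indirect = np.convolve(block, self.ir_tail)
        queue = np.zeros(max(len(self.pending), len(indirect)))
        queue[:len(self.pending)] += self.pending
        queue[:len(indirect)] += indirect
        self.pending = queue
        # Fast path (119) plus whatever indirect sound is due in this block.
        out = block + self.pending[:len(block)]
        self.pending = self.pending[len(block):]
        return out
```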
  • FIG. 8 is a diagram for explaining the sound emitted from the speaker 6.
  • the performance sound B is input after the performance sound A is input.
  • These performance sounds A and B correspond to direct sound components.
  • The indirect sound component generation unit 113 generates an indirect sound component A corresponding to the performance sound A and supplies the indirect sound component A to the output control unit 115, as shown in FIG. 8.
  • Note that the indirect sound component generation unit 113 also generates an indirect sound component B corresponding to the performance sound B after the indirect sound component A, but this is omitted here.
  • Since the indirect sound component A is generated by these functional blocks (the preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114), it is emitted from the speaker 6 with a delay corresponding to the time required for that processing.
  • On the other hand, the performance sounds A and B are emitted from the speaker 6 via the path 119 with little delay (a path with substantially no delay). For this reason, as shown in FIG. 8(C), the indirect sound component A corresponding to the performance sound A is delayed relative to real time and is mixed with the performance sound B that follows the performance sound A, and the mixed sound is emitted from the speaker 6.
  • With the sound processing device 1 according to the second embodiment described above, pseudo indirect sound components (such as reverberation components) corresponding to the user's performance sound are emitted from the speaker 6, so the user can enjoy the feeling of playing in a hall or the like. Further, since the user's performance sound is emitted from the speaker 6 via the low-delay path 119, the delay from when the user plays until the performance sound is emitted from the speaker 6 can be reduced. As a result, it is possible to prevent the user from feeling uncomfortable due to a large delay from when the performance sound is played until it is emitted from the speaker 6.
  • Note that the indirect sound component corresponding to the user's performance sound is generated later than the indirect sound component that would be generated when the performance sound is emitted in an actual acoustic space. However, since indirect sound reaches a listener later than the direct sound in a real acoustic space as well, this delay is unlikely to feel unnatural.
  • In the sound processing apparatus 1 according to the third embodiment, a user who is playing the electronic musical instrument 4 or the electric musical instrument 5 at home or the like can enjoy the feeling of playing in a hall or the like as one of the performers of the music content.
  • a configuration for realizing such a function will be described.
  • FIG. 9 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the third embodiment.
  • The sound processing apparatus 1 according to the third embodiment includes a performance sound adjustment unit 121, a preprocessing unit 122, an indirect sound component generation unit 123, a post processing unit 124, an output control unit 125, and a content decoding unit 126.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 10 is a flowchart showing processing executed by the sound processing apparatus 1 according to the third embodiment.
  • the function of each functional block will be described with reference to FIG.
  • the content decoding unit 126 converts the multi-channel content sound input from the content playback device 2 into a PCM signal by format decoding (S30).
  • the performance sound adjusting unit 121 adjusts the performance sound by performing predetermined processing on the performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 (S31). Since step S31 is the same as step S20 of the second embodiment, and the performance sound adjustment unit 121 is the same as the performance sound adjustment unit 111 of the second embodiment, the description thereof is omitted here.
  • steps S30 and S31 are shown to be executed in order, but steps S30 and S31 are executed in parallel.
  • the content sound converted into the PCM signal is mixed with the performance sound converted into the PCM signal by the AD conversion circuit (S32), and the mixed sound is supplied to the preprocessing unit 122. Note that the performance sound is also supplied to the output control unit 125 via the path 129.
  • The path 129 is the same as the path 119 of the second embodiment.
  • the preprocessing unit 122 performs preprocessing on the mixed sound (S33). For example, the preprocessing unit 122 performs an audio adjustment process using an equalizer on the mixed sound.
  • the mixed sound that has been processed by the preprocessing unit 122 is supplied to the indirect sound component generation unit 123.
  • the indirect sound component generation unit 123 generates a pseudo indirect sound component corresponding to the mixed sound (S34). That is, the indirect sound component generation unit 123 generates an indirect sound component corresponding to the performance sound (direct sound component) and the content sound.
  • the indirect sound component generation unit 123 assumes a case where the mixed sound is emitted in an acoustic space such as a hall, and generates an indirect sound component (such as a reverberation component) generated in the acoustic space in that case.
  • Various known methods can be employed as a method of generating the pseudo indirect sound component.
  • For example, the indirect sound component generation unit 123 performs a process of adding an indirect sound component to the mixed sound stored in the first buffer, and then acquires the indirect sound component corresponding to the mixed sound by subtracting the original mixed sound stored in the second buffer from the sound stored in the first buffer.
  • Alternatively, the indirect sound component generation unit 123 may acquire the indirect sound component corresponding to the mixed sound by executing a process of generating, in the second buffer, an indirect sound based on the mixed sound stored in the first buffer.
  • the indirect sound component generated by the indirect sound component generation unit 123 is supplied to the output control unit 125 through the post processing unit 124 together with the content sound.
  • the post-processing unit 124 executes post-processing (S35).
  • Step S35 is basically the same as step S13 of the first embodiment, and the post-processing unit 124 is basically the same as the post-processing unit 104 of the first embodiment, and thus description thereof is omitted here.
  • The output control unit 125 mixes the performance sound (direct sound component) supplied via the path 129 with the content sound and the indirect sound component supplied from the post processing unit 124, and outputs the mixed sound to the output unit 14 (S36). The mixed sound output to the output unit 14 is emitted by the speaker 6.
  • FIG. 11 is a diagram for explaining the sound emitted from the speaker 6.
  • the performance sound B and the content sound B are input after the performance sound A and the content sound A are input.
  • the performance sounds A and B correspond to direct sound components.
  • the performance sound A and the content sound A are shown as being slightly shifted in time, but the input time points of the performance sound A and the content sound A are the same. The same applies to the performance sound B and the content sound B.
  • The indirect sound component generation unit 123 generates an indirect sound component A corresponding to the mixed sound of the performance sound A and the content sound A, as shown in FIG. 11.
  • the indirect sound component A is supplied to the output control unit 125 together with the content sound A.
  • Note that the indirect sound component generation unit 123 also generates an indirect sound component B corresponding to the mixed sound of the performance sound B and the content sound B after the indirect sound component A, but this is omitted here.
  • the amount of processing in the preprocessing unit 122, the indirect sound component generation unit 123, and the post processing unit 124 is large, and it takes time for processing in these functional blocks.
  • Consequently, the indirect sound component is emitted from the speaker 6 with a delay corresponding to the time required for processing in these functional blocks.
  • On the other hand, the performance sounds A and B are emitted from the speaker 6 via the path 129 with little delay (a path with substantially no delay). For this reason, as shown in FIG. 11(C), the indirect sound component A is delayed relative to real time and is mixed with the performance sound B that follows the performance sound A, and the mixed sound is emitted from the speaker 6.
  • With the sound processing apparatus 1 according to the third embodiment described above, pseudo indirect sound components (such as reverberation components) are added to the user's performance sound (the instrument sound of the electronic musical instrument 4 or the electric musical instrument 5) and to the multi-channel content sound, and the result is emitted from the speaker 6, so the user can enjoy the feeling of performing in a hall or the like as one of the performers of the music content. Further, with the sound processing apparatus 1 according to the third embodiment, since the user's performance sound is emitted from the speaker 6 via the low-delay path 129, the delay from when the user plays until the performance sound is emitted from the speaker 6 can be reduced. As a result, it is possible to prevent the user from feeling uncomfortable due to a large delay from when the performance sound is played until it is emitted from the speaker 6.
  • In the sound processing apparatus 1 according to the fourth embodiment, the user can enjoy the feeling of singing a song or playing an acoustic instrument in a hall or the like as one of the performers of the music content.
  • a configuration for realizing such a function will be described.
  • FIG. 12 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the fourth embodiment.
  • The sound processing apparatus 1 according to the fourth embodiment includes a performance sound adjustment unit 131, a preprocessing unit 132, an indirect sound component generation unit 133, a post processing unit 134, an output control unit 135, and a content decoding unit 136.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 13 is a flowchart showing processing executed by the sound processing apparatus 1 according to the fourth embodiment.
  • the function of each functional block will be described with reference to FIG.
  • the content decoding unit 136 converts the multi-channel content sound input from the content reproduction apparatus 2 into a PCM signal by format decoding (S40).
  • Step S40 is basically the same as step S30 of the third embodiment, and the content decoding unit 136 is basically the same as the content decoding unit 126 of the third embodiment.
  • the content decoding unit 136 of the fourth embodiment includes a specific component removing unit 136A, and in step S40, the specific component removing unit 136A removes the specific component included in the content sound.
  • the specific component removing unit 136A removes the specific component corresponding to the performance sound input from the microphone 3 from the content sound. For example, when the user's singing sound is input from the microphone 3, the specific component removing unit 136A removes the vocal component from the content sound. Since the vocal component is often included in the center channel in the multi-channel content sound, the specific component removing unit 136A removes the vocal component from the content sound by removing the center channel.
  • the method of removing the vocal component from the content sound is not limited to this method, and various known methods can be employed.
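A minimal sketch of the center-channel approach described above; the channel ordering is an assumption for illustration, and real content may require one of the other removal methods mentioned.

```python
import numpy as np

CHANNELS = ("FL", "FR", "C", "SL", "SR")  # assumed 5-channel layout

def remove_vocal(content: np.ndarray, channels=CHANNELS) -> np.ndarray:
    """Remove the specific (vocal) component by muting the center channel
    of multi-channel content, where the vocal usually resides.

    `content` has shape (num_channels, num_samples).
    """
    out = content.copy()
    out[channels.index("C")] = 0.0
    return out
```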
  • The specific component removal unit 136A may also remove the instrument sound component of an acoustic instrument from the content sound, depending on the type of performance sound input from the microphone 3 (for example, singing sound, guitar sound, piano sound, or the like).
  • the performance sound adjusting unit 131 adjusts the performance sound by performing a predetermined process on the performance sound input from the microphone 3 (S41).
  • Step S41 is the same as step S10 of the first embodiment, and the performance sound adjustment unit 131 is the same as the performance sound adjustment unit 101 of the first embodiment, and thus description thereof is omitted here.
  • steps S40 and S41 are shown to be executed in order, but steps S40 and S41 are executed in parallel.
  • the content sound converted into the PCM signal is mixed with the performance sound converted into the PCM signal by the AD conversion circuit (S42), and the mixed sound is supplied to the preprocessing unit 132.
  • processing by the preprocessing unit 132, the indirect sound component generation unit 133, and the post processing unit 134 is executed (S43, S44, S45).
  • Steps S43 to S45 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 132, the indirect sound component generation unit 133, and the postprocessing unit 134 are the preprocessing unit 122 of the third embodiment, Since it is basically the same as the indirect sound component generation unit 123 and the post processing unit 124, description thereof is omitted here.
  • The path 139 is the same as the path 119 of the second embodiment.
  • The output control unit 135 mixes the performance sound (direct sound component) supplied via the path 139 with the content sound and the indirect sound component supplied from the post processing unit 134, and outputs the mixed sound to the output unit 14 (S46). The mixed sound output to the output unit 14 is emitted by the speaker 6.
  • With the sound processing apparatus 1 according to the fourth embodiment described above, pseudo indirect sound components (such as reverberation components) are added to the user's performance sound (the singing sound or the instrument sound of an acoustic instrument) and to the multi-channel content sound, and the result is emitted from the speaker 6, so the user can enjoy the feeling of singing a song or playing an acoustic instrument in a hall or the like as one of the performers of the music content.
  • Further, with the sound processing apparatus 1 according to the fourth embodiment, since the user's performance sound is emitted from the speaker 6 via the low-delay path 139, the delay from when the user plays until the performance sound is emitted from the speaker 6 can be reduced. As a result, it is possible to prevent the user from feeling uncomfortable due to a large delay from when the performance sound is played until it is emitted from the speaker 6.
  • Further, with the sound processing apparatus 1 according to the fourth embodiment, when the user is singing a song, for example, the vocal component included in the content sound is removed, so the user can enjoy the feeling of singing in a hall as the vocalist of the music content.
  • In the above description, the vocal component of the content sound is removed before the performance sound and the content sound are mixed, but the removal of the vocal component of the content sound may instead be performed after the performance sound and the content sound are mixed.
  • the hardware configuration of the sound processing apparatus 1 according to the fifth embodiment is the same as that of the first embodiment.
  • the user's performance environment is the same as in the first embodiment or the second embodiment.
  • In the sound processing apparatus 1 according to the fifth embodiment, the user can enjoy the feeling of playing in a hall or the like as one of the performers of the music content.
  • Further, in the sound processing apparatus 1 according to the fifth embodiment, the user can feel a sense of unity between the user's performance sound and the content sound.
  • a configuration for realizing such a function will be described.
  • FIG. 14 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the fifth embodiment.
  • The sound processing apparatus 1 according to the fifth embodiment includes a performance sound adjustment unit 141, a preprocessing unit 142, an indirect sound component generation unit 143, a post processing unit 144, an output control unit 145, a content decoding unit 146, and a content sound adjustment unit 147.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 15 is a flowchart showing processing executed by the sound processing apparatus 1 according to the fifth embodiment.
  • the function of each functional block will be described with reference to FIG.
  • the content decoding unit 146 converts the multi-channel content sound input from the content playback device 2 into a PCM signal by format decoding (S50).
  • Step S50 is basically the same as step S30 in the third embodiment, and the content decoding unit 146 is basically the same as the content decoding unit 126 in the third embodiment, and thus description thereof is omitted here.
  • the content sound adjustment unit 147 adjusts the content sound in order to match the characteristics of the performance sound and the content sound (S51).
  • the content sound adjustment unit 147 includes an indirect sound component removal unit 147A.
  • the indirect sound component removing unit 147A removes the indirect sound component included in the content sound in order to match the amount of the indirect sound component between the performance sound and the content sound.
  • Various known methods can be adopted as a method for removing the indirect sound component included in the content sound.
  • the performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 includes only the direct sound component and does not include the indirect sound component, whereas the content sound includes the direct sound component and the indirect sound component.
  • the indirect sound component removal unit 147A removes the indirect sound component included in the content sound and outputs only the direct sound component of the content sound.
  • the indirect sound component removing unit 147A identifies an indirect sound component included in the content sound, and removes the indirect sound component by lowering the sound pressure level of the identified indirect sound component. That is, the indirect sound component removing unit 147A removes the indirect sound component almost completely by lowering the sound pressure level of the indirect sound component to zero (almost zero).
  • the indirect sound component removing unit 147A may remove (reduce) the indirect sound component to some extent by lowering the sound pressure level of the indirect sound component to some extent. That is, “removing the indirect sound component” includes not only removing the indirect sound component almost completely but also removing (reducing) the indirect sound component to some extent.
  • “a certain degree” is such a level that the user does not feel uncomfortable even if the indirect sound component remains.
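A sketch of the removal step, under the assumption that the indirect sound component has already been identified and separated (the identification itself can use any of the known methods mentioned above); the gain names and values are illustrative.

```python
import numpy as np

REMOVAL_GAIN = 0.0  # 0.0 removes the indirect component almost completely;
                    # e.g. 0.2 removes (reduces) it only "to some extent"

def remove_indirect(direct: np.ndarray, indirect: np.ndarray,
                    gain: float = REMOVAL_GAIN) -> np.ndarray:
    """Lower the sound pressure level of the identified indirect component
    of the content sound, keeping the direct component untouched."""
    return direct + gain * indirect
```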
  • The performance sound adjustment unit 141 adjusts the performance sound (S52).
  • Step S52 is the same as steps S10 and S20 of the first and second embodiments, and the performance sound adjustment unit 141 is the same as the performance sound adjustment units 101 and 111 of the first and second embodiments, so description thereof is omitted here.
  • steps S50, S51 and step S52 are shown to be executed in order, but steps S50, S51 and step S52 are executed in parallel.
  • the content sound from which the indirect sound component has been removed (direct sound component) is mixed with the performance sound (S53), and the mixed sound is supplied to the indirect sound component generation unit 143 via the preprocessing unit 142.
  • The preprocessing unit 142 performs preprocessing on the mixed sound (S54). Step S54 is the same as step S11 of the first embodiment, and the preprocessing unit 142 is the same as the preprocessing unit 102 of the first embodiment, so description thereof is omitted here.
  • the indirect sound component generation unit 143 generates a pseudo indirect sound component corresponding to the mixed sound (the direct sound component of the content sound and the direct sound component of the performance sound) (S55).
  • Step S55 is basically the same as step S34 of the third embodiment, and the indirect sound component generation unit 143 is basically the same as the indirect sound component generation unit 123 of the third embodiment, so description thereof is omitted here.
  • the indirect sound component generated by the indirect sound component generation unit 143 is supplied to the output control unit 145 through the post processing unit 144.
  • the post processing unit 144 executes post processing (S56).
  • Step S56 is the same as step S13 of the first embodiment, and the post processing unit 144 is the same as the post processing unit 104 of the first embodiment, so description thereof is omitted here.
  • the content sound from which the indirect sound component is removed (that is, the direct sound component of the content sound) is also supplied to the output control unit 145.
  • the performance sound (direct sound component) is supplied to the output control unit 145 via the path 149.
  • The path 149 is the same as the path 119 of the second embodiment.
  • the output control unit 145 mixes the performance sound (direct sound component) supplied via the path 149, the content sound (direct sound component), and the indirect sound component generated by the indirect sound component generation unit 143, The mixed sound is output to the output unit 14 (S57). The mixed sound output to the output unit 14 is emitted by the speaker 6.
  • the user can enjoy the feeling of playing in a hall or the like as a member of a music content player.
  • Further, with the sound processing device 1 according to the fifth embodiment, it is possible to match the amounts of the indirect sound components of the user's performance sound and the content sound; as a result, the user can feel a sense of unity between the performance sound and the content sound.
  • Note that the instrument sound of an acoustic instrument or a singing sound may be input from the microphone 3. In this case, the performance sound may include an indirect sound component, so the performance sound adjustment unit 141 may remove the indirect sound component included in the performance sound.
  • the sixth embodiment is a modification of the fifth embodiment.
  • the indirect sound component of the content sound is removed only when a performance sound is input.
  • FIG. 16 is a functional block diagram showing the functions realized by the sound processing apparatus 1 according to the sixth embodiment, and FIG. 17 is a flowchart showing the processing executed by the sound processing apparatus 1 according to the sixth embodiment.
  • The sound processing apparatus 1 according to the sixth embodiment includes a performance sound adjustment unit 141, a preprocessing unit 142, an indirect sound component generation unit 143, a post processing unit 144, an output control unit 145, a content decoding unit 146, a content sound adjustment unit 147, and an input detection unit 148.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • The sound processing apparatus 1 according to the sixth embodiment differs from the fifth embodiment in that it includes the input detection unit 148 and executes steps S51A and S51B instead of step S51.
  • the input detection unit 148 detects that a performance sound is input from the electronic musical instrument 4, the electric musical instrument 5, or the microphone 3.
  • the indirect sound component removal unit 147A removes the indirect sound component included in the content sound according to the detection result of the input detection unit 148. Specifically, when it is detected that a performance sound is input (S51A: Yes), the indirect sound component removing unit 147A removes the indirect sound component from the content sound (S51B). On the other hand, when it is not detected that the performance sound is input (S51A: No), the indirect sound component removing unit 147A does not remove the indirect sound component from the content sound. Although omitted in FIG. 17, in this case, steps S52 and S53 are not executed, only the content sound is supplied to the preprocessing unit 142, and the content sound is output to the output unit 14 in step S57.
  • With the sound processing apparatus 1 according to the sixth embodiment described above, the indirect sound component is removed from the content sound only when a performance sound is input.
  • When no performance sound is input, there is no need to match the amount of the indirect sound component between the performance sound and the content sound, so there is no need to remove the indirect sound component from the content sound. In this case, the process of removing the indirect sound component from the content sound is not executed, which makes it possible to reduce the processing load of the sound processing apparatus 1.
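A sketch of the gating in S51A/S51B, approximating the input detection unit 148 with a simple RMS threshold; the threshold value and the assumption of pre-separated direct/indirect content components are illustrative.

```python
import numpy as np

RMS_THRESHOLD = 1e-3  # level above which a performance input is assumed present

def performance_detected(perf_block: np.ndarray,
                         threshold: float = RMS_THRESHOLD) -> bool:
    """Approximate the input detection unit 148 (S51A)."""
    return float(np.sqrt(np.mean(perf_block ** 2))) > threshold

def adjust_content(direct: np.ndarray, indirect: np.ndarray,
                   perf_block: np.ndarray) -> np.ndarray:
    # Remove the indirect component only while a performance is detected (S51B);
    # otherwise skip the removal and pass the content through, saving processing.
    if performance_detected(perf_block):
        return direct
    return direct + indirect
```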
  • the hardware configuration of the sound processing apparatus 1 according to the seventh embodiment is the same as that of the first embodiment.
  • the user's performance environment is the same as in the first embodiment or the second embodiment.
  • In the fifth embodiment, the indirect sound component included in the content sound is removed, whereas in the sound processing apparatus 1 according to the seventh embodiment, an indirect sound component is added to the performance sound in accordance with the amount of the indirect sound component of the content sound.
  • FIG. 18 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the seventh embodiment.
  • The sound processing apparatus 1 according to the seventh embodiment includes a performance sound adjustment unit 151, a preprocessing unit 152, an indirect sound component generation unit 153, a post processing unit 154, an output control unit 155, a content decoding unit 156, and an indirect sound component amount analysis unit 157.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • FIG. 19 is a flowchart showing processing executed by the sound processing apparatus 1 according to the seventh embodiment.
  • the function of each functional block will be described with reference to FIG.
  • the content decoding unit 156 converts the multi-channel content sound input from the content playback device 2 into a PCM signal by format decoding (S60).
  • Step S60 is basically the same as step S30 of the third embodiment, and the content decoding unit 156 is basically the same as the content decoding unit 126 of the third embodiment, and thus description thereof is omitted here.
  • the indirect sound component amount analysis unit 157 analyzes the amount of the indirect sound component included in the content sound (S61). For example, the indirect sound component amount analysis unit 157 analyzes the number and magnitude (sound pressure level) of indirect sound components included in the content sound. Various known methods can be adopted as a method of analyzing the amount of indirect sound component included in the content sound.
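As a crude stand-in for the analysis in S61, the sketch below reports the indirect-to-direct energy ratio, assuming the two components of the content sound have already been separated; a real analyzer would estimate this from the content sound itself using one of the known methods.

```python
import numpy as np

def indirect_amount(direct: np.ndarray, indirect: np.ndarray) -> float:
    """Indirect-to-direct energy ratio of the content sound, an illustrative
    proxy for the number and magnitude of indirect sound components."""
    e_direct = float(np.sum(direct ** 2)) + 1e-12
    e_indirect = float(np.sum(indirect ** 2))
    return e_indirect / e_direct
```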
  • the performance sound adjustment unit 151 adjusts the performance sound (S62).
  • Step S62 is basically the same as steps S10 and S20 of the first and second embodiments, and the performance sound adjustment unit 151 is basically the same as the performance sound adjustment units 101 and 111 of the first and second embodiments.
  • the performance sound adjustment unit 151 of the seventh embodiment also plays a role of adjusting the performance sound to match the characteristics of the performance sound and the content sound. That is, the performance sound adjustment unit 151 includes an indirect sound component addition unit 151A.
  • the indirect sound component addition unit 151A adds an indirect sound component corresponding to the performance sound to the performance sound.
  • the indirect sound component adding unit 151A sets the amount of the indirect sound component added to the performance sound based on the analysis result of the indirect sound component amount analyzing unit 157. That is, the indirect sound component adding unit 151A sets the number and size of indirect sound components added to the performance sound in accordance with the number and size of indirect sound components included in the content sound. That is, the indirect sound component adding unit 151A sets the number and magnitude of the indirect sound components added to the performance sound to the same extent as the number and magnitude of the indirect sound components included in the content sound.
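Continuing the sketch above, the indirect sound component adding unit 151A could scale a generated indirect component so that its energy relative to the performance sound matches the analyzed content ratio; the scaling rule is an illustrative assumption.

```python
import numpy as np

def add_matching_indirect(performance: np.ndarray, ir_tail: np.ndarray,
                          target_ratio: float) -> np.ndarray:
    """Add an indirect component whose amount (energy ratio) matches the
    result of the indirect sound component amount analysis unit 157."""
    indirect = np.convolve(performance, ir_tail)[:len(performance)]
    e_direct = float(np.sum(performance ** 2)) + 1e-12
    e_indirect = float(np.sum(indirect ** 2)) + 1e-12
    gain = np.sqrt(target_ratio * e_direct / e_indirect)
    return performance + gain * indirect
```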
  • The content sound and the performance sound to which the indirect sound component has been added by the indirect sound component adding unit 151A are mixed (S63), and the mixed sound is supplied to the preprocessing unit 152.
  • processing by the preprocessing unit 152, the indirect sound component generation unit 153, and the post processing unit 154 is executed (S64, S65, S66).
  • Steps S64 to S66 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 152, the indirect sound component generation unit 153, and the postprocessing unit 154 are the same as the preprocessing unit 122 of the third embodiment. Since it is basically the same as the indirect sound component generation unit 123 and the post processing unit 124, description thereof is omitted here.
  • The path 159 is the same as the path 119 of the second embodiment.
  • The output control unit 155 mixes the content sound, the performance sound to which the indirect sound component has been added by the indirect sound component adding unit 151A, and the indirect sound component generated by the indirect sound component generation unit 153, and outputs the mixed sound to the output unit 14 (S67).
  • the mixed sound output to the output unit 14 is emitted by the speaker 6.
  • With the sound processing device 1 according to the seventh embodiment described above, the user can enjoy the feeling of performing in a hall or the like as one of the performers of the music content.
  • Further, with the sound processing device 1 according to the seventh embodiment, it is possible to match the amounts of the indirect sound components of the user's performance sound and the content sound; as a result, the user can feel a sense of unity between the performance sound and the content sound.
  • Note that the instrument sound of an acoustic instrument or a singing sound may be input from the microphone 3. Since the performance sound input from the microphone 3 may already include an indirect sound component, the performance sound adjustment unit 151 may first remove the indirect sound component included in the performance sound, after which the indirect sound component adding unit 151A adds an indirect sound component to the performance sound.
  • the eighth embodiment is a modification of the seventh embodiment.
  • the method of adding the indirect sound to the performance sound is changed according to the type of the performance sound.
  • FIG. 20 is a functional block diagram showing the functions realized by the sound processing device 1 according to the eighth embodiment, and FIG. 21 is a flowchart showing the processing executed by the sound processing device 1 according to the eighth embodiment.
  • The sound processing apparatus 1 according to the eighth embodiment includes a performance sound adjustment unit 151, a preprocessing unit 152, an indirect sound component generation unit 153, a post processing unit 154, an output control unit 155, a content decoding unit 156, an indirect sound component amount analysis unit 157, and a performance sound type identification unit 158.
  • These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
  • The sound processing apparatus 1 according to the eighth embodiment differs from the seventh embodiment in that it includes the performance sound type identification unit 158 and executes steps S62A and S62B instead of step S62.
The performance sound type identification unit 158 identifies the type of the input performance sound (step S62A). For example, the performance sound type identification unit 158 determines whether or not the input performance sound is an instrument sound, and when it is, identifies the type of the instrument sound; in other words, it identifies which of multiple types of musical instruments (for example, guitar, violin, piano, etc.) produced the input performance sound. Further, for example, the performance sound type identification unit 158 determines whether or not the input performance sound is a singing sound. Various known methods can be adopted for identifying the type of the performance sound; one toy possibility is sketched below.
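The patent leaves the identification method open ("various known methods"). Purely as an illustration, the sketch below classifies the input by nearest match on two crude spectral features; the template values are invented for the example, not taken from the patent.

```python
import numpy as np

# Hypothetical feature templates (spectral centroid in Hz, zero-crossing
# rate); a real system would learn these from labeled recordings.
TEMPLATES = {
    "guitar":  (1800.0, 0.05),
    "violin":  (3200.0, 0.12),
    "singing": (1200.0, 0.08),
}

def classify_performance_sound(x, sr):
    # Nearest-template classification on two crude features.
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0
    def dist(t):
        # crude normalization so both features contribute comparably
        return abs(centroid - t[0]) / 1000.0 + abs(zcr - t[1]) * 10.0
    return min(TEMPLATES, key=lambda k: dist(TEMPLATES[k]))
```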
The indirect sound component adding unit 151A of the eighth embodiment adds the indirect sound component corresponding to the performance sound to the performance sound based not only on the analysis result of the indirect sound component amount analysis unit 157 but also on the identification result of the performance sound type identification unit 158 (S62B). That is, the indirect sound component adding unit 151A sets the indirect sound component to be added to the performance sound based on both the analysis result of the indirect sound component amount analysis unit 157 and the identification result of the performance sound type identification unit 158. For example, the indirect sound component adding unit 151A sets the indirect sound component to be added to the performance sound based on the radiation characteristics, which differ for each type of performance sound.
For example, when the performance sound is a guitar instrument sound, the indirect sound component adding unit 151A adds the indirect sound component (reverberation component, etc.) to the channel corresponding to the front direction, or makes the amount of the indirect sound component added to the channel corresponding to the front direction larger than the amounts added to the other channels. Also, for example, when the performance sound is a violin instrument sound, the indirect sound component adding unit 151A adds the indirect sound component (reverberation component, etc.) to the channel corresponding to the upward direction, or makes the amount of the indirect sound component added to the channel corresponding to the upward direction larger than the amounts added to the other channels. Further, for example, when the performance sound is a singing sound, the indirect sound component adding unit 151A adds the indirect sound component (reverberation component, etc.) to the channel corresponding to the front direction, or makes the amount of the indirect sound component added to the channel corresponding to the front direction larger than the amounts added to the other channels. A per-channel weighting of this sort is sketched below.
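As one way to realize such radiation-dependent addition, the sketch below distributes a mono indirect component over output channels using per-instrument channel weights. The channel names and weight values are assumptions chosen to mimic the patterns described above (guitar and singing weighted to the front, violin to a height channel standing in for the upward direction).

```python
import numpy as np

CHANNELS = ["front", "front_left", "front_right",
            "rear_left", "rear_right", "height"]

# Assumed weights mimicking the radiation patterns described above.
RADIATION_WEIGHTS = {
    "guitar":  {"front": 1.0, "front_left": 0.6, "front_right": 0.6,
                "rear_left": 0.3, "rear_right": 0.3, "height": 0.2},
    "violin":  {"front": 0.5, "front_left": 0.5, "front_right": 0.5,
                "rear_left": 0.3, "rear_right": 0.3, "height": 1.0},
    "singing": {"front": 1.0, "front_left": 0.7, "front_right": 0.7,
                "rear_left": 0.3, "rear_right": 0.3, "height": 0.2},
}

def route_indirect_component(indirect, instrument):
    # Distribute a mono indirect component (numpy array) over the output
    # channels, weighted by the instrument's assumed radiation pattern.
    weights = RADIATION_WEIGHTS[instrument]
    return {ch: weights[ch] * indirect for ch in CHANNELS}
```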
According to the sound processing device 1 of the eighth embodiment described above, the indirect sound component can be added to the performance sound based on the radiation characteristics of the user's performance sound, so a more natural indirect sound component can be added to the performance sound.
[Ninth Embodiment] Next, the ninth embodiment will be described. The hardware configuration of the sound processing apparatus 1 according to the ninth embodiment is the same as that of the first embodiment, and the user's performance environment is the same as in the first embodiment or the second embodiment. In the seventh embodiment, the amount of the indirect sound component is matched between the performance sound and the content sound, whereas in the sound processing device 1 according to the ninth embodiment, the timbre is matched between the performance sound and the content sound.
FIG. 22 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the ninth embodiment. As shown in FIG. 22, the sound processing apparatus 1 according to the ninth embodiment includes a performance sound adjustment unit 161, a preprocessing unit 162, an indirect sound component generation unit 163, a post processing unit 164, an output control unit 165, a content decoding unit 166, a first timbre analysis unit 167, and a second timbre analysis unit 168. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, the above functional blocks are realized when the CPU 11 controls the acoustic signal processing unit 15 according to a program.
FIG. 23 is a flowchart showing processing executed by the sound processing apparatus 1 according to the ninth embodiment. Hereinafter, the function of each functional block will be described with reference to FIG. 23. First, the content decoding unit 166 converts the multi-channel content sound input from the content playback device 2 into a PCM signal by format decoding (S70). Step S70 is basically the same as step S30 of the third embodiment, and the content decoding unit 166 is basically the same as the content decoding unit 126 of the third embodiment, so description is omitted here.
The second timbre analysis unit 168 analyzes the timbre of the content sound (S71). For example, when the content sound contains a plurality of types of instrument sounds, the second timbre analysis unit 168 identifies, among them, the type of instrument sound that is also included in the performance sound, and analyzes the timbre of that instrument sound. Also, for example, when the performance sound includes a singing sound, the second timbre analysis unit 168 analyzes the timbre of the singing sound included in the content sound. Various known methods can be adopted for identifying an instrument sound or singing sound included in the content sound and for analyzing the timbre of an instrument sound or singing sound. The first timbre analysis unit 167 analyzes the timbre of the performance sound (S72). For example, the first timbre analysis unit 167 identifies the instrument sound or singing sound included in the performance sound and analyzes its timbre; various known methods can likewise be employed for analyzing the timbre of the performance sound. One crude timbre descriptor is sketched below.
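As a concrete, if crude, stand-in for what the two timbre analysis units (167, 168) might compute, the sketch below summarizes a signal as its energy in a few fixed frequency bands; the band edges are arbitrary choices for the example.

```python
import numpy as np

# Arbitrary band edges for the example (Hz).
BANDS_HZ = [(0, 250), (250, 1000), (1000, 4000), (4000, 20000)]

def band_energies(x, sr):
    # Crude timbre descriptor: spectral energy summed over fixed bands.
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS_HZ])
```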
In FIG. 23, for convenience, steps S70 and S71 and step S72 are shown as being executed in order, but steps S70 and S71 and step S72 are executed in parallel.
The performance sound adjustment unit 161 adjusts the performance sound (S73). Step S73 is basically the same as step S10 of the first embodiment or step S20 of the second embodiment, and the performance sound adjustment unit 161 is basically the same as the performance sound adjustment unit 101 or 111 of the first or second embodiment. However, the performance sound adjustment unit 161 of the ninth embodiment also plays the role of adjusting the performance sound so as to match the characteristics of the performance sound and the content sound. That is, the performance sound adjustment unit 161 includes a timbre adjustment unit 161A, which adjusts the timbre of the performance sound based on a comparison between the analysis result of the first timbre analysis unit 167 and the analysis result of the second timbre analysis unit 168.
For example, when the first timbre analysis unit 167 obtains an analysis result indicating that the violin sound (violin instrument sound) included in the performance sound contains many high-frequency components, and the second timbre analysis unit 168 obtains an analysis result indicating that the violin sound included in the content sound contains few high-frequency components, the timbre adjustment unit 161A reduces the high-frequency components of the violin sound included in the performance sound. That is, the timbre adjustment unit 161A adjusts a specific band component of the instrument sound included in the performance sound so that it is set to the same level as that of the same type of instrument sound included in the content sound; a band-matching sketch follows.
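A minimal band-matching sketch follows. It assumes the violin sound has already been isolated into `perf` and `content` (the patent performs the per-instrument identification separately), and applies a single FFT-domain gain to the high band; a real timbre adjustment unit would more likely use a shelving or peaking EQ.

```python
import numpy as np

def match_band_level(perf, content, sr, band=(4000.0, 20000.0)):
    # Rescale one frequency band of the performance sound so that its mean
    # energy in that band matches the content sound's. Assumes both signals
    # are already isolated to the same instrument (here: the violin).
    lo, hi = band
    P = np.fft.rfft(perf)
    fp = np.fft.rfftfreq(len(perf), 1.0 / sr)
    C = np.fft.rfft(content)
    fc = np.fft.rfftfreq(len(content), 1.0 / sr)
    mp = (fp >= lo) & (fp < hi)
    mc = (fc >= lo) & (fc < hi)
    e_perf = np.mean(np.abs(P[mp]) ** 2) + 1e-12
    e_cont = np.mean(np.abs(C[mc]) ** 2)
    P[mp] *= np.sqrt(e_cont / e_perf)   # reduce (or boost) the band
    return np.fft.irfft(P, n=len(perf))
```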
The content sound and the performance sound whose timbre has been adjusted by the timbre adjustment unit 161A are mixed (S74), and the mixed sound is supplied to the preprocessing unit 162. Then, the processing by the preprocessing unit 162, the indirect sound component generation unit 163, and the post processing unit 164 is executed (S75, S76, S77). Steps S75 to S77 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 162, the indirect sound component generation unit 163, and the post processing unit 164 are basically the same as the preprocessing unit 122, the indirect sound component generation unit 123, and the post processing unit 124 of the third embodiment, so their description is omitted here. Note that the performance sound whose timbre has been adjusted by the timbre adjustment unit 161A is also supplied to the output control unit 165 via the path 169, which is similar to the path 119 of the second embodiment.
The output control unit 165 mixes the performance sound supplied through the path 169 (the performance sound whose timbre has been adjusted by the timbre adjustment unit 161A) with the content sound and the indirect sound component supplied from the post processing unit 164, and outputs the mixed sound to the output unit 14 (S78). The mixed sound output to the output unit 14 is emitted by the speaker 6. As a result, the user can enjoy the feeling of playing in a hall or the like as a member of the music content's performers.
According to the sound processing device 1 of the ninth embodiment described above, the timbres of the user's performance sound and the content sound can be matched; as a result, the user can fully feel a sense of unity between the performance sound and the content sound.
Instead of the timbre adjustment unit 161A that adjusts the timbre of the performance sound, a timbre adjustment unit that adjusts the timbre of the content sound based on the comparison between the analysis result of the first timbre analysis unit 167 and the analysis result of the second timbre analysis unit 168 may be provided. For example, when the first timbre analysis unit 167 obtains an analysis result indicating that the violin sound included in the performance sound contains many high-frequency components, and the second timbre analysis unit 168 obtains an analysis result indicating that the violin sound included in the content sound contains few high-frequency components, this timbre adjustment unit may increase the high-frequency components of the violin sound included in the content sound. Also, for example, when the first timbre analysis unit 167 obtains an analysis result indicating that the violin sound (violin instrument sound) in the performance sound contains many high-frequency components, and the second timbre analysis unit 168 obtains an analysis result indicating that the guitar sound, a different instrument sound, in the content sound contains many high-frequency components, the timbre adjustment unit may reduce the high-frequency components of the guitar sound included in the content sound. Alternatively, both the timbre adjustment unit 161A that adjusts the timbre of the performance sound and a timbre adjustment unit that adjusts the timbre of the content sound may be provided. In this case, when the first timbre analysis unit 167 obtains an analysis result indicating that the violin sound included in the performance sound contains many high-frequency components, and the second timbre analysis unit 168 obtains an analysis result indicating that the violin sound included in the content sound contains few high-frequency components, the high-frequency components of the violin sound included in the content sound and the high-frequency components of the violin sound included in the performance sound may each be adjusted so that they are set to the same level.
Note that a plurality of the first to ninth embodiments may be combined. Also, the above description assumed that the sound processing apparatus 1 is an AV receiver, but a device other than an AV receiver may be used; that is, the functions described above may be realized by a device other than an AV receiver. For example, the sound processing device 1 may be built into a speaker, or may be realized by a desktop computer, a laptop computer, a tablet computer, a smartphone, or the like.
As described above, the sound processing apparatus according to the present invention includes input means for receiving an input of a user's performance sound, generation means for generating an indirect sound component corresponding to the performance sound, and output control means for outputting the indirect sound component to output means while limiting the output of the performance sound to the output means. The sound processing method according to the present invention includes a generation step of generating an indirect sound component corresponding to a user's performance sound, and an output control step of outputting the indirect sound component to the output means while limiting the output of the performance sound to the output means. The program according to the present invention causes a computer to function as generation means for generating an indirect sound component corresponding to a user's performance sound, and as output control means for outputting the indirect sound component to the output means while limiting the output of the performance sound to the output means. An information storage medium according to the present invention is a computer-readable information storage medium storing the above program.
In one aspect of the present invention, the generation means may supply the performance sound to adding means that adds a corresponding indirect sound component to a supplied sound, and may generate the indirect sound component by removing the original performance sound from the sound output from the adding means. In one aspect of the present invention, the performance sound may be supplied to the first processing means.
In one aspect of the present invention, the input means receives an input of the performance sound through a microphone, the sound processing device performs a howling reduction process for reducing howling in the microphone on the input performance sound, and the generation means may generate an indirect sound component corresponding to the performance sound subjected to the howling reduction process. One common notch-based approach is sketched below.
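The patent does not specify a howling reduction method. One common approach, shown below purely as an assumption, is to detect a spectral peak that stands far above the average level (typical of acoustic feedback) and remove it with a narrow notch filter before indirect sound generation. This sketch relies on SciPy's notch designer.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def suppress_howling(x, sr, threshold_db=15.0):
    # Find the strongest spectral peak; if it towers over the average
    # spectrum (typical of feedback), notch it out.
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    k = int(np.argmax(spec))
    peak_db = 20.0 * np.log10(spec[k] / (np.mean(spec) + 1e-12) + 1e-12)
    if peak_db < threshold_db or freqs[k] <= 0.0 or freqs[k] >= sr / 2.0:
        return x                        # no dominant peak: leave untouched
    b, a = iirnotch(freqs[k], Q=30.0, fs=sr)
    return lfilter(b, a, x)
```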

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Abstract

According to the present invention, an indirect sound component generation unit (103) generates an indirect sound component which corresponds to a performance sound of a user. An output control unit (105) outputs the indirect sound component to an output unit while restricting the performance sound from being output to the output unit.

Description

Sound processing apparatus, sound processing method, and program
The present invention relates to a sound processing device, a sound processing method, and a program.

Some devices for enjoying content such as music and movies have a function of reproducing the sound field of a hall by adding pseudo indirect sound components (such as reverberation components) to the sound signals of the content and emitting the sound from a speaker (Patent Document 1).

Japanese Patent Laying-Open No. 2015-50493

For example, if a pseudo indirect sound component can be added to the instrument sound of an acoustic instrument being played by the user or to the user's singing sound and the result can be emitted from a speaker, the user can enjoy the feeling of playing the acoustic instrument or singing in a hall or the like. However, in this case, not only the instrument sound emitted from the acoustic instrument (or the singing sound emitted by the user himself) but also the instrument sound (or singing sound) emitted from the speaker is heard by the user. Since an instrument sound (or singing sound) is then heard from a position different from the original sounding position, the sound becomes unnatural for the user, which may give the user a sense of discomfort.

The present invention has been made in view of the above problems, and its purpose is to provide a sound processing device, a sound processing method, and a program that allow the user to enjoy the feeling of playing an acoustic instrument or singing a song in a hall or the like while avoiding giving the user such a sense of discomfort.

In order to solve the above problems, a sound processing apparatus according to the present invention includes input means that receives an input of a user's performance sound, generation means that generates an indirect sound component corresponding to the performance sound, and output control means that outputs the indirect sound component to output means while restricting output of the performance sound to the output means.
The sound processing method according to the present invention includes a generation step of generating an indirect sound component corresponding to a performance sound of a user, and an output control step of outputting the indirect sound component to output means while limiting the output of the performance sound to the output means.

The program according to the present invention causes a computer to function as generation means for generating an indirect sound component corresponding to a user's performance sound, and as output control means for outputting the indirect sound component to output means while limiting the output of the performance sound to the output means. An information storage medium according to the present invention is a computer-readable information storage medium recording the above program.

In the present invention, "performance" indicates an act of producing a sound, and "performance" includes not only an act of playing a musical instrument but also an act of singing a song. That is, the "performance sound" includes not only the performance sound of a musical instrument but also a singing sound.

According to the present invention, it is possible to allow the user to enjoy the feeling of playing an acoustic instrument or singing a song in a hall or the like while avoiding giving the user a sense of discomfort.
FIG. 1 is a diagram showing the configuration of a system including a sound processing apparatus according to an embodiment of the present invention. FIG. 2 is a diagram showing an example of a user's performance environment. FIG. 3 is a functional block diagram of the sound processing apparatus according to the first embodiment. FIG. 4 is a flowchart showing processing executed by the sound processing apparatus according to the first embodiment. FIG. 5 is a diagram for explaining an example of a method for generating an indirect sound component. FIG. 6 is a functional block diagram of the sound processing apparatus according to the second embodiment. FIG. 7 is a flowchart showing processing executed by the sound processing apparatus according to the second embodiment. FIG. 8 is a diagram for explaining the sound emitted from a speaker. FIG. 9 is a functional block diagram of the sound processing apparatus according to the third embodiment. FIG. 10 is a flowchart showing processing executed by the sound processing apparatus according to the third embodiment. FIG. 11 is a diagram for explaining the sound emitted from a speaker. FIG. 12 is a functional block diagram of the sound processing apparatus according to the fourth embodiment. FIG. 13 is a flowchart showing processing executed by the sound processing apparatus according to the fourth embodiment. FIG. 14 is a functional block diagram of the sound processing apparatus according to the fifth embodiment. FIG. 15 is a flowchart showing processing executed by the sound processing apparatus according to the fifth embodiment. FIG. 16 is a functional block diagram of the sound processing apparatus according to the sixth embodiment. FIG. 17 is a flowchart showing processing executed by the sound processing apparatus according to the sixth embodiment. FIG. 18 is a functional block diagram of the sound processing apparatus according to the seventh embodiment. FIG. 19 is a flowchart showing processing executed by the sound processing apparatus according to the seventh embodiment. FIG. 20 is a functional block diagram of the sound processing apparatus according to the eighth embodiment. FIG. 21 is a flowchart showing processing executed by the sound processing apparatus according to the eighth embodiment. FIG. 22 is a functional block diagram of the sound processing apparatus according to the ninth embodiment. FIG. 23 is a flowchart showing processing executed by the sound processing apparatus according to the ninth embodiment.
Hereinafter, an example of an embodiment of the present invention will be described with reference to the drawings.

[First Embodiment] First, the first embodiment will be described. FIG. 1 shows the configuration of a system including a sound processing apparatus according to the first embodiment of the present invention. As shown in FIG. 1, this system includes a sound processing device 1, a content reproduction device 2, a microphone 3, an electronic musical instrument 4, an electric musical instrument 5, a speaker 6 (an example of sound emission means), and a display device 7. The content reproduction device 2 may, for example, play back content (such as music or video) stored in an optical storage medium, or play back content distributed via a network.

The sound processing device 1 is, for example, an AV receiver or the like. The sound processing device 1 includes a CPU 11, a memory 12, an input unit 13, an output unit 14, an acoustic signal processing unit 15, and a video signal processing unit 16.

Based on a program stored in the memory 12, the CPU 11 controls the input unit 13, the output unit 14, the acoustic signal processing unit 15, and the video signal processing unit 16, and executes information processing. Although omitted in FIG. 1, the sound processing device 1 is provided with a network interface for performing data communication via a network, and the program is downloaded via the network and stored in the memory 12. Alternatively, the sound processing device 1 includes a component for reading a program from an information storage medium such as a memory card, and the program is read from the information storage medium and stored in the memory 12.

The input unit 13 can accept input of an acoustic signal and a video signal based on content data from the content reproduction device 2; it supplies the acoustic signal to the acoustic signal processing unit 15 and the video signal to the video signal processing unit 16.

The input unit 13 can also accept input of a user's performance sound. Note that "performance" indicates an act of producing a sound, and "performance" includes not only an act of playing an instrument but also an act of singing a song. For this reason, the "performance sound" includes not only the performance sound of a musical instrument but also a singing sound. In the following, the performance sound of an instrument is referred to as an "instrument sound" for convenience.

For example, the input unit 13 is connected to the microphone 3 and can receive an input of the acoustic signal output from the microphone 3, which it supplies to the acoustic signal processing unit 15. The microphone 3 collects sound and outputs the collected sound as an acoustic signal. The microphone 3 is used to input the instrument sound of an acoustic instrument played by the user or the user's singing sound to the sound processing device 1.

Also, for example, the input unit 13 is connected to the electronic musical instrument 4 or the electric musical instrument 5 played by the user and can receive an input of the acoustic signal output from the electronic musical instrument 4 or the electric musical instrument 5, which it likewise supplies to the acoustic signal processing unit 15.

Note that the input unit 13 may include a wireless network interface, and the acoustic signal may be input to the input unit 13 via wireless communication. In other words, content sounds and performance sounds may be input to the sound processing device 1 via wireless communication.

The acoustic signal processing unit 15 is, for example, a DSP (Digital Signal Processor), and executes processing related to the acoustic signal in accordance with control from the CPU 11. The acoustic signal output from the acoustic signal processing unit 15 is emitted from the speaker 6 via the output unit 14.

The video signal processing unit 16 is, for example, a DSP (Digital Signal Processor), and executes processing related to the video signal in accordance with control from the CPU 11. The video signal output from the video signal processing unit 16 is displayed on the display device 7 via the output unit 14.

With the sound processing apparatus 1 according to the first embodiment, a user who plays an acoustic instrument or sings a song at home or the like can enjoy the feeling of performing in a hall or the like. Hereinafter, a configuration for realizing such a function will be described. As shown in FIG. 1, the sound processing device 1 also has a function of receiving input of the instrument sounds of the electronic musical instrument 4 or the electric musical instrument 5 and a function of outputting content reproduced by the content reproduction device 2 through the speaker 6 and the display device 7, but these functions are not essential in the first embodiment.
FIG. 2 shows an example of the user's performance environment. In the example shown in FIG. 2, the microphone 3 is installed in front of the user U. The microphone 3 is used to collect the performance sound of the user. For example, when the user is playing an acoustic instrument, the instrument sound is collected by the microphone 3 and input to the input unit 13. Also, for example, when the user sings a song, the singing sound is collected by the microphone 3 and input to the input unit 13.

In the example shown in FIG. 2, a plurality of speakers 6A, 6B, 6C, 6D, and 6E are installed. Specifically, the speaker 6A is installed in front of the user U, the speakers 6B and 6C are installed at the left front and right front as viewed from the user U, and the speakers 6D and 6E are installed at the left rear and right rear as viewed from the user U. In the example shown in FIG. 2, five speakers 6A to 6E are installed, but four or fewer speakers 6 may be installed, or six or more speakers 6 may be installed. For example, only the speakers 6B and 6C may be installed.

FIG. 3 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the first embodiment. As shown in FIG. 3, the sound processing apparatus 1 according to the first embodiment includes a performance sound adjustment unit 101, a preprocessing unit 102 (an example of first processing means), an indirect sound component generation unit 103, a post processing unit 104 (an example of second processing means), and an output control unit 105. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, the above functional blocks are realized when the CPU 11 controls the acoustic signal processing unit 15 according to a program.

FIG. 4 is a flowchart showing processing executed by the sound processing apparatus 1 according to the first embodiment. Hereinafter, the function of each functional block will be described with reference to FIG. 4.

First, the performance sound adjustment unit 101 adjusts the performance sound by performing predetermined processing on the performance sound input from the microphone 3 (S10). For example, the performance sound adjustment unit 101 performs a howling reduction process for reducing howling in the microphone 3 on the performance sound. Also, for example, the performance sound adjustment unit 101 may perform effect processing (for example, processing that deletes unnecessary frequency bands or adjusts the sound pressure level before the indirect sound is generated) on the performance sound. The performance sound processed by the performance sound adjustment unit 101 is supplied to the preprocessing unit 102.

The preprocessing unit 102 performs preprocessing on the supplied sound (here, the performance sound) (S11). For example, the preprocessing unit 102 performs sound adjustment processing using an equalizer on the supplied sound. The performance sound processed by the preprocessing unit 102 is supplied to the indirect sound component generation unit 103. In FIG. 3, the performance sound adjustment unit 101 and the preprocessing unit 102 are shown as separate functional blocks, but they may be configured integrally.

The indirect sound component generation unit 103 generates a pseudo indirect sound component corresponding to the performance sound (S12). That is, the indirect sound component generation unit 103 assumes a case where the performance sound is emitted in an acoustic space such as a hall, and generates the indirect sound component (reverberation component, etc.) that would be generated in the acoustic space in that case. Various known methods can be employed for generating the pseudo indirect sound component. For example, the indirect sound component generation unit 103 generates the pseudo indirect sound component corresponding to the performance sound based on information such as the position where the indirect sound (reverberation) occurs in the assumed acoustic space, the delay time of the indirect sound relative to the direct sound, and the ratio of the level of the indirect sound to the sound pressure level of the direct sound.

For example, the indirect sound component generation unit 103 includes an indirect sound component adding unit that adds a corresponding indirect sound component to a supplied sound, and the indirect sound component generation unit 103 supplies the performance sound to the indirect sound component adding unit. The indirect sound component generation unit 103 then acquires only the indirect sound component by removing the original performance sound from the sound output from the adding unit (the performance sound to which the indirect sound component has been added).

FIG. 5 is a diagram for explaining an example of a method for generating the indirect sound component. FIG. 5(A) shows an example of a performance sound; this performance sound corresponds to the direct sound component. For example, the performance sound (direct sound component) shown in FIG. 5(A) is stored in each of a first buffer and a second buffer. The indirect sound component adding unit adds the indirect sound component corresponding to the performance sound to the performance sound (direct sound component) stored in the first buffer; various known methods can be adopted for adding the indirect sound component. In this case, the first buffer stores the direct sound component and the indirect sound component of the performance sound, as shown in FIG. 5(B). Thereafter, the indirect sound component generation unit 103 subtracts the direct sound component of the performance sound stored in the second buffer (FIG. 5(A)) from the direct and indirect sound components of the performance sound stored in the first buffer (FIG. 5(B)), thereby acquiring only the indirect sound component as shown in FIG. 5(C). A minimal sketch of this two-buffer scheme follows.

Note that the method of generating the indirect sound component is not limited to the above example. For example, the performance sound (direct sound component) shown in FIG. 5(A) may be stored in the first buffer, and the indirect sound component corresponding to the performance sound (direct sound component) may be generated in the second buffer.
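Here is a minimal sketch of the two-buffer subtraction of FIG. 5, assuming `add_indirect` stands for any indirect sound component adding unit that returns the direct sound plus its indirect component at the same length and sample alignment (otherwise the subtraction would not cancel the direct part).

```python
import numpy as np

def extract_indirect_component(perf, add_indirect):
    # FIG. 5 scheme: first buffer holds direct + indirect (FIG. 5(B)),
    # second buffer holds the untouched direct sound (FIG. 5(A));
    # subtracting leaves only the indirect component (FIG. 5(C)).
    buf1 = add_indirect(np.copy(perf))
    buf2 = np.copy(perf)
    return buf1 - buf2

# usage with a toy adding unit that appends a simple 0.4-gain echo
echo = lambda s: s + 0.4 * np.concatenate([np.zeros(100), s[:-100]])
indirect_only = extract_indirect_component(np.random.randn(48000), echo)
```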
The indirect sound component generated by the indirect sound component generation unit 103 is supplied to the post processing unit 104. The post processing unit 104 performs post-processing on the supplied sound (here, the indirect sound component) (S13). For example, the post processing unit 104 performs processing that adjusts the supplied sound to the characteristics of the speaker 6. The indirect sound component processed by the post processing unit 104 is supplied to the output control unit 105.

The output control unit 105 outputs the supplied indirect sound component to the output unit 14 (an example of output means) (S14). That is, the output control unit 105 outputs the indirect sound component to the output unit 14 while restricting the performance sound (the instrument sound of the acoustic instrument or the singing sound) input from the microphone 3 from being output to the output unit 14. The indirect sound component output to the output unit 14 is emitted by the speaker 6.

Here, "restricting the performance sound from being output to the output unit 14" means, for example, not outputting the performance sound to the output unit 14 at all. That is, the output control unit 105 outputs only the indirect sound component to the output unit 14 without outputting the performance sound (direct sound component) input from the microphone 3. In other words, the output control unit 105 prevents the performance sound (direct sound component) input from the microphone 3 from being emitted from the speaker 6, so that only the indirect sound component is emitted from the speaker 6.

"Restricting the performance sound from being output to the output unit 14" may also mean, for example, outputting the performance sound so that it is emitted at a volume considerably lower than that of the indirect sound component. That is, the output control unit 105 may output the performance sound (direct sound component) input from the microphone 3 at a volume considerably lower than the normal volume (a volume low enough to be hard for the user to hear) while outputting the indirect sound component at the normal volume. In other words, the output control unit 105 causes the performance sound (direct sound component) input from the microphone 3 to be emitted from the speaker 6 at a considerably lower volume than the normal volume, while the indirect sound component is emitted from the speaker 6 at the normal volume.

Note that when the speaker 6 is built into the sound processing device 1, the output control unit 105 outputs the supplied indirect sound component to the speaker 6 (another example of output means).

According to the sound processing device 1 of the first embodiment described above, a pseudo indirect sound component (reverberation component, etc.) corresponding to the user's performance sound (the instrument sound of an acoustic instrument or a singing sound) is emitted from the speaker 6, so the user can enjoy the feeling of playing an acoustic instrument or singing a song in a hall, a church, or the like. Moreover, according to the sound processing device 1 of the first embodiment, the user's performance sound (the instrument sound of the acoustic instrument or the singing sound) is restricted from being emitted from the speaker 6, so the user is not given the sense of discomfort that would be caused by hearing a performance sound emitted from a position different from the original sounding position.
[Second Embodiment] Next, the second embodiment will be described. The hardware configuration of the sound processing apparatus 1 according to the second embodiment is the same as that of the first embodiment, and the user's performance environment is also basically the same. However, in the second embodiment, the electronic musical instrument 4 or the electric musical instrument 5 connected to the input unit 13 of the sound processing device 1 is played by the user, so the microphone 3 is unnecessary.

With the sound processing apparatus 1 according to the second embodiment, a user who plays the electronic musical instrument 4 or the electric musical instrument 5 at home or the like can enjoy the feeling of performing in a hall or the like. Hereinafter, a configuration for realizing such a function will be described. As shown in FIG. 1, the sound processing device 1 also has a function of outputting the content reproduced by the content reproduction device 2 through the speaker 6 and the display device 7, but this function is not essential in the second embodiment.

FIG. 6 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the second embodiment. As shown in FIG. 6, the sound processing apparatus 1 according to the second embodiment includes a performance sound adjustment unit 111, a preprocessing unit 112, an indirect sound component generation unit 113, a post processing unit 114, and an output control unit 115. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, the above functional blocks are realized when the CPU 11 controls the acoustic signal processing unit 15 according to a program.

FIG. 7 is a flowchart showing processing executed by the sound processing apparatus 1 according to the second embodiment. Hereinafter, the function of each functional block will be described with reference to FIG. 7.

First, the performance sound adjustment unit 111 adjusts the performance sound by performing predetermined processing on the performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 (S20). For example, the performance sound adjustment unit 111 performs effect processing (for example, distortion processing on a guitar sound) on the performance sound. Note that the performance sound adjustment unit 111 does not execute processing that would introduce a large delay; only low-delay processing is executed. The performance sound processed by the performance sound adjustment unit 111 is supplied to the preprocessing unit 112.

The preprocessing unit 112 performs preprocessing on the supplied sound (here, the performance sound) (S21). The indirect sound component generation unit 113 generates a pseudo indirect sound component corresponding to the performance sound (S22). The post processing unit 114 then performs post-processing on the supplied sound (here, the indirect sound component) (S23). Steps S21 to S23 are basically the same as steps S11 to S13 of the first embodiment, and the preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114 are basically the same as the preprocessing unit 102, the indirect sound component generation unit 103, and the post processing unit 104 of the first embodiment, so their description is omitted here.

Note that the performance sound processed by the performance sound adjustment unit 111 is also supplied to the output control unit 115 via the path 119. The path 119 reaches the output control unit 115 without passing through the preprocessing unit 112, the indirect sound component generation unit 113, or the post processing unit 114; in other words, it is a path with less delay than the path that reaches the output control unit 115 through those units. For example, the preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114 execute their processing on the performance sound stored in a buffer, whereas on the path 119 the performance sound is supplied to the output control unit 115 without being stored in a buffer.

The output control unit 115 mixes the performance sound (direct sound component) supplied via the path 119 with the indirect sound component generated by the indirect sound component generation unit 113, and outputs the mixed sound to the output unit 14 (S24). The mixed sound output to the output unit 14 is emitted by the speaker 6.

FIG. 8 is a diagram for explaining the sound emitted from the speaker 6. Here, as shown in FIG. 8(A), assume that a performance sound B is input after a performance sound A; these performance sounds A and B correspond to direct sound components. In this case, as shown in FIG. 8(B), the indirect sound component generation unit 113 generates an indirect sound component A corresponding to the performance sound A and supplies it to the output control unit 115. After the indirect sound component A is generated, the indirect sound component generation unit 113 also generates an indirect sound component B corresponding to the performance sound B, but this is omitted here.

The amount of processing in the preprocessing unit 112, the indirect sound component generation unit 113, and the post processing unit 114 is large and takes time, so the indirect sound component A is emitted from the speaker 6 with a delay corresponding to the time required by those functional blocks. In contrast, the performance sounds A and B (direct sound components) are emitted from the speaker 6 via the low-delay path 119 (a path with substantially no delay). For this reason, as shown in FIG. 8(C), the indirect sound component A corresponding to the performance sound A is delayed relative to its natural timing, mixed with the later performance sound B, and the mixed sound is emitted from the speaker 6. A minimal sketch of this mix follows.
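The timing behavior of FIG. 8 can be sketched as follows, with the processing time of the buffered chain abstracted into a single assumed `processing_delay_ms`; the direct sound passes through untouched while the indirect component is shifted late by that amount.

```python
import numpy as np

def mix_direct_and_delayed_indirect(direct, indirect, sr, processing_delay_ms):
    # direct: the low-delay path 119; indirect: the buffered chain's output,
    # assumed to be the same length and arriving late by the processing time.
    d = min(int(sr * processing_delay_ms / 1000.0), len(direct))
    out = np.copy(direct)
    out[d:] += indirect[:len(out) - d]   # indirect starts d samples late
    return out
```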
According to the sound processing apparatus 1 of the second embodiment described above, a pseudo indirect sound component (reverberation component, etc.) is added to the user's performance sound (the instrument sound of the electronic musical instrument 4 or the electric musical instrument 5) and emitted from the speaker 6, so the user can enjoy the feeling of playing a musical instrument in a hall or the like. Moreover, since the user's performance sound is emitted from the speaker 6 via the low-delay path 119, the delay from when the user performs until the performance sound is emitted from the speaker 6 can be kept small. As a result, the user is not given the sense of discomfort that would be caused by a large delay between the performance and its emission from the speaker 6.

Note that in the sound processing apparatus 1 according to the second embodiment, the indirect sound component corresponding to the user's performance sound occurs later than the indirect sound component that would occur if the performance sound were emitted in a real acoustic space (see FIG. 8). However, even if some delay occurs in the indirect sound component, it is unlikely to give the user a sense of discomfort, so no particular problem arises.
 [第3実施形態]次に、第3実施形態について説明する。第3実施形態に係る音処理装置1のハードウェア構成は第1実施形態と同様である。また、ユーザの演奏環境は第2実施形態と同様である。 [Third Embodiment] Next, a third embodiment will be described. The hardware configuration of the sound processing apparatus 1 according to the third embodiment is the same as that of the first embodiment. The user's performance environment is the same as in the second embodiment.
 第3実施形態に係る音処理装置1では、自宅等で電子楽器4又は電気楽器5を演奏しているユーザが音楽コンテンツの演奏者の一員となってホール等で演奏している気分を楽しむことが可能になっている。以下、このような機能を実現するための構成について説明する。 In the sound processing apparatus 1 according to the third embodiment, a user who is playing the electronic musical instrument 4 or the electric musical instrument 5 at home or the like enjoys the feeling of playing in a hall or the like as a member of a music content player. Is possible. Hereinafter, a configuration for realizing such a function will be described.
 図9は、第3実施形態に係る音処理装置1で実現される機能を示す機能ブロック図である。図9に示すように、第3実施形態に係る音処理装置1は、演奏音調整部121、プリプロセッシング部122、間接音成分生成部123、ポストプロセッシング部124、出力制御部125、及びコンテンツデコード部126を含む。これらの機能ブロックはCPU11及び音響信号処理部15によって実現される。例えば、CPU11がプログラムに従って音響信号処理部15を制御することによって、上記の機能ブロックが実現される。 FIG. 9 is a functional block diagram showing functions realized by the sound processing apparatus 1 according to the third embodiment. As shown in FIG. 9, the sound processing apparatus 1 according to the third embodiment includes a performance sound adjustment unit 121, a preprocessing unit 122, an indirect sound component generation unit 123, a post processing unit 124, an output control unit 125, and a content decoding unit. Part 126 is included. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, when the CPU 11 controls the acoustic signal processing unit 15 according to a program, the above functional block is realized.
 図10は、第3実施形態に係る音処理装置1で実行される処理を示すフロー図である。以下、図10を参照しながら各機能ブロックの機能について説明する。 FIG. 10 is a flowchart showing processing executed by the sound processing apparatus 1 according to the third embodiment. Hereinafter, the function of each functional block will be described with reference to FIG.
 まず、コンテンツデコード部126は、コンテンツ再生装置2から入力されるマルチチャンネルのコンテンツ音をフォーマットデコードすることによって、PCM信号に変換する(S30)。 First, the content decoding unit 126 converts the multi-channel content sound input from the content playback device 2 into a PCM signal by format decoding (S30).
The performance sound adjustment unit 121 adjusts the performance sound by applying predetermined processing to the performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 (S31). Step S31 is the same as step S20 of the second embodiment, and the performance sound adjustment unit 121 is the same as the performance sound adjustment unit 111 of the second embodiment, so the description is omitted here.
In FIG. 10, steps S30 and S31 are shown as being executed in sequence for convenience, but they are executed in parallel.
The content sound converted into a PCM signal is mixed with the performance sound that the AD conversion circuit has converted into a PCM signal (S32), and the mixed sound is supplied to the preprocessing unit 122. The performance sound is also supplied to the output control unit 125 via the path 129. The path 129 is the same as the path 119 of the second embodiment.
The preprocessing unit 122 performs preprocessing on the mixed sound (S33). For example, the preprocessing unit 122 applies sound adjustment processing such as equalization to the mixed sound. The mixed sound processed by the preprocessing unit 122 is supplied to the indirect sound component generation unit 123.
The indirect sound component generation unit 123 generates a pseudo indirect sound component corresponding to the mixed sound (S34). That is, the indirect sound component generation unit 123 generates an indirect sound component corresponding to the performance sound (direct sound component) and the content sound. The indirect sound component generation unit 123 assumes the case where the mixed sound is emitted in an acoustic space such as a hall, and generates the indirect sound component (reverberation component, etc.) that would arise in that acoustic space in that case. Various known methods can be employed to generate the pseudo indirect sound component.
For example, the indirect sound component generation unit 123 applies processing that adds indirect sound to the mixed sound stored in a first buffer, and then acquires the indirect sound component corresponding to the mixed sound by subtracting the original mixed sound stored in a second buffer from the sound stored in the first buffer. Alternatively, the indirect sound component generation unit 123 may acquire the indirect sound component by executing processing that generates, in the second buffer, the indirect sound corresponding to the mixed sound stored in the first buffer.
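As a rough illustration of this add-then-subtract approach, the following Python sketch derives the indirect component by applying a reverberation process to the mixed sound held in a first buffer and then subtracting the original dry mixed sound held in a second buffer. The comb-filter reverberator and all parameter values are illustrative assumptions, not the algorithm actually used by the indirect sound component generation unit 123.

```python
import numpy as np

def toy_reverb(x, sr, delays_ms=(29.7, 37.1, 41.1), gain=0.4):
    """Toy reverberator built from feedback comb filters (illustrative only)."""
    y = x.copy()
    for d_ms in delays_ms:
        d = int(sr * d_ms / 1000.0)
        buf = x.copy()
        for n in range(d, len(buf)):
            buf[n] += gain * buf[n - d]  # feedback comb filter
        y += buf - x                     # accumulate only the wet contribution
    return y

def extract_indirect_component(mix, sr):
    first_buffer = toy_reverb(mix, sr)   # mixed sound with indirect sound added
    second_buffer = mix                  # the original mixed sound (direct component)
    return first_buffer - second_buffer  # what remains is the indirect component

sr = 48000
mix = np.random.randn(sr)                # stand-in for one second of mixed sound
indirect = extract_indirect_component(mix, sr)
```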
The indirect sound component generated by the indirect sound component generation unit 123 is supplied, together with the content sound, to the output control unit 125 via the post-processing unit 124.
The post-processing unit 124 executes post-processing (S35). Step S35 is basically the same as step S13 of the first embodiment, and the post-processing unit 124 is basically the same as the post-processing unit 104 of the first embodiment, so the description is omitted here.
The output control unit 125 mixes the performance sound (direct sound component) supplied via the path 129 with the content sound and indirect sound component supplied from the post-processing unit 124, and outputs the mixed sound to the output unit 14 (S36). The mixed sound output to the output unit 14 is emitted by the speaker 6.
FIG. 11 is a diagram for explaining the sound emitted from the speaker 6. Here, as shown in FIG. 11(A), assume that performance sound B and content sound B are input after performance sound A and content sound A. The performance sounds A and B correspond to direct sound components. In FIG. 11, performance sound A and content sound A are drawn slightly shifted in time for convenience, but they are input at the same point in time. The same applies to performance sound B and content sound B.
In the example shown in FIG. 11(A), the indirect sound component generation unit 123 generates an indirect sound component A corresponding to the mixed sound of performance sound A and content sound A, as shown in FIG. 11(B), and the indirect sound component A is supplied to the output control unit 125 together with the content sound A. After generating the indirect sound component A, the indirect sound component generation unit 123 also generates an indirect sound component B corresponding to the mixed sound of performance sound B and content sound B, but this is omitted from the figure.
The processing load of the preprocessing unit 122, the indirect sound component generation unit 123, and the post-processing unit 124 is large and their processing takes time, so the indirect sound component A and the content sound A are emitted from the speaker 6 with a delay corresponding to the time required by these functional blocks. In contrast, the performance sounds A and B (direct sound components) are emitted from the speaker 6 via the low-delay path 129 (a path with substantially no delay). Consequently, as shown in FIG. 11(C), the indirect sound component A is delayed relative to its natural timing and is mixed with performance sound B, which follows performance sound A, and the mixed sound is emitted from the speaker 6.
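The timing in FIG. 11 can be mimicked with a block-based mixer in which the indirect component computed from a block only becomes available a few blocks later, while the direct path is mixed immediately. A minimal sketch, where `processing_delay_blocks` is a hypothetical stand-in for the combined latency of preprocessing, indirect sound generation, and post-processing:

```python
from collections import deque

import numpy as np

def run_mixer(direct_blocks, indirect_blocks, processing_delay_blocks=2):
    """Mix the low-latency direct path with the delayed indirect path."""
    pipeline = deque([np.zeros_like(direct_blocks[0])] * processing_delay_blocks)
    out = []
    for direct, indirect in zip(direct_blocks, indirect_blocks):
        pipeline.append(indirect)              # indirect sound enters the slow path
        delayed_indirect = pipeline.popleft()  # what comes out was computed earlier
        out.append(direct + delayed_indirect)  # direct path has almost no delay
    return out

blocks = [np.full(4, float(i)) for i in range(5)]  # blocks 0..4 of performance sound
indirects = [0.1 * b for b in blocks]              # toy indirect components
mixed = run_mixer(blocks, indirects)
# In block 2, the direct sound of block 2 is mixed with the indirect sound derived
# from block 0, which corresponds to the situation shown in FIG. 11(C).
```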
According to the sound processing device 1 of the third embodiment described above, a pseudo indirect sound component (reverberation component, etc.) is added to the user's performance sound (the instrument sound of the electronic musical instrument 4 or the electric musical instrument 5) and the multichannel content sound, and the result is emitted from the speaker 6, so the user can enjoy the feeling of performing in a hall or the like as a member of the performers of the music content. In addition, according to the sound processing device 1 of the third embodiment, the user's performance sound is emitted from the speaker 6 via the low-delay path 129, so the delay from when the user plays until the performance sound is emitted from the speaker 6 can be kept small. As a result, the user can be kept from feeling the discomfort that would be caused by a large delay between playing and the emission of the performance sound from the speaker 6.
[Fourth Embodiment] Next, a fourth embodiment will be described. The hardware configuration of the sound processing device 1 according to the fourth embodiment is the same as in the first embodiment. The user's performance environment is the same as in the first embodiment.
In the sound processing device 1 according to the fourth embodiment, the user can enjoy the feeling of singing in a hall or playing an acoustic instrument as a member of the performers of the music content. A configuration for realizing this function is described below.
FIG. 12 is a functional block diagram showing the functions realized by the sound processing device 1 according to the fourth embodiment. As shown in FIG. 12, the sound processing device 1 according to the fourth embodiment includes a performance sound adjustment unit 131, a preprocessing unit 132, an indirect sound component generation unit 133, a post-processing unit 134, an output control unit 135, and a content decoding unit 136. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
FIG. 13 is a flowchart showing the processing executed by the sound processing device 1 according to the fourth embodiment. The function of each functional block is described below with reference to FIG. 13.
First, the content decoding unit 136 format-decodes the multichannel content sound input from the content playback device 2, converting it into a PCM signal (S40). Step S40 is basically the same as step S30 of the third embodiment, and the content decoding unit 136 is basically the same as the content decoding unit 126 of the third embodiment.
However, the content decoding unit 136 of the fourth embodiment includes a specific component removal unit 136A, and in step S40 the specific component removal unit 136A removes a specific component contained in the content sound. Specifically, the specific component removal unit 136A removes from the content sound the specific component corresponding to the performance sound input from the microphone 3. For example, when the user's singing sound is input from the microphone 3, the specific component removal unit 136A removes the vocal component from the content sound. Because the vocal component of multichannel content sound is often contained in the center channel, the specific component removal unit 136A removes the vocal component from the content sound by removing the center channel. The method of removing the vocal component from the content sound is not limited to this; various known methods can be employed. As another example, when the instrument sound of an acoustic instrument is input from the microphone 3, the specific component removal unit 136A may remove the instrument sound component of that acoustic instrument from the content sound. The type of the performance sound input from the microphone 3 (for example, whether it is a singing sound, a guitar sound, or a piano sound) may be determined automatically by analyzing the performance sound, or may be specified by the user via the input device.
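For instance, with 5.1-channel content laid out as (FL, FR, C, LFE, SL, SR), muting the center channel could look like the sketch below; the channel ordering and the decision to zero the channel outright are illustrative assumptions.

```python
import numpy as np

CHANNELS = ("FL", "FR", "C", "LFE", "SL", "SR")  # assumed 5.1 channel layout

def remove_center_channel(content, channels=CHANNELS):
    """Zero the center channel, where the vocal component is often located."""
    out = content.copy()
    out[channels.index("C")] = 0.0
    return out

content = np.random.randn(len(CHANNELS), 48000)  # one second of 5.1 content sound
content_without_vocal = remove_center_channel(content)
```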
The performance sound adjustment unit 131 adjusts the performance sound by applying predetermined processing to the performance sound input from the microphone 3 (S41). Step S41 is the same as step S10 of the first embodiment, and the performance sound adjustment unit 131 is the same as the performance sound adjustment unit 101 of the first embodiment, so the description is omitted here.
In FIG. 13, steps S40 and S41 are shown as being executed in sequence for convenience, but they are executed in parallel.
The content sound converted into a PCM signal is mixed with the performance sound that the AD conversion circuit has converted into a PCM signal (S42), and the mixed sound is supplied to the preprocessing unit 132. Then, based on the mixed sound, processing by the preprocessing unit 132, the indirect sound component generation unit 133, and the post-processing unit 134 is executed (S43, S44, S45). Steps S43 to S45 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 132, the indirect sound component generation unit 133, and the post-processing unit 134 are basically the same as the preprocessing unit 122, the indirect sound component generation unit 123, and the post-processing unit 124 of the third embodiment, so the description is omitted here.
The performance sound is supplied to the output control unit 135 via the path 139. The path 139 is the same as the path 119 of the second embodiment.
Like the output control unit 125 of the third embodiment, the output control unit 135 mixes the performance sound (direct sound component) supplied via the path 139 with the content sound and indirect sound component supplied from the post-processing unit 134, and outputs the mixed sound to the output unit 14 (S46). The mixed sound output to the output unit 14 is emitted by the speaker 6.
According to the sound processing device 1 of the fourth embodiment described above, a pseudo indirect sound component (reverberation component, etc.) is added to the user's performance sound (a singing sound or the instrument sound of an acoustic instrument) and the multichannel content sound, and the result is emitted from the speaker 6, so the user can enjoy the feeling of singing in a hall or playing an acoustic instrument as a member of the performers of the music content. In addition, according to the sound processing device 1 of the fourth embodiment, the user's performance sound is emitted from the speaker 6 via the low-delay path 139, so the delay from when the user performs until the performance sound is emitted from the speaker 6 can be kept small. As a result, the user can be kept from feeling the discomfort that would be caused by a large delay between performing and the emission of the performance sound from the speaker 6.
Furthermore, according to the sound processing device 1 of the fourth embodiment, when the user is singing, for example, the vocal component contained in the content sound is removed, so the user can enjoy the feeling of singing in a hall or the like as the vocalist of the music content.
In the above description, the vocal component of the content sound is removed before the performance sound and the content sound are mixed, but the vocal component of the content sound may instead be removed after the performance sound and the content sound have been mixed.
[Fifth Embodiment] Next, a fifth embodiment will be described. The hardware configuration of the sound processing device 1 according to the fifth embodiment is the same as in the first embodiment. The user's performance environment is the same as in the first or second embodiment.
In the sound processing device 1 according to the fifth embodiment as well, the user can enjoy the feeling of performing in a hall or the like as a member of the performers of the music content. In particular, the sound processing device 1 according to the fifth embodiment allows the user to feel a sense of unity between the user's performance sound and the content sound. A configuration for realizing this function is described below.
FIG. 14 is a functional block diagram showing the functions realized by the sound processing device 1 according to the fifth embodiment. As shown in FIG. 14, the sound processing device 1 according to the fifth embodiment includes a performance sound adjustment unit 141, a preprocessing unit 142, an indirect sound component generation unit 143, a post-processing unit 144, an output control unit 145, a content decoding unit 146, and a content sound adjustment unit 147. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
FIG. 15 is a flowchart showing the processing executed by the sound processing device 1 according to the fifth embodiment. The function of each functional block is described below with reference to FIG. 15.
First, the content decoding unit 146 format-decodes the multichannel content sound input from the content playback device 2, converting it into a PCM signal (S50). Step S50 is basically the same as step S30 of the third embodiment, and the content decoding unit 146 is basically the same as the content decoding unit 126 of the third embodiment, so the description is omitted here.
The content sound adjustment unit 147 adjusts the content sound in order to match the characteristics of the performance sound and the content sound (S51). The content sound adjustment unit 147 includes an indirect sound component removal unit 147A. The indirect sound component removal unit 147A removes the indirect sound component contained in the content sound in order to match the amounts of indirect sound component in the performance sound and the content sound. Various known methods can be employed to remove the indirect sound component contained in the content sound.
The performance sound input from the electronic musical instrument 4 or the electric musical instrument 5 contains only a direct sound component and no indirect sound component, whereas the content sound may contain both a direct sound component and an indirect sound component. For this reason, the indirect sound component removal unit 147A removes, for example, the indirect sound component contained in the content sound and outputs only the direct sound component of the content sound. For example, the indirect sound component removal unit 147A identifies the indirect sound component contained in the content sound and removes it by lowering the sound pressure level of the identified indirect sound component. That is, the indirect sound component removal unit 147A removes the indirect sound component almost completely by lowering its sound pressure level to zero (or nearly zero).
Alternatively, the indirect sound component removal unit 147A may remove (reduce) the indirect sound component to a certain extent by lowering its sound pressure level to a certain extent. In other words, "removing the indirect sound component" includes not only removing the indirect sound component almost completely but also removing (reducing) it to a certain extent. Here, "a certain extent" means a degree at which any remaining indirect sound component does not make the user feel uncomfortable.
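Assuming the indirect component of the content sound has already been estimated by one of the known separation methods, the removal described above reduces to attenuating that estimate by a factor between 0 (almost complete removal) and 1 (no removal) before recombining, as in this hedged sketch:

```python
import numpy as np

def remove_indirect(direct, indirect, residual_level=0.0):
    """Recombine the content sound with its indirect part attenuated.

    residual_level = 0.0 removes the indirect component almost completely;
    a small value such as 0.2 removes it only "to a certain extent".
    """
    return direct + residual_level * indirect

sr = 48000
direct = np.random.randn(sr)    # stand-ins for an already separated content sound
indirect = np.random.randn(sr)
dry_content = remove_indirect(direct, indirect, residual_level=0.0)
```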
The performance sound adjustment unit 141 adjusts the performance sound (S52). Step S52 is the same as steps S10 and S20 of the first and second embodiments, and the performance sound adjustment unit 141 is the same as the performance sound adjustment units 101 and 111 of the first and second embodiments, so the description is omitted here.
In FIG. 15, steps S50 and S51 and step S52 are shown as being executed in sequence for convenience, but steps S50 and S51 are executed in parallel with step S52.
The content sound from which the indirect sound component has been removed (the direct sound component) is mixed with the performance sound (S53), and the mixed sound is supplied to the indirect sound component generation unit 143 via the preprocessing unit 142. The preprocessing unit 142 performs preprocessing on the mixed sound (S54). Step S54 is the same as step S11 of the first embodiment, and the preprocessing unit 142 is the same as the preprocessing unit 102 of the first embodiment, so the description is omitted here.
The indirect sound component generation unit 143 generates a pseudo indirect sound component corresponding to the mixed sound (the direct sound component of the content sound and the direct sound component of the performance sound) (S55). Step S55 is basically the same as step S34 of the third embodiment, and the indirect sound component generation unit 143 is basically the same as the indirect sound component generation unit 123 of the third embodiment, so the description is omitted here.
The indirect sound component generated by the indirect sound component generation unit 143 is supplied to the output control unit 145 via the post-processing unit 144. The post-processing unit 144 executes post-processing (S56). Step S56 is the same as step S13 of the first embodiment, and the post-processing unit 144 is the same as the post-processing unit 104 of the first embodiment, so the description is omitted here. As shown in FIG. 14, the content sound from which the indirect sound component has been removed (that is, the direct sound component of the content sound) is also supplied to the output control unit 145. The performance sound (direct sound component) is supplied to the output control unit 145 via the path 149. The path 149 is the same as the path 119 of the second embodiment.
The output control unit 145 mixes the performance sound (direct sound component) supplied via the path 149, the content sound (direct sound component), and the indirect sound component generated by the indirect sound component generation unit 143, and outputs the mixed sound to the output unit 14 (S57). The mixed sound output to the output unit 14 is emitted by the speaker 6.
According to the sound processing device 1 of the fifth embodiment described above, the user can enjoy the feeling of performing in a hall or the like as a member of the performers of the music content. In addition, according to the sound processing device 1 of the fifth embodiment, the amounts of indirect sound component in the user's performance sound and the content sound can be matched, and as a result the user can fully feel a sense of unity between the performance sound and the content sound.
As shown in FIG. 14, in the fifth embodiment as well, the instrument sound of an acoustic instrument or a singing sound may be input from the microphone 3. In this case, however, the performance sound may contain an indirect sound component, so the performance sound adjustment unit 141 may remove the indirect sound component contained in the performance sound.
[Sixth Embodiment] Next, a sixth embodiment will be described. The sixth embodiment is a modification of the fifth embodiment. In the sound processing device 1 according to the sixth embodiment, the indirect sound component of the content sound is removed only while a performance sound is being input.
FIG. 16 is a functional block diagram showing the functions realized by the sound processing device 1 according to the sixth embodiment, and FIG. 17 is a flowchart showing the processing executed by the sound processing device 1 according to the sixth embodiment. As shown in FIG. 16, the sound processing device 1 according to the sixth embodiment includes a performance sound adjustment unit 141, a preprocessing unit 142, an indirect sound component generation unit 143, a post-processing unit 144, an output control unit 145, a content decoding unit 146, a content sound adjustment unit 147, and an input detection unit 148. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
The sound processing device 1 according to the sixth embodiment differs from the fifth embodiment in that it includes the input detection unit 148 and includes steps S51A and S51B instead of step S51. The differences from the fifth embodiment are mainly described below.
In the sound processing device 1 according to the sixth embodiment, the input detection unit 148 detects that a performance sound is being input from the electronic musical instrument 4, the electric musical instrument 5, or the microphone 3. The indirect sound component removal unit 147A removes the indirect sound component contained in the content sound according to the detection result of the input detection unit 148. Specifically, when input of a performance sound is detected (S51A: Yes), the indirect sound component removal unit 147A removes the indirect sound component from the content sound (S51B). When input of a performance sound is not detected (S51A: No), the indirect sound component removal unit 147A does not remove the indirect sound component from the content sound. Although omitted from FIG. 17, in this case steps S52 and S53 are not executed either; only the content sound is supplied to the preprocessing unit 142, and the content sound is output to the output unit 14 in step S57.
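One simple way to realize the input detection unit 148 is a level gate on the performance input: treat a block whose RMS level exceeds a threshold as "performance sound is being input". The threshold value and the block-based structure are illustrative assumptions:

```python
import numpy as np

def performance_input_detected(block, threshold=1e-3):
    """Return True when the RMS level of the performance input exceeds a threshold."""
    return float(np.sqrt(np.mean(block ** 2))) > threshold

def adjust_content(content_direct, content_indirect, performance_block):
    if performance_input_detected(performance_block):   # S51A: Yes
        return content_direct                           # S51B: indirect sound removed
    return content_direct + content_indirect            # S51A: No, leave content as-is

sr = 48000
silence = np.zeros(sr)
playing = 0.1 * np.random.randn(sr)
content_direct, content_indirect = np.random.randn(sr), np.random.randn(sr)
assert not performance_input_detected(silence)
assert performance_input_detected(playing)
out = adjust_content(content_direct, content_indirect, playing)
```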
In the sound processing device 1 according to the sixth embodiment described above, the indirect sound component is removed from the content sound only while a performance sound is being input. When no performance sound is input, there is no need to match the amounts of indirect sound component between the performance sound and the content sound, and therefore no need to remove the indirect sound component from the content sound. In this respect, according to the sound processing device 1 of the sixth embodiment, the processing that removes the indirect sound component from the content sound is not executed when it is unnecessary, so the processing load of the sound processing device 1 can be reduced.
[Seventh Embodiment] Next, a seventh embodiment will be described. The hardware configuration of the sound processing device 1 according to the seventh embodiment is the same as in the first embodiment. The user's performance environment is the same as in the first or second embodiment.
In the fifth and sixth embodiments, the indirect sound component contained in the content sound is removed in order to match the amount of indirect sound component in the content sound to that in the performance sound. In contrast, the sound processing device 1 according to the seventh embodiment adds an indirect sound component to the performance sound in accordance with the amount of indirect sound component in the content sound.
FIG. 18 is a functional block diagram showing the functions realized by the sound processing device 1 according to the seventh embodiment. As shown in FIG. 18, the sound processing device 1 according to the seventh embodiment includes a performance sound adjustment unit 151, a preprocessing unit 152, an indirect sound component generation unit 153, a post-processing unit 154, an output control unit 155, a content decoding unit 156, and an indirect sound component amount analysis unit 157. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
FIG. 19 is a flowchart showing the processing executed by the sound processing device 1 according to the seventh embodiment. The function of each functional block is described below with reference to FIG. 19.
First, the content decoding unit 156 format-decodes the multichannel content sound input from the content playback device 2, converting it into a PCM signal (S60). Step S60 is basically the same as step S30 of the third embodiment, and the content decoding unit 156 is basically the same as the content decoding unit 126 of the third embodiment, so the description is omitted here.
The indirect sound component amount analysis unit 157 analyzes the amount of indirect sound component contained in the content sound (S61). For example, the indirect sound component amount analysis unit 157 analyzes the number and magnitude (sound pressure level) of the indirect sound components contained in the content sound. Various known methods can be employed to analyze the amount of indirect sound component contained in the content sound.
The performance sound adjustment unit 151 adjusts the performance sound (S62). Step S62 is basically the same as steps S10 and S20 of the first and second embodiments, and the performance sound adjustment unit 151 is basically the same as the performance sound adjustment units 101 and 111 of the first and second embodiments.
However, the performance sound adjustment unit 151 of the seventh embodiment also plays the role of adjusting the performance sound to match the characteristics of the performance sound and the content sound. That is, the performance sound adjustment unit 151 includes an indirect sound component addition unit 151A, and in step S62 the indirect sound component addition unit 151A adds an indirect sound component corresponding to the performance sound to that performance sound. In particular, the indirect sound component addition unit 151A sets the amount of indirect sound component to be added to the performance sound based on the analysis result of the indirect sound component amount analysis unit 157. That is, the indirect sound component addition unit 151A sets the number and magnitude of the indirect sound components added to the performance sound in accordance with, and to roughly the same level as, the number and magnitude of the indirect sound components contained in the content sound.
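One hedged reading of this matching step, covering both the analysis of step S61 and the addition of step S62: estimate the indirect-to-direct energy ratio of the content sound, then scale the reverb applied to the performance sound so that its own indirect-to-direct ratio comes out the same. The energy-ratio metric and the pluggable `apply_reverb` helper are illustrative assumptions, not the patent's prescribed procedure:

```python
import numpy as np

def indirect_ratio(content_direct, content_indirect):
    """Analysis (unit 157): indirect-to-direct energy ratio of the content sound."""
    return np.sum(content_indirect ** 2) / (np.sum(content_direct ** 2) + 1e-12)

def add_matched_indirect(performance, content_direct, content_indirect, apply_reverb):
    """Addition (unit 151A): scale the performance reverb to match the content."""
    target = indirect_ratio(content_direct, content_indirect)
    wet = apply_reverb(performance)                # raw indirect sound of the performance
    e_perf = np.sum(performance ** 2) + 1e-12
    e_wet = np.sum(wet ** 2) + 1e-12
    gain = np.sqrt(target * e_perf / e_wet)        # match the energy ratio of the content
    return performance + gain * wet

sr = 48000
performance = np.random.randn(sr)
content_direct, content_indirect = np.random.randn(sr), 0.5 * np.random.randn(sr)
out = add_matched_indirect(performance, content_direct, content_indirect,
                           lambda x: 0.3 * np.roll(x, 480))  # toy reverb: delayed copy
```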
In the sound processing device 1 according to the seventh embodiment, the content sound and the performance sound to which the indirect sound component has been added by the indirect sound component addition unit 151A are mixed (S63), and the mixed sound is supplied to the preprocessing unit 152. Then, based on the mixed sound, processing by the preprocessing unit 152, the indirect sound component generation unit 153, and the post-processing unit 154 is executed (S64, S65, S66). Steps S64 to S66 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 152, the indirect sound component generation unit 153, and the post-processing unit 154 are basically the same as the preprocessing unit 122, the indirect sound component generation unit 123, and the post-processing unit 124 of the third embodiment, so the description is omitted here.
The performance sound to which the indirect sound component has been added by the indirect sound component addition unit 151A is supplied to the output control unit 155 via the path 159. The path 159 is the same as the path 119 of the second embodiment.
The output control unit 155 mixes the content sound, the performance sound to which the indirect sound component has been added by the indirect sound component addition unit 151A, and the indirect sound component generated by the indirect sound component generation unit 153, and outputs the mixed sound to the output unit 14 (S67). The mixed sound output to the output unit 14 is emitted by the speaker 6.
According to the sound processing device 1 of the seventh embodiment described above, the user can enjoy the feeling of performing in a hall or the like as a member of the performers of the music content. In addition, according to the sound processing device 1 of the seventh embodiment, the amounts of indirect sound component in the user's performance sound and the content sound can be matched, and as a result the user can fully feel a sense of unity between the performance sound and the content sound.
As shown in FIG. 18, in the seventh embodiment as well, the instrument sound of an acoustic instrument or a singing sound may be input from the microphone 3. In this case, however, the performance sound input from the microphone 3 may already contain an indirect sound component, so the performance sound adjustment unit 151 may first remove the indirect sound component contained in the performance sound and then have the indirect sound component addition unit 151A add an indirect sound component to the performance sound.
[Eighth Embodiment] Next, an eighth embodiment will be described. The eighth embodiment is a modification of the seventh embodiment. In the sound processing device 1 according to the eighth embodiment, the way the indirect sound is added to the performance sound is changed according to the type of the performance sound.
FIG. 20 is a functional block diagram showing the functions realized by the sound processing device 1 according to the eighth embodiment, and FIG. 21 is a flowchart showing the processing executed by the sound processing device 1 according to the eighth embodiment. As shown in FIG. 20, the sound processing device 1 according to the eighth embodiment includes a performance sound adjustment unit 151, a preprocessing unit 152, an indirect sound component generation unit 153, a post-processing unit 154, an output control unit 155, a content decoding unit 156, an indirect sound component amount analysis unit 157, and a performance sound type identification unit 158. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
The sound processing device 1 according to the eighth embodiment differs from the seventh embodiment in that it includes the performance sound type identification unit 158 and includes steps S62A and S62B instead of step S62. The differences from the seventh embodiment are mainly described below.
In the sound processing device 1 according to the eighth embodiment, the performance sound type identification unit 158 identifies the type of the input performance sound (S62A). For example, the performance sound type identification unit 158 determines whether the input performance sound is an instrument sound. When the input performance sound is an instrument sound, the performance sound type identification unit 158 identifies the type of the instrument sound; that is, it identifies which of a plurality of instrument types (for example, guitar, violin, or piano) produced the input performance sound. It also determines, for example, whether the input performance sound is a singing sound. Various known methods can be employed to identify the type of the performance sound.
The indirect sound component addition unit 151A of the eighth embodiment adds an indirect sound component corresponding to the performance sound to that performance sound based not only on the analysis result of the indirect sound component amount analysis unit 157 but also on the identification result of the performance sound type identification unit 158 (S62B). That is, the indirect sound component addition unit 151A sets the indirect sound component to be added to the performance sound based on both the analysis result of the indirect sound component amount analysis unit 157 and the identification result of the performance sound type identification unit 158.
Because the radiation characteristics of a performance sound in a real acoustic space differ for each type of performance sound, the indirect sound component addition unit 151A sets the indirect sound component to be added to the performance sound taking these type-dependent radiation characteristics into account, as in the examples below (a sketch follows the examples).
For example, the instrument sound of a guitar tends to be radiated toward the front more than in other directions, so when the performance sound is a guitar sound, the indirect sound component addition unit 151A adds an indirect sound component (reverberation component, etc.) to the channel corresponding to the front direction. Alternatively, the indirect sound component addition unit 151A makes the amount of indirect sound component added to the channel corresponding to the front direction larger than the amount added to the other channels.
Similarly, the instrument sound of a violin tends to be radiated upward more than in other directions, so when the performance sound is a violin sound, the indirect sound component addition unit 151A adds an indirect sound component (reverberation component, etc.) to the channel corresponding to the upward direction. Alternatively, the indirect sound component addition unit 151A makes the amount of indirect sound component added to the channel corresponding to the upward direction larger than the amount added to the other channels.
Likewise, a singing sound tends to be radiated toward the front more than in other directions, so when the performance sound is a singing sound, the indirect sound component addition unit 151A adds an indirect sound component (reverberation component, etc.) to the channel corresponding to the front direction. Alternatively, the indirect sound component addition unit 151A makes the amount of indirect sound component added to the channel corresponding to the front direction larger than the amount added to the other channels.
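These radiation tendencies can be expressed as per-channel weights for the indirect sound component. The sketch below, with its front/top/other channel grouping and its specific weight values, is an illustrative assumption about how the indirect sound component addition unit 151A might distribute the component over output channels:

```python
import numpy as np

# Assumed channel names of a speaker layout that includes height channels.
CHANNELS = ("front_l", "front_r", "surround_l", "surround_r", "top_l", "top_r")

# Illustrative weights: guitar and singing radiate mainly to the front,
# a violin mainly upward.
RADIATION_WEIGHTS = {
    "guitar": {"front": 1.0, "top": 0.3, "other": 0.3},
    "violin": {"front": 0.3, "top": 1.0, "other": 0.3},
    "vocal":  {"front": 1.0, "top": 0.3, "other": 0.3},
}

def channel_group(name):
    if name.startswith("front"):
        return "front"
    if name.startswith("top"):
        return "top"
    return "other"

def distribute_indirect(indirect, sound_type):
    """Spread a mono indirect component over the channels by radiation weight."""
    weights = RADIATION_WEIGHTS[sound_type]
    return {ch: weights[channel_group(ch)] * indirect for ch in CHANNELS}

indirect = np.random.randn(48000)
per_channel = distribute_indirect(indirect, "violin")  # strongest in the top channels
```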
According to the sound processing device 1 of the eighth embodiment described above, the indirect sound component can be added to the performance sound taking the radiation characteristics of the user's performance sound into account, so a more natural indirect sound component can be added to the performance sound.
[Ninth Embodiment] Next, a ninth embodiment will be described. The hardware configuration of the sound processing device 1 according to the ninth embodiment is the same as in the first embodiment. The user's performance environment is the same as in the first or second embodiment.
In the sixth to eighth embodiments, the amounts of indirect sound component in the performance sound and the content sound are matched. In contrast, the sound processing device 1 according to the ninth embodiment matches the timbres of the performance sound and the content sound.
FIG. 22 is a functional block diagram showing the functions realized by the sound processing device 1 according to the ninth embodiment. As shown in FIG. 22, the sound processing device 1 according to the ninth embodiment includes a performance sound adjustment unit 161, a preprocessing unit 162, an indirect sound component generation unit 163, a post-processing unit 164, an output control unit 165, a content decoding unit 166, a first timbre analysis unit 167, and a second timbre analysis unit 168. These functional blocks are realized by the CPU 11 and the acoustic signal processing unit 15. For example, they are realized by the CPU 11 controlling the acoustic signal processing unit 15 in accordance with a program.
FIG. 23 is a flowchart showing the processing executed by the sound processing device 1 according to the ninth embodiment. The function of each functional block is described below with reference to FIG. 23.
First, the content decoding unit 166 format-decodes the multichannel content sound input from the content playback device 2, converting it into a PCM signal (S70). Step S70 is basically the same as step S30 of the third embodiment, and the content decoding unit 166 is basically the same as the content decoding unit 126 of the third embodiment, so the description is omitted here.
The second timbre analysis unit 168 analyzes the timbre of the content sound (S71). For example, when the content sound contains a plurality of types of instrument sound, the second timbre analysis unit 168 identifies, among those instrument sounds, the instrument sound of the type contained in the performance sound, and analyzes the timbre of that instrument sound. As another example, when the performance sound contains a singing sound, the second timbre analysis unit 168 analyzes the timbre of the singing sound contained in the content sound. Various known methods can be employed to identify the instrument sound or singing sound contained in the content sound and to analyze the timbre of an instrument sound or singing sound.
The first timbre analysis unit 167 analyzes the timbre of the performance sound (S72). The first timbre analysis unit 167 identifies the instrument sound or singing sound contained in the performance sound and analyzes its timbre. Various known methods can be employed to analyze the timbre of the performance sound.
In FIG. 23, steps S70 and S71 and step S72 are shown as being executed in sequence for convenience, but steps S70 and S71 are executed in parallel with step S72.
The performance sound adjustment unit 161 adjusts the performance sound (S73). Step S73 is basically the same as steps S10 and S20 of the first and second embodiments, and the performance sound adjustment unit 161 is basically the same as the performance sound adjustment units 101 and 111 of the first and second embodiments.
However, the performance sound adjustment unit 161 of the ninth embodiment also plays the role of adjusting the performance sound to match the characteristics of the performance sound and the content sound. That is, the performance sound adjustment unit 161 includes a timbre adjustment unit 161A, and in step S73 the timbre adjustment unit 161A adjusts the timbre of the performance sound based on a comparison between the analysis result of the first timbre analysis unit 167 and the analysis result of the second timbre analysis unit 168.
For example, when the first timbre analysis unit 167 finds that the violin sound (the instrument sound of a violin) contained in the performance sound has a large high-frequency component and the second timbre analysis unit 168 finds that the violin sound contained in the content sound has a small high-frequency component, the timbre adjustment unit 161A reduces the high-frequency component of the violin sound contained in the performance sound. In short, when the amount of a specific band component of the instrument sound contained in the performance sound differs from the amount of the same band component of the same type of instrument sound contained in the content sound, the timbre adjustment unit 161A adjusts the specific band component of the instrument sound contained in the performance sound so as to bring it to roughly the same level as that of the same type of instrument sound contained in the content sound.
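Under the assumption that the "high-frequency component" is measured as signal energy above some cutoff frequency, this adjustment can be sketched as: compare the band energies of the two analysis results, then scale the corresponding band of the performance sound by the resulting ratio. The FFT-based band measurement, the 4 kHz cutoff, and the white-noise stand-ins are illustrative assumptions:

```python
import numpy as np

def band_energy(x, sr, lo_hz):
    """Energy of the signal above lo_hz (a crude "high-frequency component")."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return float(np.sum(np.abs(spec[freqs >= lo_hz]) ** 2))

def match_high_band(performance, content_instrument, sr, lo_hz=4000.0):
    """Scale the performance sound's high band toward the content's level."""
    e_perf = band_energy(performance, sr, lo_hz) + 1e-12
    e_cont = band_energy(content_instrument, sr, lo_hz) + 1e-12
    gain = np.sqrt(e_cont / e_perf)      # < 1 reduces the high band, > 1 boosts it
    spec = np.fft.rfft(performance)
    freqs = np.fft.rfftfreq(len(performance), 1.0 / sr)
    spec[freqs >= lo_hz] *= gain
    return np.fft.irfft(spec, n=len(performance))

sr = 48000
performance_violin = np.random.randn(sr)    # stand-in for the user's violin sound
content_violin = 0.5 * np.random.randn(sr)  # stand-in for the violin in the content
matched = match_high_band(performance_violin, content_violin, sr)
```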
 第9実施形態に係る音処理装置1では、コンテンツ音と、音色調整部161Aによって音色が調整された演奏音とがミックスされ(S74)、当該ミックス音がプリプロセッシング部162に供給される。そして、当該ミックス音に基づいて、プリプロセッシング部162、間接音成分生成部163、及びポストプロセッシング部164による処理が実行される(S75,S76,S77)。ステップS75~S77は第3実施形態のステップS33~S35と基本的に同様であり、プリプロセッシング部162、間接音成分生成部163、及びポストプロセッシング部164は第3実施形態のプリプロセッシング部122、間接音成分生成部123、及びポストプロセッシング部124と基本的に同様であるため、ここでは説明を省略する。 In the sound processing apparatus 1 according to the ninth embodiment, the content sound and the performance sound whose tone color is adjusted by the tone color adjusting unit 161A are mixed (S74), and the mixed sound is supplied to the preprocessing unit 162. Based on the mixed sound, processing by the preprocessing unit 162, the indirect sound component generation unit 163, and the post processing unit 164 is executed (S75, S76, S77). Steps S75 to S77 are basically the same as steps S33 to S35 of the third embodiment, and the preprocessing unit 162, the indirect sound component generation unit 163, and the postprocessing unit 164 are the preprocessing unit 122 of the third embodiment, Since it is basically the same as the indirect sound component generation unit 123 and the post processing unit 124, description thereof is omitted here.
Note that the performance sound whose timbre has been adjusted by the timbre adjustment unit 161A is also supplied to the output control unit 165 via a path 169. The path 169 is similar to the path 119 of the second embodiment.
Like the output control unit 125 of the third embodiment, the output control unit 165 mixes the performance sound supplied via the path 169 (the performance sound whose timbre has been adjusted by the timbre adjustment unit 161A) with the content sound and the indirect sound component supplied from the postprocessing unit 164, and outputs the mixed sound to the output unit 14 (S78). The mixed sound output to the output unit 14 is emitted by the speaker 6.
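The overall signal flow of steps S73 to S78 may be summarized by the following sketch, assuming the signals are time-aligned sample arrays so that addition performs mixing. The stage functions passed in (timbre_adjust, preprocess, generate_indirect, postprocess) are hypothetical placeholders for units 161A and 162 to 164, not interfaces defined by the specification.

```python
def ninth_embodiment_output(performance, content, timbre_adjust,
                            preprocess, generate_indirect, postprocess):
    """Sketch of S73-S78: adjust, mix, generate indirect components,
    then recombine with the dry (timbre-adjusted) performance sound."""
    perf_adj = timbre_adjust(performance, content)  # unit 161A (S73)
    mixed = perf_adj + content                      # mixing (S74)
    pre = preprocess(mixed)                         # unit 162 (S75)
    wet = generate_indirect(pre)                    # unit 163 (S76)
    post = postprocess(wet)                         # unit 164 (S77): content + indirect
    return perf_adj + post                          # unit 165 (S78): add dry path 169
```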
According to the sound processing device 1 of the ninth embodiment described above, the user can enjoy the feeling of playing in a hall or the like as one of the performers of the music content. In addition, the sound processing device 1 of the ninth embodiment makes it possible to match the timbre of the user's performance sound to that of the content sound, and as a result the user can fully feel a sense of unity between the performance sound and the content sound.
Instead of the timbre adjustment unit 161A, which adjusts the timbre of the performance sound, a timbre adjustment unit that adjusts the timbre of the content sound based on the comparison between the analysis result of the first timbre analysis unit 167 and the analysis result of the second timbre analysis unit 168 may be provided.
For example, when the first timbre analysis unit 167 obtains an analysis result indicating that the high-frequency component of the violin sound contained in the performance sound is strong, and the second timbre analysis unit 168 obtains an analysis result indicating that the high-frequency component of the violin sound contained in the content sound is weak, this timbre adjustment unit may increase the high-frequency component of the violin sound contained in the content sound.
As another example, when the first timbre analysis unit 167 obtains an analysis result indicating that the high-frequency component of the violin sound (the instrument sound of a violin) in the performance sound is strong, and the second timbre analysis unit 168 obtains an analysis result indicating that the high-frequency component of a guitar sound, an instrument sound different from the violin sound, in the content sound is strong, this timbre adjustment unit may reduce the high-frequency component of the guitar sound contained in the content sound.
Both the timbre adjustment unit 161A, which adjusts the timbre of the performance sound, and a timbre adjustment unit that adjusts the timbre of the content sound may also be provided. For example, when the first timbre analysis unit 167 obtains an analysis result indicating that the high-frequency component of the violin sound contained in the performance sound is strong, and the second timbre analysis unit 168 obtains an analysis result indicating that the high-frequency component of the violin sound contained in the content sound is weak, the high-frequency component of the violin sound contained in the performance sound and the high-frequency component of the violin sound contained in the content sound may each be adjusted so that the two are set to comparable levels, as in the sketch below.
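As one hedged illustration, this two-sided adjustment can reuse the hypothetical band_energy helper from the earlier sketch; meeting at the geometric mean of the two band energies is an assumption chosen here only to make "comparable levels" concrete.

```python
# Assumes np, butter, sosfilt, and band_energy from the sketch above.
def match_both(performance, content, sr, lo=4000.0, hi=12000.0):
    """Adjust the [lo, hi] band of both signals toward a common level."""
    e_perf = band_energy(performance, sr, lo, hi)
    e_cont = band_energy(content, sr, lo, hi)
    target = np.sqrt(e_perf * e_cont)           # geometric mean as meeting point
    g_perf = np.sqrt(target / (e_perf + 1e-12))
    g_cont = np.sqrt(target / (e_cont + 1e-12))
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band_p = sosfilt(sos, performance)
    band_c = sosfilt(sos, content)
    return (performance - band_p + g_perf * band_p,
            content - band_c + g_cont * band_c)
```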
[Modifications] The present invention is not limited to the first to ninth embodiments described above.
For example, two or more of the first to ninth embodiments may be combined.
As another example, the above description assumes that the sound processing device 1 is an AV receiver, and the functions described above can be realized using an AV receiver; however, the sound processing device 1 may be a device other than an AV receiver, and the functions described above may be realized by a device other than an AV receiver. For example, the sound processing device 1 may be built into a speaker. The sound processing device 1 may also be realized by a desktop computer, a laptop computer, a tablet computer, a smartphone, or the like.
[Appendix] As can be understood from the description of the embodiments above, various technical ideas, including the inventions described below, are disclosed in this specification.
A sound processing device according to the present invention includes input means for receiving an input of a user's performance sound; generation means for generating an indirect sound component corresponding to the performance sound; and output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
A sound processing method according to the present invention includes a generation step of generating an indirect sound component corresponding to a user's performance sound, and an output control step of outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
A program according to the present invention is a program for causing a computer to function as generation means for generating an indirect sound component corresponding to a user's performance sound, and as output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means. An information storage medium according to the present invention is a computer-readable information storage medium storing the above program.
The above invention may include adding means for adding, to a supplied sound, an indirect sound component corresponding to the supplied sound, and the generation means may generate the indirect sound component by supplying the performance sound to the adding means and removing the original performance sound from the sound output from the adding means.
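A minimal sketch of this subtraction scheme follows, assuming the adding means behaves like a conventional reverberation unit whose output is the dry input plus an indirect component time-aligned with it; reverb_unit is a hypothetical placeholder, not an element defined by the specification.

```python
import numpy as np

def extract_indirect(performance, reverb_unit):
    """Indirect component = (dry + indirect) - dry.
    Valid for any adding means that returns its input mixed with an
    indirect (e.g., reverberant) component, sample-aligned with it."""
    with_indirect = reverb_unit(performance)  # dry sound + indirect sound
    return np.asarray(with_indirect) - np.asarray(performance)
```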
The above invention may include first processing means for adjusting a supplied sound and supplying it to the adding means, and second processing means for adjusting the sound output from the adding means in accordance with the characteristics of sound emitting means, and the performance sound may be supplied to the first processing means.
In the above invention, the input means may receive the input of the performance sound via a microphone, the sound processing device may include means for applying, to the input performance sound, howling reduction processing for reducing howling at the microphone, and the generation means may generate an indirect sound component corresponding to the performance sound that has undergone the howling reduction processing.
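As one hedged illustration of such howling reduction, the sketch below notches out a suspected feedback frequency before the indirect sound component is generated. Estimating the feedback frequency from the strongest spectral peak is a common heuristic assumed here; the specification does not prescribe a particular method.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def howling_reduce(mic_in, sr, q=30.0):
    """Crude stand-in for howling (acoustic feedback) suppression:
    notch out the strongest spectral peak of the microphone signal."""
    spec = np.abs(np.fft.rfft(mic_in))
    freqs = np.fft.rfftfreq(len(mic_in), 1.0 / sr)
    f0 = freqs[np.argmax(spec)]            # suspected feedback frequency
    if f0 <= 0.0 or f0 >= sr / 2.0:
        return mic_in                      # nothing plausible to notch
    b, a = iirnotch(f0, q, fs=sr)
    return lfilter(b, a, mic_in)
```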
1 sound processing device, 2 content reproduction device, 3 microphone, 4 electronic device, 5 electrical device, 6, 6A, 6B, 6C, 6D, 6E speaker, 11 CPU, 12 memory, 13 input unit, 14 output unit, 15 sound signal processing unit, 16 video signal processing unit, 101, 111, 121, 131, 141, 151, 161 performance sound adjustment unit, 102, 112, 122, 132, 142, 152, 162 preprocessing unit, 103, 113, 123, 133, 143, 153, 163 indirect sound component generation unit, 104, 114, 124, 134, 144, 154, 164 postprocessing unit, 105, 115, 125, 135, 145, 155, 165 output control unit, 119, 129, 139, 149, 159, 169 path, 126, 136, 146, 156, 166 content decoding unit, 136A specific component removal unit, 147 content sound adjustment unit, 147A indirect sound component removal unit, 148 input detection unit, 151A indirect sound component addition unit, 157 indirect sound component analysis unit, 158 performance sound type identification unit, 161A timbre adjustment unit, 167 first timbre analysis unit, 168 second timbre analysis unit, U user.

Claims (6)

  1.  A sound processing device comprising:
     input means for receiving an input of a user's performance sound;
     generation means for generating an indirect sound component corresponding to the performance sound; and
     output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
  2.  The sound processing device according to claim 1, further comprising:
     adding means for adding, to a supplied sound, an indirect sound component corresponding to the supplied sound,
     wherein the generation means supplies the performance sound to the adding means and generates the indirect sound component by removing the original performance sound from the sound output from the adding means.
  3.  The sound processing device according to claim 2, further comprising:
     first processing means for adjusting a supplied sound and supplying it to the adding means; and
     second processing means for adjusting the sound output from the adding means in accordance with the characteristics of sound emitting means,
     wherein the performance sound is supplied to the first processing means.
  4.  The sound processing device according to any one of claims 1 to 3, wherein
     the input means receives the input of the performance sound via a microphone,
     the sound processing device includes means for applying, to the input performance sound, howling reduction processing for reducing howling at the microphone, and
     the generation means generates an indirect sound component corresponding to the performance sound that has undergone the howling reduction processing.
  5.  A sound processing method comprising:
     a generation step of generating an indirect sound component corresponding to a user's performance sound; and
     an output control step of outputting the indirect sound component to output means while restricting output of the performance sound to the output means.
  6.  A program for causing a computer to function as:
     generation means for generating an indirect sound component corresponding to a user's performance sound; and
     output control means for outputting the indirect sound component to output means while restricting output of the performance sound to the output means.