
WO2022209557A1 - Electronic musical instrument, electronic musical instrument control method, and program - Google Patents

Electronic musical instrument, electronic musical instrument control method, and program

Info

Publication number
WO2022209557A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
chord
section
data
indicated
Prior art date
Application number
PCT/JP2022/009013
Other languages
French (fr)
Japanese (ja)
Inventor
正行 伊藤
Original Assignee
Foot-Skills合同会社
正行 伊藤
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foot-Skills合同会社, 正行 伊藤 filed Critical Foot-Skills合同会社
Publication of WO2022209557A1 publication Critical patent/WO2022209557A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 - Details of electrophonic musical instruments

Definitions

  • the present invention relates to an electronic musical instrument, an electronic musical instrument control method, and a program.
  • the present invention has been made in view of the above circumstances, and its object is to provide an electronic musical instrument imitating a stringed instrument such as a guitar in which, when a chord in a piece of music is played, a sound such as the singing at the beginning of the song is automatically reproduced based on the timing of the chord selection operation in the performance, and subsequent sounds such as singing are automatically reproduced in accordance with the progress of the performance based on the timing of the chord sounding operation.
  • the electronic musical instrument of the present invention comprises: section data reading means for sequentially reading stored music data section by section, each section delimited by its section length; comparison means for detecting a match between the chord indicated by the data included in the section read by the section data reading means and the chord selected by a chord selection operation; and first sound reproduction means for starting, based on the timing of the chord selection operation and if the result of detection by the comparison means is a match, reproduction of a first sound, which is the sound at the start of singing in the section indicated by the data included in that section.
  • it further comprises second sound reproduction means for sounding a chord based on the timing of a chord sounding operation and, if the result of detection by the comparison means is a match, starting reproduction of a second sound, which is the sound following the start of singing in the section indicated by the data included in that section.
  • FIG. 1 is a block diagram showing the configuration of a terminal device according to an embodiment of the present invention.
  • FIG. 2 is a diagram explaining a chord selection table according to the embodiment of the present invention.
  • FIG. 3 is a diagram for explaining how the player holds the terminal device according to the embodiment of the present invention.
  • FIG. 4 is a diagram for explaining display contents and controls on the display screen from the player's point of view of the terminal device according to the embodiment of the present invention.
  • FIG. 5 is a flow chart for explaining the configuration of main processing for realizing the automatic sound reproduction function according to the embodiment of the present invention.
  • FIG. 6 is a flow chart for explaining the configuration of performance processing for realizing the automatic sound reproduction function according to the embodiment of the present invention.
  • FIG. 7 is a diagram for explaining the operation of the terminal device according to the embodiment of the present invention when a performer plays a piece of music.
  • FIG. 1 is a block diagram showing the configuration of a terminal device 1 according to an embodiment of the invention.
  • a terminal device 1, which is an example of the electronic musical instrument of the present invention, is a smart device such as a smartphone or a tablet terminal, and has an application program installed therein for reproducing a guitar as an electronic musical instrument. When this application program is executed, the terminal device 1 functions as an electronic musical instrument imitating a guitar, and includes a configuration that realizes a function whereby, when a chord in a piece of music is played at a desired timing, sounds such as singing are automatically reproduced in accordance with the progress of the music, based on the timing of the chord selection operation or chord sounding operation in the performance. Details of the automatic sound reproduction function will be described later.
  • the CPU unit 10 may be equipped with a single CPU, or may be equipped with a plurality of CPUs.
  • the storage unit 11 is a storage device that includes an internal storage device and a main memory.
  • the internal storage device stores various programs executed by the CPU unit 10 and various data used by those programs, such as the music data, chord sound data, first sound data, and second sound data described later.
  • Main memory temporarily stores computer programs and information.
  • the operation unit 12 is, for example, an input device for receiving operations from the performer.
  • the display unit 13 is typically a liquid crystal display device. In addition, in the processing according to the present embodiment, a touch panel integrated with a liquid crystal screen is assumed as the operation unit 12 and the display unit 13 .
  • FIG. 2 is a diagram of a chord selection table.
  • the chords here refer to the 14 chords consisting of the major chords A, B, C, ..., G and the minor chords Am, Bm, Cm, ..., Gm.
  • the chord selection table shows chords to be selected according to the content indicated by the combination of the presence or absence of the touch operation on the operators of the first operator group, which will be described later.
  • the operation unit 12 has a touch sensor or the like, provided on the housing of the terminal device 1, that detects contact, and outputs operation data indicating the operation contents to the CPU unit 10 according to the player's operation. Furthermore, even when the touch sensor is touched at a plurality of points at the same time, the operation unit 12 outputs operation data for each touched point, so multi-point detection is possible.
  • the display unit 13 is a display device such as a liquid crystal display that displays an image on a display screen 130 provided in a partial area of the housing of the terminal device 1 .
  • the display unit 13 displays an image on the display screen 130 under the control of the CPU unit 10 .
  • Images displayed on the display screen 130 include menu screens, setting screens, and images such as operation screens of applications (see FIG. 4).
  • a touch sensor of the operation unit 12 is provided on the surface of the display screen 130 and functions as a touch panel.
  • the sound generating section 14 generates an audio signal representing the sound instructed by the sound generation instruction function realized by the CPU unit 10, and outputs the generated audio signal to the audio output unit 141. At this time, the output level of the audio signal may also be adjusted according to the instruction.
  • the audio output unit 141 has an amplifier unit that amplifies the audio signal input from the sound generator 14 and a sound output unit such as a speaker that outputs the amplified audio signal.
  • the sound generating section 14 and the audio output unit 141 thus function as sound generating means.
  • the interface 15 includes, for example, a connection terminal for wired connection with an external device, a wireless connection means for wireless connection, a communication means for connection via a base station or a network, etc., and transmits and receives various data to and from the connected external device.
  • FIG. 3 is a diagram illustrating how the terminal device 1 is held by a performer.
  • the terminal device 1 is placed near the bases of the four fingers (index, middle, ring, and little fingers) of one palm with the touch panel surface facing the fingertips, and is held by pinching it against the tip of the thumb, so that the fingertips of the other hand can operate the horizontal end portion of the touch panel.
  • FIG. 4 is an explanatory diagram of the display contents and operators on the display screen 130 of the terminal device 1 from the player's point of view.
  • the display screen 130 displays operation markers indicating the placement of each finger; the dashed lines and dash-dot lines, however, are not actually displayed.
  • each of the plurality of areas enclosed by the dashed lines indicates an area that the CPU unit 10 recognizes, while this image is displayed, as an operator for receiving operations on the touch sensor. That is, the CPU unit 10 assigns each operator to an area on the touch sensor. The operators consist of first operators 121-1, 121-2, ..., 121-4 enclosed by dashed lines (referred to collectively as first operators 121 when not distinguished from one another), and a second operator 122 enclosed by a dash-dot line.
  • the touch sensor functions as operating means for the first operator 121 and the second operator 122 provided in each area.
  • each of the first operators 121 is an operator that accepts an operation for designating a chord; they are arranged horizontally adjacent to one another at the vertical end of the screen, and operation with the four fingers (index, middle, ring, and little fingers) of one of the player's hands is assumed.
  • the second operator 122 is an operator that accepts an operation for designating the sounding timing of a chord; it is arranged at the horizontal end of the screen, and a sliding operation with a finger of the performer's other hand is assumed.
  • the display screen 130 displays operation markers and the like indicating the placement of each finger so that the player can recognize the area on the touch panel corresponding to each operator. Furthermore, song titles, chord names, selection method icons, and lyrics are displayed and presented to the performer as musical scores.
  • Fig. 5 is a flowchart for explaining the configuration of the main processing that realizes the automatic sound reproduction function.
  • after first executing initialization processing (step S501), the CPU unit 10 repeatedly executes the series of processing from steps S502 to S505.
  • the CPU unit 10 determines whether any piece of music to be played has been selected (step S502). If the determination is YES, the CPU unit 10 executes music data reading processing (step S503); here, data such as the song title, lyrics, and number of sections are read. When the processing of step S503 is completed, the process proceeds to step S504, and information based on the read music data is displayed on the screen. Next, in step S505, performance processing including the automatic sound reproduction function is executed. Details of the performance processing will be described later with reference to FIG. 6. If the determination in step S502 is NO, the CPU unit 10 skips the processing of steps S503 to S505.
  • in the performance processing, the CPU unit 10 executes processing that includes the automatic sound reproduction function. Details of this processing will be described later with reference to FIG. 6.
  • the series of processing from steps S601 to S612 is repeatedly executed, and when an operation is performed on the terminal device 1, processing corresponding to that operation is carried out.
  • the CPU unit 10 executes section data reading processing (step S601).
  • data such as chord information, first sound and second sound are read.
  • the CPU unit 10 turns off the first sound flag (step S602).
  • the CPU unit 10 determines whether the first sound is included in the section data read in step S601 (step S603). If the determination is YES, the CPU unit 10 turns ON the first sound flag (step S604). If the determination in step S603 is NO, the CPU unit 10 skips the processing of step S604.
  • the series of processing from steps S605 to S612 is repeatedly executed while the section data read in step S601 is played.
  • the CPU unit 10 determines whether a performance end operation has been performed (step S605). If the determination is YES, the CPU unit 10 terminates the performance processing of step S505 illustrated in the flowchart of FIG. 5. If the determination in step S605 is NO, the CPU unit 10 continues the performance processing.
  • the CPU unit 10 determines whether or not a selection operation has been detected (step S606). If the determination is YES, the CPU unit 10 acquires chord information associated in advance from a combination of presence/absence of touch operations on the manipulators of the first manipulator group (step S607). If the determination in step S606 is NO, the CPU unit 10 skips the processing of steps S607 through S611.
  • the CPU unit 10 determines whether or not the first sound flag is ON (step S608). If the determination is YES, the CPU unit 10 determines whether or not the chord information of the section data matches the chord information selected by the chord selection operation (step S609). If the determination is YES, the CPU section 10 starts reproducing the first sound (step S610). Next, the CPU unit 10 turns off the first sound flag (step S611). If the determination in step S609 is NO, the CPU section 10 skips the processing of steps S610 through S611. If the determination in step S608 is NO, the CPU section 10 skips the processing of steps S609 through S611.
  • the CPU unit 10 determines whether or not a sounding operation has been detected (step S612). If the determination is YES, the CPU unit 10 pronounces the selected chord (step S613). Next, the CPU unit 10 determines whether or not the chords match (step S614). If the determination is YES, the CPU unit 10 determines whether or not there is a sound currently being reproduced (step S615). If the determination is YES, the CPU unit 10 silences the sound (step S616). If the determination in step S615 is NO, the CPU unit 10 skips the process of step S616. Next, the CPU unit 10 starts playing the second sound (step S617). If the determination in step S614 is NO, the CPU section 10 skips the processing of steps S615 to S617. If the determination in step S612 is NO, the CPU unit 10 skips the processes of steps S613 and S614.
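The FIG. 6 flow just described (steps S601 to S617: the first sound flag, the chord match on a selection operation, and the second sound on a sounding operation) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the section-data shape (a dict with "chord", "first_sound", and "second_sound") and the class name are assumptions made for this sketch.

```python
# Sketch of one section's worth of the FIG. 6 performance processing.
# The section dict shape is an assumption; the step numbers follow the text.
class SectionPerformer:
    def __init__(self, section):
        self.section = section                                    # S601
        self.first_flag = section.get("first_sound") is not None  # S602-S604
        self.selected = None
        self.playing = None

    def on_selection(self, chord):
        """Chord selection operation detected (S606)."""
        self.selected = chord                                    # S607
        if self.first_flag and chord == self.section["chord"]:   # S608, S609
            self.playing = self.section["first_sound"]           # S610
            self.first_flag = False                              # S611

    def on_sounding(self):
        """Chord sounding operation detected (S612)."""
        sounded = self.selected                  # S613: sound the chord
        if sounded == self.section["chord"]:     # S614: chords match?
            # S615-S616: mute any currently playing sound, then S617:
            self.playing = self.section["second_sound"]
        return sounded
```

A mismatched selection leaves the first sound unplayed, and a matching sounding operation replaces whatever is playing with the second sound, mirroring the mute-then-play behavior of steps S615 to S617.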
  • FIG. 7 is a diagram for explaining the operation of the terminal device 1 of the present invention when a player performs a performance.
  • in FIG. 7, chord performance sections T1, T2, ... and their lyrics are shown, together with time points t0, t1, .... Below them are examples of the timing of the chord selection operations and chord sounding operations performed by the performer on the music data. In the lower row, the playback intervals of the sounds automatically reproduced by these operations are drawn as elongated rectangles, with the lyrics of the sound played in each interval. t0', t1', ..., t6' indicate the sound reproduction start time of each section.
  • the section data of T1 is read by section data reading processing (step S601).
  • when the performer performs an operation that selects a chord E matching the chord of the section data and then performs a sounding operation, the previously reproduced second sound, if still playing, is muted, and reproduction of the second sound of the current section data is started. Next, the section data of T3 is read by the section data reading process (step S601).
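The FIG. 7 timeline can be sketched end to end: for each section, a matching selection operation starts the first sound once, and a matching sounding operation starts the second sound and advances to the next section. The section tuples (chord, first sound, second sound) and the event list are illustrative assumptions, not data from the patent.

```python
# Hedged sketch of the FIG. 7 behavior across successive sections.
def play_song(sections, events):
    """sections: list of (chord, first_sound, second_sound);
    events: list of ("select" | "sound", chord) pairs."""
    log, idx = [], 0
    first_pending = sections[0][1] is not None
    for kind, chord in events:
        if idx >= len(sections):
            break                                  # song finished
        target_chord, first, second = sections[idx]
        if kind == "select" and first_pending and chord == target_chord:
            log.append(first)                      # start the first sound
            first_pending = False
        elif kind == "sound" and chord == target_chord:
            log.append(second)                     # mute, then second sound
            idx += 1                               # read next section (S601)
            if idx < len(sections):
                first_pending = sections[idx][1] is not None
    return log
```

Operations whose chord does not match the current section's chord are simply ignored, so the reproduction log advances only in step with correct play.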
  • REFERENCE SIGNS LIST: 1 terminal device; 10 CPU unit; 101 touch operation specifying unit; 102 operator specifying unit; 103 sounding operation specifying unit; 104 sounding instruction unit; 11 storage unit; 12 operation unit; 121 first operator; 122 second operator; 13 display unit; 130 display screen; 14 sound generator; 140 sound source unit; 141 sound output unit; 15 interface

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

[Problem] To provide an electronic musical instrument imitating a stringed instrument such as a guitar, in which when a chord in a musical piece is played at a desired timing, a sound of singing or the like is automatically reproduced according to the progress of playing. [Solution] Provided is an electronic musical instrument imitating a stringed instrument such as a guitar, wherein when a chord in a musical piece is played, a sound of singing or the like of a first part of a song is automatically reproduced on the basis of the timing of a chord selection operation in the playing, and a succeeding sound of singing or the like is automatically reproduced according to the progress of playing on the basis of the timing of a chord emission operation.

Description

Electronic musical instrument, electronic musical instrument control method, and program
The present invention relates to an electronic musical instrument, an electronic musical instrument control method, and a program.
There is an electronic musical instrument that, when a performer designates an identifier indicating a priority tone, advances automatic performance of the designated priority tone and of the sounds other than the priority tone between the designated priority tone and the priority tone of the next section (see, for example, Patent Document 1).
JP-A-57-129495; JP 2018-163183 A
In the techniques described in Patent Documents 1 and 2, by designating an identifier indicating a chord or priority tone, automatic performance of the interval up to the next chord or priority tone can be advanced. However, depending on the piece of music, singing may begin before the instrumental performance starts, and it was difficult for the performer to specify both the instrumental performance and the playback timing of sounds such as singing.
The present invention has been made in view of the above circumstances, and its object is to provide an electronic musical instrument imitating a stringed instrument such as a guitar in which, when a chord in a piece of music is played, a sound such as the singing at the beginning of the song is automatically reproduced based on the timing of the chord selection operation in the performance, and subsequent sounds such as singing are automatically reproduced in accordance with the progress of the performance based on the timing of the chord sounding operation.
In order to achieve the above object, the electronic musical instrument of the present invention comprises:
section data reading means for sequentially reading stored music data section by section, each section delimited by its section length;
comparison means for detecting a match between the chord indicated by the data included in the section read by the section data reading means and the chord selected by a chord selection operation;
first sound reproduction means for starting, based on the timing of the chord selection operation and if the result of detection by the comparison means is a match, reproduction of a first sound, which is the sound at the start of singing in the section indicated by the data included in that section; and
second sound reproduction means for sounding a chord based on the timing of a chord sounding operation and, if the result of detection by the comparison means is a match, starting reproduction of a second sound, which is the sound following the start of singing in the section indicated by the data included in that section.
In the present invention, when a chord in a musical piece is played at a desired timing, sounds such as singing are automatically reproduced in accordance with the progress of the musical piece based on the timing of the chord selection operation or the chord sounding operation in the performance; thus, an electronic musical instrument imitating a stringed instrument such as a guitar can be provided.
FIG. 1 is a block diagram showing the configuration of a terminal device according to an embodiment of the present invention. FIG. 2 is a diagram explaining a chord selection table according to the embodiment. FIG. 3 is a diagram explaining how the player holds the terminal device according to the embodiment. FIG. 4 is a diagram explaining the display contents and operators on the display screen, from the player's point of view, of the terminal device according to the embodiment. FIG. 5 is a flowchart explaining the configuration of the main processing that realizes the automatic sound reproduction function according to the embodiment. FIG. 6 is a flowchart explaining the configuration of the performance processing that realizes the automatic sound reproduction function according to the embodiment. FIG. 7 is a diagram explaining the operation of the terminal device according to the embodiment when a performer plays a piece of music.
<Embodiment>
FIG. 1 is a block diagram showing the configuration of a terminal device 1 according to an embodiment of the invention.
A terminal device 1, which is an example of the electronic musical instrument of the present invention, is a smart device such as a smartphone or a tablet terminal, and has an application program installed therein for reproducing a guitar as an electronic musical instrument.
When this application program is executed, the terminal device 1 functions as an electronic musical instrument imitating a guitar, and includes a configuration that realizes a function whereby, when a chord in a piece of music is played at a desired timing, sounds such as singing are automatically reproduced in accordance with the progress of the music, based on the timing of the chord selection operation or chord sounding operation in the performance.
Details of the automatic sound reproduction function will be described later.
[Configuration]
The CPU unit 10 may be equipped with a single CPU, or may be equipped with a plurality of CPUs.
The storage unit 11 is a storage device that includes an internal storage device and a main memory. The internal storage device stores various programs executed by the CPU unit 10 and various data used by those programs, such as the music data, chord sound data, first sound data, and second sound data described later. The main memory temporarily stores computer programs and information.
The operation unit 12 is, for example, an input device for receiving operations from the performer.
The display unit 13 is typically a liquid crystal display device.
In addition, in the processing according to the present embodiment, a touch panel integrated with a liquid crystal screen is assumed as the operation unit 12 and the display unit 13 .
FIG. 2 is a diagram of a chord selection table.
The chords here refer to the 14 chords consisting of the major chords A, B, C, ..., G and the minor chords Am, Bm, Cm, ..., Gm.
The chord selection table shows chords to be selected according to the content indicated by the combination of the presence or absence of the touch operation on the operators of the first operator group, which will be described later.
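The chord selection table can be sketched in code. The patent names the 14 chords but does not disclose the actual mapping from touch combinations to chords, so the mapping below is a hypothetical enumeration of the non-empty combinations of the four first operators; only the chord names come from the description.

```python
# Hypothetical chord-selection table: enumerate touch combinations of the
# four first operators (121-1..121-4) in binary order. The real table in
# FIG. 2 is not disclosed here, so this mapping is an assumption.
MAJOR = ["A", "B", "C", "D", "E", "F", "G"]
MINOR = [name + "m" for name in MAJOR]
CHORDS = MAJOR + MINOR  # the 14 chords named in the text

def select_chord(touches):
    """touches: tuple of 4 booleans, one per first operator 121-1..121-4."""
    index = sum(bit << i for i, bit in enumerate(touches))
    if index == 0:
        return None  # no operator touched: no chord selected
    return CHORDS[(index - 1) % len(CHORDS)]
```

With four operators there are 15 non-empty combinations for 14 chords, so one combination wraps around in this sketch; a real table would assign combinations explicitly.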
The operation unit 12 has a touch sensor or the like provided on the housing of the terminal device 1 to detect contact, and outputs operation data indicating the operation contents to the CPU unit 10 according to the player's operation.
Furthermore, even when the touch sensor is touched at a plurality of points at the same time, the operation unit 12 outputs operation data for each touched point, so multi-point detection is possible.
The display unit 13 is a display device such as a liquid crystal display that displays an image on a display screen 130 provided in a partial area of the housing of the terminal device 1 .
The display unit 13 displays an image on the display screen 130 under the control of the CPU unit 10 .
Images displayed on the display screen 130 include menu screens, setting screens, and images such as operation screens of applications (see FIG. 4).
In this example, a touch sensor of the operation unit 12 is provided on the surface of the display screen 130 and functions as a touch panel.
The sound generating section 14 generates an audio signal representing the sound instructed by the sound generation instruction function realized by the CPU unit 10, and outputs the generated audio signal to the audio output unit 141.
At this time, the output level of the audio signal may also be adjusted according to the instruction.
The audio output unit 141 has an amplifier unit that amplifies the audio signal input from the sound generator 14 and a sound output unit such as a speaker that outputs the amplified audio signal.
Thus, the sound generating section 14 and the audio output unit 141 function as sound generating means.
The interface 15 includes, for example, a connection terminal for wired connection with an external device, a wireless connection means for wireless connection, a communication means for connection via a base station or a network, etc., and transmits and receives various data to and from the connected external device.
The configuration of each part of the terminal device 1 has been described above.
[Operation]
Next, a description will be given of the sound generation instruction function when the CPU section 10 executes an application program to function as an electronic musical instrument that imitates a guitar.
First, the CPU unit 10 controls the display unit 13 to display on the display screen 130 an image such as that shown in FIG. 4, representing operation markers that indicate the placement of each finger, and assigns an operator to each area on the touch sensor.
The manner in which the player holds the terminal device 1, and the display contents and controls will be described.
FIG. 3 is a diagram illustrating how the terminal device 1 is held by a performer.
The terminal device 1 is placed near the bases of the four fingers (index, middle, ring, and little fingers) of one palm with the touch panel surface facing the fingertips, and is held by pinching it against the tip of the thumb, so that the fingertips of the other hand can operate the horizontal end portion of the touch panel.
FIG. 4 is an explanatory diagram of the display contents and operators on the display screen 130 of the terminal device 1 from the player's point of view.
As shown in FIG. 4, the display screen 130 displays operation markers indicating the placement of each finger; the dashed lines and dash-dot lines, however, are not actually displayed.
Each of the plurality of areas surrounded by the dashed lines indicates an area that the CPU unit 10 recognizes as an operator for receiving each operation regarding the operation on the touch sensor while this image is being displayed.
That is, the CPU unit 10 assigns each operator to each area on the touch sensor.
These operators consist of first operators 121-1, 121-2, ..., 121-4 enclosed by dashed lines (referred to collectively as first operators 121 when not distinguished from one another), and a second operator 122 enclosed by a dash-dot line.
Thus, the touch sensor functions as operating means for the first operator 121 and the second operator 122 provided in each area.
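The assignment of touch-sensor areas to the first operators 121-1 to 121-4 and the second operator 122 can be sketched as follows. The screen coordinates are illustrative assumptions only; the text does not give any dimensions.

```python
# Sketch of mapping touch points to the operator regions of FIG. 4.
# The (x0, y0, x1, y1) rectangles are assumptions for illustration.
REGIONS = {
    "121-1": (0, 0, 100, 60),
    "121-2": (100, 0, 200, 60),
    "121-3": (200, 0, 300, 60),
    "121-4": (300, 0, 400, 60),
    "122": (0, 500, 400, 560),  # second operator at the horizontal end
}

def hit_operators(points):
    """Multi-point detection: map each touch point to the operator it hits."""
    hits = []
    for x, y in points:
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                hits.append(name)
                break
    return hits
```

Because the operation unit reports every touched point, a chord selection (several first operators at once) and a sounding slide on the second operator can be distinguished purely by which regions the points fall in.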
Each of the first operators 121 accepts an operation for designating a chord. The first operators are arranged horizontally adjacent to one another at the vertical end of the panel, and operation by the four fingers (index, middle, ring, and little) of one of the performer's hands is assumed.
The second operator 122 accepts an operation for designating the sounding timing of a chord. It is arranged at the horizontal end of the panel, and a slide operation by a finger of the performer's other hand is assumed.
As shown in FIG. 4, the display screen 130 displays operation markers indicating the placement of each finger so that the performer can recognize the touch-panel areas corresponding to the operators. In addition, the song title, chord names, selection-method icons, and lyrics are displayed, presenting the performer with a musical score.
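The description later states (step S607) that chord information is "associated in advance" with each combination of touched and untouched first operators. A minimal sketch of that association is shown below; the specific chord table and function names are hypothetical illustrations, since the patent does not specify the actual mapping.

```python
# Hypothetical sketch: map the touch state of the four first operators
# (index, middle, ring, little finger) to a chord name. The chord table
# below is an assumed example; the patent only says chord information
# is "associated in advance" with each touch combination.

CHORD_TABLE = {
    (1, 0, 0, 0): "C",
    (0, 1, 0, 0): "G",
    (0, 0, 1, 0): "Am",
    (0, 0, 0, 1): "F",
    (1, 1, 0, 0): "E",
}

def chord_from_touches(touches):
    """Return the chord assigned to a touch combination, or None if unmapped."""
    return CHORD_TABLE.get(tuple(touches))

print(chord_from_touches([1, 0, 0, 0]))  # C
print(chord_from_touches([1, 1, 0, 0]))  # E
```

With four operators there are up to sixteen touch combinations, so a dictionary keyed on the touch pattern suffices for the lookup in step S607.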
FIG. 5 is a flowchart explaining the structure of the main processing that realizes the automatic sound-reproduction function.
The CPU unit 10 first executes initialization processing (step S501) and then repeatedly executes the series of steps S502 through S505.
The CPU unit 10 determines whether a piece of music to be played has been selected (step S502). If the determination is YES, the CPU unit 10 executes music-data reading processing (step S503), reading data such as the song title, the lyrics, and the number of sections. When step S503 is completed, processing proceeds to step S504, where information based on the read music data is displayed on the screen. Then, in step S505, performance processing including the automatic sound-reproduction function is executed; details of the performance processing are described later with reference to FIG. 6. If the determination in step S502 is NO, the CPU unit 10 skips steps S503 through S505.
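The control flow of steps S502 through S505 can be sketched as follows. The function and data-structure names are assumptions for illustration only; the patent specifies the flowchart, not an implementation.

```python
# Minimal sketch of the FIG. 5 main processing (steps S502-S505).
# Names are hypothetical; the patent describes only the control flow.

def main_processing(selected_song, song_library):
    """One pass of the S502-S505 loop. Returns the loaded state,
    or None when no song has been selected (S503-S505 skipped)."""
    # S502: has a piece of music been selected?
    if selected_song is None:
        return None
    # S503: read music data (title, lyrics, number of sections, ...)
    song = song_library[selected_song]
    # S504: display information based on the read data (stubbed as a string)
    display = f"{song['title']} / sections: {song['sections']}"
    # S505: performance processing (FIG. 6) would run here
    return {"song": song, "display": display}

library = {"demo": {"title": "Demo Song", "lyrics": ["la"], "sections": 5}}
print(main_processing("demo", library)["display"])
print(main_processing(None, library))
```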
Next, the CPU unit 10 executes the performance processing, which includes the automatic sound-reproduction function. Details of this processing are described below with reference to FIG. 6.
The main processing of steps S601 through S612 is executed repeatedly; whenever an operation is performed on the terminal device 1, processing corresponding to that operation is carried out.
The CPU unit 10 executes section-data reading processing (step S601), reading data such as the chord information, the first sound, and the second sound.
The CPU unit 10 turns the first-sound flag OFF (step S602).
Next, the CPU unit 10 determines whether a first sound is included in the section data read in step S601 (step S603). If the determination is YES, the CPU unit 10 turns the first-sound flag ON (step S604). If the determination in step S603 is NO, the CPU unit 10 skips step S604.
The processing of steps S605 through S612 is executed repeatedly while the section data read in step S601 is being played.
The CPU unit 10 determines whether a performance-end operation has been performed (step S605). If the determination is YES, the CPU unit 10 terminates the performance processing of step S505 illustrated in the flowchart of FIG. 5. If the determination in step S605 is NO, the CPU unit 10 skips the termination of the performance processing.
The CPU unit 10 determines whether a selection operation has been detected (step S606). If the determination is YES, the CPU unit 10 acquires the chord information associated in advance with the combination of touched and untouched operators of the first operator group (step S607). If the determination in step S606 is NO, the CPU unit 10 skips steps S607 through S611.
The CPU unit 10 determines whether the first-sound flag is ON (step S608). If the determination is YES, the CPU unit 10 determines whether the chord information of the section data matches the chord information selected by the chord selection operation (step S609). If they match, the CPU unit 10 starts reproduction of the first sound (step S610) and then turns the first-sound flag OFF (step S611). If the determination in step S609 is NO, the CPU unit 10 skips steps S610 and S611; if the determination in step S608 is NO, the CPU unit 10 skips steps S609 through S611.
The CPU unit 10 determines whether a sounding operation has been detected (step S612). If the determination is YES, the CPU unit 10 sounds the currently selected chord (step S613) and then determines whether the chords match (step S614). If they match, the CPU unit 10 determines whether any sound is currently being reproduced (step S615); if so, that sound is muted (step S616), and otherwise step S616 is skipped. The CPU unit 10 then starts reproduction of the second sound (step S617). If the determination in step S614 is NO, the CPU unit 10 skips steps S615 through S617; if the determination in step S612 is NO, the CPU unit 10 skips steps S613 and S614.
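The per-section behavior of steps S601 through S617 can be sketched as a small state machine. The class, method, and event names below are assumptions for illustration; the patent specifies only the control flow of FIG. 6.

```python
# Minimal sketch of the FIG. 6 performance processing for one section
# (steps S601-S617). Names and the section-data format are hypothetical.

class SectionPlayer:
    def __init__(self, section):
        # S601: section data holds the expected chord and two sounds
        self.section = section
        # S602-S604: first-sound flag is ON only if a first sound exists
        self.first_sound_flag = section.get("first_sound") is not None
        self.selected_chord = None
        self.playing = None          # sound currently being reproduced
        self.log = []                # record of actions, for illustration

    def on_select(self, chord):
        # S606-S607: a selection operation yields chord information
        self.selected_chord = chord
        # S608-S611: start the first sound once, on a matching selection
        if self.first_sound_flag and chord == self.section["chord"]:
            self.playing = self.section["first_sound"]
            self.log.append(("play", self.playing))
            self.first_sound_flag = False

    def on_strum(self):
        # S612-S613: a sounding operation sounds the selected chord
        self.log.append(("chord", self.selected_chord))
        # S614-S617: on a match, mute any current sound, start the second sound
        if self.selected_chord == self.section["chord"]:
            if self.playing is not None:
                self.log.append(("mute", self.playing))
            self.playing = self.section["second_sound"]
            self.log.append(("play", self.playing))

player = SectionPlayer(
    {"chord": "C", "first_sound": "intro-vocal", "second_sound": "verse-vocal"})
player.on_select("C")   # starts the first sound (S610)
player.on_strum()       # mutes it and starts the second sound (S616-S617)
print(player.log)
```

This mirrors the flowchart: the first sound plays once per section on a matching selection, while each matching sounding operation mutes any running sound before starting the second sound.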
FIG. 7 is a diagram explaining the operation of the terminal device 1 of the present invention when a performance is given by a player.
At the top of FIG. 7, the chord performance sections T1, T2, ... T5 of the music data that the performer intends to play are arranged in time series, and the chord and lyrics of each section are shown. The times t0, t1, ..., t6 indicate the performance start time of each section. The row below shows an example of the timing of the chord selection operations and chord sounding operations performed by the performer on the music data. The row below that shows, as elongated rectangles, the playback sections of the sounds automatically reproduced by these operations, together with the lyrics of the sound reproduced in each section. The times t0', t1', ..., t6' indicate the sound-reproduction start time of each section.
First, the section data of T1 is read by the section-data reading processing (step S601).
When the performer performs an operation at time t0' to select chord C, which matches the chord of the section data, reproduction of the first sound of the section data is started.
When the performer performs a sounding operation at time t1' with chord C, which matches the chord of the section data, still selected, reproduction of the second sound of the section data is started, and the section data of T2 is then read by the section-data reading processing.
Between t1' and t2', the performer plays chord C several times, and chord sounds are produced in response to this performance.
When the performer performs an operation at time t2' to select chord E, which matches the chord of the section data, and then performs a sounding operation, the previously reproduced second sound is muted if it is still playing. Reproduction of the second sound of the section data is then started, and the section data of T3 is read by the section-data reading processing (step S601).
Between t2' and t3', the performer plays chord E several times, and chord sounds are produced accordingly.
REFERENCE SIGNS LIST: 1 terminal device; 10 CPU unit; 101 touch-operation specifying unit; 102 operator specifying unit; 103 sounding-operation specifying unit; 104 sounding instruction unit; 11 storage unit; 12 operation unit; 121 first operator; 122 second operator; 13 display unit; 130 display screen; 14 sounding unit; 140 sound source unit; 141 audio output unit; 15 interface

Claims (3)

  1.  An electronic musical instrument comprising:
     section data reading means for sequentially reading stored music data section by section, the sections being divided by their respective section lengths;
     comparison means for detecting a match between the chord indicated by the data included in the section read by the section data reading means and the chord selected by a chord selection operation;
     first sound reproduction means for starting, based on the timing of the chord selection operation, reproduction of a first sound, which is indicated by the data included in the section and is the sound at the start of singing of the section, if the result of detection by the comparison means is a match; and
     second sound reproduction means for starting, based on the timing of a chord sounding operation by which the chord is sounded, reproduction of a second sound, which is indicated by the data included in the section and is the sound following the start of singing of the section, if the result of detection by the comparison means is a match.
  2.  A method of controlling a computer, wherein:
     section data reading means sequentially reads stored music data section by section, the sections being divided by their respective section lengths;
     comparison means detects a match between the chord indicated by the data included in the section read by the section data reading means and the chord selected by a chord selection operation;
     first sound reproduction means starts, based on the timing of the chord selection operation, reproduction of a first sound, which is indicated by the data included in the section and is the sound at the start of singing of the section, if the result of detection by the comparison means is a match; and
     second sound reproduction means starts, based on the timing of a chord sounding operation by which the chord is sounded, reproduction of a second sound, which is indicated by the data included in the section and is the sound following the start of singing of the section, if the result of detection by the comparison means is a match.
  3.  A method of controlling a computer in which the computer executes each of the following steps:
     a section data reading step of sequentially reading stored music data section by section, the sections being divided by their respective section lengths;
     a comparison step of detecting a match between the chord indicated by the data included in the section read by the section data reading step and the chord selected by a chord selection operation;
     a first sound reproduction step of starting, based on the timing of the chord selection operation, reproduction of a first sound, which is indicated by the data included in the section and is the sound at the start of singing of the section, if the result of detection by the comparison step is a match; and
     a second sound reproduction step of starting, based on the timing of a chord sounding operation by which the chord is sounded, reproduction of a second sound, which is indicated by the data included in the section and is the sound following the start of singing of the section, if the result of detection by the comparison step is a match.
PCT/JP2022/009013 2021-03-27 2022-03-03 Electronic musical instrument, electronic musical instrument control method, and program WO2022209557A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-054440 2021-03-27
JP2021054440A JP6991620B1 (en) 2021-03-27 2021-03-27 Electronic musical instruments, control methods and programs for electronic musical instruments

Publications (1)

Publication Number Publication Date
WO2022209557A1 true WO2022209557A1 (en) 2022-10-06

Family

ID=80185524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/009013 WO2022209557A1 (en) 2021-03-27 2022-03-03 Electronic musical instrument, electronic musical instrument control method, and program

Country Status (2)

Country Link
JP (1) JP6991620B1 (en)
WO (1) WO2022209557A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57129495A (en) * 1981-02-03 1982-08-11 Nippon Musical Instruments Mfg Electronic musical instrument
JP2019184935A (en) * 2018-04-16 2019-10-24 カシオ計算機株式会社 Electronic musical instrument, control method of electronic musical instrument, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6023467B2 (en) * 2012-05-21 2016-11-09 株式会社河合楽器製作所 Automatic accompaniment device for electronic keyboard instruments

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57129495A (en) * 1981-02-03 1982-08-11 Nippon Musical Instruments Mfg Electronic musical instrument
JP2019184935A (en) * 2018-04-16 2019-10-24 カシオ計算機株式会社 Electronic musical instrument, control method of electronic musical instrument, and program

Also Published As

Publication number Publication date
JP2022151396A (en) 2022-10-07
JP6991620B1 (en) 2022-01-12

Similar Documents

Publication Publication Date Title
US8536437B2 (en) Musical score playing device and musical score playing program
JPWO2005062289A1 (en) Musical score display method using a computer
WO2008004690A1 (en) Portable chord output device, computer program and recording medium
US20050257667A1 (en) Apparatus and computer program for practicing musical instrument
WO2022209557A1 (en) Electronic musical instrument, electronic musical instrument control method, and program
JP6589356B2 (en) Display control device, electronic musical instrument, and program
JP5387642B2 (en) Lyric telop display device and program
JP2006106641A (en) Electronic musical device
JP2005249844A (en) Device and program for performance indication
JP5394301B2 (en) Timing designation device, music playback device, karaoke system, and timing designation method
JP5935815B2 (en) Speech synthesis apparatus and program
JP5510207B2 (en) Music editing apparatus and program
JP6774889B2 (en) Karaoke equipment, programs
US20130204628A1 (en) Electronic apparatus and audio guide program
JP5590350B2 (en) Music performance device and music performance program
JP2001013964A (en) Playing device and recording medium therefor
JP5888295B2 (en) Performance information display device, program
JP2570214B2 (en) Performance information input device
JP6782491B2 (en) Musical tone generator, musical tone generator and program
JP3706386B2 (en) Karaoke device characterized by key change user interface
JP2012220884A (en) Performance evaluation device and performance evaluation program
JP5754449B2 (en) Music code score generator
JP2023092596A (en) Information processor, electronic musical instrument, pitch picking-out system, method, and program
JP6196571B2 (en) Performance device and program
JP3760940B2 (en) Automatic performance device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22779782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22779782

Country of ref document: EP

Kind code of ref document: A1