WO2016204069A1 - Input device, object detection device, and method thereof
- Publication number
- WO2016204069A1 (PCT/JP2016/067260)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection
- peak
- label
- detection position
- unit
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
- G06F3/0418—Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
- G06F2203/04104—Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
Definitions
- the present invention relates to an input device that inputs information according to the proximity state of an object to a detection surface.
- Patent Document 1 discloses an example of such a hovering function: as the distance between the finger and the panel surface decreases, the detection resolution and the detection sensitivity are changed step by step, and scanning is performed sequentially at each step to detect the finger.
- the object is specified by identifying and separating the object on the image data based on the change in capacitance.
- as methods for identifying an object, there are a method using template matching and a method using binarization.
- in template matching, a template of the object to be identified is stored, and a plurality of objects can be identified by searching the image for portions that match the template.
- in binarization, a plurality of objects can be separated and recognized by binarizing the image data and applying labeling to the binarized data.
- Template matching is effective when the object to be recognized is fixed, but multiple templates must be prepared to recognize various types of objects, so it is not well suited to recognizing diverse objects.
- in binarization, the number of recognized objects depends on the binarization threshold, and in some cases the number of fingers cannot be detected correctly.
- for example, when there are three objects and the threshold is set with a particular finger in mind, that threshold may be inappropriate for the other fingers, so only two objects may be recognized.
- a new recognition algorithm that does not use pattern recognition or binarization is required to recognize objects with different peak heights and shapes.
- the present invention has been made in view of such circumstances, and an object thereof is to provide an input device capable of recognizing a plurality of objects having different shapes and distances with high accuracy.
- An input device includes: a sensor unit that detects a proximity state of a plurality of objects at a plurality of detection positions; and a proximity region specifying unit that specifies the proximity regions of the plurality of objects based on detection data from the sensor unit.
- the proximity region specifying unit includes a peak position specifying unit that specifies a peak detection position at which the value of the detection data satisfies a predetermined peak condition among the plurality of detection positions, and a label applying unit that performs a label applying process: among the detection positions around the specified peak detection position, positions that are not yet labeled and whose detection data is equal to or greater than a first threshold, defined based on the detection data of the specified peak detection position, are given the label applied to the peak detection position.
- the peak position specifying unit of the input device specifies, before the labeling process by the label applying unit is started, a maximum peak detection position among the unlabeled detection positions whose detection data satisfies a predetermined peak condition, and the label applying unit performs the label applying process for the maximum peak detection position specified immediately before by the peak position specifying unit.
- the peak position specifying unit specifies the next maximum peak detection position after each label applying process by the label applying unit.
- accordingly, an optimal maximum peak detection position can be specified based on the state after labeling, each time the labeling process for a maximum peak detection position is performed.
- the peak position specifying unit of the input device specifies, before the labeling process is started by the labeling unit, a plurality of peak detection positions whose detection data values satisfy a predetermined peak condition among the plurality of detection positions, and the label assigning unit takes the specified peak detection positions as the maximum peak detection position in descending order of their detection data and performs the label applying process for each.
- the label applying unit specifies, as a contour, the detection positions whose values are equal to or greater than a first threshold that is smaller than the detection data of the peak detection position on which the labeling process is performed, and performs the labeling process for the detection positions inside each specified contour.
- since the contour is specified based on the first threshold, proximity regions can be specified and separated for each contour. Furthermore, since the labeling process is performed per contour, the range of the object can be specified while excluding noise regions: contours containing a peak detection position are labeled, and the others are left unlabeled.
- when there are two or more labels inside a contour, the label assigning unit assigns to each detection position in the contour the label of the nearer labeled position; when there is one label in the contour, it assigns that label to the detection positions in the contour.
- the peak position specifying unit specifies, as a peak detection position, a detection position from among the plurality of detection positions whose detection data is equal to or greater than a second threshold and larger than the detection data of the surrounding detection positions.
- this prevents a detection position whose surrounding positions have larger detection data from being treated as a peak position, and also prevents a detection position with small detection data from being treated as a peak even if it is larger than its surroundings; the peak detection position can thus be specified appropriately.
- the peak position specifying unit determines the second threshold value based on a maximum value of a change value of the detection data.
- the second threshold value can thereby be determined appropriately, and an appropriate peak detection position is specified. Therefore, it becomes possible to specify an appropriate proximity region excluding detection positions that cause noise.
- the input device includes a position specifying unit that specifies the proximity positions of the plurality of objects based on the label.
- the position specifying unit obtains the proximity position by obtaining the value of the center of gravity of the detection position in the proximity area to which the label is attached.
- the object detection device of the present invention includes a peak position specifying unit that, based on detection data indicating the proximity state of a plurality of objects at a plurality of detection positions, specifies a peak detection position at which the value of the detection data satisfies a predetermined peak condition among the plurality of detection positions, and a label applying unit that performs a label applying process: among the detection positions around the specified peak detection position, positions that are not yet labeled and whose detection data is equal to or greater than a first threshold, defined based on the detection data of the peak detection position, are given the label applied to the peak detection position.
- the object detection method of the present invention includes a peak position specifying step of specifying, based on detection data indicating the proximity states of the plurality of objects at the plurality of detection positions, a peak detection position at which the value of the detection data satisfies a predetermined peak condition among the plurality of detection positions, and a label applying step of performing a label applying process that applies, to detection positions having detection data equal to or greater than the first threshold, the label applied to the peak detection position.
- an input device capable of recognizing a plurality of objects having different shapes and distances with high accuracy can be provided.
- FIG. 4 is a diagram showing the input device according to the embodiment of the present invention, and a diagram explaining its outline.
- FIG. 7 is a diagram for explaining a case where the maximum peak detection position detection process is performed for the first time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- FIG. 7 is a diagram for explaining a case where the contour tracking process and the labeling process are performed for the first time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- FIG. 7 is a diagram for explaining a case where the detection processing of the maximum peak detection position is performed a second time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- FIG. 7 is a diagram for explaining a case where the contour tracking process and the label attaching process are performed for the second time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- FIG. 7 is a diagram for explaining a case where detection processing for a maximum peak detection position, contour tracking processing, and label addition processing are performed for the third time using the example illustrated in FIGS. 2 and 6 in the first embodiment of the present invention.
- FIG. 7 is a diagram for describing a case where the maximum peak detection position detection process is performed a fourth time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention, and a flowchart for explaining the processing of the proximity region specifying unit.
- FIG. 1 is a diagram showing an input device according to an embodiment of the present invention.
- the input device 1 is preferably a capacitive finger input device.
- the input device 1 includes a sensor unit 10, a proximity region specifying unit 20, and a PC 30.
- the sensor unit 10 includes a detection unit 11 and a sensor circuit 12.
- the sensor unit 10 detects the proximity state of a plurality of objects at a plurality of detection positions by reading the capacitance change caused by an approaching finger or the like, and passes the detection data 100 to the proximity region specifying unit 20.
- the proximity state is a state in which an object such as a finger is close to the input device 1, for example, a distance or a positional relationship of the close object to the input device 1.
- the detection unit 11 includes detection electrodes in a matrix; when an object such as a finger or a pen comes close to the detection electrodes, the capacitance formed between the detection electrodes and ground changes. The resulting capacitance change is sent to the sensor circuit 12.
- the sensor circuit 12 generates detection data 100 composed of multi-value data from the change amount and change value of the capacitance output from the detection unit 11.
- the detection data 100 will be described later with reference to FIGS.
- the proximity region specifying unit 20 specifies the proximity regions of a plurality of objects based on the detection data 100 from the sensor unit 10.
- the proximity region refers to, for example, a region occupied by a predetermined object on a matrix-like plane formed by the detection unit 11.
- a plurality of proximity regions are generated by a plurality of objects.
- the proximity region specifying unit 20 calculates the number of fingers and their coordinates based on the detection data 100 and outputs the calculation result to the PC 30.
- FIG. 2 is a diagram for explaining the outline of the detection data 100 output by the sensor unit shown in FIG.
- the detection data 100 is a value obtained by detecting a change in capacitance in units of a predetermined area with “99” as an upper limit value, and is, for example, data illustrated in FIG. 2.
- the detection data 100 is composed of a set of detection data 100x at each of the detection positions 105 in a matrix.
- detection data 100x indicating the value "90" is present at the detection position 105 in the third column from the left and the fourth row from the top. This is the largest value among all the detection data 100x; since the change in capacitance is "90" out of "99", it can be inferred that a finger has come close to this detection position 105.
- values such as 4 and 5 are shown in the lower right, and are extremely small values, so it can be inferred that the finger is not close to this vicinity.
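As a concrete illustration, the detection data 100 can be modeled as a small 2D grid of multi-value readings. This is a sketch only: the grid size and most values here are our own and are not taken from the patent's figures.

```python
# Illustrative sketch: a grid of capacitance-change values (0-99),
# loosely modeled on the kind of data shown in Fig. 2.
# The dimensions and most values are hypothetical.
detection_data = [
    [ 4,  8, 12,  9,  5],
    [ 7, 31, 47, 39,  6],
    [ 9, 49, 90, 67,  8],
    [ 6, 48, 82, 63,  7],
    [ 3,  5,  4,  6,  5],
]

# the largest value hints at the position a finger is closest to
peak_value = max(v for row in detection_data for v in row)
print(peak_value)  # 90
```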
- FIG. 3 is a diagram three-dimensionally showing an outline of the detection data 100 output from the sensor unit 10 shown in FIG.
- the detection data 100 is shown as the multi-value data itself in a plane.
- this multivalued data is shown as the height and shade of color.
- high swells are generated in the upper left, the lower right, and the upper right.
- Capacitance peaks occur around each vertex, and it is presumed from the multi-value data that a finger or the like is close to these. Note that when the proximity of a finger is represented not by multi-value data but by binary data divided by upper and lower thresholds, the outline can be captured, but the interior and the division of the outline are unclear, making it difficult to distinguish the proximity states of a plurality of objects as shown in the figure.
- the input device 1 according to the present embodiment can recognize objects having different peak heights and shapes with high accuracy.
- FIG. 4 is a functional block diagram of the input device 1 according to the embodiment of the present invention.
- the proximity region specifying unit 20 will be described in more detail with reference to FIG. 4.
- the proximity region specifying unit 20 includes a peak position specifying unit 40, a label applying unit 50, and a position specifying unit 60.
- the peak position specifying unit 40 specifies, among the plurality of peak detection positions 110 whose detection data 100 satisfies a predetermined peak condition among the plurality of detection positions 105, the maximum peak detection position 110max indicating the maximum value that satisfies a predetermined condition. At this time, label data described later and the detection data 100 are used.
- the label assigning unit 50 performs, for the detection positions 105 around the maximum peak detection position 110max specified by the peak position specifying unit 40 that are not labeled and whose detection data 100x is equal to or greater than the first threshold Th_Finger_p, a label applying process that applies the label already applied to the maximum peak detection position 110max.
- the label given here is a unique number or symbol that identifies each proximity region.
- the label is, for example, a serial number.
- the first label is number 1, the next is number 2, then number 3, and so on.
- the label assigning unit 50 includes a contour tracking unit 51 that forms a contour with respect to the detection data 100, and a label determination unit 52 that determines a label to be assigned to the formed contour.
- the contour tracking unit 51 identifies, as a contour, a detection position that is equal to or greater than the first threshold Th_Finger_p whose value is smaller than the detection data 100x of the maximum peak detection position 110max on which the labeling process is performed. At this time, label data, detection data 100, and maximum peak detection position 110max are used.
- the processing of the contour tracking unit 51 will be described later with reference to FIG. 9.
- the processing of the label determination unit 52 will be described later with reference to FIG. 10.
- the label determination unit 52 performs the label applying process for the detection positions 105 inside each specified contour. For each contour, when there are two or more labels in the contour, the label determination unit 52 assigns to each detection position 105 in the contour the label of the nearer labeled position; when there is one label in the contour, it assigns that label to the detection positions 105 in the contour.
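One plausible reading of this rule can be sketched as follows. Note the assumption: the patent only says the "closer" label is applied, so the use of squared Euclidean distance here is our own illustrative choice.

```python
def nearest_label(pos, labelled):
    """Return the label of the labelled position nearest to pos.

    labelled: dict mapping (row, col) -> label number.
    Squared Euclidean distance is an assumption; the text only
    says the 'closer' label is applied."""
    r, c = pos
    best = min(labelled, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
    return labelled[best]

# two labelled peaks; positions between them take the nearer label
labelled = {(0, 0): 1, (0, 4): 2}
print(nearest_label((0, 1), labelled))  # 1
print(nearest_label((0, 3), labelled))  # 2
```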
- the proximity region specifying unit 20 repeats the update of the maximum peak detection position by the peak position specifying unit 40 and the update of the labels by the label applying unit 50 until no peak detection position satisfying the condition of the maximum peak detection position 110max remains.
- the position specifying unit 60 specifies the proximity positions of the plurality of objects based on the labels given by the label giving unit 50.
- a label is assigned to each proximity region by the processing of the label applying unit 50.
- the position specifying unit 60 assigns predetermined position information to each label and its proximity region, and the positions of the label and the proximity region are determined by this position information.
- the proximity position serving as the coordinates of the label is specified, and if proximity position information already exists, it is updated.
- as the proximity position, for example, the value of the center of gravity of the detection positions in the proximity region to which the label is attached is used, but the value of the peak detection position 110 in the label may be used as is.
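A minimal sketch of the centre-of-gravity computation. Weighting each position by its detection data is our assumption; an unweighted mean of the positions would also fit the text.

```python
def proximity_position(labels, data, label):
    """Detection-data-weighted centre of gravity of the positions that
    carry `label`. Weighting by the data values is an assumption."""
    pts = [(r, c) for (r, c), l in labels.items() if l == label]
    weights = [data[r][c] for r, c in pts]
    total = sum(weights)
    row = sum(r * w for (r, _), w in zip(pts, weights)) / total
    col = sum(c * w for (_, c), w in zip(pts, weights)) / total
    return row, col

# two equally weighted cells -> centroid halfway between them
data = [[10, 0, 10]]
labels = {(0, 0): 1, (0, 2): 1}
print(proximity_position(labels, data, 1))  # (0.0, 1.0)
```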
- FIG. 5 is a flowchart for explaining the processing of the proximity region specifying unit 20 shown in FIG. 4.
- Step S701: The peak position specifying unit 40 detects the maximum peak detection position 110max based on the detection data 100 input from the sensor unit 10 and the label data from the label applying unit 50. At this time, the peak position specifying unit 40 detects the maximum peak detection position 110max only among the detection positions 105 that are not labeled.
- Step S702: The peak position specifying unit 40 determines whether the maximum peak detection position 110max was detected in step S701. If the determination is affirmative, the process proceeds to step S703; if negative, the series of processing ends. In the present embodiment, after the contour tracking process and the labeling process in step S703 are performed, the detection process for the maximum peak detection position in step S701 is performed again, and these are repeated until no maximum peak detection position remains.
- Step S703: The label assigning unit 50 performs the contour tracking process and the label assigning process for the maximum peak detection position 110max detected in step S701.
- Step S704: The position specifying unit 60 specifies the proximity positions of the plurality of objects based on the labels applied by the label applying unit 50 in step S703.
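The S701 to S704 control flow above can be sketched as a loop. This is a simplified stand-in: contour tracking is omitted, each peak simply receives a fresh label, and the peak values are the ones quoted for FIG. 6 while the positions are hypothetical.

```python
def run_pipeline(peaks):
    """Sketch of the Fig. 5 loop: repeatedly take the largest
    still-unlabelled peak and give it a fresh label, until no peak
    remains. Contour tracking (S703) is simplified away."""
    labels = {}
    next_label = 1
    remaining = list(peaks)           # (value, position) pairs
    while remaining:                  # S702: was a maximum peak found?
        value, pos = max(remaining)   # S701: detect the maximum peak
        labels[pos] = next_label      # S703: (simplified) labelling
        next_label += 1
        remaining.remove((value, pos))
    return labels                     # S704 would derive positions

# the three peak values of Fig. 6; labels go in descending peak order
labels = run_pipeline([(58, (1, 6)), (90, (3, 2)), (31, (5, 0))])
print(labels[(3, 2)])  # 1  (largest peak is labelled first)
```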
- FIG. 6 is a diagram for explaining the peak detection processing of the peak position specifying unit 40 shown in FIG. 4.
- the peak detection position 110 shown in FIG. 6 is a detection position 105 that satisfies the peak condition: its detection data 100x is equal to or greater than the second threshold Th_Exis and larger than the detection data of the surrounding detection positions.
- the peak position specifying unit 40 specifies the detection position 105 obtained here as the peak detection position 110.
- in FIG. 6, there are three values "90", "58", and "31" as the detection data 100x of candidate detection positions 105. If the second threshold Th_Exis is "20", all of these values exceed it. In addition, around the detection position 105 whose value is "90" there are the values "31", "47", "39", "49", "67", "48", "82", and "63", and "90" is larger than any of them. The same applies to the values "58" and "31". Since both conditions are satisfied, these positions satisfy the peak condition and are detected as peak detection positions 110.
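The two-part peak condition just described can be sketched directly. The neighbour values are the ones quoted in the text; the helper name and the strict-versus-non-strict comparison at the borders are our own choices.

```python
def is_peak(data, r, c, th_exis=20):
    """Peak condition from the text: detection data >= the second
    threshold Th_Exis AND larger than all 8 neighbours (sketch)."""
    v = data[r][c]
    if v < th_exis:
        return False
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(data) and 0 <= cc < len(data[0]) and data[rr][cc] >= v:
                return False
    return True

# "90" surrounded by the eight neighbour values quoted in the text
grid = [
    [31, 47, 39],
    [49, 90, 67],
    [48, 82, 63],
]
print(is_peak(grid, 1, 1))  # True
print(is_peak(grid, 0, 0))  # False (47, 49 and 90 nearby are larger)
```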
- the peak position specifying unit 40 determines the second threshold value Th_Exis based on the maximum change value of the detection data 100x.
- since the maximum value is "90", the peak position specifying unit 40 determines "20" as the second threshold Th_Exis based on that value.
- the second threshold value Th_Exis is described as a fixed value.
- the second threshold value Th_Exis may be set so as to decrease as the distance increases in accordance with the decreasing tendency.
- the peak position specifying unit 40 can set the second threshold Th_Exis to be inversely proportional to the distance, or non-linear, based on the relationship between the detection data 100x and the object distance shown in the figure.
- the peak position specifying unit 40 obtains the above-described peak detection position 110 based on the second threshold Th_Exis determined as described above. This process will be described with reference to FIG. 8.
- FIG. 8 is a flowchart for explaining the maximum peak detection position detection process performed by the peak position specifying unit 40 shown in FIG. 4 in step S701 shown in FIG.
- the peak position specifying unit 40 selects the detection position 105 to be processed (step S801). That is, one of the detection positions 105 is selected.
- the detection positions 105 are selected, for example, in the order of the detection data 100x shown in FIG. 2: starting from the upper left and selecting one position at a time to the right, then moving down a row. Alternatively, they may be selected one at a time going down each column, moving one column to the right when a column ends. The starting point is not limited to the upper left; starting from various positions such as the upper right, lower right, lower left, or center is also conceivable.
- the peak position specifying unit 40 compares the value indicated by the detection data 100x (that is, the amount of change in capacitance) with the second threshold Th_Exis and determines whether it is greater (step S802). If Th_Exis is set to "20", for example, the answer is Yes if the detection data 100 at the detection position 105 is "25", and No if it is "10". When the detection data 100 at the detection position 105 is small (step S802: No), the process proceeds to step S806.
- when the detection data 100 at the detection position 105 is large (step S802: Yes), the peak position specifying unit 40 determines whether the detection position 105 is labeled by referring to the label data (object range data) from the label determination unit 52 (step S803). Although labels are described later, the fact that a label is assigned to the detection position 105 means that the maximum peak detection position 110max has already been specified in the region including that position. Therefore, if there is a label (step S803: No), the process proceeds to step S806.
- the peak position specifying unit 40 determines whether or not the detection data 100x at the detection position 105 is larger than the detection data 100x in the vicinity of the surrounding 8 (step S804).
- the detection data 100x at the detection position 105 around the detection position 105 where the detection data 100x is “90” is “31”, “47”, “39”, “49”. , “67”, “48”, “82”, “63”, and “90” is larger than these.
- when the peak position specifying unit 40 determines that the detection data 100 at the detection position 105 where the detection data 100x is "90" is larger than the detection data 100 of the surrounding 8 neighbors, that is, that the peak condition is satisfied (step S804: Yes), it stores the detection position 105 as the peak detection position 110 (step S805).
- after the peak position specifying unit 40 stores the detection position 105 as the peak detection position 110 (step S805), or determines that the detection data 100 at the detection position 105 is smaller than some of the detection data 100 of the surrounding 8 neighbors (step S804: No), it ends the processing for that detection position 105.
- when the peak position specifying unit 40 determines that the peak position detection processing has been performed for all detection positions 105 (step S806: Yes), it detects the peak detection position 110 having the maximum detection data 100x among the extracted peak detection positions 110 as the maximum peak detection position 110max (step S807). For example, as shown in FIG. 6, when the detection data 100x at the peak detection positions 110 are "90", "58", and "31", the peak position specifying unit 40 detects the detection position 105 whose detection data 100x is "90" as the maximum peak detection position 110max.
- by the processing of the peak position specifying unit 40 described above (step S701), the detection positions 105 whose detection data 100x are "90", "58", and "31" are specified as peak detection positions 110.
- the peak position specifying unit 40 detects, from among these detection positions 105, the detection position 105 in which the detection data 100x indicates the maximum value “90” as the maximum peak detection position 110max.
- the label applying unit 50 includes, for example, the contour tracking unit 51 and the label determination unit 52.
- the label assigning unit 50 performs the label assigning process in step S703 shown in FIG.
- the label applying unit 50 performs a label applying process on the detected position 105 based on the maximum peak detection position 110max detected by the peak position specifying unit 40 described with reference to step S701 of FIG. 5 and FIG.
- the processes of the contour tracking unit 51 and the label determination unit 52 of the label applying unit 50 will be described in detail.
- FIG. 9 is a flowchart for explaining the contour tracking process performed by the contour tracking unit 51 shown in FIG. This processing starts when the object range update processing in step S703 shown in FIG. 5 is performed.
- Step S901: First, the contour tracking unit 51 determines whether the maximum peak detection position 110max has been input from the peak position specifying unit 40; if the determination is affirmative, it proceeds to step S902, and if negative, it repeats the determination.
- Step S902: The contour tracking unit 51 gives a new label to the maximum peak detection position 110max. Then, based on the maximum peak detection position 110max, the following labeling process is performed on the surrounding detection positions 105. In the present embodiment, labels are assigned as described in step S701 shown in FIG. 5, and the contour tracking unit 51 calculates the first threshold Th_Finger_p based on the detection data 100x of the maximum peak detection position 110max input in step S901 (step S902).
- the detection positions 105 whose detection data 100 is larger than the first threshold Th_Finger_p are treated as inside the range of the object, and those whose values are equal to or less than the first threshold Th_Finger_p as outside it; the contour of the object's range is thereby specified.
- the first threshold Th_Finger_p is derived by a calculation formula such that it is smaller than the detection data 100 at the maximum peak detection position 110max.
- the first threshold value Th_Finger_p is a threshold value that absorbs a peak due to noise and does not absorb a peak due to a finger.
- the first threshold Th_Finger_p may be a value prepared in advance, or may be calculated by some formula; for example, it may be calculated by multiplying the detection data 100x at the maximum peak detection position 110max by a certain coefficient. In the example illustrated in FIG. 6, the first threshold Th_Finger_p is set to "41", for example, for the maximum peak detection position 110max where the detection data 100x is "90".
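The coefficient variant can be sketched in one line. The 0.45 coefficient is our illustrative choice (it yields a value near the "41" used in the text for a peak of "90"); the patent does not specify the coefficient.

```python
def first_threshold(peak_value, coeff=0.45):
    """Th_Finger_p as the peak's detection data times a coefficient.
    The 0.45 coefficient is a hypothetical choice; the text only says
    the threshold may be precomputed or derived by some formula."""
    return int(peak_value * coeff)

print(first_threshold(90))  # 40 (close to the "41" used in the text)
```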
- Step S903: The contour tracking unit 51 performs contour tracking using the first threshold Th_Finger_p calculated in step S902 (step S903). That is, a contour is formed such that the detection positions 105 whose detection data 100x is larger than the first threshold Th_Finger_p are within the object range.
- the contour to be formed is not limited to one, and in that case, the contour tracking unit 51 tracks a plurality of contours from one first threshold Th_Finger_p. Subsequent to the contour tracking process of the contour tracking unit 51, the label determination process shown in FIG.
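The region enclosed by one tracked contour can be illustrated as the 8-connected set of detection positions whose values exceed Th_Finger_p, grown outward from the peak. This is a hypothetical sketch (the patent describes contour tracking, not this specific flood-fill formulation; the function name is illustrative):

```python
from collections import deque

def region_above_threshold(data, start, threshold):
    """Collect the 8-connected region of cells whose value exceeds
    `threshold`, starting from a peak cell; the boundary of this
    region corresponds to one tracked contour."""
    rows, cols = len(data), len(data[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in seen
                        and data[nr][nc] > threshold):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return seen
```

Running this once per maximum peak detection position, with a threshold derived from that peak, naturally yields one region per peak; separate above-threshold areas not connected to the peak would be traced as distinct contours.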
- FIG. 10 is a flowchart for explaining the label assignment processing in step S703 shown in FIG. 5, performed by the label determination unit 52 shown in FIG. 4.
- The process shown in FIG. 10 is performed following the contour tracking process shown in FIG. 9. Since a plurality of contours may have been formed in the contour tracking process described with reference to FIG. 9, the label determination unit 52 selects one of the contours and checks whether labels are attached to the detection positions 105 inside that contour (step S910). When there is only one contour, that contour is the selection target.
- The label determination unit 52 determines whether or not the number of labels in the selected contour is two or more (step S911). If two or more labels are included, the process proceeds to step S914; otherwise, it proceeds to step S912. For example, when step S703 is performed for the first time, only the first label assigned in step S902 described above exists, so the number of labels in the selected contour is 0 or 1, and the process proceeds to step S912.
- After the labeling process in step S703 is completed, the process returns to step S701, where the peak position specifying unit 40 detects a new maximum peak detection position 110max, and a new labeling process is performed on that position. That is, labels are added, so the number of labels is not limited to 0 or 1 and may be 2 or more. In that case, an affirmative determination is made in step S911 and the process of step S914 is performed.
- If it is determined in step S911 that two or more labels are not included (step S911: No), the label determination unit 52 determines whether the number of labels included is one (step S912). When it is determined that no label is included (step S912: No), the maximum peak detection position 110max is not included in the contour, so no new labeling is performed for that contour and the process proceeds to step S915.
- When it is determined that one label is included (step S912: Yes), the label determination unit 52 fills all the detection positions 105 within the range corresponding to the inside of the contour with that label (step S913), and proceeds to step S915.
- When it is determined in step S911 that two or more labels are included (step S911: Yes), each detection position 105 can carry only one label, so it is necessary to determine to which label each detection position 105 inside the contour belongs. Therefore, the label determination unit 52 assigns, to each detection position 105 within the range corresponding to the inside of the contour, the label whose coordinates are closest among the plurality of labels (step S914), and proceeds to step S915.
- For example, the label determination unit 52 calculates, for each detection position 105 in the contour, the distance to label A and the distance to label B, and gives the closest label to the detection position 105 being processed.
- Suppose the coordinates of label A are (3, 3) and the coordinates of label B are (6, 7).
- If the detection position 105 being processed is (4, 4), label A is given because label A is closer.
- If the detection position 105 being processed is (5, 6), label B is given because label B is closer. This is executed for all detection positions 105 in the contour.
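The nearest-label rule in the example above can be sketched as follows. This is a hypothetical Python illustration (the function name and the use of squared Euclidean distance are assumptions; the disclosure only says the closest label is chosen):

```python
def assign_nearest_label(cells, label_coords):
    """Give each cell inside the contour the label whose anchor
    coordinate is closest, by squared Euclidean distance."""
    result = {}
    for cell in cells:
        result[cell] = min(
            label_coords,
            key=lambda lbl: (cell[0] - label_coords[lbl][0]) ** 2
                          + (cell[1] - label_coords[lbl][1]) ** 2)
    return result
```

With label A at (3, 3) and label B at (6, 7), position (4, 4) receives A and position (5, 6) receives B, matching the worked example.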
- The label determination unit 52 determines whether or not the labeling process has been performed for all the contours formed by the contour tracking process described with reference to FIG. 9 (step S915). In the case of a negative determination (step S915: No), the label determination unit 52 returns to step S910 and executes the labeling process for the next contour. On the other hand, in the case of an affirmative determination (step S915: Yes), the label determination unit 52 ends the label assignment process for the maximum peak detection position 110max that is the current processing target.
- The peak position specifying unit 40 then starts the maximum peak detection position detection process of step S701 illustrated in FIG. 8 again, detecting the maximum peak detection position 110max from among the detection positions 105 to which no label is attached.
- When it is determined in step S702 shown in FIG. 5 that no maximum peak detection position 110max exists, the neighboring area specifying unit 20 performs the position specifying process (step S704) by means of the position specifying unit 60.
- The position specifying unit 60 specifies each of the proximity positions of the plurality of objects close to the detection unit 11 based on the coordinates (position information) of the detection positions 105 labeled by the label determination unit 52 (step S704 shown in FIG. 5). For example, for each label given to the detection positions 105 by the label determination unit 52, the position specifying unit 60 specifies the barycentric position of the detection positions 105 carrying that label as the proximity position of the object corresponding to that label.
- The method of specifying the proximity position of an object in the position specifying unit 60 is not particularly limited as long as it uses a label and the detection positions 105 provided with that label.
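The barycentric (center of gravity) computation can be sketched as follows. This is a hypothetical Python illustration; the disclosure does not state whether the centroid is weighted by the detection data, so a simple unweighted mean of the labeled coordinates is assumed here:

```python
def centroid(cells):
    """Centre of gravity of the detection positions carrying one label,
    used as the proximity position for the corresponding object."""
    n = len(cells)
    return (sum(r for r, _ in cells) / n,
            sum(c for _, c in cells) / n)
```

Calling this once per label, over the detection positions carrying that label, yields one proximity position per detected object.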
- In this example, as shown in FIG. 6, there are three peak detection positions 110 whose detection data 100x are “90”, “58”, and “31”; the determination of step S702 shown in FIG. 5 is affirmative three times, and the process of step S703 (contour tracking processing and label determination processing) is repeated three times.
- FIG. 11 is a diagram for explaining a case where the process of step S701 shown in FIGS. 5 and 8 is performed for the first time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- When the maximum peak detection position detection process shown in FIG. 8 is performed for the first time on the peak detection positions 110 shown in FIG. 6, the detection position 105 whose detection data 100 indicates “90” (hereinafter denoted as detection position 120) is detected as the maximum peak detection position 110max, as shown in FIG. 11.
- That is, the peak position specifying unit 40 specifies, among the target detection positions, the peak detection position 110 whose detection data 100x is the largest. As shown in FIG. 8, a peak detection position 110 is a detection position 105 that has detection data 100x larger than the second threshold Th_Exist (in this example, “20”), has no label attached, and has detection data 100x larger than the eight neighboring detection positions 105.
- The detection position 120 shown in FIG. 11 has a detection data value of “90”, which is larger than “20”, the second threshold Th_Exist.
- The detection position 120 has not been given a label.
- The value “90” of the detection data 100x at the detection position 120 is larger than the detection data 100 of the eight surrounding neighbors indicated as the surrounding data 121. Therefore, the detection position 120 satisfies the peak requirement as a peak detection position 110, and since its detection data 100 is the maximum value, the detection position 120 is detected as the maximum peak detection position 110max.
- When performing the comparison with the eight neighbors, the peak position specifying unit 40 uses a comparison that disallows equal values for the four neighbors at the upper left, upper, upper right, and left, while allowing equal values for the four neighbors at the right, lower left, lower, and lower right. This is a measure for when equal values are lined up: if a comparison disallowing equal values were performed for all eight neighbors, the definition of a peak pixel could not be satisfied when equal values are aligned.
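The asymmetric tie-breaking rule above can be sketched as follows. This is a hypothetical Python illustration (the function name is an assumption): the four neighbors visited earlier in scan order must be strictly smaller, while equal values are tolerated for the remaining four, so that exactly one of a run of equal values qualifies as the peak.

```python
def is_peak(data, r, c, th_exist):
    """A cell is a peak when its value exceeds Th_Exist and its eight
    neighbours, with ties broken by scan order: equal values disqualify
    the cell only for the upper-left, upper, upper-right, and left
    neighbours; they are allowed for the other four."""
    v = data[r][c]
    if v <= th_exist:
        return False
    rows, cols = len(data), len(data[0])
    strict = {(-1, -1), (-1, 0), (-1, 1), (0, -1)}  # disallow equal values
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            n = data[nr][nc]
            if (dr, dc) in strict:
                if n >= v:
                    return False
            elif n > v:
                return False
    return True
```

With two equal values side by side, only the left one of the pair passes the test, so a plateau still produces a single peak pixel.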
- FIGS. 12A, 12B, and 12C are diagrams for explaining a case where the process of step S703 shown in FIGS. 5, 9, and 10 is performed for the first time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- When the peak position specifying unit 40 detects the detection position 120 as the maximum peak detection position 110max, the contour tracking unit 51 gives the label “1” to the detection position 120, as shown in FIG. 12A.
- the contour tracking unit 51 performs a contour tracking process based on the detection data 100 at the detection position 120.
- The contour tracking unit 51 calculates the first threshold Th_Finger_p and, based on it, traces the contours to form the contour 122 and the contour 123.
- the first threshold value Th_Finger_p is “41”.
- That is, the contour tracking unit 51 treats, among the detection positions 105 around the detection position 120, those whose values are greater than the first threshold Th_Finger_p as being within the object range, and creates the contours accordingly.
- The label determination unit 52 then performs a process of filling the inside of each contour formed by the contour tracking unit 51 with a label. As shown in FIG. 12C, since only the label “1” exists inside the upper-left contour 122, the label determination unit 52 gives the label “1” to all the detection positions 105 inside the upper-left contour 122. Since the contour 123 contains no label inside, the label determination unit 52 does not give it a label this time.
- the object range data 124 to which the label “1” is assigned is obtained by the first processing shown in FIGS. 12A, 12B, and 12C.
- FIG. 13 is a diagram for explaining a case where the process of step S701 shown in FIGS. 5 and 8 is performed a second time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- Following the process described with reference to FIGS. 12A to 12C, the process of step S701 shown in FIG. 13 is performed. Since the first process described with reference to FIG. 12 has been completed, the detection position 120, which was the previous maximum peak detection position 110max, has already been labeled “1” and is therefore not selected as the maximum peak detection position 110max in the second process.
- Instead, the detection position 130, which is the detection position 105 having the second largest detection data 100x of “58”, is detected as the maximum peak detection position 110max: its detection data value “58” is larger than the second threshold Th_Exist value “20”, the detection position 130 has no label, and, as shown by the surrounding data 131, its detection data 100 is larger than that of the eight neighboring detection positions 105.
- FIGS. 14A, 14B, and 14C are diagrams for explaining a case where the process of step S703 shown in FIGS. 5, 9, and 10 is performed a second time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- As described above with reference to FIG. 13, when the peak position specifying unit 40 detects the detection position 130 as the maximum peak detection position 110max, the contour tracking unit 51 gives the label “2” to the detection position 130, as shown in FIG. 14A.
- Then, the contour tracking unit 51 performs the contour tracking process based on the detection data 100x of the detection position 130.
- The contour tracking unit 51 calculates the first threshold Th_Finger_p and, based on it, traces the contour to form the contour 133.
- Here, the first threshold Th_Finger_p is “31”.
- That is, the contour tracking unit 51 treats, among the detection positions 105 around the detection position 130, those whose detection data is greater than the first threshold Th_Finger_p as being within the object range, and the contour 133 is formed accordingly.
- The label determination unit 52 performs a process of filling the inside of the contour formed by the contour tracking unit 51 with a label. As illustrated in FIG. 14B, the label “1” and the label “2” exist inside the contour 133. The label determination unit 52 calculates, from the coordinates of each detection position 105 in the contour 133, the distance d_1 to the coordinates of the label “1” and the distance d_2 to the label “2”. Then, the label determination unit 52 gives each detection position 105 in the contour the label with the shorter of the distances d_1 and d_2. As a result, in the second process, the object range data 134 to which the label “1” and the label “2” are assigned is obtained, as shown in FIG. 14C.
- Following the process of step S703 shown in FIG. 5 described with reference to FIG. 14, the process of step S701 is performed a third time. Since the second process described with reference to FIG. 14 has been completed, the detection position 120, which was the first maximum peak detection position 110max, has already been assigned the label “1”, and the detection position 130, which was the second maximum peak detection position 110max, has already been labeled “2”; these are therefore not selected as the maximum peak detection position 110max in the third process.
- the detection position 140 that is the detection position 105 having the third largest detection data 100 of “31” is detected as the maximum peak detection position 110max.
- FIG. 15 illustrates a case where the process of step S703 shown in FIGS. 5, 9, and 10 is performed for the third time using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- the label determination unit 52 performs a process of filling the inside of the contour formed by the contour tracking unit 51 with a label.
- the label “1”, the label “2”, and the label “3” exist inside the contour 141.
- The label determination unit 52 calculates, from the coordinates of each detection position 105 in the contour 141, the distance d_1 to the coordinates of the label “1”, the distance d_2 to the label “2”, and the distance d_3 to the label “3”. Then, the label determination unit 52 assigns the label with the shortest of the distances d_1, d_2, and d_3 to each detection position 105 in the contour.
- As a result, in the third process, the object range data 142 to which the label “1”, the label “2”, and the label “3” are assigned is obtained.
- FIG. 16 is a diagram for explaining a case where the process of step S701 shown in FIGS. 5 and 8 is performed for the fourth time (final) using the example shown in FIGS. 2 and 6 in the first embodiment of the present invention.
- As shown in FIG. 16, the detection position 105 having the fourth largest value “25” in the detection data 100 has a value larger than its eight neighbors, but because the label “3” has already been assigned to it by the labeling process described with reference to FIG. 15, it is not a candidate for the maximum peak detection position 110max.
- The detection position 105 having the fifth largest value “12” shown in FIG. 16 has detection data 100 larger than its eight neighbors, but since “12” is smaller than the second threshold Th_Exist value “20”, it does not qualify as a peak detection position 110.
- The reason the second threshold Th_Exist is included in the conditions for a peak detection position 110 is to prevent such noise-induced peaks from being recognized as a finger.
- An appropriate value of Th_Exist is selected according to the noise tolerance of the sensor.
- Accordingly, the maximum peak detection position 110max is not detected in the fourth process shown in FIG. 16. A negative determination (No) is therefore made in step S702 shown in FIG. 5, and finally the position specifying process (S704) by the position specifying unit 60 shown in FIG. 5 is performed based on the object range data 142 shown in FIG. 15.
- As described above, according to the present embodiment, an object can be identified for each proximity region centered on a peak detection position. Since each adjacent region having a peak detection position can be labeled, an appropriate adjacent region can be specified while excluding detection positions caused by noise, and detection accuracy can be improved. In particular, it is possible to provide an input device that can recognize, with high accuracy and without using a template, a plurality of objects having different shapes and distances.
- Since the contour is specified on the basis of the first threshold, adjacent regions can be specified and separated for each specified contour. Furthermore, since the labeling process is performed per contour, the range of an object can be specified while excluding noise regions: contours containing a peak detection position are labeled, and contours without one are left unlabeled.
- Next, a second embodiment will be described. Its basic configuration and processing flow are the same as those of the first embodiment.
- The first embodiment illustrated the case where the detection process for the maximum peak detection position 110max is repeated each time the contour tracking process and the labeling process are performed.
- In the second embodiment, all peak detection positions 110 to be processed in turn as the maximum peak detection position 110max are detected before the contour tracking process and the labeling process are performed, which shortens the processing time.
- FIG. 17 is a flowchart for explaining the processing of the neighboring area specifying unit 20 shown in FIG. 4 according to the second embodiment of the present invention.
- Step S1701 Based on the detection data 100 input from the sensor unit 10, the peak position specifying unit 40 detects, as peak detection positions 110, those detection positions 105 among the plurality of detection positions whose detection data 100x is equal to or greater than the second threshold Th_Exist and larger than the detection data of the surrounding detection positions 105, and stores them in the array Peaks.
- At this time, the peak position specifying unit 40 stores the peak detection positions 110 in the array Peaks in descending order of detection data 100x. The head of the array Peaks is thus the peak detection position 110 with the largest detection data 100x.
- In the example shown in FIG. 6, the peak detection positions 110 are stored in the array Peaks in the order of detection data 100x values “90”, “58”, “31”, “25” from the head.
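The descending ordering of the array Peaks can be sketched as follows. This is a hypothetical Python illustration (the peak representation as `(position, value)` pairs and the function name are assumptions):

```python
def sort_peaks(peaks):
    """Order peak positions so the largest detection value comes first,
    as stored in the array Peaks of the second embodiment.

    `peaks` is a list of (position, value) pairs.
    """
    return sorted(peaks, key=lambda p: p[1], reverse=True)
```

With the FIG. 6 example values, the head of the sorted array is the peak with detection data 90, so each extraction from the head yields the current maximum peak detection position.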
- Step S1702 The contour tracking unit 51 determines whether or not the array Peaks contains a peak detection position 110 not yet subjected to the contour tracking and labeling process. If the determination is affirmative (Yes), the process proceeds to step S1703; if negative (No), the process proceeds to step S1704.
- Step S1703 The contour tracking unit 51 extracts the unprocessed peak detection position 110 from the top of the array Peaks as the maximum peak detection position 110max.
- The contour tracking unit 51 then performs the contour tracking process and the labeling process on the maximum peak detection position 110max extracted in step S1703.
- Here, the contour tracking process and the labeling process of the present embodiment are basically the same as the processes of steps S902 and S903 shown in FIG. 9 and the process of FIG. 10 described in the first embodiment, applied to the maximum peak detection position 110max extracted from the array Peaks.
- However, the contour tracking unit 51 moves the peak detection position 110 extracted from the head of the array Peaks into the fixed peak array Fixed_Peak. It then performs contour tracking on that peak detection position 110, extracted as the maximum peak detection position 110max, in the same manner as in the first embodiment.
- When a peak detection position 110 belonging to the fixed peak array Fixed_Peak exists inside the contour, the contour tracking unit 51 gives the label of that peak detection position 110 to the detection positions 105 inside the contour.
- When two or more labels exist inside the contour, the contour tracking unit 51 performs the same processing as in the first embodiment: each detection position 105 inside the contour is given the label of the peak detection position 110 at the shorter distance. Also, when a peak detection position 110 of the array Peaks falls inside the contour together with one or more peak detection positions 110 of the fixed peak array Fixed_Peak, the contour tracking unit 51 moves that peak detection position 110 of the array Peaks into the set Clear_Peaks of peak detection positions 110 not considered as labels.
- Step S704 As in the case of the first embodiment, the position specifying unit 60 specifies the proximity positions of a plurality of objects based on the labels applied by the label applying unit 50 in step S1703.
- the peak detection position 110 of the detection data 100x of “90”, ⁇ ⁇ “58”, and “31” is detected in the fixed peak array Fixed_Peak, and the array Peaks is detected.
- the peak detection position 110 with the data 100x being “25” is stored.
- the detection data 100x in the fixed peak array Fixed_Peak is “90”, “58”, “31” includes all peak detection positions 110 within the contour.
- the peak detection position 110 whose detection data 100x of the array Peaks is “25” is also included in the contour.
- this peak detection position 110 is not the peak detection position 110, it is determined as a noise-derived peak, and the set Clear_Peaks Be put in.
- the result finally obtained is the same as in the first embodiment, but the peak detection position 110 selected as the maximum peak detection position 110max is searched together, so that the calculation time can be shortened. it can.
- the present invention is not limited to the embodiment described above. That is, those skilled in the art may make various modifications, combinations, sub-combinations, and alternatives for the components of the above-described embodiment within the technical scope of the present invention or an equivalent scope thereof.
- FIGS. 5 and 17 the case where the position specifying process by the position specifying unit 60 is performed after the labeling process is performed on all the detection positions 105 is illustrated.
- the position specifying process by the position specifying unit 60 may be performed every time the label applying process is performed on the position 110max.
- the proximity positions of a plurality of objects are specified.
- the proximity positions of the plurality of objects may be specified instead of the specific positions.
- the present invention is applied to the input device of the present invention.
- the present invention is applied to an object detection device that detects a proximity region and a proximity position of a plurality of objects in addition to the input device. May be.
- the invention has been described based on a user interface device that inputs information by an operation of a finger or the like.
- the input device of the present invention is a detection electrode formed by the proximity of various objects not limited to a human body.
- the present invention can be widely applied to devices that input information according to changes in capacitance.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
図1は、本発明の実施の形態に係る入力装置を示す図である。入力装置1は、好適には静電容量式の指入力デバイスである。入力装置1は、センサ部10と、近接領域特定部20と、PC30を備える。センサ部10は、検出部11とセンサ回路12からなり、近づいた指等による静電容量変化を読み取ることにより、複数の検出位置において複数の物体の近接状態を検出し、その検出データ100を近接領域特定部20に渡す。近接状態とは、例えば指などの物体が入力装置1に近接している状態であり、近接している物体の入力装置1への距離や位置関係等である。
検出部11に指を近づけることにより、上述のように静電容量の変化が生じ、その変化の大きさは、指が検出部11に近接するほど大きくなる。検出データ100は、静電容量の変化を「99」を上限値として所定領域単位で検出した値であり、例えば、図2に示すデータである。検出データ100はマトリクス状の検出位置105のそれぞれの検出データ100xの集合により構成される。
本実施形態の入力装置1は、このようなピークの高さや形が異なる物体を高精度に認識できる。
入力装置1については、図1を参照してすでに説明したが、図4では、近接領域特定部20についてさらに詳細に説明する。
図4に示すように近接領域特定部20は、ピーク位置特定部40と、ラベル付与部50と、位置特定部60を備える。
輪郭追跡部51は、ラベル付与処理を行う最大ピーク検出位置110maxの検出データ100xより値が小さい第1のしきい値Th_Finger_p以上の検出位置を輪郭として特定する。この際、ラベルデータ、検出データ100、最大ピーク検出位置110maxを用いる。輪郭追跡部51の処理は図9を参照して説明する。ラベル判定部52の処理は図10を参照して説明する。
ステップS701:
ピーク位置特定部40は、センサ部10から入力した検出データ100と、ラベル付与部50からのラベルデータとを基に、最大ピーク検出位置110maxの検出を行う。このとき、ピーク位置特定部40は、ラベルが付されていない検出位置105を対象として最大ピーク検出位置110maxの検出を行う
ピーク位置特定部40は、ステップS701において最大ピーク検出位置110maxが検出されたか否かを判定し、肯定判定の場合ステップS703に進み、否定判定の場合は一連の処理を終了する。
本実施の形態では、ステップ703での輪郭追跡処理およびラベル付与処理を行った後に再度、ステップS701の最大ピーク検出位置を検出する処理を行い、これらを最大ピーク検出位置がなくなるまで繰り返す。
ラベル付与部50は、ステップS701で検出した最大ピーク検出位置110maxについて、輪郭追跡処理およびラベル付与処理を行う。
位置特定部60は、ステップS703においてラベル付与部50で付与されたラベルを基に、複数の物体の近接位置をそれぞれ特定する。
図6は、図4に示すピーク位置特定部40のピーク検出処理を説明する図である。
図6に示すピーク検出位置110は、複数の検出位置105のうち、その検出データ100xが第2のしきい値Th_Exis以上であり、且つ、その周囲の検出位置より検出データ100が大きいというピーク条件を満たした検出位置105である。ピーク位置特定部40はここで求めた検出位置105をピーク検出位置110として特定する。
また、本実施の形態では第2のしきい値Th_Exisは固定値として説明するが、指の位置が検出部11から離れるにつれて静電容量の変化値は減少していく傾向にあることから、この減少傾向に合わせて第2のしきい値Th_Exisも距離が離れるごとに減少させるように設定してもよい。例えば、ピーク位置特定部40は、図7に示す検出データ100xと物体の距離との関係を基に、第2のしきい値Th_Exisを、距離に反比例、又は非線形的にするように設定させることができる。
まずピーク位置特定部40は、処理対象の検出位置105を選択する(ステップS801)。つまり1つを検出位置105として選択する。検出位置105を選択する順序は、図2に示す検出データ100xの例えば左上から開始して、右へと順に1つずつ選択し、横一列が終わったら1つ下に移動してさらに横一列という順序で行ってもよいし、下へと順に1つずつ選択し、縦一列が終わったら1つ右に移動してさらに縦一列という順序にしてもよい。また、左上からの開始に限られず、右上、右下、左下、中心など様々な位置からの開始も考えられる。
図4に示すように、ラベル付与部50は、例えば、輪郭追跡部51およびラベル判定部52を有する。ラベル付与部50は、図5に示すステップS703のラベル付与処理を行う。
ラベル付与部50は、例えば、図5のステップS701および図8を用いて説明したピーク位置特定部40が検出した最大ピーク検出位置110maxを基に、検出位置105に対してラベル付与処理を行う。
以下、ラベル付与部50の輪郭追跡部51およびラベル判定部52の処理を詳細に説明する。
まず輪郭追跡部51は、ピーク位置特定部40から最大ピーク検出位置110maxを入力したか否かを判定し、肯定判定の場合にステップS902に進み、否定判定の場合に当該判定を繰り返す。輪郭追跡部51は、肯定判定の場合に、ステップS902に進む。
輪郭追跡部51は、最大ピーク検出位置110maxに新たなラベルを付与する。そして、最大ピーク検出位置110maxを基にその周囲の検出位置105に対して以下に示すラベル付与処理を行う。
本実施の形態では、図5に示すステップS701で説明したようにラベルが付与されて
輪郭追跡部51は、ステップS901で入力した最大ピーク検出位置110maxの検出データ100xを基に、第1のしきい値Th_Finger_pを計算する(ステップS902)。
輪郭追跡部51は、ステップS902で算出した第1のしきい値Th_Finger_pを用いて輪郭追跡を行う(ステップS903)。つまり、検出データ100xが第1のしきい値Th_Finger_pより大きい検出位置105を物体の範囲内とする輪郭を形成する。形成される輪郭は1つに限らず、その場合、輪郭追跡部51は、1つの第1のしきい値Th_Finger_pから複数の輪郭を追跡する。輪郭追跡部51の輪郭追跡処理に続いて、ラベル判定部52による図10に示すラベル付与処理が行われる。
図10に示す処理は、図9に示す輪郭追及処理に続いて行われる。
図9を用いて説明した輪郭追跡処理では複数の輪郭が形成されているので、ラベル判定部52は、そのなかから輪郭を1つ選択し、輪郭内部の検出位置105にラベルが付与されているか否かを調べる(ステップS910)。輪郭が1つの場合はその輪郭が選択対象となる。
位置特定部60は、例えば、ラベル判定部52で検出位置105に付与された各ラベル毎に、当該ラベルが付与された検出位置105の重心位置を、当該ラベルに対応した物体の近接位置として特定する。位置特定部60における物体の近接位置特定方法は、ラベルと、当該ラベルが付与された検出位置105とを用いる方法であれば特に限定されない。
この例では、図6に示すように、検出データ100xが「90」,「58」,「31」の3つのピーク検出位置110があり、図5に示すステップS702の判定処理で3回の肯定判定がなされ、ステップS703の処理(輪郭追跡処理およびラベル判定処理)が3回繰り返される。
図11は、本発明の第1実施の形態において、図2および図6に示す例を用いて図5および図8に示すステップS701の処理を1回目に行う場合を説明するための図である。
図6に示すピーク検出位置110について、1回目に図8に示す最大ピーク検出位置検出処理を行うと、図11に示すように、検出データ100が「90」を示す検出位置105(以下、検出位置120と記す)が最大ピーク検出位置110maxとして検出される。
図11を用いて前述したように、ピーク位置特定部40において、最大ピーク検出位置110maxとして検出位置120を検出されると、輪郭追跡部51によって、図12Aに示すように、検出位置120にラベル「1」が付与される。
図12A,図12B,図12Cを用いて説明した図5に示すステップS703の処理に続いて、図13に示すステップS701の処理が行われる。
上述した図12を用いて説明した1回目の処理が終了しているため、前回の最大ピーク検出位置110maxである検出位置120には「1」のラベルが既に付与されているので、2回目の処理では、最大ピーク検出位置110maxには選ばれない。
図13を用いて前述したように、ピーク位置特定部40において、最大ピーク検出位置110maxとして検出位置130を検出されると、輪郭追跡部51によって、図14Aに示すように、検出位置130にラベル「2」が付与される。
ラベル判定部52は、輪郭133内の各々の検出位置105の座標からの、ラベル「1」の座標への距離d_1と、ラベル「2」への距離d_2とを計算する。そして、ラベル判定部52は、距離d_1とd_2のうち距離の短いほうのラベルを輪郭内の各検出位置105に付与する。
その結果、2回目の処理では、図14Cに示すように、ラベル「1」とラベル「2」が付与された物体範囲データ134が得られる。
上述した図14を用いて説明した2回目の処理が終了しているため、1回目の最大ピーク検出位置110maxである検出位置120には「1」のラベルが既に付与されており、2回目の最大ピーク検出位置110maxである検出位置130には「2」のラベルが既に付与されているので、これらは3回目の処理では、最大ピーク検出位置110maxには選ばれない。
前述したように、ピーク位置特定部40において、最大ピーク検出位置110maxとして検出位置140が検出されると、輪郭追跡部51によって、図15Aに示すように、検出位置140にラベル「3」が付与される。
そして、輪郭追跡部51は、検出位置130の検出データ100に基づき輪郭追跡処理を行う。輪郭追跡部51は第1のしきい値Th_Finger_pとして「19」を算出し、これを基に輪郭追跡して輪郭141を形成する。
ラベル判定部52は、輪郭141内の各々の検出位置105の座標からの、ラベル「1」の座標への距離d_1と、ラベル「2」への距離d_2と、ラベル「3」への距離d_3とを計算する。そして、ラベル判定部52は、距離d_1とd_2とd_3のうち距離の短いほうのラベルを輪郭内の各検出位置105に付与する。
その結果、3回目の処理では、図15Bに示すように、ラベル「1」とラベル「2」、レベル「3」が付与された物体範囲データ142が得られる。
図16に示すように、検出データ100は4番目に大きい値「25」を持つ検出位置105は周囲8近傍よりも大きな値を有するが、図15を用いて説明したラベル付与処理によってラベル「3」が既に付与されているので、最大ピーク検出位置110maxの候補とならない。
第2の実施の形態も、基本的な構成及び処理の流れは第1の実施の形態と同じである。第1の実施の形態では最大ピーク検出位置110maxの検出処理を、輪郭追跡処理およびラベル付与処理を行う度に繰り返す場合を例示したが、第2の実施の形態では、輪郭追跡処理およびラベル付与処理を行う前に、最大ピーク検出位置110maxとして順に処理を行う全てのピーク検出位置110の検出を行う。これにより、処理時間を短縮できる。
図17は、本発明の第2の実施の形態の図4に示す近隣領域特定部の処理を説明するためのフローチャートである。
ピーク位置特定部40は、センサ部10から入力した検出データ100を基に、複数の検出位置105のうち、その検出データ100xが第2のしきい値Th_Exis以上であり、且つ、その周囲の検出位置105より検出データ100xが大きい検出位置105をピーク検出位置110として検出し、配列Peaksに格納する。
輪郭追跡部51は、配列Peaks内にステップS703が未処理のピーク検出位置110があるか否かを判断し、肯定判定の場合(yes)にステップS1703に進み、否定判定の場合(no)にステップS1704に進む。
輪郭追跡部51は、配列Peaksの先頭から未処理のピーク検出位置110を最大ピーク検出位置110maxとして取り出す。
輪郭追跡部51は、ステップS1703で取り出した最大ピーク検出位置110maxについて、輪郭追跡処理およびラベル付与処理を行う。
ここで、本実施の形態の輪郭追跡処理およびラベル付与処理は、配列Peaksから取り出した最大ピーク検出位置110maxについて第1の実施の形態で説明した図9に示すステップS902,S093の処理、並びに図10の処理と基本的には同様の処理を行う。
輪郭追跡部51は、輪郭内部に確定ピーク配列Fixed_Peakあるピーク検出位置110(ラベルに相当)が存在したときは、輪郭内部の検出位置105に対して、当該ピーク検出位置110のラベルを付与する。
また、輪郭追跡部51は、1つ以上の確定ピーク配列Fixed_Peakのピーク検出位置110と一緒に、配列Peaksのピーク検出位置110が輪郭内部に入っていた場合、配列Peaksのピーク検出位置110のラベルとして考慮しないピーク検出位置110の集合Clear_Peaksに移動する。
位置特定部60は、第1の実施の形態の場合と同様に、ステップS1703においてラベル付与部50で付与されたラベルを基に、複数の物体の近接位置をそれぞれ特定する。
上述した実施の形態では、図5および図17に示すように、全ての検出位置105についてラベル付与処理を行ってから、位置特定部60による位置特定処理を行う場合を例示したが、最大ピーク検出位置110maxに対してラベル付与処理を行う度に位置特定部60による位置特定処理を行うようにしてもよい。
10…センサ部
11…検出部
12…センサ回路
20…近接領域特定部
30…PC
40…ピーク位置特定部
50…ラベル付与部
51…輪郭追跡部
52…ラベル判定部
60…位置特定部
100…検出データ
105…検出位置
110…ピーク検出位置
110max…最大ピーク検出位置
Th_Finger_p…第1のしきい値
Th_Exis…第2のしきい値
Claims (12)
- 複数の検出位置において複数の物体の近接状態を検出するセンサ部と、
前記センサ部からの検出データに基づいて、前記複数の物体の近接領域を特定する近接領域特定部と
を有し、
前記近接領域特定部は、
前記複数の検出位置のなかで前記検出データの値が所定のピーク条件を満たすピーク検出位置を特定するピーク位置特定部と、
前記特定したピーク検出位置について、当該ピーク検出位置の周囲の検出位置のうち、ラベルが付されてなく且つ当該特定されたピーク検出位置の検出データを基に規定された第1のしきい値以上の検出データを持つ検出位置に対して、当該ピーク検出位置に付与されているラベルを付与するラベル付与処理を行うラベル付与部と
を有する入力装置。 - 前記ピーク位置特定部は、前記ラベル付与部による前記ラベル付与処理の開始前に、前記複数の検出位置のうち前記ラベルが付与されてない検出位置について、当該検出位置の前記検出データの値が所定のピーク条件を満たす最大ピーク検出位置を特定し、
前記ラベル付与部は、直前に前記ピーク位置特定部で前記特定された前記最大ピーク検出位置について前記ラベル付与処理を行い、
前記ピーク位置特定部は、前記ラベル付与部による前記ラベル付与処理の後に、前記最大ピーク検出位置を特定する
請求項1に記載の入力装置。 - 前記ピーク位置特定部は、前記ラベル付与部による前記ラベル付与処理の開始前に、前記複数の検出位置について検出データの値が所定のピーク条件を満たす複数のピーク検出位置を特定し、
前記ラベル付与部は、前記特定された複数のピーク検出位置について、前記検出データが大きい前記ピーク検出位置から順に最大ピーク検出位置として特定し、当該最大ピーク検出位置について前記ラベル付与処理を行う
請求項1に記載の入力装置。 - 前記ラベル付与部は、
前記ラベル付与処理を行う前記ピーク検出位置の前記検出データより値が小さい前記第1のしきい値以上の検出位置を輪郭として特定し、当該特定した輪郭毎に当該輪郭内の前記検出位置についての前記ラベル付与処理を行う
請求項1~3のいずれかに記載の入力装置。 - 前記ラベル付与部は、
前記特定した前記輪郭の各々について、当該輪郭内に前記ラベルが2つ以上ある場合には当該輪郭内の検出位置に対して当該検出位置に近い前記ラベルを付与し、当該輪郭内にラベルが1つある場合には当該輪郭内の検出位置に対して当該ラベルを付与する処理を行う
請求項4に記載の入力装置。 - ピーク位置特定部は、前記複数の検出位置のなかから、前記検出データが第2のしきい値以上であり、且つ、その周囲の検出位置より前記検出データが大きい検出位置を前記ピーク検出位置として特定する
請求項1~5のいずれかに記載の入力装置。 - 前記ピーク位置特定部は、前記第2のしきい値を、前記検出データの変化値の最大値を基に決定する
請求項6に記載の入力装置。 - 前記ラベルを基に、前記複数の物体の近接位置をそれぞれ特定する位置特定部
をさらに有する請求項1~7のいずれかに記載の入力装置。 - 前記位置特定部は、同じ前記ラベルが付与された前記近接領域内の検出位置の重心の値を求めることにより、前記近接位置を求める
請求項8に記載の入力装置。 - 前記ラベル付与部は、全ての前記最大ピーク検出位置について前記ラベル付与処理を行う
請求項2または請求項3に記載の入力装置。 - 複数の検出位置における複数の物体の近接状態を示す検出データに基づいて、前記複数の検出位置のなかで前記検出データの値が所定のピーク条件を満たすピーク検出位置を特定するピーク位置特定部と、
前記特定したピーク検出位置について、当該ピーク検出位置の周囲の検出位置のうち、ラベルが付されてなく且つ当該特定されたピーク検出位置の検出データを基に規定された第1のしきい値以上の検出データを持つ検出位置に対して、当該ピーク検出位置に既に付与されているラベルを付与するラベル付与処理を行うラベル付与部と
を有する物体検出装置。 - 複数の検出位置における複数の物体の近接状態を示す検出データに基づいて、前記複数の検出位置のなかで前記検出データの値が所定のピーク条件を満たすピーク検出位置を特定するピーク位置特定工程と、
前記特定したピーク検出位置について、当該ピーク検出位置の周囲の検出位置のうち、ラベルが付されてなく且つ当該特定されたピーク検出位置の検出データを基に規定された第1のしきい値以上の検出データを持つ検出位置に対して、当該ピーク検出位置に既に付与されているラベルを付与するラベル付与処理を行うラベル付与工程と
を有する物体検出方法。
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020177036167A KR102024180B1 (ko) | 2015-06-18 | 2016-06-09 | 입력 장치, 물체 검출 장치 및 그 방법 |
CN201680034712.2A CN107683452B (zh) | 2015-06-18 | 2016-06-09 | 输入装置、物体检测装置及其方法 |
JP2017525193A JP6402251B2 (ja) | 2015-06-18 | 2016-06-09 | 入力装置、物体検出装置及びその方法 |
EP16811544.2A EP3312704B1 (en) | 2015-06-18 | 2016-06-09 | Input device, and object detection device and method |
US15/840,291 US10712881B2 (en) | 2015-06-18 | 2017-12-13 | Input device, object detection device, and method thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-122879 | 2015-06-18 | ||
JP2015122879 | 2015-06-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/840,291 Continuation US10712881B2 (en) | 2015-06-18 | 2017-12-13 | Input device, object detection device, and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016204069A1 true WO2016204069A1 (ja) | 2016-12-22 |
Family
ID=57545614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/067260 WO2016204069A1 (ja) | 2016-06-09 | 2016-06-09 | Input device, object detection device and method thereof |
Country Status (6)
Country | Link |
---|---|
US (1) | US10712881B2 (ja) |
EP (1) | EP3312704B1 (ja) |
JP (1) | JP6402251B2 (ja) |
KR (1) | KR102024180B1 (ja) |
CN (1) | CN107683452B (ja) |
WO (1) | WO2016204069A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109537175B (zh) * | 2018-12-28 | 2020-09-04 | Jack Sewing Machine Co., Ltd. | Trademark cutting system and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010533329A (ja) * | 2007-07-12 | 2010-10-21 | Atmel Corporation | Two-dimensional touch panel |
JP2012068893A (ja) * | 2010-09-24 | 2012-04-05 | Hitachi Displays Ltd | Display device |
JP2013541088A (ja) * | 2010-09-15 | 2013-11-07 | Advanced Silicon SA | Method for detecting an arbitrary number of touches from a multi-touch device |
JP2015032235A (ja) * | 2013-08-06 | 2015-02-16 | Sony Corporation | Touch detection circuit, touch detection method, and electronic apparatus |
JP2015125569A (ja) * | 2013-12-26 | 2015-07-06 | LG Display Co., Ltd. | Touch detection device and touch detection method |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4766340B2 (ja) | 2006-10-13 | 2011-09-07 | Sony Corporation | Proximity-detection-type information display device and information display method using the same |
US8284165B2 (en) | 2006-10-13 | 2012-10-09 | Sony Corporation | Information display apparatus with proximity detection performance and information display method using the same |
TWI398807B (zh) * | 2009-04-07 | 2013-06-11 | Ite Tech Inc | Positioning apparatus of a touch device and positioning method thereof |
US8154529B2 (en) * | 2009-05-14 | 2012-04-10 | Atmel Corporation | Two-dimensional touch sensors |
JP5451181B2 (ja) * | 2009-05-25 | 2014-03-26 | Japan Display Inc | Sensor device for detecting contact or proximity of an object |
KR101395991B1 (ko) * | 2011-09-01 | 2014-05-19 | LG Display Co., Ltd. | Display device having a touch sensor and method for improving its touch performance |
KR101885216B1 (ko) * | 2011-12-30 | 2018-08-30 | Samsung Electronics Co., Ltd. | Method for distinguishing multiple touches in a touch sensor system |
KR102092664B1 (ko) * | 2013-02-21 | 2020-03-24 | Silicon Works Co., Ltd. | Coordinate selection circuit and method for a differential touch sensing system |
US9158411B2 (en) * | 2013-07-12 | 2015-10-13 | Tactual Labs Co. | Fast multi-touch post processing |
US20160070413A1 (en) * | 2013-04-08 | 2016-03-10 | 3M Innovative Properties Company | Method and System for Resolving Multiple Proximate Touches |
2016
- 2016-06-09 EP EP16811544.2A patent/EP3312704B1/en active Active
- 2016-06-09 WO PCT/JP2016/067260 patent/WO2016204069A1/ja active Application Filing
- 2016-06-09 KR KR1020177036167A patent/KR102024180B1/ko active IP Right Grant
- 2016-06-09 JP JP2017525193A patent/JP6402251B2/ja active Active
- 2016-06-09 CN CN201680034712.2A patent/CN107683452B/zh active Active

2017
- 2017-12-13 US US15/840,291 patent/US10712881B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20180101265A1 (en) | 2018-04-12 |
US10712881B2 (en) | 2020-07-14 |
EP3312704B1 (en) | 2022-05-11 |
KR102024180B1 (ko) | 2019-09-23 |
EP3312704A4 (en) | 2018-05-02 |
CN107683452A (zh) | 2018-02-09 |
JP6402251B2 (ja) | 2018-10-10 |
EP3312704A1 (en) | 2018-04-25 |
KR20180008680A (ko) | 2018-01-24 |
CN107683452B (zh) | 2021-01-12 |
JPWO2016204069A1 (ja) | 2018-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106575170B (zh) Method for performing a touch action in a touch-sensitive device | |
KR20190054100A (ko) System for detecting and characterizing inputs on a touch sensor | |
KR102347248B1 (ko) Touch gesture recognition method and apparatus | |
JP2011134069A (ja) Touch panel device | |
CN104503275A (zh) Gesture-based contactless control method and device therefor | |
EP2672363A2 (en) Display device and method using a plurality of display panels | |
US20150378497A1 (en) Determining finger separation through groove analysis in a touch screen device | |
US20150116280A1 (en) Electronic apparatus and method of recognizing a user gesture | |
CN105589588B (zh) Touch system, stylus, touch apparatus and control method thereof | |
CN111492407B (zh) System and method for drawing beautification | |
CN107272970B (zh) Capacitive lateral position extrapolation | |
JP6402251B2 (ja) Input device, object detection device and method thereof | |
KR101706864B1 (ko) Real-time finger and hand-gesture recognition using a motion-sensing input device | |
JP2018116397A (ja) Image processing device, image processing system, image processing program, and image processing method | |
US20150277609A1 (en) Touch data segmentation method of touch controller | |
JP2022550431A (ja) Device for recognition by means of a touch-sensitive sensor matrix | |
JP6705052B2 (ja) Input device, control method therefor, and program | |
US20160054830A1 (en) Information processing device, method of identifying operation of fingertip, and program | |
TW201543301A (zh) Touch sensor system and method of segmenting touch data | |
CN113126795A (zh) Touch recognition method for a touch display device and related apparatus | |
JP6061426B2 (ja) Input device and information input method thereof | |
CN110134269B (zh) Electronic device verifying multi-finger touch detection via annular touch islands, and related method | |
CN112650414A (zh) Touch device, touch point positioning method, module, apparatus and medium | |
KR101822400B1 (ko) Optical touch screen device and coordinate detection method using the same | |
CN105183239A (zh) Optical touch device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16811544; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2017525193; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 20177036167; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2016811544; Country of ref document: EP |