
WO2016197714A1 - Automatic operation mode recognition method and terminal - Google Patents

Automatic operation mode recognition method and terminal Download PDF

Info

Publication number
WO2016197714A1
WO2016197714A1 PCT/CN2016/080055 CN2016080055W WO2016197714A1 WO 2016197714 A1 WO2016197714 A1 WO 2016197714A1 CN 2016080055 W CN2016080055 W CN 2016080055W WO 2016197714 A1 WO2016197714 A1 WO 2016197714A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate point
standard model
operation mode
touch
contact
Prior art date
Application number
PCT/CN2016/080055
Other languages
English (en)
French (fr)
Inventor
姚均营
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2016197714A1 publication Critical patent/WO2016197714A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • This document relates to, but is not limited to, the field of communications, and in particular to a method and terminal for automatically recognizing an operation mode.
  • The screen size of intelligent terminals is becoming larger and larger, which makes one-handed operation very inconvenient.
  • The interface and buttons of a mobile terminal do not change with the user's usage habits. For example, if the return button, the key used most often, is placed at the far right, it is very awkward to reach when operating with the left hand.
  • Solution 1: dynamically configure the buttons by judging the screen rotation state of the terminal;
  • Solution 2: automatically determine how the user is holding the terminal through light-sensing modules arranged on both sides of the terminal;
  • Solution 3: identify left-hand or right-hand operation by using a deflection angle sensor built into the terminal to detect the deflection angle of the terminal while the user operates it;
  • Solution 4: identify left-hand or right-hand operation by acquiring the fingerprint pattern of the finger.
  • Solutions 1 and 3 both obtain the rotation angle of the terminal through a sensor and then judge which hand is in use.
  • These solutions can misjudge, because whether the user operates with the left or the right hand the terminal can assume many different rotation angles; the rotation angle alone is therefore not a sufficient basis;
  • Solution 2 requires additional light-sensing modules, which increases device cost and structural complexity, and is not very versatile;
  • Solution 4 is based on graphic and fingerprint pattern recognition. It requires an additional graphic discrimination module to recognize the pattern the user presses on the screen, and a fingerprint pattern recognition function must also be added. These approaches first of all require extra hardware or module support, which increases terminal cost. In addition, pattern or fingerprint recognition requires certain algorithms and computation, which consume more power; the image recognition algorithm occupies the CPU and affects terminal performance. Finally, graphic or fingerprint recognition itself is not one hundred percent accurate, so there is some possibility of misjudgment.
  • Embodiments of the present invention provide an automatic operation mode recognition method and a terminal, which can recognize the operation mode without adding extra hardware or extra recognition computation.
  • An embodiment of the present invention provides an automatic operation mode recognition method, including:
  • collecting touch events acting on a touch screen and extracting the input events generated by the touch events, where an input event includes: the position of the contact center coordinate point of the touch event on the touch screen, and the direction of the contact ellipse region whose center coordinate point is that contact center coordinate point;
  • calling pre-established standard models of all operation modes, performing standard model matching according to the input events, and taking the operation mode corresponding to the matched standard model as the recognized operation mode.
  • Optionally, the standard model of an operation mode includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within that operation area.
  • Optionally, the input event further includes: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
  • the standard model of the operation mode then further includes: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  • Optionally, performing standard model matching according to the input events includes:
  • matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point.
  • Optionally, performing standard model matching according to the input events includes:
  • matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position.
  • Optionally, a standard model for each operation mode is established based on collected touch sample events.
  • Optionally, the operation modes include a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
  • Optionally, the method also includes:
  • after the operation mode is recognized, adjusting the layout of the terminal operation interface according to the recognized operation mode.
  • An embodiment of the invention further provides a terminal, including:
  • a touch screen driver configured to collect touch events acting on the touch screen;
  • a processor configured to extract the input events generated by the touch events collected by the touch screen driver, call pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode;
  • the input event includes: the position of the contact center coordinate point of the touch event on the touch screen, and the direction of the contact ellipse region whose center coordinate point is that contact center coordinate point.
  • Optionally, the standard model of the operation mode called by the processor includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within that operation area.
  • Optionally, the input event further includes: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
  • the standard model of the operation mode called by the processor then further includes: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  • Optionally, the processor is configured to perform standard model matching according to the input events in the following manner:
  • matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point.
  • Optionally, the processor is configured to perform standard model matching according to the input events in the following manner:
  • matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position.
  • Optionally, the processor is further configured to:
  • provide the user with an interactive interface, collect touch sample events for each operation mode input by the user through the interactive interface, and establish a standard model for each operation mode based on the collected touch sample events.
  • Optionally, the operation modes include a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
  • Optionally, the processor is further configured to adjust the layout of the terminal operation interface according to the recognized operation mode.
  • The solution of the embodiments of the present invention can intelligently recognize the user's operation mode on the basis of the terminal's existing input system, without adding extra hardware modules, and can automatically adjust terminal settings according to the recognized operation mode, which makes operation more convenient and greatly improves the user experience of the product.
  • FIG. 1 is a flowchart of a method for automatically identifying a terminal operation mode according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of meanings of input event parameters of a multi-touch protocol according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of partitioning a screen area of a terminal according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of input event characteristics when a left hand is operated by one hand according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of input event characteristics when a right hand is operated by one hand according to an embodiment of the present invention
  • FIG. 6 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • The embodiment of the present invention provides an automatic operation mode recognition method that relies on the fact that terminal touch screen input events follow the multi-touch protocol (Multi-touch Protocol).
  • While the user operates the screen, the touch screen driver reports input events to the upper layer according to the rules of the multi-touch protocol.
  • By parsing these reported input events, the terminal can intelligently judge whether the user is operating with the left hand, the right hand, or both hands.
  • As shown in FIG. 1, an embodiment of the present invention provides an automatic operation mode recognition method, which includes the following steps:
  • Step S101: collect touch events acting on the touch screen and extract the input events generated by the touch events;
  • the input event includes: the position of the contact center coordinate point of the touch event and the direction of the contact ellipse region whose center coordinate point is that contact center coordinate point (for example, the direction indicated by the long axis of the contact ellipse region);
  • optionally, the input event further includes the size of the contact ellipse region whose center coordinate point is the contact center coordinate point of the touch event (for example, the area of the contact ellipse region, or its perimeter, long axis, or short axis).
  • An explanation of the input events specified by the multi-touch protocol that are used in the embodiment of the present invention includes:
  • ABS_MT_POSITION_X, indicating the X-axis coordinate of the contact center of the touch event;
  • ABS_MT_POSITION_Y, indicating the Y-axis coordinate of the contact center of the touch event;
  • ABS_MT_TOUCH_MAJOR, indicating the long axis of the contact ellipse region of the touch event;
  • ABS_MT_TOUCH_MINOR, indicating the short axis of the contact ellipse region of the touch event;
  • ABS_MT_ORIENTATION, indicating the direction of the contact ellipse region of the touch event; optionally, a value of 0 is returned when the long axis of the contact ellipse region is aligned with the Y axis of the screen coordinate system, a negative value is returned when the long axis is turned to the left, and a positive value when it is turned to the right.
  • Step S102: call the pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode.
  • The standard model of an operation mode may be an inherent configuration of the terminal (i.e., the factory default configuration) or, optionally, a standard model that matches the user's own habits, established by collecting the user's touch operations.
  • In the latter case, the standard model building process includes: providing the user with an interactive interface and prompting the user to operate in one operation mode, collecting a number of touch events as standard sample events, building a standard model of that operation mode from the input events they generate, and repeating this for each operation mode.
  • The standard model of an operation mode includes: the operation area corresponding to the operation mode and the directions of multiple contact ellipse regions within that operation area; optionally, it also includes the sizes of multiple contact ellipse regions within that operation area.
  • Standard model matching mainly considers the operation area to which the contact center coordinate point position of the touch event belongs, and the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point;
  • how this direction similarity is computed can be implemented using techniques well known to those skilled in the art; it is not intended to limit the scope of the present invention and is not described further here.
  • Optionally, to improve accuracy, the similarity between the size of the contact ellipse region whose center is the contact center coordinate point and the size of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point, is also considered.
  • How this size similarity is computed can likewise be implemented using techniques well known to those skilled in the art; it is not intended to limit the scope of the present invention and is not described further here.
  • Optionally, standard model matching is performed over the input events of multiple touch events.
  • In that case the matching mainly considers the proportion of contact center coordinate points falling within the operation area corresponding to each operation mode, and the similarity between the direction (or direction and size) of each contact ellipse region and that of the corresponding region in the standard model.
  • Those skilled in the art can also define their own model matching strategy within the idea of the embodiments of the present invention.
  • After the operation mode is recognized, the result may be reported to the user so that the user can judge whether it is correct. If it is correct, the terminal checks whether the input event information of the collected touch events is already in the standard model library of that operation mode; if it is, nothing is done; if it is not, the input events that are missing from the standard model library are stored in it, so that the standard model is self-learning.
  • The method of the embodiment of the present invention optionally further includes: after the operation mode is recognized, adjusting the layout of the terminal operation interface according to the recognized operation mode, i.e., switching the layout of the terminal operation interface to the layout corresponding to the current operation mode.
  • The layout of the terminal operation interface includes, for example, the layout of the virtual keyboard.
  • The method of this embodiment can therefore intelligently recognize the user's operation mode on the basis of the terminal's existing input system, without adding extra hardware or software modules, and automatically adjust the terminal's operation interface settings, which makes operation more convenient and greatly improves the user experience of the product.
  • FIGS. 3 to 5 explain the method of the present invention in more detail by disclosing further technical details.
  • The disclosed technical details are intended to illustrate the invention and are not intended to limit it.
  • As shown in FIG. 3, the terminal screen area is divided into four areas: A, B, C, and D.
  • When the user operates the terminal with the left hand alone, the coordinates of touch events are mainly concentrated in areas A and C;
  • when the user operates with the right hand alone, the coordinates of touch events are mainly concentrated in areas B and C.
  • During left-hand one-handed operation the coordinates of touch events are mainly concentrated in areas A and C, and the size and direction of the contact ellipse region at each position within areas A and C show regularity.
  • During right-hand one-handed operation the coordinates of touch events are mainly concentrated in areas B and C, and the size and direction of the contact ellipse region at each position within areas B and C show regularity.
  • When the user operates with the left hand alone, the touched positions are mainly concentrated in the fan-shaped area in the lower left corner of the terminal screen, and at each position within this area the size and direction of the contact ellipse region are basically consistent from operation to operation; that is, at a fixed position the values of ABS_MT_TOUCH_MAJOR and ABS_MT_TOUCH_MINOR stay basically unchanged and the sign of ABS_MT_ORIENTATION stays basically unchanged. When the user operates with the right hand alone, the touched positions are mainly concentrated in the fan-shaped area in the lower right corner of the terminal screen, and likewise the direction of the contact ellipse region at each position within that area is basically consistent, i.e., ABS_MT_TOUCH_MAJOR and ABS_MT_TOUCH_MINOR at a fixed position stay basically unchanged and the sign of ABS_MT_ORIENTATION stays basically unchanged.
  • The embodiment of the present invention can therefore establish standard models for multiple operation modes as the basis for automatic recognition of the operation mode.
  • The standard model modeling process for an operation mode includes:
  • Step 1: collect input events during left-hand one-handed operation.
  • The terminal may provide an interactive interface that prompts the user to operate with the left hand alone, and collect a certain number of events from left-hand one-handed operation of the screen as standard sample events.
  • The collected content includes input events generated by the user operating the terminal with the left hand alone, with touch operations such as tapping and sliding within the range the left hand can reach.
  • Step 2: generate the standard model of left-hand one-handed operation input events. After the input event collection is completed, standard modeling is performed on the collected event feature points to obtain the left-hand one-handed operation input event standard model.
  • The modeling content includes: the range of contact center coordinate points determined by ABS_MT_POSITION_X and ABS_MT_POSITION_Y, used to establish the areas A and C shown in FIG. 3; and the size and direction of the contact ellipse region at each contact center coordinate point, determined by ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION.
  • Step 3: collect input events during right-hand one-handed operation; the process is similar to step 1.
  • Step 4: generate the standard model of right-hand one-handed operation input events; the process is similar to step 2.
  • A standard model of two-handed operation input events can be built in the same way, but considering that the user can only use three operation modes (left-hand one-handed, right-hand one-handed, and two-handed operation), it is also possible not to build a standard model of two-handed operation input events and instead to decide that the mode is two-handed by excluding one-handed operation.
  • On the basis of the standard models built in steps 2 and 4, the ranges of areas A, B, C, and D shown in FIG. 3 are obtained, together with the difference in ellipse direction within area C between left-hand one-handed and right-hand one-handed operation.
  • After the standard models are established, the operation mode can be automatically recognized using them.
  • The recognition process is as follows:
  • S1: initialize recognition, clear the cached input events, and start operation mode recognition.
  • S2: collect a number of consecutive touch events while the user operates the touch screen, and record the resulting sequences of ABS_MT_POSITION_X, ABS_MT_POSITION_Y, ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION events.
  • S3: compile statistics on the distribution of the coordinate positions determined by ABS_MT_POSITION_X and ABS_MT_POSITION_Y in the collected event sequence, and statistically analyze the size and direction of the contact ellipse region determined by ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION at each coordinate position.
  • S4: according to the pre-established standard models of all operation modes, determine which operation mode the statistical result matches, including:
  • when the proportion of coordinates falling in areas A and C in the lower left corner of the screen reaches the set threshold, and the size and direction of the contact ellipse region at each center coordinate point are similar enough to the ellipse in the left-hand one-handed standard model at the same coordinate point position or at a nearby coordinate point (a nearby point is used when the standard model has no data at that exact position), the user is considered to be currently operating with the left hand alone.
  • when the proportion of coordinates falling in areas B and C in the lower right corner of the screen reaches the set threshold, and the size and direction of the contact ellipse region at each center coordinate point are similar enough to the ellipse in the right-hand one-handed standard model at the same or a nearby coordinate point position, the user is considered to be currently operating with the right hand alone.
  • when the coordinate distribution of the collected touch events does not reach the threshold in either region, and the similarity in size and direction of the contact ellipse regions does not reach the set threshold either, the user is considered to be currently operating with both hands.
  • Each of the above thresholds can be set flexibly according to requirements, and the thresholds may be equal or different.
  • In an optional embodiment, to improve recognition efficiency, only the coordinate position distribution and the direction of the contact ellipse regions within area C are considered, and the size and direction of the contact ellipse region at each center coordinate point within area A are used as a supplementary criterion that can be enabled or disabled by configuration.
  • Taking the recognition of left-hand one-handed operation as an example: first, the positions of the contact center coordinate points of the touch events are analyzed.
  • When the proportion of touch event center coordinate points concentrated in areas A and C of FIG. 3 reaches a certain threshold, one of the necessary conditions for judging that the user is operating with the left hand alone is met.
  • Next, the direction of the contact ellipse regions of the touch events within area C is analyzed and compared with the direction of the touch ellipse regions within area C of the left-hand one-handed standard model.
  • When the similarity reaches a preset threshold, another necessary condition for judging left-hand one-handed operation is met.
  • As a supplement, the size and direction of each contact ellipse region of the touch events within area A are analyzed and compared with the size and direction of the contact ellipse regions within area A of the left-hand one-handed standard model; when the similarity reaches the preset threshold, a supplementary condition for judging left-hand one-handed operation is met.
  • Finally, the above results are combined to decide whether the user is operating with the left hand alone; recognition of right-hand one-handed operation is similar.
  • The embodiment of the invention further provides a terminal, as shown in FIG. 6, comprising:
  • a touch screen driver 610 configured to collect touch events acting on the touch screen;
  • a processor 620 configured to extract the input events generated by the touch events collected by the touch screen driver 610, call pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode;
  • the input event includes: the position of the contact center coordinate point of the touch event on the touch screen and the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point.
  • The operation modes include: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
  • The standard model of an operation mode includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within that operation area.
  • In an optional embodiment, the processor 620 is configured to perform standard model matching according to the input events as follows: based on the proportion of touch events whose contact center coordinate point position falls within the operation area, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position, standard model matching is performed.
  • In another optional embodiment, the input event further includes the size of the contact ellipse region whose center is the contact center coordinate point, and the standard model of the operation mode further includes the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  • In that case, the processor 620 is configured to perform standard model matching according to the input events as follows: based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point, standard model matching is performed.
  • The processor is further configured to:
  • provide an interactive interface to the user, collect touch sample events for each operation mode input by the user through the interactive interface, and establish a standard model for each operation mode based on the collected touch sample events.
  • The processor is further configured to adjust the layout of the terminal operation interface according to the recognized operation mode.
  • The terminal of this embodiment can, on the basis of the terminal's existing input system and without adding extra hardware modules, intelligently recognize the user's operation mode and automatically adjust the terminal's operation interface settings, which makes operation more convenient and greatly improves the user experience of the product.
  • All or part of the steps of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: a ROM, a RAM, a magnetic disk, or an optical disc.
  • Embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions for performing any of the methods described above.
  • Each module/unit in the foregoing embodiments may be implemented in the form of hardware, for example by an integrated circuit that implements its corresponding function, or in the form of a software function module, for example by a processor executing a program/instructions stored in a memory to implement its corresponding function.
  • The invention is not limited to any specific form of combination of hardware and software.
  • The above technical solution intelligently recognizes the user's operation mode without adding extra hardware modules and makes operation more convenient, thereby greatly improving the user experience of the product.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An automatic operation mode recognition method and a terminal. The method includes: collecting touch events acting on a touch screen and extracting the input events generated by all the touch events; calling pre-established standard models of the operation modes, performing standard model matching according to the input events, and taking the operation mode corresponding to the matched standard model as the recognized operation mode.

Description

Automatic operation mode recognition method and terminal
Technical Field
This document relates to, but is not limited to, the field of communications, and in particular to an automatic operation mode recognition method and terminal.
Background
The screen size of intelligent terminals is becoming larger and larger, which makes one-handed operation very inconvenient: the interface and buttons of a mobile terminal do not change with the user's usage habits. For example, if the return button, the key used most often, is placed at the far right, it is very awkward to reach when operating with the left hand.
Several solutions already exist for intelligently judging left-hand or right-hand operation, for example:
Solution 1: dynamically configure the buttons by judging the screen rotation state of the terminal;
Solution 2: automatically determine how the user is holding the terminal through light-sensing modules arranged on both sides of the terminal;
Solution 3: use a deflection angle sensor built into the terminal to detect the deflection angle of the terminal while the user operates it, and thereby identify left-hand or right-hand operation;
Solution 4: acquire the fingerprint pattern of the finger to identify left-hand or right-hand operation.
The above existing technical solutions solve, to some extent, the problem of intelligently judging left-hand or right-hand operation, but each of them also has drawbacks and room for further improvement, specifically:
Solutions 1 and 3 both obtain the rotation angle of the terminal through a sensor and then judge which hand is in use. These solutions can misjudge, because whether the user operates with the left or the right hand the terminal can assume many different rotation angles, so the rotation angle alone is not a sufficient basis;
Solution 2 requires additional light-sensing modules, which increases device cost and structural complexity and is not very versatile;
Solution 4 is based on graphic and fingerprint pattern recognition. This approach requires an additional graphic discrimination module to recognize the pattern the user presses on the screen, and a fingerprint pattern recognition function must then be added. These approaches first of all require extra hardware or module support, which increases terminal cost. In addition, pattern or fingerprint recognition needs certain algorithms and computation, which consume more power; the image recognition algorithm occupies the CPU and affects terminal performance. Finally, graphic or fingerprint recognition itself is not one hundred percent accurate, so there is some possibility of misjudgment.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
Embodiments of the present invention provide an automatic operation mode recognition method and a terminal, which can recognize the operation mode without adding extra hardware or extra recognition computation.
An embodiment of the present invention provides an automatic operation mode recognition method, including:
collecting touch events acting on a touch screen and extracting the input events generated by the touch events, where an input event includes: the position of the contact center coordinate point of the touch event on the touch screen, and the direction of the contact ellipse region whose center coordinate point is that contact center coordinate point;
calling pre-established standard models of all operation modes, performing standard model matching according to the input events, and taking the operation mode corresponding to the matched standard model as the recognized operation mode.
Optionally, the standard model of an operation mode includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within the operation area corresponding to the operation mode.
Optionally, the input event further includes: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
the standard model of the operation mode then further includes: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
Optionally, performing standard model matching according to the input events includes:
performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point.
Optionally, performing standard model matching according to the input events includes:
performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position.
Optionally, the standard model of an operation mode is established as follows:
providing the user with an interactive interface;
collecting touch sample events for each operation mode input by the user through the interactive interface;
establishing a standard model for each operation mode based on the collected touch sample events.
Optionally,
the operation modes include: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
Optionally, the method further includes:
after the operation mode is recognized, adjusting the layout of the terminal operation interface according to the recognized operation mode.
An embodiment of the present invention further provides a terminal, including:
a touch screen driver configured to collect touch events acting on a touch screen;
a processor configured to extract the input events generated by the touch events collected by the touch screen driver, call pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode; the input event includes: the position of the contact center coordinate point of the touch event on the touch screen and the direction of the contact ellipse region whose center coordinate point is that contact center coordinate point.
Optionally, the standard model of the operation mode called by the processor includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within the operation area corresponding to the operation mode.
Optionally, the input event further includes: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
the standard model of the operation mode called by the processor then further includes: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
Optionally, the processor is configured to perform standard model matching according to the input events in the following manner:
performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point.
Optionally, the processor is configured to perform standard model matching according to the input events in the following manner:
performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position.
Optionally, the processor is further configured to:
provide the user with an interactive interface, collect touch sample events for each operation mode input by the user through the interactive interface, and establish a standard model for each operation mode based on the collected touch sample events.
Optionally,
the operation modes include: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
Optionally, the processor is further configured to adjust the layout of the terminal operation interface according to the recognized operation mode.
The beneficial effects of the embodiments of the present invention are as follows:
The solution of the embodiments of the present invention can intelligently recognize the user's operation mode on the basis of the terminal's existing input system, without adding extra hardware modules, and can automatically adjust terminal settings according to the recognized operation mode, which makes operation more convenient and greatly improves the user experience of the product.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a flowchart of an automatic terminal operation mode recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the meanings of the input event parameters of the multi-touch protocol in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the partitioning of the terminal screen area in an embodiment of the present invention;
FIG. 4 is a schematic diagram of input event characteristics during left-hand one-handed operation in an embodiment of the present invention;
FIG. 5 is a schematic diagram of input event characteristics during right-hand one-handed operation in an embodiment of the present invention;
FIG. 6 is a structural block diagram of a terminal according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
To overcome the various shortcomings of the operation mode recognition approaches in the related art, an embodiment of the present invention provides an automatic operation mode recognition method. The method relies on the fact that terminal touch screen input events follow the multi-touch protocol (Multi-touch Protocol): while the user operates the screen, the touch screen driver reports input events to the upper-layer program according to the rules defined in the multi-touch protocol. By parsing these input events reported by the driver, the embodiment of the present invention can intelligently judge whether the user is operating with the left hand, the right hand, or both hands.
As shown in FIG. 1, an embodiment of the present invention provides an automatic operation mode recognition method, which includes the following steps:
Step S101: collect touch events acting on the touch screen and extract the input events generated by the touch events;
the input event includes: the position of the contact center coordinate point of the touch event and the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point of the touch event (for example, the direction indicated by the long axis of the contact ellipse region);
optionally, the input event further includes the size of the contact ellipse region whose center coordinate point is the contact center coordinate point of the touch event (for example, the area of the contact ellipse region, or its perimeter, long axis, or short axis).
As shown in FIG. 2, the input events specified by the multi-touch protocol that are used in the embodiment of the present invention include:
ABS_MT_POSITION_X, indicating the X-axis coordinate of the contact center of the touch event;
ABS_MT_POSITION_Y, indicating the Y-axis coordinate of the contact center of the touch event;
ABS_MT_TOUCH_MAJOR, indicating the long axis of the contact ellipse region of the touch event;
ABS_MT_TOUCH_MINOR, indicating the short axis of the contact ellipse region of the touch event;
ABS_MT_ORIENTATION, indicating the direction of the contact ellipse region of the touch event; optionally, a value of 0 is returned when the long axis of the contact ellipse region is aligned with the Y axis of the screen coordinate system, a negative value is returned when the long axis is turned to the left, and a positive value when it is turned to the right.
How the input events generated by the touch events are extracted can be implemented using techniques well known to those skilled in the art and is not intended to limit the scope of the present invention, so it is not described further here. For example, under the Android system the input events can be obtained by monitoring the input device node files under /dev/input.
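For illustration only, the following is a minimal sketch of how such input events could be collected on a Linux/Android device; it is not the implementation disclosed by this application, and it assumes the third-party python-evdev package and a touchscreen node such as /dev/input/event0:

```python
# Minimal sketch (not part of the application): collect the ABS_MT_* parameters
# listed above from a Linux multi-touch input node using python-evdev.
# The device path and the library choice are assumptions for illustration.
from evdev import InputDevice, ecodes

TRACKED = {
    ecodes.ABS_MT_POSITION_X: "x",
    ecodes.ABS_MT_POSITION_Y: "y",
    ecodes.ABS_MT_TOUCH_MAJOR: "major",
    ecodes.ABS_MT_TOUCH_MINOR: "minor",
    ecodes.ABS_MT_ORIENTATION: "orientation",
}

def read_touch_samples(path="/dev/input/event0"):
    """Yield one dict per SYN_REPORT containing the tracked ABS_MT_* values."""
    device = InputDevice(path)
    sample = {}
    for event in device.read_loop():
        if event.type == ecodes.EV_ABS and event.code in TRACKED:
            sample[TRACKED[event.code]] = event.value
        elif event.type == ecodes.EV_SYN and event.code == ecodes.SYN_REPORT and sample:
            yield dict(sample)  # one completed touch sample
            sample.clear()
```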
Step S102: call the pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode.
In this embodiment, the standard model of an operation mode may be an inherent configuration of the terminal (i.e., the factory default configuration), or, optionally, a standard model matching the habits of the particular user, established by collecting the user's touch operations. In the latter case, the standard model building process includes:
(1) providing the user with an interactive interface and prompting the user to operate in one operation mode;
(2) collecting a certain number of touch events as standard sample events and recording the input events generated by the standard sample events;
(3) building a standard model of the current operation mode from the input events generated by the recorded standard sample events;
(4) prompting the user to operate in the next operation mode and repeating the above collection and modeling process until all operation modes have been modeled.
Optionally, the standard model of an operation mode includes: the operation area corresponding to the operation mode and the directions of multiple contact ellipse regions within that operation area; optionally, it also includes the sizes of multiple contact ellipse regions within that operation area.
Standard model matching according to the input events mainly considers the operation area to which the contact center coordinate point position of the touch event belongs, and the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point;
how this direction similarity is computed can be implemented using techniques well known to those skilled in the art and is not intended to limit the scope of the present invention, so it is not described further here.
Optionally, to improve the accuracy of the judgment, the similarity between the size of the contact ellipse region whose center is the contact center coordinate point and the size of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point, is also considered.
How this size similarity is computed can likewise be implemented using techniques well known to those skilled in the art and is not intended to limit the scope of the present invention, so it is not described further here.
Optionally, to improve recognition accuracy, in this embodiment standard model matching is performed over the input events of multiple touch events. In that case the matching mainly considers the proportion of touch events whose contact center coordinate point position falls within the operation area corresponding to each operation mode, and the similarity between the direction (or direction and size) of the contact ellipse region at each contact center coordinate point and that of the contact ellipse region, in the standard model corresponding to the operation area of that contact center coordinate point, whose center coordinate point is at the same or a nearby coordinate point. For example, a mode can be considered matched when both the proportion and the similarity reach their corresponding set thresholds. Of course, those skilled in the art may also define their own model matching strategy within the idea of the embodiments of the present invention.
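To make the proportion-and-similarity matching described above concrete, here is an illustrative sketch; it is not the application's implementation, and the data structures, grid quantization, thresholds, and sign-based direction comparison are assumptions introduced only for this example (the samples are dicts as produced by the reading sketch earlier):

```python
# Illustrative sketch: score a batch of touch samples against one
# operation-mode standard model using area proportion and direction similarity.
from dataclasses import dataclass

@dataclass
class StandardModel:
    regions: list          # (x0, y0, x1, y1) rectangles covered by this mode
    orientation: dict      # (grid_x, grid_y) -> reference ABS_MT_ORIENTATION value

def in_regions(x, y, regions):
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in regions)

def match_score(samples, model, cell=80):
    """Return (area_ratio, direction_similarity) for a batch of touch samples."""
    inside = [s for s in samples if in_regions(s["x"], s["y"], model.regions)]
    area_ratio = len(inside) / max(len(samples), 1)
    agree, compared = 0, 0
    for s in inside:
        key = (s["x"] // cell, s["y"] // cell)   # nearest modelled coordinate cell
        if key in model.orientation:
            compared += 1
            # treating "same sign of ABS_MT_ORIENTATION" as a similar direction
            agree += (s["orientation"] >= 0) == (model.orientation[key] >= 0)
    direction_similarity = agree / compared if compared else 0.0
    return area_ratio, direction_similarity

def matches(samples, model, area_thresh=0.7, dir_thresh=0.8):
    area_ratio, direction_similarity = match_score(samples, model)
    return area_ratio >= area_thresh and direction_similarity >= dir_thresh
```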
In addition, after the operation mode is recognized, the recognition result can be reported to the user so that the user can judge whether it is correct. If it is correct, the terminal checks whether the input event information of the collected touch events is already in the standard model library of the corresponding operation mode; if it is, nothing is done; if it is not, the touch event input events that are not yet in the standard model library are stored in it, thereby realizing self-learning of the standard model.
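One possible reading of this self-learning step, continuing the illustrative sketch above (the user-confirmation signal and the per-cell model structure are assumptions, not the application's design):

```python
# Illustrative sketch of the self-learning update: when the user confirms the
# recognized mode, samples whose grid cell is not yet in that mode's standard
# model are added to the model library.
def update_model(model, samples, user_confirmed, cell=80):
    if not user_confirmed:
        return
    for s in samples:
        key = (s["x"] // cell, s["y"] // cell)
        if key not in model.orientation:
            model.orientation[key] = s["orientation"]  # store the new reference
```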
Optionally, the method of the embodiment of the present invention further includes: after the operation mode is recognized, adjusting the layout of the terminal operation interface according to the recognized operation mode, i.e., switching the layout of the terminal operation interface to the layout corresponding to the current operation mode. The layout of the terminal operation interface includes, for example, the layout of the virtual keyboard.
How the layout of the terminal operation interface is adjusted can be implemented using techniques well known to those skilled in the art and is not intended to limit the scope of the present invention, so it is not described further here.
It can be seen that the method of this embodiment can intelligently recognize the user's operation mode on the basis of the terminal's existing input system, without adding extra hardware or software modules, and automatically adjust the terminal's operation interface settings, which makes operation more convenient and greatly improves the user experience of the product.
To explain the present invention more clearly, a preferred embodiment of the present invention is given below with reference to FIGS. 3 to 5. This embodiment explains the method of the present invention in more detail by disclosing more technical details; it should be noted that the disclosed technical details are intended to explain the present invention and not to limit it exclusively.
As shown in FIG. 3, in the embodiment of the present invention the terminal screen area is divided into four areas: A, B, C, and D. When the user operates the terminal with the left hand alone, the coordinates of touch events are mainly concentrated in areas A and C; when the user operates the terminal with the right hand alone, the coordinates of touch events are mainly concentrated in areas B and C; area D is near the top of the terminal screen, and during one-handed operation touch event coordinates rarely fall within it.
As shown in FIG. 4, during left-hand one-handed operation the coordinates of touch events are mainly concentrated in areas A and C, and the size and direction of the contact ellipse region at each position within areas A and C show regularity.
As shown in FIG. 5, during right-hand one-handed operation the coordinates of touch events are mainly concentrated in areas B and C, and the size and direction of the contact ellipse region at each position within areas B and C show regularity.
As can be seen from FIGS. 4 and 5, in both left-hand and right-hand one-handed operation the coordinates of touch events fall in area C, but the direction of the contact ellipse within area C differs between the two cases: the directions generally differ by about 90 degrees, i.e., the value of ABS_MT_ORIENTATION differs in sign.
It can be seen from FIGS. 3 to 5 that when the user operates with the left hand alone, the touched positions are mainly concentrated in the fan-shaped area in the lower left corner of the terminal screen, and at each position within this area the size and direction of the contact ellipse region are basically consistent from operation to operation; that is, at a fixed position the values of ABS_MT_TOUCH_MAJOR and ABS_MT_TOUCH_MINOR stay basically unchanged and the sign of ABS_MT_ORIENTATION stays basically unchanged. When the user operates with the right hand alone, the touched positions are mainly concentrated in the fan-shaped area in the lower right corner of the terminal screen, and the direction of the contact ellipse region at each position within that area is basically consistent, i.e., at a fixed position ABS_MT_TOUCH_MAJOR and ABS_MT_TOUCH_MINOR stay basically unchanged and the sign of ABS_MT_ORIENTATION stays basically unchanged.
Therefore, the embodiment of the present invention can establish standard models of multiple operation modes as the basis for automatic recognition of the operation mode.
In this embodiment, the standard model modeling process for an operation mode includes:
Step 1: collect input events during left-hand one-handed operation. To improve recognition accuracy, in a concrete implementation the terminal may provide an interactive interface that prompts the user to operate with the left hand alone, and collect a certain number of events from left-hand one-handed operation of the screen as standard sample events.
The collected content includes the input events generated by the user operating the terminal with the left hand alone, performing touch operations such as tapping and sliding within the range the left hand can reach.
Step 2: generate the standard model of left-hand one-handed operation mode input events. After the input event collection is completed, standard modeling is performed on the collected event feature points to obtain the left-hand one-handed operation input event standard model.
The modeling content includes: the range of contact center coordinate points determined by ABS_MT_POSITION_X and ABS_MT_POSITION_Y, used to establish the areas A and C shown in FIG. 3; and the size and direction of the contact ellipse region at each contact center coordinate point position, determined by ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION.
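As an illustration of Step 2 (a sketch under the same assumptions as the earlier examples, reusing the StandardModel structure defined there; it is not the application's implementation), a standard model could be derived from the collected samples like this:

```python
# Illustrative sketch: build a standard model (operation region plus a
# per-cell reference orientation) from the samples collected in Step 1.
def build_model(samples, cell=80):
    xs = [s["x"] for s in samples]
    ys = [s["y"] for s in samples]
    # a single bounding rectangle over the sampled positions stands in for the
    # A/C region ranges derived from ABS_MT_POSITION_X / ABS_MT_POSITION_Y
    regions = [(min(xs), min(ys), max(xs), max(ys))]
    orientation = {}
    for s in samples:
        key = (s["x"] // cell, s["y"] // cell)
        # keep the first observed ABS_MT_ORIENTATION per cell as the reference
        orientation.setdefault(key, s["orientation"])
    return StandardModel(regions=regions, orientation=orientation)
```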
Step 3: collect input events during right-hand one-handed operation; the process is similar to Step 1.
Step 4: generate the standard model of right-hand one-handed operation mode input events; the process is similar to Step 2.
Following the same modeling approach as for the left and right hands, a standard model of two-handed operation mode input events can also be obtained. However, considering that the user can only use three operation modes, namely left-hand one-handed operation, right-hand one-handed operation, and two-handed operation, it is also possible not to build a standard model of two-handed operation mode input events, and instead to judge whether the mode is two-handed by excluding one-handed operation.
Optionally, on the basis of the standard models built in Step 2 and Step 4, the ranges of areas A, B, C, and D shown in FIG. 3 are obtained, together with the difference in ellipse direction within area C between left-hand one-handed and right-hand one-handed operation.
After the standard models are established, the operation mode can be automatically recognized using them. The recognition process is as follows:
S1: initialize recognition, clear the cached input events, and start operation mode recognition.
S2: collect a certain number of consecutive touch events while the user operates the touch screen, and record the resulting sequences of ABS_MT_POSITION_X, ABS_MT_POSITION_Y, ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION events;
S3: compile statistics on the distribution of the coordinate positions determined by ABS_MT_POSITION_X and ABS_MT_POSITION_Y in the collected event sequence, and statistically analyze the size and direction of the contact ellipse region determined by ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR, and ABS_MT_ORIENTATION at each coordinate position.
S4: according to the pre-established standard models of all operation modes, determine which operation mode the statistical result matches, including:
when, among the collected touch events, the proportion of coordinates falling in areas A and C in the lower left corner of the screen reaches the set threshold, and the size and direction of the contact ellipse region at each center coordinate point are similar enough (i.e., the similarity reaches the set threshold) to the ellipse in the left-hand one-handed standard model at the same coordinate point position or at a nearby coordinate point (a nearby point is used when the standard model contains no data at that exact position), the user is considered to be currently operating with the left hand alone.
when, among the collected touch events, the proportion of coordinates falling in areas B and C in the lower right corner of the screen reaches the set threshold, and the size and direction of the contact ellipse region at each center coordinate point are similar enough to the ellipse in the right-hand one-handed standard model at the same or a nearby coordinate point position, the user is considered to be currently operating with the right hand alone.
when, among the collected touch events, the coordinate distribution does not reach the threshold in either region, and the similarity in size and direction of the contact ellipse regions does not reach the set threshold either, the user is considered to be currently operating with both hands.
Each of the above thresholds can be set flexibly according to requirements, and the thresholds may be equal or different.
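Putting the three cases of S4 together, still as an illustrative sketch with invented thresholds and the helper functions from the earlier examples:

```python
# Illustrative sketch of the S4 decision: the left-hand and right-hand models
# are checked first; two-handed operation is inferred by exclusion.
def recognize(samples, left_model, right_model, area_thresh=0.7, dir_thresh=0.8):
    if matches(samples, left_model, area_thresh, dir_thresh):
        return "left_one_handed"
    if matches(samples, right_model, area_thresh, dir_thresh):
        return "right_one_handed"
    return "two_handed"
```

A caller could, for example, gather a fixed number of samples from read_touch_samples() and pass them to recognize() together with the left-hand and right-hand models built earlier.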
In an optional embodiment, to improve recognition efficiency, only the coordinate position distribution proportion and the direction of the contact ellipse regions within area C may be considered, with the size and direction of the contact ellipse region at each center coordinate point within area A used as a supplementary criterion; whether the A-area sizes and directions are used as a basis for the judgment can be chosen by configuration.
This optional embodiment is described below, taking the recognition of left-hand one-handed operation as an example:
First, the positions of the contact center coordinate points of the touch events are analyzed; when the proportion of touch event center coordinate points concentrated in areas A and C of FIG. 3 reaches a certain threshold, one of the necessary conditions for judging that the user is operating with the left hand alone is met.
Next, the direction of the contact ellipse regions of the touch events within area C is analyzed, and the result is compared with the direction of the touch ellipse regions within area C of the left-hand one-handed standard model; when the similarity reaches a preset threshold, another of the necessary conditions for judging left-hand one-handed operation is met.
As a supplement, the size and direction of each contact ellipse region of the touch events within area A are analyzed, and the result is compared with the size and direction of the contact ellipse regions within area A of the left-hand one-handed standard model; when the similarity reaches the preset threshold, a supplementary condition for judging left-hand one-handed operation is met.
Finally, based on the above analysis results, it is comprehensively determined whether the user is operating with the left hand alone. The process of recognizing right-hand one-handed operation is similar to the above steps and is not repeated here.
S5: after the operation mode is successfully recognized, the relevant settings of the terminal, including the virtual keyboard layout and the operation interface layout, are automatically adjusted according to the recognition result, and a message is displayed to inform the user that the terminal has been set to the left-hand or right-hand operation mode.
S6: when the automatic one-handed operation recognition switch remains on, the flow returns to S1 to start the next round of recognition; when the automatic recognition switch is turned off, the flow ends.
An embodiment of the present invention further provides a terminal, as shown in FIG. 6, including:
a touch screen driver 610 configured to collect touch events acting on a touch screen;
a processor 620 configured to extract the input events generated by the touch events collected by the touch screen driver 610, call pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode; the input event includes: the position of the contact center coordinate point of the touch event on the touch screen and the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point.
Based on the above structural framework and implementation principles, several concrete and optional implementations under this structure are given below to refine and optimize the functions of the terminal of the present invention, so that the solution of the present invention can be implemented more conveniently and accurately. They involve the following:
In this embodiment, the operation modes include: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
Optionally, the standard model of an operation mode includes: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within the operation area corresponding to the operation mode.
In an optional embodiment of the present invention, the processor 620 is configured to perform standard model matching according to the input events in the following manner: based on the proportion of touch events whose contact center coordinate point position falls within the operation area, and on the similarity between the direction of the contact ellipse region whose center is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is at the same or a nearby coordinate point position, standard model matching is performed.
In an optional embodiment of the present invention: the input event further includes the size of the contact ellipse region whose center is the contact center coordinate point; the standard model of the operation mode further includes the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
In that case, the processor 620 is configured to perform standard model matching according to the input events in the following manner: based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to that operation area, whose center coordinate point is the same as or close to the contact center coordinate point, standard model matching is performed.
In yet another optional embodiment of the present invention, the processor is further configured to:
provide the user with an interactive interface, collect touch sample events for each operation mode input by the user through the interactive interface, and establish a standard model for each operation mode based on the collected touch sample events.
In yet another optional embodiment of the present invention, the processor is further configured to adjust the layout of the terminal operation interface according to the recognized operation mode.
The terminal of this embodiment can, on the basis of the terminal's existing input system and without adding extra hardware modules, intelligently recognize the user's operation mode and automatically adjust the terminal's operation interface settings, which makes operation more convenient and greatly improves the user experience of the product.
Each embodiment in this specification is described in a progressive manner; for the parts that are the same or similar between embodiments, reference may be made between them, and each embodiment focuses on its differences from the other embodiments. In particular, since the terminal embodiment is basically similar to the method embodiment, its description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiment.
Those of ordinary skill in the art can understand that all or part of the steps of the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: a ROM, a RAM, a magnetic disk, or an optical disc.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to perform any one of the methods described above.
Those of ordinary skill in the art can understand that all or part of the steps of the above methods can be completed by a program instructing the relevant hardware (for example, a processor); the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc. Optionally, all or part of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit implementing its corresponding function, or in the form of a software function module, for example by a processor executing a program/instructions stored in a memory to implement its corresponding function. The present invention is not limited to any specific form of combination of hardware and software.
In short, the above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Industrial Applicability
The above technical solution intelligently recognizes the user's operation mode without adding extra hardware modules and makes operation more convenient, thereby greatly improving the user experience of the product.

Claims (16)

  1. An automatic operation mode recognition method, comprising:
    collecting touch events acting on a touch screen and extracting input events generated by the touch events, an input event comprising: the position of the contact center coordinate point of the touch event on the touch screen and the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point;
    calling pre-established standard models of all operation modes, performing standard model matching according to the input events, and taking the operation mode corresponding to the matched standard model as the recognized operation mode.
  2. The method according to claim 1, wherein the standard model of an operation mode comprises: the operation area corresponding to the operation mode, and the directions of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  3. The method according to claim 2, wherein the input event further comprises: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
    the standard model of the operation mode further comprises: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  4. The method according to claim 2, wherein performing standard model matching according to the input events comprises:
    performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to the operation area to which the contact center coordinate point position belongs, whose center coordinate point is the same as or close to the contact center coordinate point.
  5. The method according to claim 3, wherein performing standard model matching according to the input events comprises:
    performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center coordinate point is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to the operation area to which the contact center coordinate point position belongs, whose center coordinate point is at the same or a nearby coordinate point position.
  6. The method according to any one of claims 2 to 5, wherein the standard model of an operation mode is established by:
    providing the user with an interactive interface;
    collecting touch sample events for each operation mode input by the user through the interactive interface;
    establishing a standard model for each operation mode based on the collected touch sample events.
  7. The method according to claim 1, wherein
    the operation modes comprise: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
  8. The method according to any one of claims 1 to 5 and 7, further comprising:
    after the operation mode is recognized, adjusting the layout of the terminal operation interface according to the recognized operation mode.
  9. A terminal, comprising:
    a touch screen driver configured to collect touch events acting on a touch screen;
    a processor configured to extract input events generated by the touch events collected by the touch screen driver, call pre-established standard models of all operation modes, perform standard model matching according to the input events, and take the operation mode corresponding to the matched standard model as the recognized operation mode; an input event comprising: the position of the contact center coordinate point of the touch event on the touch screen and the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point.
  10. The terminal according to claim 9, wherein the standard model of the operation mode called by the processor comprises: the operation area corresponding to the operation mode, and the directions of the contact ellipse regions at multiple coordinate point positions within the operation area corresponding to the operation mode.
  11. The terminal according to claim 10, wherein the input event further comprises: the size of the contact ellipse region whose center coordinate point is the contact center coordinate point;
    the standard model of the operation mode called by the processor further comprises: the sizes of multiple contact ellipse regions within the operation area corresponding to the operation mode.
  12. The terminal according to claim 10, wherein the processor is configured to perform standard model matching according to the input events in the following manner:
    performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity between the direction of the contact ellipse region whose center coordinate point is the contact center coordinate point and the direction of the contact ellipse region, in the standard model corresponding to the operation area to which the contact center coordinate point position belongs, whose center coordinate point is the same as or close to the contact center coordinate point.
  13. The terminal according to claim 11, wherein the processor is configured to perform standard model matching according to the input events in the following manner:
    performing standard model matching based on the operation area to which the contact center coordinate point position of the touch event belongs, and on the similarity in direction and size between the contact ellipse region whose center coordinate point is the contact center coordinate point and the contact ellipse region, in the standard model corresponding to the operation area to which the contact center coordinate point position belongs, whose center coordinate point is the same as or close to the contact center coordinate point.
  14. The terminal according to any one of claims 10 to 13, wherein the processor is further configured to:
    provide the user with an interactive interface, collect touch sample events for each operation mode input by the user through the interactive interface, and establish a standard model for each operation mode based on the collected touch sample events.
  15. The terminal according to claim 9, wherein
    the operation modes comprise: a left-hand one-handed operation mode, a right-hand one-handed operation mode, and a two-handed operation mode.
  16. The terminal according to any one of claims 9 to 13 and 15, wherein the processor is further configured to adjust the layout of the terminal operation interface according to the recognized operation mode.
PCT/CN2016/080055 2016-02-04 2016-04-22 Automatic operation mode recognition method and terminal WO2016197714A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610076799.1 2016-02-04
CN201610076799.1A CN107037951B (zh) 2016-02-04 2016-02-04 Automatic operation mode recognition method and terminal

Publications (1)

Publication Number Publication Date
WO2016197714A1 true WO2016197714A1 (zh) 2016-12-15

Family

ID=57503126

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/080055 WO2016197714A1 (zh) 2016-02-04 2016-04-22 Automatic operation mode recognition method and terminal

Country Status (2)

Country Link
CN (1) CN107037951B (zh)
WO (1) WO2016197714A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933271B (zh) * 2017-12-18 2022-06-17 佳能株式会社 数据处理装置和方法、用户界面调节装置和方法及介质
CN110858120B (zh) * 2018-08-24 2023-02-17 北京搜狗科技发展有限公司 输入键盘推荐方法及装置
CN113996058B (zh) * 2021-11-01 2023-07-25 腾讯科技(深圳)有限公司 信息处理方法、装置、电子设备和计算机可读存储介质
CN114103845B (zh) * 2022-01-25 2022-04-15 星河智联汽车科技有限公司 一种车辆中控屏操作者身份识别方法、装置及车辆


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8154529B2 (en) * 2009-05-14 2012-04-10 Atmel Corporation Two-dimensional touch sensors
CN104281368A (zh) * 2014-09-29 2015-01-14 小米科技有限责任公司 界面的显示方法、装置及终端设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916161A (zh) * 2010-08-04 2010-12-15 宇龙计算机通信科技(深圳)有限公司 基于手指按压区域图形选择界面模式的方法及移动终端
WO2012019350A1 (en) * 2010-08-12 2012-02-16 Google Inc. Finger identification on a touchscreen
CN103488406A (zh) * 2012-06-11 2014-01-01 中兴通讯股份有限公司 调整移动终端屏幕键盘的方法、装置及移动终端
CN103927105A (zh) * 2013-01-11 2014-07-16 联想(北京)有限公司 一种用户界面显示方法及电子设备
CN104932825A (zh) * 2015-06-15 2015-09-23 金陵科技学院 一种自动感知左右手操作手机并确定拇指活动热区的方法

Also Published As

Publication number Publication date
CN107037951B (zh) 2020-02-21
CN107037951A (zh) 2017-08-11

Similar Documents

Publication Publication Date Title
WO2018107900A1 (zh) 一种触摸屏的防误触方法、装置、移动终端及存储介质
KR101932210B1 (ko) 터치 신호에 의하여 이동 단말기의 조작을 실현하는 방법, 시스템 및 이동 단말기
CN106598335B (zh) 一种移动终端的触摸屏控制方法、装置及移动终端
CN106681638B (zh) 一种触摸屏控制方法、装置及移动终端
KR101844366B1 (ko) 터치 제스처 인식 장치 및 방법
CN104216642B (zh) 一种终端控制方法
US20130222338A1 (en) Apparatus and method for processing a plurality of types of touch inputs
CN106681554B (zh) 一种移动终端触摸屏的控制方法、装置及移动终端
CN108055405B (zh) 唤醒终端的方法及终端
US20120176322A1 (en) Systems and methods to present multiple frames on a touch screen
CN106681636B (zh) 一种防误触的方法、装置及移动终端
CN105528130B (zh) 一种控制方法、装置和电子设备
CN106775405A (zh) 一种移动终端的触摸屏防误触方法、装置及移动终端
CN107132986B (zh) 一种虚拟按键智能调节触控响应区域的方法及装置
WO2016197714A1 (zh) 操作模式自动识别方法及终端
CN104216516B (zh) 一种终端
US9189152B2 (en) Touch device and method for dynamically setting touch inactive area, and non-transitory recording medium
CN105353927B (zh) 电容式触控装置及其物体辨识方法
CN108874234B (zh) 一种触控识别方法、装置及触控显示装置
WO2017161826A1 (zh) 一种功能的控制方法和终端
TW201525849A (zh) 多邊形手勢偵測及互動方法、裝置及電腦程式產品
CN104571882A (zh) 基于终端的用户操作模式的判断方法及装置、终端
EP3792740A1 (en) Key setting method and device, and storage medium
US20230188638A1 (en) Control method and device
CN108920055A (zh) 触控操作方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16806612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16806612

Country of ref document: EP

Kind code of ref document: A1