
WO2020211596A1 - Control method and terminal device - Google Patents

Control method and terminal device

Info

Publication number
WO2020211596A1
WO2020211596A1 (PCT/CN2020/080679)
Authority
WO
WIPO (PCT)
Prior art keywords
input
target
terminal device
screen
user
Prior art date
Application number
PCT/CN2020/080679
Other languages
English (en)
French (fr)
Inventor
杨阳
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2020211596A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers

Definitions

  • The embodiments of the present disclosure relate to the field of communication technologies, and in particular to a control method and terminal device.
  • With the continuous development of terminal technology, the functions of terminal devices are becoming more and more powerful. In particular, to facilitate user operations, most terminal devices can support multi-finger touch input.
  • Currently, when a user uses the multi-finger touch input function of a terminal device, one hand generally holds the terminal device, or the device is placed on a supporting object (for example, a table), while the other hand performs the multi-finger touch input. That is, the user needs the assistance of both hands or another support to realize multi-finger touch input.
  • However, when the user holds and operates the terminal device with one hand, it is inconvenient to perform multi-finger touch input, so the terminal device cannot realize multi-finger touch functions and human-computer interaction is poor. The embodiments of the present disclosure provide a control method and a terminal device to solve this problem.
  • In a first aspect, the embodiments of the present disclosure provide a control method applied to a terminal device. The method includes: when a touch event is detected on a first screen, detecting a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is a first target input, detecting the user's input on the first screen; and when it is detected that the user's input on the first screen is a second target input, executing a target action corresponding to the second target input.
  • In a second aspect, the embodiments of the present disclosure provide a terminal device that includes a detection module and an execution module. The detection module is configured to: when a touch event is detected on the first screen, detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; and, when it is detected that the user's input in the target area is the first target input, detect the user's input on the first screen. The execution module is configured to execute the target action corresponding to the second target input when the detection module detects that the user's input on the first screen is the second target input.
  • In a third aspect, the embodiments of the present disclosure provide a terminal device including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the control method in the first aspect are implemented.
  • In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the control method in the first aspect are implemented.
  • In the embodiments of the present disclosure, when a touch event is detected on the first screen, the terminal device can detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device (hereinafter, the one-hand operation area) when the terminal device is in a one-handed holding state. Through a combination of inputs on the terminal device (in sequence: the touch input on the first screen, the first target input in the target area, and the second target input on the first screen), the user can trigger the terminal device to execute the target action. Compared with the multi-finger touch input of the related art, the combined input lies within one-hand operation areas on different surfaces of the terminal device, so it is easy to perform while holding and operating the device with one hand; the combined input can thus reliably trigger the target function, and human-computer interaction performance is better.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure;
  • FIG. 2 is the first flowchart of the control method provided by an embodiment of the disclosure;
  • FIG. 3 is the second flowchart of the control method provided by an embodiment of the disclosure;
  • FIG. 4 is the third flowchart of the control method provided by an embodiment of the disclosure;
  • FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the disclosure;
  • FIG. 6 is a schematic diagram of the hardware of a terminal device provided by an embodiment of the disclosure.
  • The terms “first”, “second”, “third”, and “fourth” in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects. For example, the first input, the second input, the third input, and the fourth input are used to distinguish different inputs, rather than to describe a specific order of inputs.
  • In the embodiments of the present disclosure, words such as “exemplary” or “for example” are used to indicate an example, illustration, or explanation. Any embodiment or design described as “exemplary” or “for example” should not be construed as more preferable or advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.
  • Unless otherwise stated, “multiple” means two or more; for example, multiple processing units means two or more processing units, and multiple elements means two or more elements.
  • The embodiments of the present disclosure provide a control method. When a touch event is detected on a first screen, a terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state. When it is detected that the user's input in the target area is a first target input, the terminal device detects the user's input on the first screen; when it is detected that the user's input on the first screen is a second target input, it executes the target action corresponding to the second target input. Through this combined input on the terminal device (the touch input on the first screen, the first target input in the target area, and the second target input on the first screen, in sequence), the user can trigger the terminal device to execute the target action. Compared with the multi-finger touch input of the related art, the combined input lies within one-hand operation areas on different surfaces of the terminal device, so it is easy to perform when the user holds and operates the device with one hand, and human-computer interaction performance is better.
  • The following uses the Android operating system as an example to introduce the software environment to which the control method provided by the embodiments of the present disclosure is applied.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure. As shown in FIG. 1, the architecture of the Android operating system includes four layers: the application layer, the application framework layer, the system runtime library layer, and the kernel layer (specifically, the Linux kernel layer).
  • The application layer includes the various applications (including system applications and third-party applications) in the Android operating system.
  • The application framework layer is the framework of applications; developers can develop applications based on the application framework layer while complying with its development principles.
  • The system runtime library layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide the various resources needed by the Android operating system, and the runtime environment provides the software environment for the Android operating system.
  • The kernel layer is the operating system layer of the Android operating system and is the lowest level of the Android operating system software hierarchy; based on the Linux kernel, it provides core system services and hardware-related drivers for the Android operating system.
  • Taking the Android operating system as an example, developers can develop software programs that implement the control method provided by the embodiments of the present disclosure based on the system architecture of the Android operating system shown in FIG. 1, so that the control method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the control method provided by the embodiments of the present disclosure by running the software program in the Android operating system.
  • The terminal device in the embodiments of the present disclosure may be a mobile terminal device or a non-mobile terminal device.
  • The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • The non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present disclosure do not specifically limit this.
  • The execution subject of the control method provided by the embodiments of the present disclosure may be the aforementioned terminal device (including mobile and non-mobile terminal devices), or a functional module and/or functional entity in the terminal device capable of implementing the method; this can be determined according to actual usage requirements, and the embodiments of the present disclosure do not limit it.
  • The following takes a terminal device as an example to illustrate the control method provided by the embodiments of the present disclosure.
  • Referring to FIG. 2, an embodiment of the present disclosure provides a control method, which may include the following steps 201 to 203.
  • Step 201: When a touch event is detected on the first screen, the terminal device detects the user's input in the target area.
  • The first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device (hereinafter, the one-hand operation area) when the terminal device is in a one-handed holding state.
  • The one-hand operation area refers to the comfortable area on each surface of the terminal device within which the hand holding the device can operate when the user holds the terminal device with one hand. For example, when the user holds the terminal device with the left hand, it is the comfortable operating area of the left hand on each surface of the device; when the user holds it with the right hand, it is the comfortable operating area of the right hand.
  • The above comfortable area may be the area reachable by the fingers of the holding hand while they remain in a natural state (which can also be understood as the holding hand exerting no squeeze on the device) when the user holds the terminal device with one hand.
  • The one-hand operation area includes: the comfortable area in which the user operates on the front of the terminal device facing the user (usually a screen surface, such as the surface where the first screen is located); the comfortable areas on the two sides of the terminal device (the sides are usually non-screen surfaces including the button area, a fingerprint collection area, and the like); and the comfortable area on the back of the device facing away from the user (which may be a screen surface or a non-screen surface including a fingerprint collection area).
  • In the embodiments of the present disclosure, the touch event detected on the first screen may specifically be any number of tap inputs detected at any position within the one-hand operation area of the first screen, a sliding input in any direction detected at any position within that area, or another feasible touch event, which is not limited by the embodiments of the present disclosure.
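  • As a minimal illustrative sketch (not from the patent), the check that a touch lands inside the one-hand operation area can be modeled on Android by testing the event position against a configured region; the rectangle and its coordinates below are assumptions:

```kotlin
import android.graphics.Rect
import android.view.MotionEvent

// Hypothetical helper: approximates the one-hand operation area as a
// rectangle near the gripping hand. The patent leaves the area factory-set
// or user-set; a real implementation might derive it from grip detection.
class OneHandRegion(private val area: Rect) {
    // True if the touch event falls inside the one-hand operation area.
    fun contains(event: MotionEvent): Boolean =
        area.contains(event.x.toInt(), event.y.toInt())
}

// Example: a right-hand grip region on a 1080 x 2340 px screen (illustrative).
val rightHandArea = OneHandRegion(Rect(540, 1400, 1080, 2340))
```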
  • The target area can be any area on the terminal device that lies within the one-hand operation area and is not on the same surface as the first screen. It may be a preset (factory-set) area, or set by the user according to actual needs; the embodiments of the present disclosure do not limit this.
  • Optionally, the target area is a touch area on a second screen of the terminal device, a button area of the terminal device, or a fingerprint collection area of the terminal device.
  • If the terminal device includes a first screen and a second screen located on opposite surfaces of the device, the target area may be the touch area on the second screen. The target area may also be a button area on the side of the terminal device, a fingerprint collection area on the side of the terminal device, or a rear fingerprint collection area on the back of the terminal device. This can be determined according to actual usage requirements and is not limited by the embodiments of the present disclosure; offering several candidate target areas gives the user more choice and improves human-computer interaction.
  • For example, when a touch event is detected on the first screen, the terminal device detects the user's input in the target area within a first preset duration (the period starting when the touch event is detected and ending when the first preset duration elapses). Specifically, the device detects whether the user has input in the target area and, if so, whether that input is the first target input. If, within the first preset duration, it is detected that the user's input in the target area is the first target input, the following step 202 is executed; otherwise, the terminal device keeps detecting the user's input in the target area within the first preset duration, and stops detecting once the first preset duration is exceeded.
  • The first preset duration may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure.
  • The first preset duration may be, for example, 2 s, 5 s, or 10 s.
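  • A minimal sketch of this detection window, assuming a main-thread timer and a hypothetical callback; the 2-second default mirrors the example durations above:

```kotlin
import android.os.Handler
import android.os.Looper

// Sketch of the first-preset-duration window of step 201: once a touch event
// is seen on the first screen, target-area input is only considered while the
// window is open; when the window times out, detection stops.
class DetectionWindow(private val windowMillis: Long = 2_000L) {
    private val handler = Handler(Looper.getMainLooper())
    var isOpen = false
        private set

    fun start(onTimeout: () -> Unit) {
        isOpen = true
        handler.postDelayed({
            isOpen = false
            onTimeout() // stop detecting input in the target area
        }, windowMillis)
    }
}
```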
  • The first target input may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure.
  • For example, the first target input may be a press input by the user within the one-hand operation area on the second screen, and specific parameters of the press input may be set. The parameters may include at least one of the following: the touch area of the press input being greater than or equal to a first threshold, the touch force being greater than or equal to a second threshold, the touch duration being greater than or equal to a third threshold, and the capacitance change of the corresponding touch region being greater than or equal to a fourth threshold. Other parameters of the press input can also be set, which is not limited by the embodiments of the present disclosure.
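  • A sketch of such a threshold check, assuming the press is delivered as an Android MotionEvent; the capacitance criterion has no public Android API, so only area, pressure, and duration are shown, and all threshold constants are illustrative:

```kotlin
import android.view.MotionEvent

const val MIN_TOUCH_SIZE = 0.25f   // first threshold: normalized touch area
const val MIN_PRESSURE = 0.6f      // second threshold: touch force
const val MIN_DURATION_MS = 500L   // third threshold: touch duration

// Returns true when the press satisfies all configured thresholds.
fun isFirstTargetPress(event: MotionEvent): Boolean {
    val durationMs = event.eventTime - event.downTime
    return event.size >= MIN_TOUCH_SIZE &&
        event.pressure >= MIN_PRESSURE &&
        durationMs >= MIN_DURATION_MS
}
```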
  • The first target input may also be a key input by the user on a target key on the side of the terminal device.
  • The first target input may also be a fingerprint input by the user on the fingerprint collection area of the terminal device. Specifically, the terminal device may collect the user's preset fingerprint information (corresponding to the first target input) in advance during a setting phase; after receiving the user's input in the target area, the terminal device verifies whether the fingerprint information corresponding to that input (hereinafter, the first fingerprint information) conforms to the preset fingerprint information. If it conforms, the user's input in the target area is the first target input; otherwise it is not.
  • The terminal device can collect the user's fingerprint information through fingerprint recognition technology: for example, a fingerprint reading device reads the user's fingerprint image, the original image is preprocessed to make it clearer, and fingerprint recognition software then converts the preprocessed image into fingerprint feature data. For the specific collection process, refer to any related technology; it is not repeated here.
  • Optionally, in the embodiments of the present disclosure, the first fingerprint information conforming to the preset fingerprint information may mean that the two are identical, or that their similarity is greater than or equal to a preset threshold. For example, if the preset threshold is 95%, the first fingerprint information conforms to the preset fingerprint information when their similarity is greater than or equal to 95%.
  • When the first target input is a fingerprint input, the security of the terminal device can be improved.
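  • Android's biometric APIs perform fingerprint matching internally, so as a purely illustrative sketch of the similarity test described above, assume access to raw feature vectors and a cosine-similarity measure:

```kotlin
import kotlin.math.sqrt

const val SIMILARITY_THRESHOLD = 0.95 // e.g. the 95% example above

fun cosineSimilarity(a: FloatArray, b: FloatArray): Double {
    require(a.size == b.size) { "feature vectors must have equal length" }
    var dot = 0.0
    var normA = 0.0
    var normB = 0.0
    for (i in a.indices) {
        dot += (a[i] * b[i]).toDouble()
        normA += (a[i] * a[i]).toDouble()
        normB += (b[i] * b[i]).toDouble()
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// First fingerprint information "conforms" when similarity meets the threshold.
fun matchesPresetFingerprint(first: FloatArray, preset: FloatArray): Boolean =
    cosineSimilarity(first, preset) >= SIMILARITY_THRESHOLD
```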
  • Step 202: When it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen.
  • For example, when it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen within a second preset duration (the period starting when the first target input is detected and ending when the second preset duration elapses). Specifically, the device detects whether the user has input on the first screen and, if so, whether that input is the second target input. If, within the second preset duration, the user's input on the first screen is detected to be the second target input, the following step 203 is executed; otherwise, the terminal device keeps detecting the user's input on the first screen within the second preset duration, and stops detecting once the second preset duration is exceeded.
  • The second preset duration may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure.
  • The second preset duration may be, for example, 2 s, 5 s, or 10 s.
  • The second preset duration and the first preset duration may be the same or different, which is not limited by the embodiments of the present disclosure.
  • The second target input may be any number of tap inputs by the user on any region within the one-hand operation area of the terminal device, a sliding input in any direction on such a region, a drag input in any direction on such a region, or another feasible input, which is not limited by the embodiments of the present disclosure.
  • The second target input and the first target input may be the same or different, which is not limited by the embodiments of the present disclosure.
  • The second target input and the touch event may also be the same or different, which is not limited by the embodiments of the present disclosure.
  • The second target input and the touch event may be the same input (for example, the user's finger never leaves the first screen from the moment the touch event is detected until the second target input is detected, so the terminal device continuously detects the user's input on the first screen; that is, the second target input and the touch event are one continuous input triggered by the user), or they may be two different inputs (for example, the touch event and the second target input are two independent inputs), which is not limited by the embodiments of the present disclosure.
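  • The distinction can be sketched by tracking whether an ACTION_UP occurs between the two inputs; this helper is an assumption for illustration, not the patent's implementation:

```kotlin
import android.view.MotionEvent

// One continuous input: the finger never lifts between the initial touch
// event and the second target input. Two independent inputs: an ACTION_UP
// was seen in between.
class ContinuityTracker {
    private var lifted = false

    fun onTouch(event: MotionEvent) {
        if (event.actionMasked == MotionEvent.ACTION_UP) lifted = true
    }

    fun isContinuous(): Boolean = !lifted
}
```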
  • Step 203: When it is detected that the user's input on the first screen is the second target input, the terminal device executes the target action corresponding to the second target input.
  • Optionally, the target action includes any of the following: changing the size of a target object, rotating the direction of the target object, deleting the target object, updating the displayed target object, taking a screenshot of the target object, encrypting the target object, decrypting the target object, hiding the target object, unhiding the target object, controlling the first screen to return to the previous interface, turning a preset application or preset function on, turning a preset application or preset function off, adjusting the screen brightness of the first screen, switching the wallpaper of the first screen, and controlling the first screen to display preset content. When it is detected that the user's input on the first screen is the second target input, the terminal device performs the corresponding one of these actions.
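  • The list above can be modeled as a preset mapping from a recognized second target input to a target action; the gesture and action names below are illustrative assumptions:

```kotlin
// Illustrative enumeration of the target actions listed above.
enum class TargetAction {
    RESIZE, ROTATE, DELETE, UPDATE_DISPLAY, SCREENSHOT,
    ENCRYPT, DECRYPT, HIDE, UNHIDE, BACK,
    OPEN_PRESET_APP, CLOSE_PRESET_APP,
    ADJUST_BRIGHTNESS, SWITCH_WALLPAPER, SHOW_PRESET_CONTENT
}

enum class Gesture { SWIPE_UP, SWIPE_DOWN, SWIPE_LEFT, SWIPE_RIGHT, DOUBLE_TAP }

// The mapping is preset or user-configured (see steps 204-205 below);
// a null result means no target action is configured for the gesture.
fun dispatch(gesture: Gesture, mapping: Map<Gesture, TargetAction>): TargetAction? =
    mapping[gesture]
```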
  • Optionally, if only one object (an image, an icon, an application interface, a control, or the like) is currently displayed on the first screen, that object is the target object.
  • For example, when the first screen currently displays an image (hereinafter, the target image), in response to the second target input the terminal device may change the size of the target image (for example, an upward sliding input enlarges it and a downward sliding input shrinks it), rotate its direction (for example, a leftward sliding input rotates it left by a preset angle and a rightward sliding input rotates it right by a preset angle, where the preset angle can be set according to actual usage requirements and is not limited by the embodiments of the present disclosure), delete the target image, or update the display to another image.
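  • Reusing the Gesture enum from the sketch above, the image example can be rendered as view transforms; the 15-degree preset angle and 1.2x scale step are illustrative values, not taken from the patent:

```kotlin
import android.widget.ImageView

const val PRESET_ANGLE_DEG = 15f // the "preset angle" from the example above
const val SCALE_STEP = 1.2f      // illustrative enlarge/shrink factor

// Vertical swipes scale the target image; horizontal swipes rotate it.
fun applyToImage(view: ImageView, gesture: Gesture) {
    when (gesture) {
        Gesture.SWIPE_UP -> { view.scaleX *= SCALE_STEP; view.scaleY *= SCALE_STEP }
        Gesture.SWIPE_DOWN -> { view.scaleX /= SCALE_STEP; view.scaleY /= SCALE_STEP }
        Gesture.SWIPE_LEFT -> view.rotation -= PRESET_ANGLE_DEG
        Gesture.SWIPE_RIGHT -> view.rotation += PRESET_ANGLE_DEG
        else -> Unit
    }
}
```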
  • For example, when the first screen currently displays an application interface (hereinafter, the target application interface), in response to the second target input the terminal device may take a screenshot of the target application interface, encrypt it, decrypt it, hide it, or unhide it.
  • Optionally, if the first screen currently displays multiple objects (multiple images, icons, application interfaces, controls, or the like), the second target input is the user's input on the target object. For example, when the first screen currently displays multiple icons, the second target input is the user's input on the target icon.
  • In response to the second target input, the terminal device may change the size of the target icon (for example, an upward sliding input enlarges it and a downward sliding input shrinks it), rotate its direction (for example, a leftward sliding input rotates it left by a preset angle and a rightward sliding input rotates it right by a preset angle, where the preset angle can be set according to actual usage requirements and is not limited by the embodiments of the present disclosure), delete the target icon, or update the display to another icon.
  • When the first screen currently displays multiple controls, the second target input is the user's input on the target control; in response, the terminal device may take a screenshot of the target control, encrypt it, decrypt it, hide it, or unhide it.
  • Optionally, the relationship between the second target input and the target action is preset.
  • For example, as terminal device screens grow larger, it is hard for the user to reach the back key while holding and operating the device with one hand; the combination of the first target input and the second target input can then trigger the terminal device to return to the previous interface. Likewise, if the user often uses a certain application (such as a camera application) or function (such as the screenshot function), the combined input can trigger the terminal device to turn a preset application or function on or off, adjust the screen brightness of the first screen, switch the wallpaper of the first screen, or display preset content on the first screen. This is convenient for the user, improves the response efficiency of the terminal device, and improves human-computer interaction.
  • For example, with reference to FIG. 2 and as shown in FIG. 3, step 203 can be specifically implemented through the following steps 203a to 203c.
  • Step 203a: When it is detected that the user's input on the first screen is the second target input, the terminal device displays prompt information on the first screen. The prompt information is used to ask whether to execute the target action corresponding to the second target input.
  • Step 203b: The terminal device receives the user's second input on the prompt information. The second input is an input by which the user selects “execute this target action”; it may be any number of tap inputs, a sliding input in any direction, or the like, which is not limited by the embodiments of the present disclosure. If the user selects “do not execute this target action”, the terminal device does not execute the target action.
  • Step 203c: In response to the second input, the terminal device executes the target action. This prevents the terminal device from executing the target action due to the user's erroneous input, and improves human-computer interaction.
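  • A minimal sketch of steps 203a to 203c using a standard Android dialog; the strings and the callback are assumptions:

```kotlin
import android.app.AlertDialog
import android.content.Context

// Step 203a: display prompt information; step 203b: receive the second input;
// step 203c: execute the target action only on confirmation.
fun confirmThenExecute(context: Context, executeTargetAction: () -> Unit) {
    AlertDialog.Builder(context)
        .setMessage("Execute the action for this input?")
        .setPositiveButton("Execute") { _, _ -> executeTargetAction() }
        .setNegativeButton("Cancel", null) // the target action is not executed
        .show()
}
```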
  • Optionally, with reference to FIG. 2 and as shown in FIG. 4, before step 201 the control method provided by the embodiments of the present disclosure may further include the following steps 204 and 205.
  • Step 204: The terminal device receives the user's first input on a setting interface. The first input is an input by which the user sets a first correspondence, which is the correspondence among the target area, the first target input, the second target input, and the target action.
  • For example, the user selects the “set combined input function” option (also called an auxiliary touch function) in the application settings menu of the terminal device, and the terminal device displays the setting interface corresponding to that option.
  • Optionally, the first correspondence may be factory-set on the terminal device, or set by the user according to actual usage requirements.
  • The user can set at least one of the target area, the first target input, the second target input, and the target action through the first input; the rest are factory-set on the terminal device. This is determined according to actual usage and is not limited by the embodiments of the present disclosure.
  • Optionally, the first input may include at least one of the following: an input by which the user selects the target area from multiple preset areas, selects the first target input from multiple preset target inputs, selects the second target input from multiple preset inputs, or selects the target action from multiple preset actions.
  • Optionally, the first input may also include at least one of the following: an input defining a user-defined target area, a user-defined first target input, a user-defined second target input, or a user-defined target action.
  • For example, a user-defined target area may be an area the user selects as the target area through an input on the second screen.
  • Step 205: In response to the first input, the terminal device saves the first correspondence. In this way the user can set different correspondences as needed and trigger the terminal device to execute different actions through combined inputs, making one-handed operation more convenient and improving human-computer interaction.
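  • A sketch of step 205, persisting the first correspondence with SharedPreferences; the preference file name, keys, and string encoding are assumptions for illustration:

```kotlin
import android.content.Context

// Saves the first correspondence: target area, first target input,
// second target input, and target action.
fun saveFirstCorrespondence(
    context: Context,
    targetArea: String,
    firstTargetInput: String,
    secondTargetInput: String,
    targetAction: String
) {
    context.getSharedPreferences("combined_input", Context.MODE_PRIVATE)
        .edit()
        .putString("target_area", targetArea)
        .putString("first_target_input", firstTargetInput)
        .putString("second_target_input", secondTargetInput)
        .putString("target_action", targetAction)
        .apply()
}
```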
  • The embodiments of the present disclosure provide a control method. When a touch event is detected on a first screen, a terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state. Through the combined input on the terminal device (the touch input on the first screen, the first target input in the target area, and the second target input on the first screen, in sequence), the user can trigger the terminal device to execute the target action; because the combined input lies within one-hand operation areas on different surfaces of the device, it is easy to perform with one hand, and human-computer interaction performance is better.
  • As shown in FIG. 5, an embodiment of the present disclosure provides a terminal device 120, which includes a detection module 121 and an execution module 122. The detection module 121 is configured to: when a touch event is detected on the first screen, detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; and, when it is detected that the user's input in the target area is the first target input, detect the user's input on the first screen. The execution module 122 is configured to execute the target action corresponding to the second target input when the detection module 121 detects that the user's input on the first screen is the second target input.
  • Optionally, the target area is a touch area on a second screen of the terminal device, a button area of the terminal device, or a fingerprint collection area of the terminal device.
  • Optionally, the terminal device 120 further includes a receiving module 123 and a saving module 124. The receiving module 123 is configured to receive, before the user's input in the target area is detected upon detection of a touch event on the first screen, the user's first input on the setting interface, where the first input sets the first correspondence among the target area, the first target input, the second target input, and the target action. The saving module 124 is configured to save the first correspondence in response to the first input received by the receiving module 123.
  • Optionally, the execution module 122 is specifically configured to: when it is detected that the user's input on the first screen is the second target input, display prompt information on the first screen asking whether to execute the target action corresponding to the second target input; receive the user's second input on the prompt information; and, in response to the second input, execute the target action.
  • Optionally, the target action includes any of the following: changing the size of the target object, rotating the direction of the target object, deleting the target object, updating the displayed target object, taking a screenshot of the target object, encrypting the target object, decrypting the target object, hiding the target object, unhiding the target object, controlling the first screen to return to the previous interface, turning a preset application or preset function on, turning a preset application or preset function off, adjusting the screen brightness of the first screen, switching the wallpaper of the first screen, and controlling the first screen to display preset content.
  • The terminal device provided by the embodiments of the present disclosure can implement each process shown in any one of FIG. 2 to FIG. 4 in the foregoing method embodiments; to avoid repetition, details are not described here again.
  • The embodiments of the present disclosure provide a terminal device. When a touch event is detected on a first screen, the terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state. When it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen; when it is detected that the user's input on the first screen is the second target input, it executes the target action corresponding to the second target input. Through the combined input (the touch input on the first screen, the first target input in the target area, and the second target input on the first screen, in sequence), the user can conveniently trigger the target action while holding and operating the device with one hand, and human-computer interaction performance is better.
  • FIG. 6 is a schematic diagram of the hardware structure of a terminal device that implements various embodiments of the present disclosure.
  • The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • Those skilled in the art can understand that the structure of the terminal device shown in FIG. 6 does not constitute a limitation: the terminal device may include more or fewer components than shown, combine certain components, or arrange components differently. In the embodiments of the present disclosure, terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminal devices, wearable devices, pedometers, and the like.
  • The processor 110 is configured to: when a touch event is detected on the first screen, detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is the first target input, detect the user's input on the first screen; and, when it is detected that the user's input on the first screen is the second target input, execute the target action corresponding to the second target input.
  • In this way, when a touch event is detected on the first screen, the terminal device can detect the user's input in a target area on a different surface of the device, both locations lying within the one-hand operation area; through the combined input (the touch input on the first screen, the first target input in the target area, and the second target input on the first screen), the user can conveniently trigger the target action while holding and operating the terminal device with one hand, and human-computer interaction performance is better.
  • It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 can be used for receiving and sending signals during information transmission and reception or during a call. Specifically, downlink data from a base station is received and sent to the processor 110 for processing, and uplink data is sent to the base station.
  • Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
  • In addition, the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • The terminal device provides users with wireless broadband Internet access through the network module 102, for example helping users send and receive e-mails, browse web pages, and access streaming media.
  • The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound).
  • The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • The input unit 104 is used to receive audio or video signals.
  • The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames can be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or sent via the radio frequency unit 101 or the network module 102.
  • The microphone 1042 can receive sound and process it into audio data; in telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
  • The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • The light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear.
  • As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as pedometer and tap detection). The sensor 105 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and the like, which are not repeated here.
  • The display unit 106 is used to display information input by the user or information provided to the user.
  • The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • The user input unit 107 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal device.
  • Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • The touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, stylus, or any other suitable object or accessory).
  • Optionally, the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the position the user touches, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110.
  • In addition, the touch panel 1071 can be realized in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072.
  • Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
  • Further, the touch panel 1071 can be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides the corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 6 the touch panel 1071 and the display panel 1061 are two independent components realizing the input and output functions of the terminal device, in some embodiments they can be integrated to realize those functions; this is not specifically limited here.
  • The interface unit 108 is an interface for connecting an external device to the terminal device 100.
  • For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • The interface unit 108 can be used to receive input (for example, data information or power) from an external device and transmit it to one or more elements in the terminal device 100, or to transfer data between the terminal device 100 and the external device.
  • The memory 109 can be used to store software programs and various data.
  • The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The processor 110 is the control center of the terminal device. It connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device as a whole.
  • The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
  • The terminal device 100 may also include a power supply 111 (such as a battery) for supplying power to the various components. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system.
  • In addition, the terminal device 100 includes some functional modules that are not shown, which are not repeated here.
  • Optionally, an embodiment of the present disclosure further provides a terminal device, which may include the processor 110 shown in FIG. 6, the memory 109, and a computer program stored on the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, each process of the control method shown in any one of FIG. 2 to FIG. 4 in the foregoing method embodiments is realized with the same technical effect; to avoid repetition, details are not repeated here.
  • The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the control method shown in any one of FIG. 2 to FIG. 4 in the foregoing method embodiments is realized with the same technical effect; to avoid repetition, details are not repeated here.
  • The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • The technical solution of the present disclosure, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A control method, a terminal device, and a computer-readable storage medium. The method includes: when a touch event is detected on a first screen, detecting a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is a first target input, detecting the user's input on the first screen; and when it is detected that the user's input on the first screen is a second target input, executing a target action corresponding to the second target input.

Description

Control method and terminal device
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 201910314770.6, entitled “A control method and terminal device”, filed with the State Intellectual Property Office on April 18, 2019, the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of the present disclosure relate to the field of communication technologies, and in particular to a control method and terminal device.
Background
With the continuous development of terminal technology, the functions of terminal devices are becoming more and more powerful. In particular, to facilitate user operations, most terminal devices can support multi-finger touch input.
Currently, when using the multi-finger touch input function of a terminal device, the user generally holds the device in one hand or places it on a supporting object (for example, a table) and performs the multi-finger touch input with the other hand; that is, the user needs the assistance of both hands or another support to achieve multi-finger touch input.
However, when the user holds and operates the terminal device with one hand, it is inconvenient to perform multi-finger touch input, so the user cannot perform it well; as a result, the terminal device cannot realize multi-finger touch functions and human-computer interaction is poor.
Summary
The embodiments of the present disclosure provide a control method and terminal device, to solve the problem that when the user holds and operates the terminal device with one hand, the user cannot perform multi-finger touch input well, so that the terminal device cannot realize multi-finger touch functions and human-computer interaction is poor.
To solve the above technical problem, the present disclosure is realized as follows:
In a first aspect, the embodiments of the present disclosure provide a control method applied to a terminal device. The method includes: when a touch event is detected on a first screen, detecting a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is a first target input, detecting the user's input on the first screen; and when it is detected that the user's input on the first screen is a second target input, executing a target action corresponding to the second target input.
In a second aspect, the embodiments of the present disclosure provide a terminal device that includes a detection module and an execution module. The detection module is configured to: when a touch event is detected on the first screen, detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; and, when it is detected that the user's input in the target area is the first target input, detect the user's input on the first screen. The execution module is configured to execute the target action corresponding to the second target input when the detection module detects that the user's input on the first screen is the second target input.
In a third aspect, the embodiments of the present disclosure provide a terminal device including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the control method in the first aspect are implemented.
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the control method in the first aspect are implemented.
In the embodiments of the present disclosure, when a touch event is detected on the first screen, the terminal device can detect the user's input in the target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device (hereinafter, the one-hand operation area) when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen; when it is detected that the user's input on the first screen is the second target input, it executes the target action corresponding to the second target input. Through this scheme, the user can trigger the terminal device to execute the target action through a combined input on the terminal device (in sequence: the touch input on the first screen, the first target input in the target area, and the second target input on the first screen). Compared with the multi-finger touch input of the related art, since the combined input lies within one-hand operation areas on different surfaces of the terminal device, it is convenient to perform when the user holds and operates the device with one hand; the combined input can thus reliably trigger the terminal device to realize the target function, and human-computer interaction performance is better.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure;
FIG. 2 is the first flowchart of the control method provided by an embodiment of the present disclosure;
FIG. 3 is the second flowchart of the control method provided by an embodiment of the present disclosure;
FIG. 4 is the third flowchart of the control method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the hardware of a terminal device provided by an embodiment of the present disclosure.
Detailed description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
In this document, the term “and/or” describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The symbol “/” indicates an “or” relationship between the associated objects; for example, A/B means A or B.
The terms “first”, “second”, “third”, “fourth”, and the like in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects. For example, the first input, the second input, the third input, and the fourth input are used to distinguish different inputs, rather than to describe a specific order of inputs.
In the embodiments of the present disclosure, words such as “exemplary” or “for example” are used to indicate an example, illustration, or explanation. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present disclosure should not be construed as more preferable or advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.
In the description of the embodiments of the present disclosure, unless otherwise stated, “multiple” means two or more; for example, multiple processing units means two or more processing units, and multiple elements means two or more elements.
The embodiments of the present disclosure provide a control method. When a touch event is detected on a first screen, a terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; when it is detected that the user's input in the target area is a first target input, the terminal device detects the user's input on the first screen; when it is detected that the user's input on the first screen is a second target input, it executes the target action corresponding to the second target input. Through this scheme, the user can trigger the terminal device to execute the target action through the combined input on the terminal device (the touch input on the first screen, the first target input in the target area, and the second target input on the first screen, in sequence). Compared with the multi-finger touch input of the related art, the combined input lies within one-hand operation areas on different surfaces of the terminal device, so it is convenient to perform when the user holds and operates the device with one hand, and human-computer interaction performance is better.
The following takes the Android operating system as an example to introduce the software environment to which the control method provided by the embodiments of the present disclosure is applied.
As shown in FIG. 1, which is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure, the architecture of the Android operating system includes four layers: the application layer, the application framework layer, the system runtime library layer, and the kernel layer (specifically, the Linux kernel layer).
The application layer includes the various applications (including system applications and third-party applications) in the Android operating system.
The application framework layer is the framework of applications; developers can develop applications based on the application framework layer while complying with its development principles.
The system runtime library layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide the various resources needed by the Android operating system, and the runtime environment provides the software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and is the lowest level of the Android operating system software hierarchy; based on the Linux kernel, it provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present disclosure, developers can develop software programs implementing the control method provided by the embodiments of the present disclosure based on the system architecture of the Android operating system shown in FIG. 1, so that the control method can run on the Android operating system shown in FIG. 1; that is, the processor or the terminal device can implement the control method by running the software program in the Android operating system.
The terminal device in the embodiments of the present disclosure may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present disclosure do not specifically limit this.
The execution subject of the control method provided by the embodiments of the present disclosure may be the above terminal device (including mobile and non-mobile terminal devices), or a functional module and/or functional entity in the terminal device capable of implementing the method, which can be determined according to actual usage requirements and is not limited by the embodiments of the present disclosure. The following takes a terminal device as an example to illustrate the control method provided by the embodiments of the present disclosure.
Referring to FIG. 2, an embodiment of the present disclosure provides a control method, which may include the following steps 201 to 203.
Step 201: When a touch event is detected on the first screen, the terminal device detects the user's input in the target area.
The first screen and the target area are on different surfaces of the terminal device, and both the position corresponding to the touch event on the first screen and the target area are located within the operation area of the hand holding the terminal device (hereinafter, the one-hand operation area) when the terminal device is in a one-handed holding state.
The one-hand operation area refers to the comfortable area on each surface of the terminal device within which the hand holding the device can operate when the user holds the terminal device with one hand. For example, when the user holds the terminal device with the left hand, it is the comfortable operating area of the left hand on each surface of the device; when the user holds it with the right hand, it is the comfortable operating area of the right hand.
The above comfortable area may be the area reachable by the fingers of the holding hand while they remain in a natural state (which can also be understood as the holding hand exerting no squeeze on the device) when the user holds the terminal device with one hand.
The one-hand operation area includes: the comfortable area in which the user operates on the front of the terminal device facing the user (usually a screen surface, such as the surface where the first screen is located); the comfortable areas on the two sides of the terminal device (the sides are usually non-screen surfaces including the button area, a fingerprint collection area, and the like); and the comfortable area on the back of the device facing away from the user (which may be a screen surface or a non-screen surface including a fingerprint collection area).
In the embodiments of the present disclosure, the touch event detected on the first screen may specifically be any number of tap inputs detected at any position within the one-hand operation area of the first screen, a sliding input in any direction detected at any position within that area, or another feasible touch event, which is not limited by the embodiments of the present disclosure.
The target area can be any area on the terminal device that lies within the one-hand operation area and is not on the same surface as the first screen; it may be a preset (factory-set) area or set by the user according to actual needs, which is not limited by the embodiments of the present disclosure.
In this way, when the user holds the terminal device with one hand, it is convenient for the user to operate the device with that hand, which improves human-computer interaction.
Optionally, the target area is a touch area on a second screen of the terminal device, a button area of the terminal device, or a fingerprint collection area of the terminal device.
If the terminal device includes a first screen and a second screen located on opposite surfaces, the target area may be the touch area on the second screen. The target area may also be a button area on the side of the terminal device, a fingerprint collection area on the side of the terminal device, or a rear fingerprint collection area on the back of the terminal device. This can be determined according to actual usage requirements and is not limited by the embodiments of the present disclosure. Offering several candidate target areas gives the user more choice and improves human-computer interaction.
For example, when a touch event is detected on the first screen, the terminal device detects the user's input in the target area within a first preset duration (the period starting when the touch event is detected on the first screen and ending when the first preset duration elapses). Specifically, the device detects whether the user has input in the target area and, if so, whether that input is the first target input. If, within the first preset duration, it is detected that the user's input in the target area is the first target input, the following step 202 is executed; otherwise, the terminal device keeps detecting the user's input in the target area within the first preset duration, and stops detecting once the first preset duration is exceeded.
The first preset duration may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure. The first preset duration may be, for example, 2 s, 5 s, or 10 s.
The first target input may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure.
For example, the first target input may be a press input by the user within the one-hand operation area on the second screen, and specific parameters of the press input may be set. The parameters may include at least one of the following: the touch area of the press input being greater than or equal to a first threshold, the touch force being greater than or equal to a second threshold, the touch duration being greater than or equal to a third threshold, and the capacitance change of the corresponding touch region being greater than or equal to a fourth threshold. Other parameters of the press input can also be set, which is not limited by the embodiments of the present disclosure.
The first target input may also be a key input by the user on a target key on the side of the terminal device.
The first target input may also be a fingerprint input by the user on the fingerprint collection area of the terminal device. Specifically, the terminal device may collect the user's preset fingerprint information (corresponding to the first target input) in advance during a setting phase; after receiving the user's input in the target area, the terminal device verifies whether the fingerprint information corresponding to that input (hereinafter, the first fingerprint information) conforms to the preset fingerprint information. If it conforms, the user's input in the target area is the first target input; otherwise it is not.
The terminal device can collect the user's fingerprint information through fingerprint recognition technology: for example, a fingerprint reading device reads the user's fingerprint image, the original image is preprocessed to make it clearer, and fingerprint recognition software then converts the preprocessed image into fingerprint feature data. For the specific collection process, refer to any related technology; it is not repeated here.
Optionally, in the embodiments of the present disclosure, the first fingerprint information conforming to the preset fingerprint information may mean that the two are identical, or that their similarity is greater than or equal to a preset threshold. For example, if the preset threshold is 95%, the first fingerprint information conforms to the preset fingerprint information when their similarity is greater than or equal to 95%.
When the first target input is a fingerprint input, the security of the terminal device can be improved.
Step 202: When it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen.
For example, when it is detected that the user's input in the target area is the first target input, the terminal device detects the user's input on the first screen within a second preset duration (the period starting when the first target input is detected and ending when the second preset duration elapses). Specifically, the device detects whether the user has input on the first screen and, if so, whether that input is the second target input. If, within the second preset duration, the user's input on the first screen is detected to be the second target input, the following step 203 is executed; otherwise, the terminal device keeps detecting the user's input on the first screen within the second preset duration, and stops detecting once the second preset duration is exceeded.
The second preset duration may be preset on the terminal device or set by the user according to actual usage requirements, which is not limited by the embodiments of the present disclosure. The second preset duration may be, for example, 2 s, 5 s, or 10 s, and may be the same as or different from the first preset duration.
The second target input may be any number of tap inputs by the user on any region within the one-hand operation area of the terminal device, a sliding input in any direction on such a region, a drag input in any direction on such a region, or another feasible input, which is not limited by the embodiments of the present disclosure.
The second target input may be the same as or different from the first target input, and the same as or different from the touch event; neither is limited by the embodiments of the present disclosure.
The second target input and the touch event may be the same input (for example, the user's finger never leaves the first screen from the moment the touch event is detected until the second target input is detected, so the terminal device continuously detects the user's input on the first screen; that is, they are one continuous input triggered by the user), or two different inputs (for example, the touch event and the second target input are two independent inputs), which is not limited by the embodiments of the present disclosure.
步骤203、在检测到用户在该第一屏上的输入为第二目标输入的情况下,终端设备执行与该第二目标输入对应的目标动作。
Optionally, the target action includes any one of the following: changing the size of a target object, rotating the direction of a target object, deleting a target object, updating the display of a target object, taking a screenshot of a target object, encrypting a target object, decrypting a target object, controlling a target object to be hidden, controlling a target object to be unhidden, controlling the first screen to return to the upper-level interface, controlling a preset application or preset function to be enabled, controlling a preset application or preset function to be disabled, adjusting the screen brightness of the first screen, switching the wallpaper of the first screen, and controlling the first screen to display preset content.
In a case where the user's input on the first screen is detected to be the second target input, the terminal device accordingly changes the size of the target object, rotates the direction of the target object, deletes the target object, updates the display of the target object, takes a screenshot of the target object, encrypts the target object, decrypts the target object, controls the target object to be hidden, controls the target object to be unhidden, controls the first screen to return to the upper-level interface, controls a preset application or preset function to be enabled, controls a preset application or preset function to be disabled, adjusts the screen brightness of the first screen, switches the wallpaper of the first screen, or controls the first screen to display preset content.
Optionally, if the first screen currently displays only one object (one image, one icon, one application interface, one control, or the like), that currently displayed object is the target object.
Exemplarily, when the first screen currently displays an image (hereinafter referred to as the target image), in response to the second target input, the terminal device changes the size of the target image (for example, an upward sliding input as the second target input enlarges the target image, and a downward sliding input shrinks it), rotates the direction of the target image (for example, a leftward sliding input rotates the target image to the left by a preset angle, and a rightward sliding input rotates it to the right by a preset angle; the preset angle may be set according to actual use requirements and is not limited in the embodiments of the present disclosure), deletes the target image, or replaces the displayed target image with another image.
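The direction-to-action mapping in this example could be sketched as follows; the 1.25x scale step and the 90-degree preset angle are assumed values, since the description leaves both to configuration.

```kotlin
// Illustrative mapping from the second target input's sliding direction to
// the image actions in this example; scale step and angle are assumed.
enum class SlideDirection { UP, DOWN, LEFT, RIGHT }

data class ImageState(val scale: Float = 1f, val rotationDeg: Float = 0f)

fun applyToTargetImage(image: ImageState, direction: SlideDirection): ImageState =
    when (direction) {
        SlideDirection.UP -> image.copy(scale = image.scale * 1.25f)    // enlarge
        SlideDirection.DOWN -> image.copy(scale = image.scale / 1.25f)  // shrink
        SlideDirection.LEFT -> image.copy(rotationDeg = image.rotationDeg - 90f)
        SlideDirection.RIGHT -> image.copy(rotationDeg = image.rotationDeg + 90f)
    }
```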
Exemplarily, when the first screen currently displays an application interface (hereinafter referred to as the target application interface), in response to the second target input, the terminal device takes a screenshot of the target application interface, encrypts it, decrypts it, controls it to be hidden, or controls it to be unhidden.
Optionally, if the first screen currently displays multiple objects (multiple images, icons, application interfaces, controls, or the like), the second target input is the user's input on the target object.
Exemplarily, when the first screen currently displays multiple icons, the second target input is the user's input on a target icon. In response to the second target input, the terminal device changes the size of the target icon (for example, an upward sliding input as the second target input enlarges the target icon, and a downward sliding input shrinks it), rotates the direction of the target icon (for example, a leftward sliding input rotates the target icon to the left by a preset angle, and a rightward sliding input rotates it to the right by a preset angle; the preset angle may be set according to actual use requirements and is not limited in the embodiments of the present disclosure), deletes the target icon, or replaces the displayed target icon with another icon.
When the first screen currently displays multiple controls, the second target input is the user's input on a target control. In response to the second target input, the terminal device takes a screenshot of the target control, encrypts it, decrypts it, controls it to be hidden, or controls it to be unhidden.
Optionally, the relationship between the second target input and the target action is preset.
Exemplarily, as the screens of terminal devices grow larger and larger, it becomes difficult for the user to reach the back key while holding and operating the terminal device with one hand; the combined input of the above first target input and second target input can then trigger the terminal device to return to the upper-level interface. Likewise, if a certain application (such as a camera application) or a certain function (such as a screenshot function) is used frequently on the terminal device, the combined input of the above first target input and second target input can trigger the terminal device to enable a preset application or preset function, disable a preset application or preset function, adjust the screen brightness of the first screen, switch the wallpaper of the first screen, or control the first screen to display preset content.
This facilitates user operation, improves the response efficiency of the terminal device, and improves human-computer interaction performance.
Exemplarily, with reference to FIG. 2 and as shown in FIG. 3, step 203 may specifically be implemented through the following steps 203a to 203c.
Step 203a: In a case where the user's input on the first screen is detected to be the second target input, the terminal device displays prompt information on the first screen.
The prompt information is used to prompt whether to execute the target action corresponding to the second target input.
Step 203b: The terminal device receives the user's second input on the prompt information.
The second input is an input by which the user selects 'execute the target action'. The second input may specifically be a click input of any number of times by the user, a sliding input in any direction, or the like, which is not limited in the embodiments of the present disclosure. If the user selects 'do not execute the target action', the terminal device does not execute the target action.
Step 203c: In response to the second input, the terminal device executes the target action.
This can prevent the user from triggering the terminal device to execute the target action through an erroneous input, which can improve human-computer interaction performance.
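On an Android-style device, steps 203a to 203c could look like the following sketch. The `AlertDialog` builder shown is a standard Android API, but the prompt wording, the function name, and the assumption of an Android platform are all illustrative; the patent itself is platform-neutral.

```kotlin
import android.app.AlertDialog
import android.content.Context

// Sketch of the confirmation flow: display the prompt (203a), receive the
// second input (203b), and execute the target action in response (203c).
fun promptAndExecute(context: Context, executeTargetAction: () -> Unit) {
    AlertDialog.Builder(context)
        .setMessage("Execute the action for this combined input?")  // step 203a
        .setPositiveButton("Execute") { _, _ ->
            executeTargetAction()  // steps 203b-203c: second input received
        }
        .setNegativeButton("Cancel") { dialog, _ ->
            dialog.dismiss()  // user chose not to execute the target action
        }
        .show()
}
```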
Optionally, with reference to FIG. 2 and as shown in FIG. 4, before step 201, the control method provided by the embodiments of the present disclosure may further include the following steps 204 to 205.
Step 204: The terminal device receives the user's first input on a setting interface.
The first input is an input by which the user sets a first correspondence, where the first correspondence is the correspondence among the target area, the first target input, the second target input, and the target action.
The user selects the 'set combined input function' option (which may also be referred to as an assistive touch function) in the application settings menu of the terminal device, and the terminal device displays the setting interface corresponding to that option.
Optionally, the first correspondence may be factory-set on the terminal device or set by the user according to actual use requirements.
The user may set at least one of the target area, the first target input, the second target input, and the target action through the first input, with the remainder being factory-set on the terminal device; the specifics are determined according to actual use and are not limited in the embodiments of the present disclosure.
Optionally, the first input may include at least one of the following: an input by which the user selects the target area from multiple preset areas, an input by which the user selects the first target input from multiple preset target inputs, an input by which the user selects the second target input from multiple preset inputs, and an input by which the user selects the target action from multiple preset actions.
Optionally, the first input may also include at least one of the following: an input by which the user customizes the target area, an input by which the user customizes the first target input, an input by which the user customizes the second target input, and an input by which the user customizes the target action.
Exemplarily, customizing the target area may be, for example, the user selecting an area on the second screen through an input to serve as the target area.
Step 205: In response to the first input, the terminal device saves the first correspondence.
In this way, the user can set different correspondences as needed and trigger the terminal device to execute different actions through combined inputs, making one-handed operation more convenient and improving human-computer interaction performance.
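One way to model the saved first correspondence is a lookup keyed on the configured inputs, as sketched below. Every name here is an assumption introduced for illustration, and a real device would persist the mapping in its settings store rather than holding it in memory.

```kotlin
// Illustrative model of the first correspondence (steps 204-205): the
// configured (target area, first target input, second target input) triple
// maps to a target action.
enum class TargetArea { SECOND_SCREEN_TOUCH_AREA, KEY_AREA, FINGERPRINT_AREA }

enum class TargetAction {
    RETURN_TO_UPPER_LEVEL, OPEN_PRESET_APP, CLOSE_PRESET_APP,
    ADJUST_BRIGHTNESS, SWITCH_WALLPAPER, SHOW_PRESET_CONTENT
}

data class FirstCorrespondenceKey(
    val targetArea: TargetArea,
    val firstTargetInput: String,   // e.g. "press", "key", "fingerprint"
    val secondTargetInput: String   // e.g. "slide_up", "double_click"
)

class CorrespondenceStore {
    private val saved = mutableMapOf<FirstCorrespondenceKey, TargetAction>()

    // Step 205: save the first correspondence set through the first input.
    fun save(key: FirstCorrespondenceKey, action: TargetAction) {
        saved[key] = action
    }

    // Used at step 203 to resolve which target action to execute.
    fun lookup(key: FirstCorrespondenceKey): TargetAction? = saved[key]
}
```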
Each drawing in the embodiments of the present disclosure is illustrated by way of example in combination with the drawings of a single embodiment; in specific implementation, each drawing may also be implemented in combination with any other drawing that can be combined, which is not limited in the embodiments of the present disclosure.
An embodiment of the present disclosure provides a control method. In a case where a touch event is detected on a first screen, the terminal device detects a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and the position corresponding to the touch event on the first screen and the target area are both located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; in a case where the user's input in the target area is detected to be a first target input, the terminal device detects the user's input on the first screen; and in a case where the user's input on the first screen is detected to be a second target input, the terminal device executes a target action corresponding to the second target input. Through this solution, the user can trigger the terminal device to execute the target action through a combined input on the terminal device (in order: the touch input on the first screen, the first target input in the target area, and the second target input on the first screen). Compared with the multi-finger touch input of the related art, since this combined input consists of inputs within the one-handed operation areas on different surfaces of the terminal device, it is convenient to perform when the user holds and operates the terminal device with one hand, so the combined input can effectively trigger the terminal device to implement the target function, providing good human-computer interaction performance.
As shown in FIG. 5, an embodiment of the present disclosure provides a terminal device 120. The terminal device 120 includes a detection module 121 and an execution module 122. The detection module 121 is configured to: in a case where a touch event is detected on a first screen, detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and the position corresponding to the touch event on the first screen and the target area are both located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; and, in a case where the user's input in the target area is detected to be a first target input, detect the user's input on the first screen. The execution module 122 is configured to execute a target action corresponding to a second target input in a case where the detection module 121 detects that the user's input on the first screen is the second target input.
Optionally, the target area is a touch area on a second screen of the terminal device, a key area of the terminal device, or a fingerprint collection area of the terminal device.
Optionally, the terminal device 120 further includes a receiving module 123 and a saving module 124. The receiving module 123 is configured to receive the user's first input on a setting interface before the user's input in the target area is detected in the case where the touch event is detected on the first screen, where the first input is an input by which the user sets a first correspondence, and the first correspondence is the correspondence among the target area, the first target input, the second target input, and the target action. The saving module 124 is configured to save the first correspondence in response to the first input received by the receiving module 123.
Optionally, the execution module 122 is specifically configured to: in a case where the user's input on the first screen is detected to be the second target input, display prompt information on the first screen, where the prompt information is used to prompt whether to execute the target action corresponding to the second target input; receive the user's second input on the prompt information; and execute the target action in response to the second input.
Optionally, the target action includes any one of the following: changing the size of a target object, rotating the direction of a target object, deleting a target object, updating the display of a target object, taking a screenshot of a target object, encrypting a target object, decrypting a target object, controlling a target object to be hidden, controlling a target object to be unhidden, controlling the first screen to return to the upper-level interface, controlling a preset application or preset function to be enabled, controlling a preset application or preset function to be disabled, adjusting the screen brightness of the first screen, switching the wallpaper of the first screen, and controlling the first screen to display preset content.
The terminal device provided by the embodiments of the present disclosure can implement each process shown in any one of FIG. 2 to FIG. 4 in the above method embodiments; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure provides a terminal device. In a case where a touch event is detected on a first screen, the terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and the position corresponding to the touch event on the first screen and the target area are both located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; in a case where the user's input in the target area is detected to be a first target input, the terminal device detects the user's input on the first screen; and in a case where the user's input on the first screen is detected to be a second target input, the terminal device executes a target action corresponding to the second target input. Through this solution, the user can trigger the terminal device to execute the target action through a combined input on the terminal device (in order: the touch input on the first screen, the first target input in the target area, and the second target input on the first screen). Compared with the multi-finger touch input of the related art, since this combined input consists of inputs within the one-handed operation areas on different surfaces of the terminal device, it is convenient to perform when the user holds and operates the terminal device with one hand, so the combined input can effectively trigger the terminal device to implement the target function, providing good human-computer interaction performance.
FIG. 6 is a schematic diagram of the hardware structure of a terminal device implementing the various embodiments of the present disclosure. As shown in FIG. 6, the terminal device 100 includes, but is not limited to, a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the terminal device structure shown in FIG. 6 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiments of the present disclosure, terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle terminal devices, wearable devices, pedometers, and the like.
The processor 110 is configured to: in a case where a touch event is detected on a first screen, detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and the position corresponding to the touch event on the first screen and the target area are both located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; in a case where the user's input in the target area is detected to be a first target input, detect the user's input on the first screen; and, in a case where the user's input on the first screen is detected to be a second target input, execute a target action corresponding to the second target input.
With the terminal device provided by the embodiments of the present disclosure, in a case where a touch event is detected on a first screen, the terminal device can detect a user's input in a target area, where the first screen and the target area are on different surfaces of the terminal device, and the position corresponding to the touch event on the first screen and the target area are both located within the operation area of the hand holding the terminal device when the terminal device is in a one-handed holding state; in a case where the user's input in the target area is detected to be a first target input, the terminal device detects the user's input on the first screen; and in a case where the user's input on the first screen is detected to be a second target input, the terminal device executes a target action corresponding to the second target input. Through this solution, the user can trigger the terminal device to execute the target action through a combined input on the terminal device (in order: the touch input on the first screen, the first target input in the target area, and the second target input on the first screen). Compared with the multi-finger touch input of the related art, since this combined input consists of inputs within the one-handed operation areas on different surfaces of the terminal device, it is convenient to perform when the user holds and operates the terminal device with one hand, so the combined input can effectively trigger the terminal device to implement the target function, providing good human-computer interaction performance.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 may be used for receiving and sending signals during information transmission and reception or during a call. Specifically, it receives downlink data from a base station and delivers it to the processor 110 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband Internet access through the network module 102, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound or message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As a type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the terminal device's posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 6 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the terminal device, which is not specifically limited here.
The interface unit 108 is an interface through which an external device is connected to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and an application required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device. It connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to the various components. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not repeated here.
Optionally, an embodiment of the present disclosure further provides a terminal device, which may include the processor 110 shown in FIG. 6, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, each process of the control method shown in any one of FIG. 2 to FIG. 4 in the above method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the control method shown in any one of FIG. 2 to FIG. 4 in the above method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms 'comprise', 'include', or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase 'including a ...' does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Inspired by the present disclosure, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present disclosure and the scope protected by the claims, all of which fall within the protection of the present disclosure.

Claims (12)

  1. A control method, applied to a terminal device, the method comprising:
    in a case where a touch event is detected on a first screen, detecting a user's input in a target area, wherein the first screen and the target area are on different surfaces of the terminal device, and a position corresponding to the touch event on the first screen and the target area are both located within an operation area of a hand holding the terminal device when the terminal device is in a one-handed holding state;
    in a case where the user's input in the target area is detected to be a first target input, detecting the user's input on the first screen; and
    in a case where the user's input on the first screen is detected to be a second target input, executing a target action corresponding to the second target input.
  2. The method according to claim 1, wherein the target area is a touch area on a second screen of the terminal device, a key area of the terminal device, or a fingerprint collection area of the terminal device.
  3. The method according to claim 1, wherein, before the detecting a user's input in a target area in a case where a touch event is detected on a first screen, the method further comprises:
    receiving the user's first input on a setting interface, wherein the first input is an input by which the user sets a first correspondence, and the first correspondence is a correspondence among the target area, the first target input, the second target input, and the target action; and
    in response to the first input, saving the first correspondence.
  4. The method according to claim 1, wherein the executing, in a case where the user's input on the first screen is detected to be a second target input, a target action corresponding to the second target input comprises:
    in a case where the user's input on the first screen is detected to be the second target input, displaying prompt information on the first screen, wherein the prompt information is used to prompt whether to execute the target action corresponding to the second target input;
    receiving the user's second input on the prompt information; and
    in response to the second input, executing the target action.
  5. The method according to any one of claims 1 to 4, wherein the target action comprises any one of the following:
    changing a size of a target object, rotating a direction of a target object, deleting a target object, updating display of a target object, taking a screenshot of a target object, encrypting a target object, decrypting a target object, controlling a target object to be hidden, controlling a target object to be unhidden, controlling the first screen to return to an upper-level interface, controlling a preset application or preset function to be enabled, controlling a preset application or preset function to be disabled, adjusting screen brightness of the first screen, switching wallpaper of the first screen, and controlling the first screen to display preset content.
  6. A terminal device, comprising: a detection module and an execution module;
    the detection module being configured to: in a case where a touch event is detected on a first screen, detect a user's input in a target area, wherein the first screen and the target area are on different surfaces of the terminal device, and a position corresponding to the touch event on the first screen and the target area are both located within an operation area of a hand holding the terminal device when the terminal device is in a one-handed holding state; and, in a case where the user's input in the target area is detected to be a first target input, detect the user's input on the first screen;
    the execution module being configured to execute a target action corresponding to a second target input in a case where the detection module detects that the user's input on the first screen is the second target input.
  7. The terminal device according to claim 6, wherein the target area is a touch area on a second screen of the terminal device, a key area of the terminal device, or a fingerprint collection area of the terminal device.
  8. The terminal device according to claim 6, wherein the terminal device further comprises: a receiving module and a saving module;
    the receiving module being configured to receive the user's first input on a setting interface before the user's input in the target area is detected in the case where the touch event is detected on the first screen, wherein the first input is an input by which the user sets a first correspondence, and the first correspondence is a correspondence among the target area, the first target input, the second target input, and the target action; and
    the saving module being configured to save the first correspondence in response to the first input received by the receiving module.
  9. The terminal device according to claim 6, wherein the execution module is specifically configured to: in a case where the user's input on the first screen is detected to be the second target input, display prompt information on the first screen, the prompt information being used to prompt whether to execute the target action corresponding to the second target input; receive the user's second input on the prompt information; and execute the target action in response to the second input.
  10. The terminal device according to any one of claims 6 to 9, wherein the target action comprises any one of the following:
    changing a size of a target object, rotating a direction of a target object, deleting a target object, updating display of a target object, taking a screenshot of a target object, encrypting a target object, decrypting a target object, controlling a target object to be hidden, controlling a target object to be unhidden, controlling the first screen to return to an upper-level interface, controlling a preset application or preset function to be enabled, controlling a preset application or preset function to be disabled, adjusting screen brightness of the first screen, switching wallpaper of the first screen, and controlling the first screen to display preset content.
  11. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein, when the computer program is executed by the processor, the steps of the control method according to any one of claims 1 to 5 are implemented.
  12. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the control method according to any one of claims 1 to 5 are implemented.
PCT/CN2020/080679 2019-04-18 2020-03-23 Control method and terminal device WO2020211596A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910314770.6 2019-04-18
CN201910314770.6A CN110147174A (zh) 2019-04-18 2019-04-18 Control method and terminal device

Publications (1)

Publication Number Publication Date
WO2020211596A1 true WO2020211596A1 (zh) 2020-10-22

Family

ID=67588509

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080679 WO2020211596A1 (zh) 2019-04-18 2020-03-23 控制方法及终端设备

Country Status (2)

Country Link
CN (1) CN110147174A (zh)
WO (1) WO2020211596A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147174A (zh) * 2019-04-18 2019-08-20 东莞市步步高通信软件有限公司 Control method and terminal device
CN110851810A (zh) * 2019-10-31 2020-02-28 维沃移动通信有限公司 Response method and electronic device
CN114139403B (zh) * 2021-12-13 2023-08-25 中国核动力研究设计院 Probability-theory-based method, apparatus and device for optimizing accident procedure setting values
CN114594897A (zh) * 2022-03-10 2022-06-07 维沃移动通信有限公司 One-handed control method for a touch screen, control apparatus, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8786569B1 (en) * 2013-06-04 2014-07-22 Morton Silverberg Intermediate cursor touchscreen protocols
CN107272946A (zh) * 2017-06-09 2017-10-20 宇龙计算机通信科技(深圳)有限公司 Screen control method and apparatus
CN108920075A (zh) * 2018-06-26 2018-11-30 努比亚技术有限公司 Dual-screen mobile terminal control method, mobile terminal and computer-readable storage medium
CN110147174A (zh) * 2019-04-18 2019-08-20 东莞市步步高通信软件有限公司 Control method and terminal device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467283A (zh) * 2010-11-19 2012-05-23 纬创资通股份有限公司 Touch device with multi-touch function and touch operation method
CN103677600B (zh) * 2012-09-10 2017-05-24 联想(北京)有限公司 Input method and electronic device
CN103812996B (zh) * 2012-11-08 2018-10-09 腾讯科技(深圳)有限公司 Information prompting method, apparatus, and terminal
CN105183236A (zh) 2015-10-19 2015-12-23 上海交通大学 Touch-screen input device and method
CN105843533A (zh) 2016-03-15 2016-08-10 乐视网信息技术(北京)股份有限公司 List invoking method and apparatus
CN106254551A (zh) 2016-09-30 2016-12-21 北京珠穆朗玛移动通信有限公司 Dual-system file transfer method and mobile terminal
CN106973330B (zh) * 2017-03-20 2021-03-02 腾讯科技(深圳)有限公司 Screen live-streaming method, apparatus, and system
CN107239199A (zh) 2017-06-29 2017-10-10 珠海市魅族科技有限公司 Operation response method and related apparatus
CN108958615B (zh) * 2018-07-25 2021-03-02 维沃移动通信有限公司 Display control method, terminal, and computer-readable storage medium
CN109343788B (zh) * 2018-09-30 2021-08-17 维沃移动通信有限公司 Operation control method for a mobile terminal and mobile terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8786569B1 (en) * 2013-06-04 2014-07-22 Morton Silverberg Intermediate cursor touchscreen protocols
CN107272946A (zh) * 2017-06-09 2017-10-20 宇龙计算机通信科技(深圳)有限公司 Screen control method and apparatus
CN108920075A (zh) * 2018-06-26 2018-11-30 努比亚技术有限公司 Dual-screen mobile terminal control method, mobile terminal and computer-readable storage medium
CN110147174A (zh) * 2019-04-18 2019-08-20 东莞市步步高通信软件有限公司 Control method and terminal device

Also Published As

Publication number Publication date
CN110147174A (zh) 2019-08-20

Similar Documents

Publication Publication Date Title
WO2019154181A1 (zh) Display control method and mobile terminal
WO2019141243A1 (zh) Application startup method and mobile terminal
WO2020063091A1 (zh) Picture processing method and terminal device
CN108491133B (zh) Application control method and terminal
JP2021525430A (ja) Display control method and terminal
WO2021057337A1 (zh) Operation method and electronic device
WO2020258929A1 (zh) Folder interface switching method and terminal device
WO2021017776A1 (zh) Information processing method and terminal
WO2019179332A1 (zh) Application closing method and mobile terminal
WO2020151460A1 (zh) Object processing method and terminal device
WO2021012931A1 (zh) Icon management method and terminal
WO2020211596A1 (zh) Control method and terminal device
WO2021169959A1 (zh) Application startup method and electronic device
WO2020238497A1 (zh) Icon moving method and terminal device
WO2021109961A1 (zh) Shortcut identifier generation method, electronic device, and medium
WO2021083087A1 (zh) Screen capture method and terminal device
WO2019174541A1 (zh) Operation method for a mobile terminal and mobile terminal
WO2020181955A1 (zh) Interface control method and terminal device
US11354017B2 (en) Display method and mobile terminal
WO2021068885A1 (zh) Control method and electronic device
WO2020151525A1 (zh) Message sending method and terminal device
WO2020078234A1 (zh) Display control method and terminal
WO2021004426A1 (zh) Content selection method and terminal
WO2021057290A1 (zh) Information control method and electronic device
WO2020199783A1 (zh) Interface display method and terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20791848

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20791848

Country of ref document: EP

Kind code of ref document: A1