
WO2022088974A1 - Remote control method, electronic device and system - Google Patents

Remote control method, electronic device and system

Info

Publication number
WO2022088974A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
touch
display interface
interface
user interface
Prior art date
Application number
PCT/CN2021/116179
Other languages
English (en)
French (fr)
Inventor
王姚
钱凯
庄志山
朱爽
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2022088974A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to a remote control method, electronic device and system.
  • Smart TVs support more and more functions, and the information on their user interfaces is increasingly complex, so the demands placed on the remote control of smart TVs keep growing.
  • Realizing remote control of smart TVs through a simple button structure can no longer support the current demand for smart TV remote control functions. Therefore, how to better realize the remote control of smart TVs is a problem worth studying.
  • the present application provides a remote control method that can meet the needs of more control scenarios of the current smart TV service and can reduce the time delay.
  • The present application provides a remote control method, electronic device and system, which are used to meet the needs of various control scenarios of current smart TV services and have the characteristic of low time delay. This can improve the accuracy of the user's remote control of the smart TV and thereby improve the user's touch operation experience.
  • In a first aspect, an embodiment of the present application provides a remote control method applicable to a first electronic device, where the first electronic device includes a camera and the first electronic device and a second electronic device can establish a wireless connection. The first electronic device receives a first operation; in response to receiving the first operation, the first electronic device activates the camera; the first electronic device acquires a first image by using the camera, the first image includes the display interface area of the second electronic device, and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; the first electronic device displays a first user interface, where the first user interface includes the first image; the first electronic device receives a touch operation for the first user interface, acquires the touch point coordinates corresponding to the touch operation for the first user interface, and generates a virtual touch event based on the touch point coordinates, where the virtual touch event includes the relative coordinates in the current display interface of the second electronic device; the first electronic device sends the virtual touch event to the second electronic device, so that after receiving the virtual touch event, the second electronic device performs, in response to the virtual touch event, the operation corresponding to the relative coordinates in its current display interface.
  • Based on the above method, the user can perform a touch operation on the first electronic device, so that the first electronic device generates a virtual touch event according to the user's touch operation.
  • The virtual touch event is sent to the second electronic device to realize remote control of the second electronic device, so that the user can satisfy the needs of various remote control scenarios for the second electronic device through touch operations on the first electronic device.
  • In addition, the data transmitted between the first electronic device and the second electronic device is the virtual touch event, which involves only a small amount of data, so the delay of data interaction between the first electronic device and the second electronic device can be reduced, which improves the user's touch experience.
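  • As an illustrative sketch only (the method does not prescribe a concrete data format), the virtual touch event can be modeled as a small structure carrying nothing but an event type and the relative coordinates, which keeps the payload exchanged between the two devices to a few dozen bytes. The field names and the JSON encoding below are assumptions, not part of the described method.

```kotlin
import org.json.JSONArray
import org.json.JSONObject

// Hypothetical payload for a virtual touch event: only the event type and the
// relative coordinates (normalized to the second device's display interface)
// are transmitted, so the message stays very small.
data class VirtualTouchEvent(
    val type: String,                             // e.g. "click" or "slide" (illustrative values)
    val relativePoints: List<Pair<Float, Float>>  // one point for a click, a sequence for a slide
) {
    fun toJson(): String = JSONObject()
        .put("type", type)
        .put("points", JSONArray().apply {
            relativePoints.forEach { (x, y) ->
                put(JSONObject().put("x", x.toDouble()).put("y", y.toDouble()))
            }
        })
        .toString()
}
```

  • In this sketch, a click would carry a single relative point, while a slide would carry at least the start and end points, matching the touch operations described above.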
  • In a possible implementation manner, the first electronic device determines that the first operation is received when an application icon installed on it for performing touch operations is clicked and triggered; or the first electronic device determines that the first operation is received when the remote control in the pull-down notification bar interface is clicked and triggered; or the first electronic device determines that the first operation is received after it receives a voice operation or a gesture operation.
  • Based on this, the first electronic device can provide a variety of entry points into the virtual touch event generation scene, thereby offering the user convenience and improving the user experience.
  • In a possible implementation manner, the first electronic device identifies the display interface area of the second electronic device. Specifically, the first electronic device determines that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device, and the content displayed in the display interface area of the second electronic device is the current display interface of the second electronic device. Based on this, when the first electronic device captures an image that includes the display interface of the second electronic device, the captured image information covers a wider range than the second electronic device; then, on the display interface of the first electronic device, touch operations other than those in the display interface area of the second electronic device are not used to generate virtual touch events.
  • Based on this, the first electronic device determines the display interface area of the second electronic device displayed on the display interface of the first electronic device, which can improve the efficiency of generating virtual touch events and reduce the processing time of the remote control process.
  • In a possible implementation manner, the first electronic device determining the area inside the screen frame of the second electronic device may be implemented as follows: the first electronic device sends an anchor point generation instruction to the second electronic device, so that after receiving the anchor point generation instruction, the second electronic device generates anchor points on its display interface in response to the instruction; the first electronic device then determines the area inside the screen frame of the second electronic device according to the information of the anchor points in the acquired first image.
  • Based on this, after the first electronic device determines, by detecting anchor points, the multiple target anchor points included in the display interface of the first electronic device, it can take the area delimited by the anchor points as the display area of the second electronic device, as sketched below. This can improve the efficiency and accuracy of the virtual touch events generated by the first electronic device and reduce the processing time of the remote control process.
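  • The following sketch illustrates this anchor-point flow on the first device under stated assumptions: `sendAnchorPointInstruction` and `detectAnchorCenters` are hypothetical helpers (the description does not specify the marker type or the detection algorithm); only the surrounding ordering of steps reflects the method described above.

```kotlin
import android.graphics.Bitmap
import android.graphics.PointF

// Hypothetical helpers -- the concrete channel and detector are not specified here.
fun sendAnchorPointInstruction(deviceAddress: String) { /* ask the second device to draw anchors */ }
fun detectAnchorCenters(frame: Bitmap): List<PointF> = TODO("marker detection, e.g. corner/blob detection")

/**
 * Determines the area inside the second device's screen frame:
 * 1. instruct the second device to display anchor points,
 * 2. detect the anchors in the camera frame,
 * 3. keep the quadrilateral they span as the display interface area.
 * Returns the corners ordered top-left, top-right, bottom-right, bottom-left, or null.
 */
fun locateDisplayInterfaceArea(deviceAddress: String, frame: Bitmap): List<PointF>? {
    sendAnchorPointInstruction(deviceAddress)
    val anchors = detectAnchorCenters(frame)
    if (anchors.size < 4) return null            // not all target anchor points were found
    val topLeft = anchors.minByOrNull { it.x + it.y }!!
    val bottomRight = anchors.maxByOrNull { it.x + it.y }!!
    val topRight = anchors.maxByOrNull { it.x - it.y }!!
    val bottomLeft = anchors.minByOrNull { it.x - it.y }!!
    return listOf(topLeft, topRight, bottomRight, bottomLeft)
}
```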
  • In a possible implementation manner, after the first electronic device identifies the display interface area of the second electronic device, it determines whether the size of the display interface area of the second electronic device is smaller than a first threshold; if the size of the display interface area of the second electronic device is smaller than the first threshold, the first electronic device adjusts the focal length of the camera to a first focal length.
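  • A minimal sketch of this size check follows, assuming the preview is driven by CameraX (an assumption; the text only says the focal length is adjusted to the first focal length). The quadrilateral area is computed with the shoelace formula, and the zoom ratio used here is an illustrative stand-in for the "first focal length".

```kotlin
import android.graphics.PointF
import androidx.camera.core.Camera
import kotlin.math.abs

// Area of the detected screen quadrilateral in preview pixels (shoelace formula).
fun quadArea(quad: List<PointF>): Float {
    var sum = 0f
    for (i in quad.indices) {
        val a = quad[i]
        val b = quad[(i + 1) % quad.size]
        sum += a.x * b.y - b.x * a.y
    }
    return abs(sum) / 2f
}

// If the display interface area is smaller than the first threshold, zoom in.
fun adjustZoomIfNeeded(camera: Camera, quad: List<PointF>, firstThresholdPx2: Float) {
    if (quadArea(quad) < firstThresholdPx2) {
        camera.cameraControl.setZoomRatio(2.0f)   // illustrative value standing in for the first focal length
    }
}
```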
  • In a possible implementation manner, after the first electronic device receives the touch operation for the first user interface and before it generates the virtual touch event, the first electronic device acquires at least one touch point coordinate; the first electronic device determines whether the at least one touch point coordinate is within the display interface area of the second electronic device; in response to determining that the at least one touch point coordinate is within the display interface area of the second electronic device, the first electronic device generates the virtual touch event. This implementation provides an accurate, concrete way of generating the virtual event: the first electronic device generates the virtual touch event based on the acquired touch point coordinates, which ensures the accuracy of the generated virtual touch event.
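  • One way to sketch this containment check (an illustration, not the claimed implementation) is to build a `Path` from the four detected corners and test the touch point against the corresponding `Region`:

```kotlin
import android.graphics.Path
import android.graphics.PointF
import android.graphics.RectF
import android.graphics.Region

// Returns true if the touch point lies inside the quadrilateral that was
// identified as the second device's display interface area.
fun isInsideDisplayArea(touch: PointF, quad: List<PointF>): Boolean {
    val path = Path().apply {
        moveTo(quad[0].x, quad[0].y)
        lineTo(quad[1].x, quad[1].y)
        lineTo(quad[2].x, quad[2].y)
        lineTo(quad[3].x, quad[3].y)
        close()
    }
    val bounds = RectF().also { path.computeBounds(it, true) }
    val region = Region().apply {
        setPath(path, Region(bounds.left.toInt(), bounds.top.toInt(),
                             bounds.right.toInt(), bounds.bottom.toInt()))
    }
    return region.contains(touch.x.toInt(), touch.y.toInt())
}
```

  • Only touch points that pass such a test are used to generate a virtual touch event; other touches on the first user interface are ignored, as described above.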
  • In a possible implementation manner, the first electronic device generating a virtual touch event in response to receiving the touch operation for the first user interface is specifically implemented as follows: the first electronic device converts the acquired touch point coordinates corresponding to the touch operation on the first user interface into relative coordinates in the display interface area of the second electronic device, and generates the virtual touch event based on the relative coordinates in the current display interface of the second electronic device. Based on this, after the first electronic device obtains the touch point coordinates corresponding to the user's operation, it converts them, according to the display effect of the second electronic device on the two-dimensional projection interface of the first electronic device, into relative coordinates belonging to the second electronic device, thereby ensuring the accuracy of the virtual touch event generated by the first electronic device.
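  • Because the four corner points of the second device's screen frame are known in the first device's two-dimensional projection interface, this conversion can be sketched as a perspective mapping. The snippet below uses Android's `Matrix.setPolyToPoly` as one possible realization (an assumption; the description does not name a specific API), mapping the detected quadrilateral onto a unit square so the result is directly a relative coordinate.

```kotlin
import android.graphics.Matrix
import android.graphics.PointF

/**
 * Maps a touch point in the first device's preview coordinates to relative
 * coordinates (0..1 in each axis) in the second device's current display interface.
 * `quad` holds the screen-frame corners in order TL, TR, BR, BL.
 */
fun toRelativeCoordinates(touch: PointF, quad: List<PointF>): PointF? {
    val src = floatArrayOf(
        quad[0].x, quad[0].y,   // top-left
        quad[1].x, quad[1].y,   // top-right
        quad[2].x, quad[2].y,   // bottom-right
        quad[3].x, quad[3].y    // bottom-left
    )
    val dst = floatArrayOf(0f, 0f, 1f, 0f, 1f, 1f, 0f, 1f)     // unit square
    val matrix = Matrix()
    if (!matrix.setPolyToPoly(src, 0, dst, 0, 4)) return null  // degenerate quadrilateral
    val pt = floatArrayOf(touch.x, touch.y)
    matrix.mapPoints(pt)
    return PointF(pt[0], pt[1])   // relative coordinates in the second device's interface
}
```

  • The second electronic device can then multiply the relative coordinates by its own display resolution to locate the operation, which is what "executing the operation corresponding to the relative coordinates" amounts to in this sketch.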
  • In a possible implementation manner, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or multiple coordinates.
  • This implementation provides that the user's operation can be a click operation or a slide operation; the user can perform different operations on the first electronic device, satisfying the variety of generated virtual remote control events and thereby the various remote control scenarios of the second electronic device, which can improve the user experience.
  • the first electronic device is a mobile phone
  • the second electronic device is a smart TV
  • the camera device is a rear camera of the mobile phone
  • the current display interface of the second electronic device is the menu interface of the smart TV
  • the first user interface is the display interface after the first electronic device enters the remote control mode
  • the first image is an image including the menu interface of the smart TV
  • the menu interface includes multiple controls, and the multiple controls correspond to different functions
  • the display interface area of the second electronic device is the image area of the menu interface of the smart TV acquired by the mobile phone
  • the touch operation of the first user interface is a click operation on one of the multiple controls in the image of the menu interface of the smart TV in the first user interface
  • the second electronic device Executing the operation corresponding to the relative coordinates in the current display interface of the second electronic device is for the second electronic device to execute a function corresponding to one of the multiple controls in the image of the menu interface of the smart TV.
  • An embodiment of the present application further provides an electronic device suitable for use as the first electronic device, where the first electronic device includes a camera and establishes a wireless connection with the second electronic device. The electronic device includes: a touch screen, where the touch screen includes a touch panel and a display screen; one or more processors; a memory; a plurality of application programs; and one or more computer programs, where the one or more computer programs are stored in the memory and include instructions that, when executed by the first electronic device, cause the first electronic device to perform the following steps: receiving a first operation; in response to receiving the first operation, starting the camera; using the camera to acquire a first image, where the first image includes the display interface area of the second electronic device and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; displaying a first user interface, where the first user interface includes the first image; receiving a touch operation for the first user interface; in response to receiving the touch operation for the first user interface, acquiring the touch point coordinates corresponding to the touch operation for the first user interface, and generating a virtual touch event based on the touch point coordinates, where the virtual touch event includes the relative coordinates in the current display interface of the second electronic device; and sending the virtual touch event to the second electronic device, so that after receiving the virtual touch event, the second electronic device performs, in response to the virtual touch event, the operation corresponding to the relative coordinates in the current display interface of the second electronic device.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, causing the first electronic device to perform the receiving of the first operation, the first electronic device specifically performs: displaying a first application icon, and receiving a click operation on the first application icon.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, the first electronic device is further caused to perform: before receiving a touch operation for the first user interface, determining that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, causing the first electronic device to perform the determining of the area inside the screen frame of the second electronic device, the first electronic device specifically performs: sending an anchor point generation instruction to the second electronic device, so that after receiving the anchor point generation instruction, the second electronic device generates anchor points on its display interface in response to the instruction; and determining the area inside the screen frame of the second electronic device according to the information of the anchor points in the first image.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, causing the first electronic device to identify the display interface area of the second electronic device, the first electronic device further performs: determining whether the size of the display interface area of the second electronic device is smaller than a first threshold; and if the size of the display interface area of the second electronic device is smaller than the first threshold, adjusting the focal length of the camera to a first focal length.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, the first electronic device is further caused to perform, after receiving the touch operation for the first user interface and before generating the virtual touch event: acquiring at least one touch point coordinate; determining whether the at least one touch point coordinate is within the display interface area of the second electronic device; and, in response to determining that the at least one touch point coordinate is within the display interface area of the second electronic device, generating the virtual touch event.
  • In a possible implementation manner, when the instructions are executed by the first electronic device, causing the first electronic device to perform the generation of a virtual touch event based on the touch point coordinates, the first electronic device specifically performs: converting the acquired touch point coordinates corresponding to the touch operation on the first user interface into relative coordinates in the current display interface of the second electronic device; and generating the virtual touch event according to the relative coordinates in the current display interface of the second electronic device.
  • In a possible implementation manner, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or multiple coordinates.
  • the first electronic device is a mobile phone
  • the second electronic device is a smart TV
  • the camera device is a rear camera of the mobile phone
  • the current display interface of the second electronic device is the menu interface of the smart TV
  • the first user interface is the display interface after the first electronic device enters the remote control mode
  • the first image is an image including the menu interface of the smart TV
  • the menu interface includes multiple controls, and the multiple controls correspond to different functions
  • the display interface area of the second electronic device is the image area of the menu interface of the smart TV acquired by the mobile phone
  • the touch operation of the first user interface is a click operation on one of the multiple controls in the image of the menu interface of the smart TV in the first user interface
  • the second electronic device Executing the operation corresponding to the relative coordinates in the current display interface of the second electronic device is for the second electronic device to execute a function corresponding to one of the multiple controls in the image of the menu interface of the smart TV.
  • An embodiment of the present application further provides a remote control system, including a first electronic device and a second electronic device, where the first electronic device includes a camera and the first electronic device and the second electronic device can establish a wireless connection. The first electronic device is configured to receive a first operation; in response to receiving the first operation, the first electronic device activates the camera; the first electronic device uses the camera to acquire a first image, where the first image includes the display interface area of the second electronic device and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; the first electronic device is configured to display a first user interface, where the first user interface includes the first image; the first electronic device is configured to receive a touch operation for the first user interface; in response to receiving the touch operation for the first user interface, the first electronic device acquires the touch point coordinates corresponding to the touch operation for the first user interface and generates a virtual touch event based on the touch point coordinates, where the virtual touch event includes the relative coordinates in the current display interface of the second electronic device; the first electronic device sends the virtual touch event to the second electronic device; and the second electronic device is configured to, after receiving the virtual touch event, perform, in response to the virtual touch event, the operation corresponding to the relative coordinates in its current display interface.
  • an embodiment of the present application further provides a remote control device, where the remote control device includes a module/unit for executing the method in any of the possible implementation manners of the first aspect.
  • modules/units can be implemented by hardware or by executing corresponding software by hardware.
  • A computer-readable storage medium stores a computer program (also referred to as code or instructions); when the computer program runs on a computer, the computer is caused to execute the method in any one of the possible implementations of the first aspect above.
  • A computer program product includes a computer program (also referred to as code or instructions); when the computer program is executed, the computer is enabled to execute the method in any one of the possible implementations of the first aspect above.
  • A graphical user interface on an electronic device is provided, the electronic device having a display screen, one or more memories, and one or more processors, where the one or more processors are configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface includes the graphical user interface displayed when the electronic device executes any possible implementation of the first aspect of the embodiments of the present application.
  • FIG. 1a is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 1b is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an Android operating system provided by an embodiment of the present application.
  • FIG. 4a is one of the application scenario diagrams of a remote control method provided by an embodiment of the present application.
  • FIG. 4b is the second application scenario diagram of a remote control method provided by an embodiment of the present application.
  • FIG. 4c is the third application scenario diagram of a remote control method provided by an embodiment of the present application.
  • FIG. 5a is a schematic flowchart of a remote control method provided by an embodiment of the present application.
  • FIG. 5b is a schematic diagram of generating an anchor point according to an embodiment of the present application.
  • FIG. 6a is one of the schematic diagrams of coordinate transformation provided by an embodiment of the present application.
  • FIG. 6b is the second schematic diagram of coordinate transformation provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a remote control device according to an embodiment of the present application.
  • A mobile phone can be used not only as a communication tool but also as a mobile database for the user. It can provide a mobile computing environment, realize predefined processing of received data, and then output commands with control functions.
  • Based on this, the first electronic device can send virtual touch events for realizing remote control to electronic devices that need to be remotely controlled, such as smart TVs, which can be applied to various possible remote control scenarios.
  • Based on this, the present application provides a remote control method that, based on the principles of augmented reality technology and image tracking technology, uses the imaging capability of the first electronic device: the camera included in the first electronic device acquires a captured image that contains the display interface area of the second electronic device, the user's touch operation on the first electronic device is converted into a virtual remote control operation on the second electronic device, and the first electronic device generates a virtual touch event based on the virtual remote control operation and sends it to the second electronic device, thereby realizing remote control of the second electronic device.
  • The embodiments of the present application can be applied to mobile phones, tablet computers, wearable devices (e.g., watches, bracelets, helmets, headphones, etc.), in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPCs), netbooks, personal digital assistants (PDAs), smart home devices (e.g., smart TVs, smart projectors, smart speakers, smart cameras, etc.) and other electronic equipment. It can be understood that the embodiments of the present application do not impose any limitations on the specific types of electronic devices.
  • Applications (Apps) with various functions can be installed in the above electronic devices, such as WeChat, email, Weibo, video, smart life, and smart remote control.
  • the focus is on the operation of how the App installed in the first electronic device for sending virtual touch events generates virtual touch events.
  • "Multiple", as used in the embodiments of the present application, refers to two or more.
  • words such as “first” and “second” are only used for the purpose of distinguishing and describing, and cannot be understood as indicating or implying relative importance, nor can they be understood as indicating or implying order.
  • Features delimited with “first” and “second” may expressly or implicitly include one or more of that feature.
  • words such as “exemplary” or “for example” are used to mean serving as an example, illustration or illustration. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner.
  • the application scenario may include: a first electronic device 101 and a second electronic device 102 .
  • the first electronic device 101 and the second electronic device 102 may be connected to the same local area network, or may be connected to different local area networks.
  • the example in which the first electronic device 101 and the second electronic device 102 are connected to the same local area network may specifically be: the first electronic device 101 and the second electronic device 102 establish a wireless connection with the same wireless access point.
  • For example, the first electronic device 101 and the second electronic device 102 are connected to the same wireless fidelity (Wi-Fi) hotspot; as another example, the first electronic device 101 and the second electronic device 102 can also be connected to the same Bluetooth beacon through the Bluetooth protocol.
  • a communication connection between the first electronic device 101 and the second electronic device 102 can also be triggered by a near field communication (NFC) tag, and encrypted information can be transmitted through a Bluetooth module for identity authentication. After the authentication is successful, data transmission is carried out in a point-to-point (P2P) manner.
  • the first electronic device 101 may serve as a sending client, and after generating a virtual touch event based on a user's touch operation, send the virtual touch event to the second electronic device 102 .
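  • Within the same local area network, this sending-client role can be sketched as a plain socket write of the (small) serialized event. The port number and the line-delimited framing below are assumptions made for illustration, and the call must run off the main thread.

```kotlin
import java.net.Socket
import kotlin.concurrent.thread

// Illustrative sender: the first electronic device pushes the serialized virtual
// touch event to the second electronic device over the shared local area network.
fun sendVirtualTouchEvent(tvAddress: String, eventJson: String, port: Int = 8765 /* assumed */) {
    thread {
        Socket(tvAddress, port).use { socket ->
            socket.getOutputStream().bufferedWriter().apply {
                write(eventJson)
                newLine()
                flush()
            }
        }
    }
}
```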
  • the first possible implementation manner in which the first electronic device 101 enters a user interface for generating a virtual touch event is shown in FIG. 1a.
  • the interface displayed by the first electronic device 101 is a mobile phone main interface including a plurality of App icons, and the mobile phone main interface includes an intelligent remote control App icon for generating a virtual touch event.
  • The first electronic device 101 may be a portable electronic device that also includes other functions such as a personal digital assistant and/or a music player function, for example a mobile phone, a tablet computer, or a wearable device with wireless communication functions (e.g., a smart watch), among other electronic devices.
  • the second electronic device 102 may be an electronic device such as a smart TV, a smart projector, etc., which is not specifically limited in this embodiment of the present application.
  • A second possible implementation manner of the first electronic device 101 entering a user interface for generating a virtual touch event is shown in FIG. 1b.
  • the interface displayed by the first electronic device 101 may also be a notification bar drop-down interface of a mobile phone, and the notification bar drop-down interface includes a remote control control for remote control of a second electronic device such as a smart TV or a smart projector.
  • The first electronic device 101 jumps to the user interface for generating a virtual touch event; the subsequent implementation manner is the same as that of the first possible implementation manner and will not be repeated here.
  • the specific implementation manner of the virtual touch event generated by the first electronic device 101 will be introduced later, and will not be described here for the time being.
  • Exemplary embodiments of electronic devices to which the embodiments of the present application may be applied include, but are not limited to, portable electronic devices carrying this or other operating systems.
  • the portable electronic device described above may also be other portable electronic devices, such as a laptop computer (Laptop) or the like having a touch-sensitive surface (eg, a touch panel).
  • the electronic device 200 may be the first electronic device 101 and/or the second electronic device 102 in the embodiment of the present application.
  • the electronic device 200 provided by the embodiment will be introduced.
  • However, those skilled in the art can understand that the electronic device 200 shown in FIG. 2 is only an example and does not constitute a limitation on the electronic device 200; the electronic device 200 may have more or fewer components than those shown in the figure, two or more components may be combined, or there may be different component configurations.
  • the various components shown in Figure 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, a sensor module 280, a camera 293, a display screen 294, and the like.
  • the sensor module 280 may include a gyroscope sensor 280A, a touch sensor 280K (of course, the electronic device 200 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc. , not shown in the figure).
  • the processor 210 can run the remote control method provided by the embodiment of the present application, so as to realize various remote control functions of the smart TV through the first electronic device on the basis of ensuring the control accuracy, thereby improving the user experience.
  • the processor 210 may include different devices. For example, when a central processing unit (central processing unit, CPU) and a graphics processing unit (graphics processing unit, GPU) are integrated, the CPU and the GPU may cooperate to execute the remote control method provided by the embodiments of the present application, such as Part of the algorithms in the remote control method are executed by the CPU, and the other part of the algorithms are executed by the GPU, so as to obtain faster processing efficiency.
  • CPU central processing unit
  • GPU graphics processing unit
  • Display screen 294 may display photos, videos, web pages, or documents, and the like.
  • the display screen 294 may display the main interface of the mobile phone of the first electronic device 101 as shown in FIG. 1a, or the pull-down interface of the notification bar as shown in FIG. 1b.
  • the processor 210 detects the touch event of the user's finger (or stylus, etc.) on an application icon, in response to the touch event, the user interface of the application corresponding to the application icon is opened, and the user interface of the application corresponding to the application icon is opened on the display screen 294 Displays the app's user interface.
  • The camera 293 (a front camera or a rear camera, or a camera that can serve as both) is used to capture still images or videos. For example, if the electronic device 200 is the first electronic device 101 shown in FIG. 1a and FIG. 1b, the camera of the first electronic device 101 is used to capture an image including the display interface area of the second electronic device 102.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes various functional applications and data processing of the electronic device 200 by executing the instructions stored in the internal memory 221 .
  • the internal memory 221 may also store one or more computer programs corresponding to the virtual touch event generation algorithms provided in the embodiments of the present application.
  • The processor 210 may execute the generation of the virtual touch event and send it to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
  • the code of the virtual touch event generation algorithm provided in the embodiment of the present application may also be stored in an external memory.
  • In this case, the processor 210 can run the code of the virtual touch event generation algorithm stored in the external memory through the external memory interface 220; the processor 210 can then run the generation of the virtual touch event and send it to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
  • the function of the sensor module 280 is described below.
  • the gyro sensor 280A can be used to determine the movement attitude of the electronic device 200 .
  • the angular velocity of electronic device 200 about three axes may be determined by gyro sensor 280A. That is, the gyro sensor 280A can be used to detect the current motion state of the electronic device 200, such as shaking or stillness.
  • For example, if the gyro sensor 280A detects that the electronic device 200 is in a shaking state, the electronic device 200 can analyze and identify the real-time image captured by the camera 293 in time, so as to avoid the problem of inaccurate virtual touch event generation caused by shaking.
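  • A minimal sketch of such a shake check follows, assuming the gyroscope angular velocity magnitude is compared against an illustrative threshold; the description only states that a detected shaking state triggers timely re-analysis of the camera frames.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

class ShakeMonitor(context: Context) : SensorEventListener {
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)

    @Volatile var isShaking = false
        private set

    fun start() { sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_GAME) }
    fun stop() { sensorManager.unregisterListener(this) }

    override fun onSensorChanged(event: SensorEvent) {
        // Angular velocities (rad/s) around the three axes.
        val wx = event.values[0]; val wy = event.values[1]; val wz = event.values[2]
        isShaking = sqrt(wx * wx + wy * wy + wz * wz) > SHAKE_THRESHOLD
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}

    private companion object { const val SHAKE_THRESHOLD = 1.0f /* rad/s, illustrative */ }
}
```

  • When `isShaking` is true, the app could, for instance, re-run the screen-area detection on the next frames before accepting touch input.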
  • The touch sensor 280K is also called a "touch panel".
  • The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, also called a "touchscreen".
  • the touch sensor 280K is used to detect a touch operation acting on or near it, such as a user's touch operation used to generate a virtual touch event in the embodiment of the present application.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided via display screen 294 .
  • the touch sensor 280K may also be disposed on the surface of the electronic device 200 , which is different from the location where the display screen 294 is located.
  • For example, the user clicks the icon of the smart remote control in the main interface of the mobile phone shown in FIG. 1a; through the touch sensor 280K this triggers the processor 210 to start the smart remote control application, the display screen 294 displays the user interface used to generate virtual touch events, and the camera 293 is turned on.
  • the wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modulation and demodulation processor, the baseband processor, and the like.
  • the interaction of information such as virtual touch events may be implemented between the first electronic device 101 and the second electronic device 102 through the wireless communication function of the electronic device 200 .
  • The wireless communication module 252 can provide wireless communication solutions applied on the electronic device 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), NFC, infrared technology (IR), and the like.
  • the electronic device 200 may include more or less components than those shown in FIG. 2 , which are not limited in this embodiment of the present application.
  • the electronic device 200 may include hardware structures and/or software modules, and implement the above functions in the form of hardware structures, software modules, or hardware structures plus software modules. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system using a layered architecture as an example to illustrate the software structure of an electronic device.
  • FIG. 3 shows a software structural block diagram of an Android system provided by an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • The Android system is divided into five layers, from top to bottom: the application layer, the application framework (framework) layer, the Android runtime and system library, the hardware abstraction layer, and the kernel layer.
  • the application layer is the top layer of the operating system and can include a series of application packages.
  • the application layer may include native applications of the operating system and third-party applications, wherein the native applications of the operating system may include user interface (user interface, UI), camera, short message, call, etc.
  • Third-party applications can include maps, smart life, smart remote control, etc.
  • the applications mentioned below may be native applications of the operating system installed on the electronic device when it leaves the factory, or may be third-party applications downloaded from the network or obtained from other electronic devices by the user during the use of the electronic device.
  • The application layer can be used to present an editing interface. The above-mentioned editing interface is the interface in an App, such as the smart remote control App that is the focus of the present application, through which the user performs on the first electronic device the operations used to generate virtual touch events for the second electronic device. It may be the control interface of the smart remote control App displayed on the touch screen of the first electronic device, such as the user interface displayed by the first electronic device shown in (1-2) in FIG.
  • The interface displays the real-time picture, captured by the first electronic device using the camera, that includes the display interface of the second electronic device, so that a remote control operation performed on the control interface of the first electronic device is converted into a virtual remote control operation on the second electronic device, thereby realizing corresponding operations such as manipulating the display interface of the second electronic device or changing its settings.
  • the application framework layer provides application programming interfaces and programming frameworks for applications in the application layer.
  • the application framework layer can also include some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • the application framework layer is mainly responsible for invoking a service interface that communicates with the hardware abstraction layer, so as to transmit a virtual touch event generation request to the hardware abstraction layer, where the virtual touch event generation request includes
  • the virtual touch event generation service may include various modules for managing the virtual touch event generation involved in the embodiments of the present application.
  • the virtual touch event generation service includes a target detection module, a coordinate conversion module, and a Wi-Fi service.
  • FIG. 5b is a schematic diagram of generating an anchor point according to an embodiment of the present application.
  • The above-mentioned coordinate conversion module is used to: after the first electronic device detects the user's touch operation in the opened smart remote control App, determine the coordinate point sequence of the touch operation, filter out the coordinate points that fall within the display interface area of the second electronic device, and then perform coordinate conversion on the selected coordinate points, so as to convert the coordinate point sequence generated by the user's touch operation in the opened smart remote control App into the corresponding coordinate point sequence in the display interface of the second electronic device.
  • The above Wi-Fi service is used to ensure the information interaction between the first electronic device and the second electronic device, so as to send the virtual touch event generated by the first electronic device to the second electronic device and thereby realize virtual remote control of the second electronic device.
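  • A hypothetical Kotlin decomposition of these three modules is sketched below; the interface names and signatures are not from the source text and only mirror the responsibilities described above (target detection, coordinate conversion, Wi-Fi delivery).

```kotlin
import android.graphics.Bitmap
import android.graphics.PointF

// Hypothetical module boundaries for the virtual touch event generation service.
interface TargetDetectionModule {
    /** Finds the second device's screen-frame corners in a camera frame, or null if not visible. */
    fun detectScreenQuad(frame: Bitmap): List<PointF>?
}

interface CoordinateConversionModule {
    /** Keeps only touch points inside the quad and converts them to relative coordinates. */
    fun convert(touchPoints: List<PointF>, quad: List<PointF>): List<PointF>
}

interface WifiService {
    /** Sends the serialized virtual touch event to the second electronic device. */
    fun send(eventJson: String)
}
```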
  • the hardware abstraction layer is the support of the application framework layer and an important link between the application framework layer and the kernel layer. It can provide services for developers through the application framework layer.
  • the function of the virtual touch event generation service in the embodiment of the present application may be implemented by configuring a first process in the hardware abstraction layer, and the first process may be a sub-process independently constructed in the hardware abstraction layer.
  • the first process may include modules such as a virtual touch event generation service configuration interface, a virtual touch event generation service controller, and the like.
  • the virtual touch event generation service configuration interface is a service interface that communicates with the application framework layer.
  • the kernel layer can be the Linux kernel (linux kernel) layer, which is an abstraction layer between hardware and software.
  • The kernel layer contains various drivers related to the first electronic device, including at least a display driver, a camera driver, an audio driver, and a Bluetooth driver.
  • When a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and the like); the original input event is, for example, the user touch event in the subsequent embodiments of the present application.
  • Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event.
  • For example, if the control corresponding to the click operation is the application icon of the smart remote control App, the interface of the application framework layer is called, and then the kernel layer is called to start the corresponding operation.
  • the camera driver captures an image including the display interface of the second electronic device through the camera 293, and displays it as real-time image information on the display screen 294 of the first electronic device through the display driver, and the real-time image information includes the image captured by the camera.
  • a mobile phone will be used as an example for introduction.
  • The hardware architecture of the mobile phone may be as shown in FIG. 2, and the software architecture may be as shown in FIG. 3, where the software programs and/or modules corresponding to the software architecture in the mobile phone may be stored in the memory 140, and the processor 130 may run the software programs and applications stored in the memory 140 to execute the flow of the remote control method provided by the embodiments of the present application.
  • the terms that may be involved in the following embodiments are explained below:
  • User touch event: used to indicate a touch operation performed by the user on the first electronic device; the user touch event includes the touch point coordinate or the touch point coordinate sequence of the touch operation. For example, if the touch operation is a click operation, the user touch event includes the coordinates of the touch point; if the touch operation is a sliding operation, the user touch event includes a touch point coordinate sequence (the touch point coordinate sequence includes at least the sliding start position coordinates and the sliding end coordinates, or further includes the sliding distance, the sliding direction, and the like).
  • the touch operation may include, but is not limited to, a click operation, a slide operation, a long-press operation, a double-click operation, an operation of clicking on a screen to specify a control, and the like, which is not limited in this application.
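  • On the first device, collecting the touch point coordinate (or coordinate sequence) described above can be sketched with a standard Android touch handler; `onUserTouchEvent` is a hypothetical callback that hands the collected points to the filtering and conversion steps.

```kotlin
import android.graphics.PointF
import android.view.MotionEvent

private val touchPoints = mutableListOf<PointF>()

// Collects a single coordinate for a click and a coordinate sequence for a slide.
fun handleTouch(event: MotionEvent, onUserTouchEvent: (List<PointF>) -> Unit): Boolean {
    when (event.actionMasked) {
        MotionEvent.ACTION_DOWN -> {
            touchPoints.clear()
            touchPoints.add(PointF(event.x, event.y))
        }
        MotionEvent.ACTION_MOVE -> touchPoints.add(PointF(event.x, event.y))
        MotionEvent.ACTION_UP -> onUserTouchEvent(touchPoints.toList())  // 1 point = click, many = slide
    }
    return true
}
```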
  • Virtual touch event: includes the relative coordinate point or relative coordinate sequence of the virtual touch operation.
  • The relative coordinate point (or sequence) is obtained by performing coordinate transformation on the touch point coordinate point (or sequence) of the first electronic device according to the coordinate positions of the four corner points of the screen frame of the second electronic device in the two-dimensional projection interface of the first electronic device; the specific implementation of the coordinate transformation will be introduced later and is not described in detail here.
  • Two-dimensional projection interface: an interface used to indicate the two-dimensional display effect of the second electronic device on the display interface of the first electronic device.
  • the two-dimensional projection interface is an interface obtained by performing two-dimensional projection according to the positional relationship of the second electronic device in the three-dimensional space.
  • Terminal: in the embodiments of the present application, the terms "terminal", "first electronic device", "device", "electronic device", "mobile phone", and the like may be used interchangeably, that is, they all refer to the various devices that can be used to implement the embodiments of the present application.
  • In the following, the first electronic device is a smartphone as an example, but this application is not limited to smartphones: any electronic device that can realize touch operations can be used as the first electronic device in this application.
  • The second electronic device is a smart TV as an example, but this application is not limited to smart TVs: any electronic device that needs to be remotely controlled can be used as the second electronic device in this application.
  • For example, the second electronic device may also be a smart projector or the like.
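  • The user touch event and the virtual touch event defined above can be pictured with the following small data-structure sketch. This is an illustrative sketch only and is not part of the patent text; all class and field names are invented for clarity.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # an (x, y) coordinate in pixels


@dataclass
class UserTouchEvent:
    """Touch operation received on the first electronic device (e.g. the phone)."""
    kind: str            # "click", "slide", "long_press", "double_click", ...
    points: List[Point]  # one point for a click, a coordinate sequence for a slide
    timestamp_ms: int = 0


@dataclass
class VirtualTouchEvent:
    """Event sent to the second electronic device (e.g. the smart TV).

    The coordinates are relative coordinates, i.e. already transformed into the
    coordinate system of the second device's own display interface.
    """
    kind: str
    relative_points: List[Point]
```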
  • Scenario 1: Refer to the schematic diagram of the interface processing effect shown in FIG. 4a.
  • The display interface of the second electronic device shown in (1-1) in FIG. 4a (a smart TV is used as an example in FIG. 4a) displays the program home page, which is a selection interface under the TV drama category.
  • the user can open the smart remote control App in the manner shown in FIG. 1a and FIG. 1b.
  • The smart remote control App triggers the mobile phone camera to turn on, and the content displayed on the current smart TV display interface is captured by the mobile phone camera and shown as the real-time image on the mobile phone interface, that is, as the content displayed on the display interface of the first electronic device (a mobile phone is used as an example in FIG. 4a).
  • the user performs a click operation on "TV drama 2"
  • the mobile phone determines the coordinates of the touch point of the click position in the display interface of the mobile phone.
  • The coordinates of the touch point are coordinate-transformed to obtain the corresponding relative coordinates on the smart TV, and a virtual touch event including the obtained relative coordinates on the smart TV is generated and sent to the smart TV.
  • After the smart TV receives the virtual touch event and parses it to obtain the relative coordinates, it determines that the position corresponding to the relative coordinates on the smart TV display interface is the position of the poster of "TV drama 2". Therefore, it can be determined that the user touched to play TV drama 2, and the smart TV plays TV drama 2 in response to the virtual touch event.
  • The display interface is changed to the interface displayed by the smart TV in (2-1) in FIG. 4a, which is the start screen of TV drama 2.
  • Scenario 2: Refer to the schematic diagram of the interface processing effect shown in FIG. 4b.
  • (1-1) in FIG. 4b is the display interface of the smart TV
  • (1-2) in FIG. 4b is the display interface of the mobile phone.
  • the user slides the screen from bottom to top in the right half area of the display interface of the mobile phone, and after receiving the user's sliding operation through the touch panel, the mobile phone obtains the coordinates of multiple touch points corresponding to the user's sliding operation.
  • The coordinates of the plurality of touch points include the sliding start coordinates and the sliding end coordinates, which are respectively coordinate-transformed to obtain the corresponding relative sliding start coordinates and relative sliding end coordinates on the smart TV.
  • a virtual touch event including the obtained relative sliding start coordinates and relative sliding end coordinates on the corresponding smart TV is generated and sent to the smart TV.
  • After the smart TV receives the virtual touch event, it parses it to obtain the corresponding relative sliding start coordinates and relative sliding end coordinates on the smart TV, and then calculates the sliding distance between the relative sliding start coordinates and the relative sliding end coordinates. The corresponding volume is then determined according to the pre-stored correspondence between different sliding distances and volume levels (a sketch of this distance-to-volume mapping is given at the end of this scenario).
  • If the smart TV determines that the relative sliding start coordinates and relative sliding end coordinates on the smart TV are two coordinates obtained by sliding from the bottom to the top of the smart TV display screen, and that they are located in the right half of the display interface, it can determine that the virtual touch event is to increase the volume, and the smart TV increases the current volume by the determined amount by invoking the corresponding volume adjustment control. Conversely, if the smart TV determines that the relative sliding start coordinates and relative sliding end coordinates are two coordinates obtained by sliding from top to bottom on the smart TV display screen, it can determine that the virtual touch event is to decrease the volume, and the smart TV lowers the current volume by the determined amount by calling the corresponding volume adjustment control.
  • the smart TV can also display the volume adjustment status on the smart TV display interface.
  • it may be the volume pop-up window shown in (1-1) in FIG. 4b, and the volume pop-up window may be a volume adjustment display bar.
  • The display interface of the mobile phone also displays the volume adjustment display bar synchronously, so that the user can adjust the current volume through the volume adjustment display bar displayed on the mobile phone; the mobile phone can also send the distance and direction by which the user drags the volume adjustment display bar to the smart TV through a virtual touch event.
  • The smart TV parses the distance information and the dragging direction included in the virtual touch event, determines the volume level corresponding to the parsed distance information according to the correspondence between different distances and different volume levels, and then determines, according to the determined volume level and the dragging direction, whether to increase or decrease the volume of the currently playing program.
  • The amount by which the volume is increased or decreased may be the determined volume level.
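  • The volume-adjustment logic of this scenario can be sketched as follows, as it might run on the second electronic device after the virtual touch event has been parsed. The distance-to-volume table and the axis convention are illustrative assumptions, not values taken from the patent.

```python
def volume_delta_from_slide(start, end,
                            distance_to_volume=((50, 1), (150, 3), (300, 6))):
    """Map a vertical slide (relative coordinates on the TV) to a signed volume change.

    `distance_to_volume` stands in for the pre-stored correspondence between
    sliding distances and volume levels mentioned in this scenario.
    """
    dy = end[1] - start[1]   # y axis established as in FIG. 6a/6b (bottom-to-top is positive)
    distance = abs(dy)
    step = 0
    for threshold, level in distance_to_volume:
        if distance >= threshold:
            step = level
    # a bottom-to-top slide in the right half of the interface means "volume up"
    return step if dy > 0 else -step


# Example: a 200-pixel upward slide raises the volume by 3 levels.
print(volume_delta_from_slide((900.0, 100.0), (900.0, 300.0)))  # -> 3
```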
  • Scenario 3: Following the scenario described in Scenario 1, refer to the schematic diagram of the interface processing effect shown in FIG. 4c.
  • the smart TV displays the play screen of TV drama 2.
  • With the same processing principle between the mobile phone and the smart TV as in Scenario 1, the smart TV can also determine that the user's click operation on the mobile phone is intended to call up the playback information controls, and the smart TV displays the display interface shown in (2-1) in FIG. 4c after responding to the virtual touch operation, that is, an interface showing a return control and a playback progress bar control.
  • The user can then continue to click the return control in the display interface of the mobile phone as shown in (2-2) in FIG. 4c.
  • When the smart TV, after parsing the virtual touch event, determines that the click operation on the mobile phone was performed on the return control, it calls back the corresponding control so that the smart TV executes the operation of exiting the playback screen of TV drama 2.
  • The smart TV display interface then changes to the program home page display interface, that is, the display interface of the smart TV shown in (1-1) in FIG. 4a.
  • The user can also press and drag the playback progress bar control displayed in the display interface of the mobile phone; the TV then controls the playback progress of the current program according to the user's dragging direction and dragging distance, and displays the playback picture corresponding to the position to which the playback progress bar is adjusted.
  • In this way, virtual touch events corresponding to operations that would otherwise require touch control on the second electronic device's own display screen can be realized by the user performing operations on the display interface of the first electronic device, thereby facilitating remote control of the second electronic device and improving the user experience.
  • In addition to the click operation and the sliding operation, the touch operation may also include: a click-and-long-press operation, which can be used, for example, to realize double-speed playback of the currently playing program of the second electronic device; a double-click operation, which can be used, for example, to realize pause/resume operations on the currently playing program of the second electronic device; and multi-finger operation events, which can be used, for example, to realize zoom-in/zoom-out operations on the display interface of the second electronic device, among other operations that can be implemented.
  • Referring to FIG. 5a, a schematic processing flow of the remote control method provided by an embodiment of the present application includes the following steps.
  • S501: The first electronic device acquires, through a camera device, real-time image information including the display interface area of the second electronic device.
  • The processor in the first electronic device can control the camera device of the first electronic device to turn on after detecting that the smart remote control App installed in the first electronic device is triggered by the user, and the user can operate the camera device so that the first electronic device captures a picture including the display interface of the second electronic device.
  • The processor first needs to determine that the first electronic device is in the scene of the user interface for generating virtual touch events.
  • A calling instruction is then sent to the camera device, so that the camera device of the first electronic device is turned on and in a working state after receiving the calling instruction.
  • A possible implementation manner of determining that the user interface is used to generate a virtual touch event is as follows: if the processor in the first electronic device detects, through the touch sensor 280K, a click operation performed by the user on the touch panel on a specified application icon (the specified application is an App used to realize the remote control function, for example, the Smart Remote Control, Smart Life, or other Apps shown in FIG. 1a), it determines that the first electronic device is in the scene of the user interface for generating virtual touch events, then triggers the camera device to turn on, and determines that the function of the real-time image information captured by the camera device is to generate virtual touch events.
  • the click operation can also be implemented as the user clicking on the remote control control in the pull-down display interface of the notification bar of the first electronic device (for example, it may be the remote control icon control in the smart TV and smart projection included in FIG. 1b). Therefore, in order to provide various implementation manners for determining the virtual touch event generation scene, various display interfaces of the first electronic device may be preset with triggering entries for triggering the scene in the user interface for generating the virtual touch event , so as to facilitate the user to perform remote control operation.
  • The processor in the first electronic device may also determine that it is in a user interface scenario for generating virtual touch events after receiving, through the microphone 270C, a user's voice control instruction for starting an application that implements the remote control function. For example, after the processor receives the voice control instruction "turn on the smart remote control" sent by the user through the microphone, it triggers the display of the editing interface of the smart remote control App, so that the first electronic device is in the scene of the user interface for generating virtual touch events.
  • The user points the camera of the first electronic device at the second electronic device to shoot, so that the real-time image information captured by the camera and received by the first electronic device includes the display interface of the second electronic device, and the real-time image information is synchronously displayed on the interface of the first electronic device in real time.
  • (1) in FIG. 5b shows the capture area range of the mobile phone camera.
  • (1-2) in FIG. 5b shows the display interface for the area range captured by the smartphone; the image displayed on the interface of the smartphone is the live image information, which includes the front appearance of the smart TV and the display interface of the smart TV.
  • In order to avoid poor touch-following caused by a long delay (that is, for the user's touch operation, the processing time before the second electronic device performs the corresponding operation is long, so that the user can perceive a delay in the operation response of the second electronic device), the first electronic device directly transmits the real-time image information captured by the camera device to the display driver, so that the real-time image information can be displayed synchronously on the user interface at the application layer through the display driver, that is, displayed as a real-time image.
  • S502: The first electronic device identifies the display interface area belonging to the second electronic device from the real-time image information.
  • During implementation, the first electronic device may first use object detection and object tracking technologies to analyze each frame of the captured real-time image information, so that the screen frame of the second electronic device can be identified and tracked from each frame of the real-time image information.
  • The screen frame is used to determine the display interface area belonging to the second electronic device, that is, the area inside the screen frame is the display interface area of the second electronic device. The first electronic device then keeps the touch operations that belong to the screen frame area and filters out the touch operations that do not belong to the screen frame area.
  • In addition, an anti-shake function can be implemented on the first electronic device: each frame of the captured real-time image information is analyzed and identified during implementation, and, based on the identified screen frame, the content of the display area belonging to the screen frame can be locked, thereby avoiding blurring of the display area of the second electronic device in the real-time image information caused by shaking of the first electronic device.
  • After the processor of the first electronic device receives the real-time image information captured by the camera device and displays it on the display screen 294 as a real-time image, it can intelligently adjust the display ratio according to the pre-configured display range ratio, based on the screen frame of the second electronic device identified in the displayed real-time image.
  • the preconfigured display range ratio may be, for example, that the area occupied by the display interface of the second electronic device is two-thirds of the area of the display screen of the first electronic device.
  • Assume that the pre-configured display range ratio is two-thirds.
  • If the area occupied by the display interface of the second electronic device is smaller than this ratio, the range of the displayed real-time image can be enlarged based on the screen frame of the second electronic device, so that its display ratio in the display screen of the first electronic device reaches two-thirds.
  • Conversely, if the display interface of the second electronic device cannot be displayed completely, the processor can use the wide-angle lens of the camera to capture a wider range, so that the real-time image of the second electronic device can be displayed completely on the screen of the first electronic device, and the display interface of the second electronic device can further be displayed at the pre-configured display ratio. A sketch of this display-ratio calculation is given below.
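  • The display-ratio adjustment described above can be estimated with the following sketch. The two-thirds target and the bounding-box approximation are assumptions taken from the example in the text; this is not the actual implementation.

```python
def zoom_factor_for_target_ratio(frame_corners, screen_w, screen_h, target_ratio=2 / 3):
    """Estimate how much the live image should be zoomed so that the area enclosed by
    the second device's screen frame occupies `target_ratio` of the phone screen.

    `frame_corners` is the identified screen frame as four (x, y) points in the
    phone's display interface.
    """
    xs = [p[0] for p in frame_corners]
    ys = [p[1] for p in frame_corners]
    frame_area = (max(xs) - min(xs)) * (max(ys) - min(ys))  # bounding-box approximation
    current_ratio = frame_area / (screen_w * screen_h)
    if current_ratio <= 0:
        return 1.0
    # the displayed area scales with the square of a linear zoom factor
    return (target_ratio / current_ratio) ** 0.5
```

  • A factor greater than 1 means zooming in on the identified frame; a factor smaller than 1 suggests capturing a wider range (for example with the wide-angle lens), as described above.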
  • the implementation manners of the first electronic device identifying the screen frame of the second electronic device may include the following:
  • Manner 1: The target detection module in the application framework layer in the first electronic device identifies the screen frame of the second electronic device according to a pre-trained target detection model.
  • The training of the target detection model for the first electronic device may be implemented as follows: a large number of frame images of real-time image information are used as training samples, and the screen frame of the second electronic device is used as the training target, so that the model learns to locate the screen frame of the second electronic device; finally, the target detection model that outputs the screen frame of the second electronic device is obtained.
  • During identification, the target detection module uses the real-time image information as the input of the pre-trained target detection model, uses the target detection model to identify the screen frame of the second electronic device, and finally outputs the identified screen frame; the display interface area of the second electronic device is then determined according to the screen frame of the second electronic device.
  • Manner 2: The screen frame of the second electronic device is identified from the real-time image information by means of anchor points.
  • For example, if the second electronic device is a smart projector, since there is no obvious screen frame on the projection interface of the smart projector, it may be difficult to accurately identify the screen frame through the pre-trained target detection model.
  • In this case, the first electronic device may send an instruction to generate anchor points to the second electronic device through interaction with the second electronic device, so that the second electronic device generates, on the four corners of its display interface (or projection interface), anchor points that facilitate detection by the first electronic device.
  • The screen border of the second electronic device can then be determined according to the A, B, C, and D anchor points, as illustrated in the sketch below.
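  • The anchor-point variant can be pictured with the sketch below: the first device asks the second device to draw anchor points on its four corners, detects them in the live image, and orders them into a screen-frame quadrilateral. The two callables passed in are placeholders for the real communication and image-detection steps, not actual APIs.

```python
def identify_display_area_via_anchors(live_frame, send_instruction, detect_anchor_points):
    """Placeholder flow for Manner 2.

    `send_instruction` stands in for sending the anchor point generation instruction
    to the second device; `detect_anchor_points` stands in for finding the four
    anchor points A, B, C, D in `live_frame`.
    """
    send_instruction("generate_anchor_points")
    points = detect_anchor_points(live_frame)  # four (x, y) points
    return order_frame_corners(points)


def order_frame_corners(points):
    """Order four anchor points into (top-left, top-right, bottom-right, bottom-left)."""
    if len(points) != 4:
        raise ValueError("expected exactly four anchor points")
    pts = sorted(points, key=lambda p: p[1])      # top pair first (smaller y)
    top = sorted(pts[:2], key=lambda p: p[0])     # left to right
    bottom = sorted(pts[2:], key=lambda p: p[0])
    return top[0], top[1], bottom[1], bottom[0]
```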
  • S503: The first electronic device receives the user's touch operation on the touch panel, performs screening to obtain a first touch coordinate point (or sequence), and obtains a user touch event.
  • After the processor in the first electronic device receives the user's touch operation on the touch panel through the touch sensor 280K, it acquires the touch coordinate point (or sequence) corresponding to the touch operation.
  • If the touch operation is a click operation, the processor processes it into a user touch event according to the touch point coordinates, the timestamp, and other information of the click operation.
  • If the touch operation is a sliding operation, multiple touch coordinate points of the sliding operation are acquired, that is, a touch coordinate sequence; the touch coordinate sequence includes at least the touch sliding start coordinates and the touch sliding end coordinates. The sliding distance and sliding direction of the sliding operation are further determined, and the touch sliding start coordinates, touch sliding end coordinates, sliding distance, sliding direction, timestamp, and other information are processed into the user touch event.
  • If it is a click-and-long-press operation, the processor processes it into a user touch event according to the touch point coordinates of the click-and-long-press operation, the long-press duration, and other information; if it is a multi-finger operation event, the processor processes it into a user touch event according to the touch point coordinates, timestamps, and other information of each finger. Other possible user operations are processed into user touch events based on the same principle, which will not be repeated in this application.
  • Then, the first touch coordinate points (or sequences) belonging to the screen frame area are screened out of the user's touch operation, so as to obtain the virtual remote control operation performed by the user, on the first electronic device, in the display area of the second electronic device, and a virtual touch event is generated according to the first touch coordinate point (or sequence). The second touch coordinate points (or sequences) that do not belong to the screen frame area are ignored by the first electronic device, and no virtual touch event is generated according to the second touch coordinate point (or sequence). A sketch of this screening step is given below.
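  • The screening step can be sketched as a simple point-in-quadrilateral filter: touch points that fall inside the identified screen frame are kept as first touch coordinate points, everything else is ignored. The ray-casting test below is a generic geometric check, not code from the patent.

```python
def point_in_quad(point, quad):
    """Ray-casting point-in-polygon test for a quadrilateral of four (x, y) corners."""
    x, y = point
    inside = False
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside


def screen_touch_points(touch_points, frame_quad):
    """Keep only the touch coordinates inside the second device's screen frame area."""
    return [p for p in touch_points if point_in_quad(p, frame_quad)]
```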
  • S504: The first electronic device converts the first touch coordinate point (or sequence) into a relative coordinate point (or sequence) of the second electronic device, and generates a virtual touch event.
  • the display interface of the second electronic device included in the real-time image displayed on the interface of the first electronic device is essentially a two-dimensional projection of the second electronic device in the three-dimensional space captured by the camera device.
  • Because the shooting angle of the first electronic device is arbitrary, the two-dimensional projection of the second electronic device displayed on the interface of the first electronic device may be an irregular quadrilateral, for example, as shown in the schematic diagrams of the screen frame of the second electronic device in (1) in FIG. 6a and (1) in FIG. 6b. It can be understood that the touch coordinates of the user's touch operation received on the first electronic device therefore cannot be in a one-to-one correspondence with the coordinates on the second electronic device. Therefore, in order to ensure the accuracy of the virtual touch event generated by the first electronic device, the first electronic device can convert the first touch coordinate point (or sequence) into the corresponding touch relative coordinate point (or sequence) on the second electronic device through the principle of augmented reality technology, and the first electronic device generates the virtual touch event according to the touch relative coordinate point (or sequence), so that after the second electronic device receives and parses the virtual touch event, it can obtain the virtual touch operation on the second electronic device according to the relative coordinate points (or sequence), that is, obtain accurate touch coordinates on the second electronic device.
  • Assume that the coordinates of the four corners of the screen frame of the second electronic device in the display interface of the first electronic device are respectively (x1, y1), (x2, y2), (x3, y3), and (x4, y4), and that the touch coordinate point of the user touch event (assuming a click operation) on the display interface of the first electronic device is represented by (x, y) (if the user touch event is a sliding operation, it can be represented by a touch coordinate sequence).
  • Scenario 1: In the real-time image information captured by the first electronic device, the screen frame of the second electronic device is displayed on the two-dimensional projection interface with a set of vertical frames that are parallel to each other.
  • As shown in (1) in FIG. 6a, the line L1 is the left vertical border of the screen border of the second electronic device, the line L3 is the right vertical border, and the lines L1 and L3 are displayed parallel to each other in the display interface of the first electronic device.
  • the first electronic device determines x' and y' respectively.
  • the value of the virtual touch event coordinate x' depends on the relative distance between (x', y') on the second electronic device and any vertical frame in the three-dimensional space.
  • Since the group of vertical frames of the second electronic device, namely the lines L1 and L3, are displayed parallel to each other, the relative distance relationship between (x, y) and any vertical frame in the display interface of the first electronic device (that is, in the two-dimensional projection interface of the second electronic device) can be used, by analogy, to obtain the relative distance between (x', y') and the vertical frame at the same position in the three-dimensional space.
  • A possible implementation is that, taking the relative distance between the touch point and the left vertical frame as an example, the virtual touch event coordinate x' on the second electronic device can be obtained according to the following Formula 1,
  • where w in the formula is the width of the screen frame of the second electronic device,
  • x in the formula is the abscissa in the coordinates of the user touch event on the first electronic device,
  • x1 is the abscissa of the left screen border of the second electronic device in the display interface of the first electronic device,
  • and x2 is the abscissa of the right screen border.
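  • (The image of Formula 1 is not reproduced in this text. Based on the proportional-mapping description and the variable definitions above, a plausible reconstruction of Formula 1 is: x' = w × (x − x1) / (x2 − x1).)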
  • Here, the width w (and, below, the height h) of the screen frame of the second electronic device can be obtained from the size information of the second electronic device. The size information of the second electronic device may be acquired when the first electronic device and the second electronic device establish a connection for the first time.
  • It may be requested by the first electronic device from the second electronic device, or the second electronic device may actively send it to the first electronic device.
  • The first electronic device may store the size information of the second electronic device locally for subsequent use.
  • the first electronic device may also acquire other related information of the second electronic device, such as model information of the second electronic device.
  • the first electronic device may also determine the size information of the second electronic device according to the acquired model information of the second electronic device.
  • the first electronic device locally stores the corresponding relationship between model information and size information of the second electronic device; or the first electronic device can perform a local network query to determine the size information of the second electronic device.
  • the value of the virtual touch event coordinate y' depends on the relative distance between (x', y') on the second electronic device and any horizontal frame in the three-dimensional space.
  • However, the group of horizontal borders in the screen borders displayed on the display interface of the first electronic device in Scenario 1 are not parallel to each other.
  • For example, the lines L2 and L4 in (1) of FIG. 6a are not parallel to each other.
  • Therefore, the relative distance relationship between (x, y) and any horizontal frame in the display interface (i.e., the two-dimensional projection interface of the second electronic device) cannot be used, by analogy, to obtain the relative distance between (x', y') and the horizontal frame at the same position in the three-dimensional space, and y' cannot be determined according to the above embodiment of determining x'.
  • Although the group of horizontal borders of the second electronic device are displayed as non-parallel to each other on the two-dimensional projection interface, the group of screen borders of the second electronic device in the horizontal direction are essentially parallel in the three-dimensional space.
  • Therefore, the point (x, y) on the two-dimensional projection plane and the lines L2 (the upper horizontal frame in the two-dimensional projection interface) and L4 (the lower screen frame in the two-dimensional projection interface) can be used to inversely deduce the distance relationship between (x', y') and the screen frame of the second electronic device in the three-dimensional space.
  • The three-dimensional projection principle can be used to extend the lines L2 and L4 to obtain the intersection (x5, y5) of the two horizontal frames in the two-dimensional projection plane, as shown in (3) in FIG. 6a; then, the intersection point (x5, y5) and (x, y) are connected to obtain the intersection point (x6, y6) of the connecting line and the line L1 (the left vertical frame in the two-dimensional projection interface).
  • The proportion, relative to L1, of either sub-line segment after L1 is divided into the sub-line segments L1a and L1b using the intersection point (x6, y6) as the dividing point in the two-dimensional projection interface can then be used, by analogy, to obtain the relative distance between (x', y') in the three-dimensional space and the horizontal frame at the same position.
  • A possible implementation is that the virtual touch event coordinate y' on the second electronic device can be obtained according to the following Formula 2,
  • where h in the formula is the height of the screen frame of the second electronic device,
  • y6 in the formula is the ordinate of the intersection point obtained on the left screen frame L1 on the first electronic device according to the above-mentioned embodiment,
  • y3 is the ordinate of the intersection of the left screen frame L1 and the lower screen frame L4, and y1 is the ordinate of the intersection of the left screen border L1 and the upper screen border L2.
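  • (The image of Formula 2 is not reproduced in this text. Based on the sub-segment proportion described and the variable definitions above, a plausible reconstruction of Formula 2 is: y' = h × (y6 − y1) / (y3 − y1).)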
  • Scenario 2: In the real-time image information captured by the first electronic device, the screen frame of the second electronic device is displayed on the two-dimensional projection interface with a set of horizontal frames that are parallel to each other.
  • As shown in (1) in FIG. 6b, the line L2 is the upper screen frame of the second electronic device, the line L4 is the lower screen frame, and the lines L2 and L4 are displayed parallel to each other in the display interface of the first electronic device.
  • the first electronic device determines x' and y' respectively.
  • The value of the virtual touch event coordinate x' depends on the relative distance between (x', y') on the second electronic device and any vertical frame in the three-dimensional space. Since the group of vertical borders in the screen borders displayed on the display interface of the first electronic device in Scenario 2 are not parallel to each other, for example, the lines L1 and L3 shown in FIG. 6b, x' in Scenario 2 cannot be determined according to the implementation of determining x' in Scenario 1; instead, x' in Scenario 2 can be determined based on the same principle as the implementation of determining y' in Scenario 1.
  • The three-dimensional projection principle can be used to extend the lines L1 and L3 so as to obtain the intersection (x5', y5') of the two vertical frames in the two-dimensional projection plane, as shown in (3) in FIG. 6b; then, the intersection point (x5', y5') and (x, y) are connected to obtain the intersection point (x6', y6') of the connection line and the line L2 (the upper horizontal frame in the two-dimensional projection interface).
  • The proportion, relative to L2, of either sub-line segment after L2 is divided into the sub-line segments L2a and L2b using the intersection point (x6', y6') as the dividing point in the two-dimensional projection interface can then be used, by analogy, to obtain the relative distance between the point (x', y') in the three-dimensional space and the vertical frame at the same position.
  • A possible implementation is that the virtual touch event coordinate x' on the second electronic device can be obtained according to the following Formula 3,
  • where w in the formula is the width of the screen frame of the second electronic device,
  • x6' in the formula is the abscissa of the intersection point obtained on the upper screen frame L2 on the first electronic device according to the above embodiment,
  • x2 is the abscissa of the intersection of the upper screen frame L2 and the right screen frame L3,
  • and x1 is the abscissa of the intersection of the left screen border L1 and the upper screen border L2.
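  • (The image of Formula 3 is not reproduced in this text. Based on the sub-segment proportion described and the variable definitions above, a plausible reconstruction of Formula 3 is: x' = w × (x6' − x1) / (x2 − x1).)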
  • the value of the virtual touch event coordinate y' depends on the relative distance between (x', y') on the second electronic device and any horizontal frame in the three-dimensional space.
  • Since the lines L2 and L4 are displayed parallel to each other in Scenario 2, y' in Scenario 2 is determined based on the same principle as the implementation of determining x' in Scenario 1.
  • a possible implementation is that, taking the relative distance between the touch point and the upper horizontal frame as an example, the virtual touch event coordinate y' on the second electronic device can be obtained according to the following formula 4:
  • where h in the formula is the height of the screen frame of the second electronic device,
  • y in the formula is the ordinate in the coordinates of the user touch event on the first electronic device,
  • y1 is the ordinate of the upper screen frame of the second electronic device in the display interface of the first electronic device,
  • and y2 is the ordinate of the lower screen frame.
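  • (The image of Formula 4 is not reproduced in this text. Based on the proportional-mapping description and the variable definitions above, a plausible reconstruction of Formula 4 is: y' = h × (y − y1) / (y2 − y1).)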
  • Scenario 3: In the real-time image information captured by the first electronic device, the screen frame of the second electronic device is displayed on the two-dimensional projection interface with a set of horizontal frames and a set of vertical frames that are both non-parallel.
  • For example, the set of horizontal borders is shown as the lines L2 and L4 in FIG. 6a, and the set of vertical borders is shown as the lines L1 and L3 in FIG. 6b. Since a group of vertical borders or a group of horizontal borders displayed as non-parallel to each other is handled as described above, the virtual touch event coordinate x' on the second electronic device in Scenario 3 is determined according to the same principle as the implementation of determining x' in Scenario 2, and the virtual touch event coordinate y' on the second electronic device in Scenario 3 is determined according to the same principle as the implementation of determining y' in Scenario 1; the specific implementations are not repeated here.
  • Scenario 4: If, in the real-time image information captured by the first electronic device, the screen frame of the second electronic device is displayed on the two-dimensional projection interface with a set of horizontal frames and a set of vertical frames that are both parallel, then x' can be determined according to the same principle as determining x' in Scenario 1 and y' according to the same principle as determining y' in Scenario 2, and the details are not repeated here. A combined sketch covering the four scenarios is given below.
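  • The four scenarios above can be combined into a single routine. The sketch below is an illustrative implementation of the transformation as described in these embodiments: a direct proportional mapping when a pair of borders is displayed parallel, and the vanishing-point construction otherwise. The corner naming, the parallel-detection tolerance, and the reconstructed formulas are assumptions, not the literal formulas from the patent images.

```python
from typing import Tuple

Point = Tuple[float, float]


def _intersect(p1: Point, p2: Point, p3: Point, p4: Point) -> Point:
    """Intersection of the infinite lines p1-p2 and p3-p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        raise ValueError("lines are parallel")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / denom,
            (a * (y3 - y4) - (y1 - y2) * b) / denom)


def _parallel(p1: Point, p2: Point, p3: Point, p4: Point, tol: float = 1e-3) -> bool:
    """True if the segments p1-p2 and p3-p4 are (nearly) parallel in the projection."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    cross = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    scale = max(abs(x2 - x1) + abs(y2 - y1), 1.0) * max(abs(x4 - x3) + abs(y4 - y3), 1.0)
    return abs(cross) < tol * scale


def to_relative_coords(touch: Point, tl: Point, tr: Point, br: Point, bl: Point,
                       w: float, h: float) -> Point:
    """Convert a touch point on the phone image into relative coordinates (x', y').

    tl, tr, br, bl are the projected corners of the second device's screen frame
    (top-left, top-right, bottom-right, bottom-left) in the phone's display
    interface; w and h are the width and height of the second device's screen.
    """
    x, y = touch

    # x': use the vertical borders L1 = tl-bl (left) and L3 = tr-br (right).
    if _parallel(tl, bl, tr, br):
        x_rel = w * (x - tl[0]) / (tr[0] - tl[0])         # Formula 1 style
    else:
        vp = _intersect(tl, bl, tr, br)                   # vanishing point of L1 and L3
        hit = _intersect(vp, (x, y), tl, tr)              # intersection with upper border L2
        x_rel = w * (hit[0] - tl[0]) / (tr[0] - tl[0])    # Formula 3 style

    # y': use the horizontal borders L2 = tl-tr (upper) and L4 = bl-br (lower).
    if _parallel(tl, tr, bl, br):
        y_rel = h * (y - tl[1]) / (bl[1] - tl[1])         # Formula 4 style
    else:
        vp = _intersect(tl, tr, bl, br)                   # vanishing point of L2 and L4
        hit = _intersect(vp, (x, y), tl, bl)              # intersection with left border L1
        y_rel = h * (hit[1] - tl[1]) / (bl[1] - tl[1])    # Formula 2 style

    return x_rel, y_rel
```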
  • S505: The first electronic device establishes a communication connection with the second electronic device.
  • During implementation, the first electronic device may establish the communication connection with the second electronic device in any of the following manners.
  • the first electronic device and the second electronic device may be connected to the same local area network to realize the establishment of a communication connection.
  • a wireless communication channel can be established through the Wi-Fi P2P technology, and the wireless communication channel has the characteristics of low latency.
  • the first electronic device may establish a short-range communication connection with the second electronic device.
  • the first electronic device may establish a communication connection with the second electronic device before it is determined to be in the virtual touch event generation scene.
  • For example, the first electronic device and the second electronic device are always connected to the same local area network, that is, the communication connection is always maintained.
  • Alternatively, when the first electronic device introduced in the foregoing embodiment needs to send an anchor point generation instruction to the second electronic device, a communication connection between the first electronic device and the second electronic device is established. It can be understood that, whenever the first electronic device needs to interact with the second electronic device, a communication connection between the first electronic device and the second electronic device can be established.
  • S506: The first electronic device sends the virtual touch event to the second electronic device, where the virtual touch event carries the relative coordinate point (or sequence).
  • S507: The second electronic device parses the virtual touch event, and performs a corresponding operation according to the relative coordinate point (or sequence).
  • After receiving the virtual touch event sent by the first electronic device, the second electronic device performs corresponding processing through its own operating system. Since the second electronic device is also an electronic device, the hardware architecture of the second electronic device may also be as shown in FIG. 2, and the software architecture may be as shown in FIG. 3.
  • the software programs and/or modules corresponding to the software architecture in the second electronic device may be stored in the memory 140, and the processor 130 may execute the software programs and applications stored in the memory 140 to execute the remote control method provided by the embodiments of the present application. process.
  • During implementation, the second electronic device determines the position of the relative coordinate point (or sequence) on the display interface of the second electronic device and identifies the control involved at the touch position; it then operates the corresponding control, so that the second electronic device performs the corresponding operation according to the virtual touch event.
  • For example, if the user touch event is that the user performs a bottom-up sliding operation on the right half of the screen frame of the second electronic device displayed in the display interface of the first electronic device, then after the second electronic device parses the corresponding coordinate sequence from the received virtual touch event, it determines that the coordinate sequence includes the sliding start coordinates and the sliding end coordinates, that the area where the sliding start coordinates and the sliding end coordinates are located is the right half area of the display interface, and that the sliding direction is the positive direction of the y-axis (assuming that the coordinate axes are established according to the landscape display interface, such as the way the coordinate axes are established in FIG. 6a and FIG. 6b); it then determines that the touch event sent by the first electronic device is to increase the volume of the currently playing program.
  • After determining that the control to be called corresponding to the virtual touch event is the volume control, the second electronic device performs a callback operation for the virtual touch event through the volume control, so as to realize the setting of "increase the volume". A small dispatch sketch of this handling is given below.
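  • On the receiving side, the handling in S507 can be pictured as a small dispatcher: parse the relative coordinates, decide which control is involved, and call it back. The region test, the control callback, and the function names are illustrative assumptions, not the actual smart-TV implementation; `volume_delta_from_slide` refers to the sketch given in Scenario 2 above.

```python
def handle_virtual_touch_event(event, screen_w, volume_control):
    """Toy dispatcher for a slide-type virtual touch event on the second device.

    `event.relative_points` holds the slide start and end point in the TV's own
    coordinate system; `volume_control` is a callback taking a signed volume step.
    """
    if event.kind != "slide" or len(event.relative_points) < 2:
        return
    start, end = event.relative_points[0], event.relative_points[-1]
    in_right_half = start[0] > screen_w / 2 and end[0] > screen_w / 2
    if not in_right_half:
        return
    delta = volume_delta_from_slide(start, end)  # distance-to-volume mapping (Scenario 2 sketch)
    volume_control(delta)                        # callback into the TV's volume adjustment control
```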
  • the present application implements the generation of a virtual touch event on the second electronic device through the first electronic device.
  • Moreover, after the first electronic device establishes a communication connection with the second electronic device, the transmitted data only includes the virtual touch events obtained according to the touch coordinate points (or sequences) of the user touch events. The amount of data to be transmitted is therefore small, so the embodiments of the present application feature low delay; the control accuracy of the first electronic device over the second electronic device can thus be better improved, and a variety of remote control scenarios required by the second electronic device can be satisfied.
  • the methods provided by the embodiments of the present application are introduced from the perspective of an electronic device as an execution subject.
  • the first electronic device may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the embodiments of the present application provide a remote control apparatus, which is applied to a first electronic device and used to implement a remote control method provided by the embodiments of the present application.
  • the apparatus 700 includes: a transceiver unit 701 and a processing unit 702 .
  • The transceiver unit 701 is configured to receive a first operation. The processing unit 702 is configured to: activate the camera device in response to receiving the first operation; acquire a first image by using the camera device, where the first image includes the display interface area of the second electronic device, and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; and display a first user interface, where the first user interface includes the first image. The transceiver unit 701 is configured to receive a touch operation for the first user interface. The processing unit 702 is configured to: in response to receiving the touch operation for the first user interface, obtain the touch point coordinates corresponding to the touch operation for the first user interface, and generate a virtual touch event based on the touch point coordinates, where the virtual touch event includes the relative coordinates in the current display interface of the second electronic device; and send the virtual touch event to the second electronic device through the transceiver unit 701, so that after receiving the virtual touch event, the second electronic device performs, in response to the virtual touch event, the operation corresponding to the relative coordinates in the current display interface of the second electronic device.
  • In a possible implementation, when the transceiver unit 701 is configured to receive the first operation, it is specifically configured to: receive, when the first electronic device displays a first application icon, an operation for the first application icon; or receive a first voice operation; or receive a first gesture operation.
  • In a possible implementation, before the transceiver unit 701 receives the touch operation for the first user interface, the processing unit 702 is further configured to determine that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
  • In a possible implementation, when the processing unit 702 is configured to determine the area inside the screen frame of the second electronic device, it is specifically configured to: send an anchor point generation instruction to the second electronic device through the transceiver unit 701, so that after receiving the anchor point generation instruction, the second electronic device generates anchor points on its display interface in response to the instruction; and determine the area inside the screen frame of the second electronic device according to the information of the anchor points in the acquired first image.
  • In a possible implementation, after the processing unit 702 identifies the display interface area of the second electronic device, it is further configured to: determine whether the size of the display interface area of the second electronic device is smaller than a first threshold; and if the size of the display interface area of the second electronic device is smaller than the first threshold, adjust the focal length of the camera device to a first focal length.
  • In a possible implementation, after the touch operation for the first user interface is received and before the virtual touch event is generated, the processing unit 702 is further configured to: acquire at least one touch point coordinate; determine whether the at least one touch point coordinate is within the display interface area of the second electronic device; and generate the virtual touch event in response to determining that the at least one touch point coordinate is within the display interface area of the second electronic device.
  • In a possible implementation, when the processing unit 702 generates the virtual touch event based on the touch point coordinates, it is specifically configured to: convert the acquired touch point coordinates corresponding to the touch operation for the first user interface into relative coordinates in the current display interface of the second electronic device; and generate the virtual touch event according to the relative coordinates in the current display interface of the second electronic device.
  • the touch operation includes a click operation and/or a slide operation
  • the coordinates of the touch point corresponding to the touch operation for the first user interface include a single coordinate and/or multiple coordinates.
  • the first electronic device is a mobile phone
  • the second electronic device is a smart TV
  • the camera device is a rear camera of the mobile phone
  • the current display interface of the second electronic device is the menu interface of the smart TV
  • the first user interface is the display interface after the first electronic device enters the remote control mode
  • the first image is an image including the menu interface of the smart TV
  • the menu interface includes multiple controls, and the multiple controls correspond to different functions
  • the display interface area of the second electronic device is the image area of the menu interface of the smart TV acquired by the mobile phone
  • the touch operation of the first user interface is a click operation on one of the multiple controls in the image of the menu interface of the smart TV in the first user interface
  • the second electronic device executing the operation corresponding to the relative coordinates in the current display interface of the second electronic device means that the second electronic device executes the function corresponding to that one of the multiple controls in the image of the menu interface of the smart TV
  • Each functional unit in each of the embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

Embodiments of the present application provide a remote control method, an electronic device, and a system. A first electronic device includes a camera device, and the first electronic device and a second electronic device establish a wireless connection. The first electronic device receives a first operation; in response to receiving the first operation, the first electronic device starts the camera device; the first electronic device acquires a first image by using the camera device; the first electronic device displays a first user interface, where the first user interface includes the first image; the first electronic device receives a touch operation for the first user interface; in response to receiving the touch operation for the first user interface, the first electronic device generates a virtual touch event and sends the virtual touch event to the second electronic device, so that after receiving the virtual touch event, the second electronic device performs a third operation in response to the virtual touch event.

Description

一种遥控方法、电子设备及系统
相关申请的交叉引用
本申请要求在2020年10月27日提交中国专利局、申请号为202011167645.6、申请名称为“一种遥控方法、电子设备及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端技术领域,尤其涉及一种遥控方法、电子设备及系统。
背景技术
目前,随着智能电视的发展,使得智能电视支持的功能越来越多,且用户界面的信息越来越复杂,因此对智能电视的遥控需求越来越高。传统遥控技术中,通过简单的按键结构对智能电视实现遥控的方式,已经不能较好地支持当前智能电视对遥控功能的需求,因此如何更好的实现对智能电视的遥控是一个值得研究的问题。
基于触控技术应用的普及性,当前存在将触控技术与遥控技术结合起来以实现对智能电视遥控的研究方向,相比于传统遥控技术来说,该遥控方式可以满足智能电视更多类型的遥控需求。相关技术中,可以通过在传统遥控器上增加触控面板,来实现按键与触控面板相结合的方式对智能电视进行遥控,然而该技术方案存在不能满足当前智能电视业务的各种操控场景的需要;或者,在智能手机上开发用于实现智能电视遥控的应用,通过将智能电视的显示内容进行编码之后传输到智能手机上,从而智能手机在进行解码之后可以实现对智能电视的遥控,然后该技术方案由于传输数量大的特点存在时延严重的缺点。因此,为了解决相关技术中存在的问题,本申请提供了一种可以满足当前智能电视业务的更多种操控场景的需要并且可以降低时延的遥控方法。
发明内容
本申请提供一种遥控方法、电子设备及系统,用以满足当前智能电视业务的多种操控场景的需要,并且存在时延低的特点,可以提升用户对智能电视的遥控操控准确性,从而可以提升用户触控操作体验。
第一方面,本申请实施例提供了一种遥控方法,适用于第一电子设备,所述第一电子设备包括摄像装置,所述第一电子设备和第二电子设备可以建立无线连接,所述第一电子设备接收第一操作;响应于接收到所述第一操作,所述第一电子设备启动所述摄像装置;所述第一电子设备利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;所述第一电子设备显示第一用户界面,所述第一用户界面中包括所述第一图像;所述第一电子设备接收针对于所述第一用户界面的触控操作,所述第一电子设备获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;所 述第一电子设备向所述第二电子设备发送所述虚拟触控事件,以使所述第二电子设备接收所述虚拟触控事件之后,响应于所述虚拟触控事件,执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。
该方法中,第一电子设备利用摄像装置获取摄取图像并根据摄取图像显示为用户界面之后,用户可以实现在第一电子设备上进行触控操作,从而由第一电子设备根据用户触控操作生成虚拟触控事件并发送给第二电子设备,以实现对第二电子设备的遥控,从而用户根据对第一电子设备的触控操作实现对第二电子设备的各种遥控场景的需要;并且,本申请中第一电子设备与第二电子设备之间的传输数据量为虚拟触控事件,具有数据量小的特点,从而可以降低第一电子设备与第二电子设备之间交互数据的时延,可以提高用户的触控体验。
一种可能的实现方式,所述第一电子设备在自身安装的用于进行触控操作的应用图标被点击触发时,确定接收到第一操作;或者,所述第一电子设备在通知栏下拉界面中的遥控控件被点击触发时,确定接收到第一操作;或者,所述第一电子设备接收到语音操作或手势操作之后,确定接收到第一操作。这样,可以为用户在第一电子设备上提供多种可以确定进入虚拟触控事件生成场景的入口,从而为用户提供便携性,提升用户体验。
一种可能的实现方式,在所述第一电子设备接收针对于所述第一用户界面的触控操作之前,所述第一电子设备识别所述第二电子设备的显示界面区域。具体实施为,所述第一电子设备确定所述第一用户界面中的所述第二电子设备屏幕边框内部的区域为所述第二电子设备的显示界面区域,且所述第二电子设备的显示界面区域中显示的内容为所述第二电子设备当前的显示界面。基于此,由于第一电子设备在摄取包含第二电子设备的显示界面时,摄取的画面信息为比第二电子设备范围更广的范围,然后在第一电子设备的显示界面上,除了对第二电子设备的显示界面区域内的触控操作之外,其他的触控操作不用于生成虚拟触控事件,因此,通过识别第一电子设备显示界面中的第二电子设备的屏幕边框从而确定第一电子设备显示界面上显示的第二电子设备,从而可以提升生成虚拟触控事件的效率,可以降低遥控过程的处理时间。
一种可能的实现方式,第一电子设备确定第二电子设备屏幕边框内容的区域可以实施为,所述第一电子设备向所述第二电子设备发送生成锚点指令,以使所述第二电子设备接收所述生成锚点指令之后,响应于所述生成锚点指令,在显示界面上生成锚点;所述第一电子设备根据获取到的所述第一图像中的所述锚点的信息确定所述第二电子设备屏幕边框内部的区域。基于此,第一电子设备可以通过检测锚点的方式,在确定第一电子设备显示界面中包含的多个目标锚点之后,根据锚点所确定的区域确定为第二电子设备的显示区域,从而可以提升第一电子设备生成虚拟触控事件的效率和准确度,可以降低遥控过程的处理时间。
一种可能的实现方式,所述第一电子设备识别所述第二电子设备的显示界面区域后,判断所述第二电子设备的显示界面区域的大小是否小于第一阈值;若所述第二电子设备的显示界面区域的大小小于第一阈值,所述第一电子设备调整所述摄像装置的焦距至第一焦距。这样,为了便于用户在第一电子设备的显示界面上进行更准确的触控操作,可以根据第一电子设备的显示界面中包含的第二电子设备的显示界面的显示大小,智能调整摄像装置的焦距,从而便于用户通过第一电子设备进行触控操作,生成更加准确的虚拟遥控事件。
一种可能的实现方式,所述第一电子设备接收到针对于所述第一用户界面的触控操作 后,生成所述虚拟触控事件之前,所述第一电子设备获取至少一个触控点坐标;所述第一电子设备确定所述至少一个触控点坐标是否在所述第二电子设备的显示界面区域内;响应于所述第一电子设备确定所述至少一个触控点坐标在所述第二电子设备的显示界面区域内,所述第一电子设备生成所述虚拟触控事件。基于此,该实现方式提供了一种生成虚拟事件准确的具体实施方式,第一电子设备基于获取的触控点坐标生成虚拟触控事件,可以保证生成的虚拟触控事件的准确性。
一种可能的实现方式,所述响应于接收到所述针对于所述第一用户界面的触控操作,所述第一电子设备生成虚拟触控事件具体实施为,所述第一电子设备将所述获取到的针对于所述第一用户界面的触控操作对应的触控点坐标转换成所述第二电子设备的显示界面区域中的相对坐标,所述第一电子设备根据所述第二电子设备当前的显示界面中的相对坐标生成所述虚拟触控事件。基于此,第一电子设备得到用户操作对应的触控点坐标之后,根据第二电子设备在第一电子设备的二维投影界面的显示效果,将触控点坐标转换为属于第二电子设备的相对坐标,从而保证基于第一电子设备生成的虚拟触控事件的准确性。
一种可能的实现方式,所述触控操作包括点击操作和/或滑动操作,所述针对于所述第一用户界面的触控操作对应的触控点坐标包括单个坐标和/或多个坐标。该场景提供了用户的操作可以为点击操作或滑动操作,用户可以通过在第一电子设备上执行不同的用户操作,从而满足生成的虚拟遥控事件的多样性,进而满足第二电子设备的各种遥控场景,可以提高用户体验。
一种可能的实现方式,所述第一电子设备为手机,所述第二电子设备为智能电视,所述摄像装置为所述手机的后置摄像头,其中所述第二电子设备当前的显示界面为所述智能电视的菜单界面;所述第一用户界面为第一电子设备进入遥控模式后的显示界面;所述第一图像为包括所述智能电视的菜单界面的图像,所述智能电视的菜单界面中包括多个控件,所述多个控件对应于不同的功能;所述第二电子设备的显示界面区域为所述手机获取的所述智能电视的菜单界面的图像区域;所述针对于所述第一用户界面的触控操作为针对于所述第一用户界面中所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件的点击操作;所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作为所述第二电子设备执行所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件对应的功能。基于此,该实现方式给出一种第一电子设备与第二电子设备的可能的场景,即通过手机实现对智能电视遥控的场景。
第二方面,本申请实施例还提供一种电子设备,适应于第一电子设备,所述第一电子设备包括摄像装置,所述第一电子设备和第二电子设备建立无线连接,所述电子设备包括:触摸屏,其中,所述触摸屏包括触控面板和显示屏;一个或多个处理器;存储器;多个应用程序;以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行以下步骤:接收第一操作;响应于接收到所述第一操作,启动所述摄像装置;利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;显示第一用户界面,所述第一用户界面中包括所述第一图像;接收针对于所述第一用户界面的触控操作;响应于接收到所述针对于所述第一用户界面的触控操作,获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控 事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;向所述第二电子设备发送所述虚拟触控事件,以使所述第二电子设备接收所述虚拟触控事件之后,响应于所述虚拟触控事件,执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行接收第一操作时,具体执行:显示第一应用图标,接收针对于所述第一应用图标的操作;或者,接收第一语音操作;或者,接收第一手势操作。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备还执行:在接收针对于所述第一用户界面的触控操作之前,确定所述第一用户界面中的所述第二电子设备屏幕边框内部的区域为所述第二电子设备的显示界面区域。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行确定所述第二电子设备屏幕边框内部的区域时,具体执行:向所述第二电子设备发送生成锚点指令,以使所述第二电子设备接收所述生成锚点指令之后,响应于所述生成锚点指令,在显示界面上生成锚点;根据获取到的所述第一图像中的所述锚点的信息确定所述第二电子设备屏幕边框内部的区域。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备识别所述第二电子设备的显示界面区域后还执行:判断所述第二电子设备的显示界面区域的大小是否小于第一阈值;若所述第二电子设备的显示界面区域的大小小于第一阈值,调整所述摄像装置的焦距至第一焦距。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备接收到针对于所述第一用户界面的触控操作后,生成所述虚拟触控事件之前,还执行:获取至少一个触控点坐标;确定所述至少一个触控点坐标是否在所述第二电子设备的显示界面区域内;响应于所述第一电子设备确定所述至少一个触控点坐标在所述第二电子设备的显示界面区域内,生成所述虚拟触控事件。
一种可能的实现方式,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行基于所述触控点坐标生成虚拟触控事件时,具体执行:将所述获取到的针对于所述第一用户界面的触控操作对应的触控点坐标转换成所述第二电子设备当前的显示界面中的相对坐标;根据所述第二电子设备当前的显示界面中的相对坐标生成所述虚拟触控事件。
一种可能的实现方式,所述触控操作包括点击操作和/或滑动操作,所述针对于所述第一用户界面的触控操作对应的触控点坐标包括单个坐标和/或多个坐标。
一种可能的实现方式,所述第一电子设备为手机,所述第二电子设备为智能电视,所述摄像装置为所述手机的后置摄像头,其中所述第二电子设备当前的显示界面为所述智能电视的菜单界面;所述第一用户界面为第一电子设备进入遥控模式后的显示界面;所述第一图像为包括所述智能电视的菜单界面的图像,所述智能电视的菜单界面中包括多个控件,所述多个控件对应于不同的功能;所述第二电子设备的显示界面区域为所述手机获取的所述智能电视的菜单界面的图像区域;所述针对于所述第一用户界面的触控操作为针对于所述第一用户界面中所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件的点击操作;所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作为所述第二电子设备执行所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件对应的功能。
第三方面,本申请实施例提供一种遥控系统,包括第一电子设备和第二电子设备,所述第一电子设备包括摄像装置,所述第一电子设备和第二电子设备可以建立无线连接,所述第一电子设备用于接收第一操作;响应于接收到所述第一操作,所述第一电子设备启动所述摄像装置;所述第一电子设备利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;所述第一电子设备用于显示第一用户界面,所述第一用户界面中包括所述第一图像;所述第一电子设备用于接收针对于所述第一用户界面的触控操作;响应于接收到所述针对于所述第一用户界面的触控操作,所述第一电子设备获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;所述第一电子设备向所述第二电子设备发送所述虚拟触控事件;所述第二电子设备接收所述虚拟触控事件;响应于接收到所述虚拟触控事件,所述第二电子设备执行第三操作。
第四方面,本申请实施例还提供一种遥控装置,该遥控装置包括执行上述第一方面中任一种可能实现方式中的方法的模块/单元。这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第五方面,提供了一种计算机可读存储介质,计算机可读介质存储有计算机程序(也可以称为代码,或指令)当其在计算机上运行时,使得计算机执行上述第一方面中任一种可能实现方式中的方法。
第六方面,提供了一种计算机程序产品,计算机程序产品包括:计算机程序(也可以称为代码,或指令),当计算机程序被运行时,使得计算机执行上述第一方面中任一种可能实现方式中的方法。
第七方面,还提供一种电子设备上的图形用户界面,该电子设备具有显示屏、一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行本申请实施例第一方面任一可能的实现方式时显示的图形用户界面。
需要说明的是,上述第二方面至第七方面中各个设计可以达到的有益效果,请具体参考上述第一方面中各项设计所对应的有益效果,此处不再一一重复赘述。
附图说明
图1a为本申请实施例提供的一种应用场景示意图;
图1b为本申请实施例提供的一种应用场景示意图;
图2为本申请实施例提供的一种电子设备的结构示意图;
图3为本申请实施例提供的一种安卓操作系统结构示意图;
图4a为本申请实施例提供的一种遥控方法的应用场景图之一;
图4b为本申请实施例提供的一种遥控方法的应用场景图之二;
图4c为本申请实施例提供的一种遥控方法的应用场景图之三;
图5a为本申请实施例提供的一种遥控方法的流程示意图;
图5b为本申请实施例提供的一种锚点生成示意图;
图6a为本申请实施例提供的坐标转换的示意图之一;
图6b为本申请实施例提供的坐标转换的示意图之二;
图7为本申请实施例提供的一种遥控装置的结构示意图。
具体实施方式
随着社会的快速发展,移动终端设备例如手机越来越普及。手机不但具有通信功能、还具有强大的处理能力、存储能力、照相功能、数据编辑功能等。因此,手机不仅可以作为一种通信工具,更是用户的移动数据库,它可以提供移动式的计算环境,以实现对接收数据进行预定义的处理后,输出具有控制功能的指令,如本申请涉及到的第一电子设备发送的用于实现遥控的虚拟触控事件的指令。因此,基于移动终端设备的便捷性和触控能力,通过移动终端设备可以发送对如智能电视等需要进行遥控的电子设备的虚拟触控事件,可以适用于各种可能的遥控场景。
基于背景技术中的描述,相关技术中,通过在传统遥控器上增加触摸面板的遥控方式,虽然考虑到了触控技术和遥控技术的结合来实现智能电视更多类型的遥控需求,但其存在触控操作不准确的缺点。例如,无法实现对播放视频的快进或后退的准确操控,从而无法满足智能电视对各种操控场景的需求。或者,通过在智能手机上开发用于实现智能电视遥控的应用,将智能电视当前的显示内容编码成图像数据后,发送到智能手机上;然后再通过对图像数据进行解码之后,显示在智能手机的触摸面板上;在智能手机的触摸面板上进行触控操作之后,再将包含触控操作的显示内容进行编码后反馈回智能电视;从而实现智能手机对智能电视的遥控操作。然而该遥控方式存在跟手性差的问题,从而导致操控不准确,用户体验差。
有鉴于此,本申请提供一种遥控方法,基于增强现实技术和图像跟踪技术的原理,采用第一电子设备的摄像和呈像能力,利用第一电子设备包括的摄像装置获取摄取图像,摄取图像中包含第二电子设备的显示界面区域,并且通过将用户对第一电子设备的触控操作转换为对第二电子设备的虚拟遥控操作,并通过第一电子设备基于所述虚拟遥控操作生成的虚拟触控事件发送给第二电子设备,从而实现对第二电子设备的遥控。
本申请实施例可以应用于诸如手机、平板电脑、可穿戴设备(例如,手表、手环、头盔、耳机等)、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能家居设备(例如,智能电视,智能投影,智能音箱,智能摄像头等)等电子设备。可以理解的是,本申请实施例对电子设备的具体类型不作任何限制。
上述电子设备中可以安装各种功能的应用(application,在下文中简称App),比如微信、邮箱、微博、视频、智慧生活和智能遥控等App。在本申请实施例中,重点关注第一电子设备中安装的用于发送虚拟触控事件的App如何生成虚拟触控事件的操作。
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。本申请实施例的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。
需要说明的是,本申请实施例中“至少一个”是指一个或者多个,多个是指两个或两个以上。除非另有定义,本文所使用的所有的技术和科学术语与属于本申请中的技术领域的技术人员通常理解的含义相同。本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明申请。应理解,本申请中除非另有说明,“发明表示或的意思。例如,A/B可以表示A或B。本申请中的“和/或”仅仅是一种描述关联对象的关联关 系,表示可以存在三种关系。
本申请实施例涉及的多个,是指大于或等于两个。
还需要说明的是,本申请实施例中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
参阅图1a,为本申请实施例提供的一种遥控方法的应用场景图。如图1a所示,该应用场景可以包括:第一电子设备101和第二电子设备102。第一电子设备101与第二电子设备102可以接入同一个局域网,也可以接入不同的局域网。其中,第一电子设备101与第二电子设备102接入在同一个局域网的示例,具体可以为:第一电子设备101和第二电子设备102与同一无线接入点建立无线连接。例如,第一电子设备101与第二电子设备102接入同一个无线保真(wireless fidelity,Wi-Fi))热点,再例如第一电子设备101和第二电子设备102也可以通过蓝牙协议接入同一个蓝牙信标下。再比如,第一电子设备101和第二电子设备102间也可以通过近距离无线通信技术(near field communication,NFC)标签触发通信连接,通过蓝牙模块传输加密信息进行身份认证。在认证成功后通过点对点(point to point,P2P)的方式进行数据传输。
实施时,第一电子设备101可以作为发送客户端,在基于用户的触控操作生成虚拟触控事件之后,将虚拟触控事件发送给第二电子设备102。其中,第一电子设备101进入用于生成虚拟触控事件的用户界面的第一种可能的实施方式参阅图1a所示。第一电子设备101所显示界面为包括多个App图标的手机主界面,在该手机主界面中包括用于生成虚拟触控事件的智能遥控App图标。用户通过点击如图1a中的手机主界面中包含的智能遥控App图标,第一电子设备101检测到智能遥控App图标被触发后,跳转到用于生成虚拟触控事件的用户界面,根据再次检测到的用户触控操作生成虚拟触控事件,并将生成的各种虚拟触控事件通过网络或通过第一电子设备101和第二电子设备102之间建立的近距离通信连接将生成的虚拟触控事件发送给第二电子设备102,从而使得第二电子设备102响应于虚拟触控事件执行相应的操作。在一些实施例中,第一电子设备101可以是还包含其它功能诸如个人数字助理和/或音乐播放器功能的便携式电子设备,诸如手机、平板电脑、具备无线通讯功能的可穿戴设备(如智能手表)等电子设备。第二电子设备102可以为智能电视、智能投影仪等电子设备,本申请实施例不做具体限定。
第一电子设备101进入用于生成虚拟触控事件的用户界面的第二种可能的实施方式参阅图1b所示。第一电子设备101所显示界面还可以为手机的通知栏下拉界面,在该通知栏下拉界面中包括对如智能电视或智能投影等第二电子设备进行遥控的遥控控件。用户通过点击如图1b中的第一电子设备101中的通知栏下拉界面中包含的遥控控件,第一电子设备101在检测到遥控控件被触发后,跳转到用于生成虚拟触控事件的用户界面,后续实施方式与第一种可能的实施方式原理相同,在此不再赘述。其中,第一电子设备101生成虚拟触控事件的具体实施方式将在后文中介绍,在此暂不展开说明。
本申请实施例可以应用到的电子设备的示例性实施例包括但不限于搭载各类操作系统的便携式电子设备。上述便携式电子设备也可以是其它便携式电子设备，诸如具有触敏表面(例如触控面板)的膝上型计算机(Laptop)等。
请参考图2,电子设备200可以为本申请实施例中的第一电子设备101和/或第二电子设备102,本申请实施例这里以第一电子设备101为电子设备200为例,对本申请实施例提供的电子设备200进行介绍。其中,本领域技术人员可以理解,图2所示的电子设备200仅仅是一个范例,并不构成对电子设备200的限定,并且电子设备200可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图2中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备200可以包括处理器210,外部存储器接口220,内部存储器221,天线1,天线2,移动通信模块251,无线通信模块252,传感器模块280,摄像头293以及显示屏294等。其中传感器模块280可以包括陀螺仪传感器280A,触摸传感器280K(当然,电子设备200还可以包括其它传感器,比如温度传感器,压力传感器、距离传感器、磁传感器、环境光传感器、气压传感器、骨传导传感器等,图中未示出)。
处理器210可以运行本申请实施例提供的遥控方法,以便于在保证操控准确性的基础上,通过第一电子设备实现对智能电视进行遥控的各种遥控功能的需求,从而提升用户的体验。处理器210可以包括不同的器件,比如集成中央处理器(central processing unit,CPU)和图形处理器(graphics processing unit,GPU)时,CPU和GPU可以配合执行本申请实施例提供的遥控方法,比如遥控方法中的部分算法由CPU执行,另一部分算法由GPU执行,以得到较快的处理效率。
显示屏294可以显示照片、视频、网页、或者文件等。本申请实施例中,显示屏294可以显示如图1a所示的第一电子设备101的手机主界面,或者如图1b所示的通知栏下拉界面。当处理器210检测到用户的手指(或触控笔等)针对某一应用图标的触摸事件后,响应于该触摸事件,打开与该应用图标对应的应用的用户界面,并在显示屏294上显示该应用的用户界面。
摄像头293(前置摄像头或者后置摄像头,或者一个摄像头既可作为前置摄像头,也可作为后置摄像头)用于捕获静态图像或视频,例如,若电子设备200为如图1a、图1b中所示的第一电子设备101,则第一电子设备101的摄像头用于捕获包含第二电子设备102的显示界面区域的图像。
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器210通过运行存储在内部存储器221的指令,从而执行电子设备200的各种功能应用以及数据处理。内部存储器221还可以存储本申请实施例中提供的虚拟触控事件生成算法对应的一个或多个计算机程序。当内部存储器221中存储的虚拟触控事件生成算法的代码被处理器210运行时,处理器210可以执行虚拟触控事件的生成,并通过移动通信模块251或无线通信模块252发送给第二电子设备102。
当然,本申请实施例提供的虚拟触控事件生成算法的代码还可以存储在外部存储器中。这种情况下,处理器210可以通过外部存储器接口220运行存储在外部存储器中的虚拟触控事件生成算法的代码,处理器210可以运行虚拟触控事件的生成,并通过移动通信模块251或无线通信模块252发送给第二电子设备102。
下面介绍传感器模块280的功能。
陀螺仪传感器280A,可以用于确定电子设备200的运动姿态。在一些实施例中,可以通过陀螺仪传感器280A确定电子设备200围绕三个轴(即,x,y和z轴)的角速度。即陀螺仪传感器280A可以用于检测电子设备200当前的运动状态,比如抖动还是静止。本申请实施例中,若通过陀螺仪传感器280A检测到电子设备200为抖动状态时,电子设备200可以及时对摄像头293摄取的实时影像进行分析识别,避免由于抖动导致的虚拟触控事件生成不准确问题的产生。
触摸传感器280K,也称“触控面板”。触摸传感器280K可以设置于显示屏294,由触摸传感器280K与显示屏294组成触摸屏,也称“触控屏”。触摸传感器280K用于检测作用于其上或附近的触控操作,例如本申请实施例中用于生成虚拟触控事件的用户触控操作。触摸传感器可以将检测到的触控操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏294提供与触控操作相关的视觉输出。在另一些实施例中,触摸传感器280K也可以设置于电子设备200的表面,与显示屏294所处的位置不同。
示例性的,用户通过触摸传感器280K点击如图1a所示的手机主界面中智能遥控的图标,触发处理器210启动智能遥控应用,通过显示屏294显示跳转到的用于生成虚拟触控事件的用户界面,并触发打开摄像头293。
电子设备200的无线通信功能可以通过天线1,天线2,移动通信模块251,无线通信模块252,调制解调处理器以及基带处理器等实现。其中,本申请实施例中,第一电子设备101和第二电子设备102之间可以通过电子设备200的无线通信功能实现虚拟触控事件等信息的交互。
无线通信模块252可以提供应用在电子设备200上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),NFC,红外技术(infrared,IR)等无线通信的解决方案。
应理解,在实际应用中,电子设备200可以包括比图2所示的更多或更少的部件,本申请实施例不作限定。
为了实现本申请实施例提供的方法中的各功能,电子设备200可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
电子设备的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以采用分层架构的Android系统为例,示例性说明电子设备的软件结构。图3示出了本申请实施例提供的Android系统的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为五层,从上至下分别为应用程序层,应用程序框架(framework)层,安卓运行时(Android runtime)和系统库,硬件抽象层,以及内核层。
应用程序层是操作系统的最上一层,可以包括一系列应用程序包。如图3所示,应用程序层可以包括操作系统的原生应用程序和第三方应用程序,其中,操作系统的原生应用程序可以包括用户界面(user interface,UI)、相机、短信息、通话等,第三方应用程序可以包括地图,智慧生活、智能遥控等。下文中提到的应用,可以是电子设备出厂时已安装的操作系统的原生应用,也可以是用户在使用电子设备的过程中从网络下载或从其他电子 设备获取的第三方应用。
在本申请一些实施例中,应用程序层可以用于实现编辑界面的呈现,上述编辑界面可以为第一电子设备用于用户实现本申请重点关注的比如智能遥控等App中的针对第二电子设备生成的虚拟触控事件的操作。示例性的,上述编辑界面可以为第一电子设备的触摸屏上所显示的智能遥控App的控制界面,例如图5b中(1-2)中所示的第一电子设备显示的用户界面,该用户界面显示的是第一电子设备利用摄像装置拍摄的包含第二电子设备的实时显示界面的画面信息,从而通过在第一电子设备的控制界面上实现遥控操作,以实现针对第二电子设备的虚拟遥控操作,进而实现对第二电子设备的显示界面操控或设置变更等相应操作。
应用程序框架层为应用程序层的应用程序提供应用编程接口和编程框架。应用程序框架层还可以包括一些预定义函数。如图3所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。在本申请一些实施例中,该应用程序框架层主要负责调用与硬件抽象层之间通信的服务接口,以将虚拟触控事件生成请求传递到硬件抽象层,所述虚拟触控事件生成请求包含有虚拟触控事件生成服务的预定义编程,用于实现对本申请中第二电子设备需要的各种虚拟触控事件的生成;还负责进行对登录认证的用户名和密码进行管理等。示例性的,虚拟触控事件生成服务可以包括用于管理本申请实施例中涉及到的生成虚拟触控事件所需要的各种模块。例如,虚拟触控事件生成服务包括目标检测模块、坐标转换模块、Wi-Fi服务等。
其中，上述目标检测模块用于执行从第一电子设备打开的智能遥控App的控制界面中对第二电子设备的显示界面区域进行检测和跟踪，以实现对第二电子设备的更精准的界面操控。图5b为本申请实施例提供的一种锚点生成示意图。例如，结合图5b所示，从图5b中(1-2)所示的第一电子设备(例如为手机)的显示界面中检测出第二电子设备(例如为智能电视)的屏幕边框，从而确定智能电视的显示界面区域。
上述坐标转换模块用于执行当第一电子设备检测到用户在打开的智能遥控App中的触控操作后，确定所述触控操作的坐标点序列，并筛选出属于所述第二电子设备的显示界面区域内的坐标点，然后对筛选出来的坐标点进行坐标的转换，从而实现将用户在打开的智能遥控App中进行的触控操作所生成的坐标点序列转换为第二电子设备上的界面中对应的坐标点序列。
上述Wi-Fi服务用于保障第一电子设备与第二电子设备之间的信息交互,从而实现将第一电子设备产生的虚拟触控事件发送给第二电子设备,进一步实现对第二电子设备的虚拟遥控操作。
硬件抽象层(hardware abstraction layer,HAL)是应用程序框架层的支撑,是连接应用程序框架层与内核层的重要纽带,其可通过应用程序框架层为开发者提供服务。示例性的,可以通过在硬件抽象层配置第一进程来实现本申请实施例中虚拟触控事件生成服务的功能,第一进程可以是在硬件抽象层中单独构建的子进程。其中,第一进程可以包括虚拟触控事件生成服务配置接口、虚拟触控事件生成服务控制器等模块。其中,虚拟触控事件生成服务配置接口是与应用程序框架层进行通信的服务接口。
内核层可以是Linux内核(linux kernel)层,是硬件和软件之间的抽象层。内核层有许多与第一电子设备相关的驱动程序,至少包含显示驱动和摄像头驱动;照相机驱动;音频驱动;蓝牙驱动;Wi-Fi驱动等,本申请实施例对此不做任何限制。
结合上述图2中对电子设备的硬件框架的介绍以及图3中对电子设备的软件框架的介绍,下面针对遥控应用场景,示例性说明电子设备200的软件以及硬件的工作原理。
当触摸传感器280K接收到触控操作,相应的硬件中断被发给内核层。内核层将触控操作加工成原始输入事件(包括触摸坐标,触控操作的时间戳等信息),其中,所述原始输入事件例如为后续本申请实施例中的用户触控事件。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。例如以该触控操作是单击操作,且该单击操作所对应的控件为智能遥控App应用图标的控件为例,智能遥控App被启动时调用应用框架层的接口,进而通过调用内核层启动摄像头驱动,通过摄像头293摄取包含第二电子设备的显示界面的图像,并通过显示驱动实现在第一电子设备的显示屏294上显示为实时影像信息,实时影像信息中包括摄像头摄取的图像。
在下文的介绍中，将以手机为例进行介绍。应理解，手机的硬件架构可以如图2所示，软件架构可以如图3所示，其中，手机中的软件架构对应的软件程序和/或模块可以存储在内部存储器221中，处理器210可以运行内部存储器221中存储的软件程序和应用以执行本申请实施例提供的遥控方法的流程。为了方便理解，下面先对以下实施例中可能涉及到的名词进行解释：
(1)用户触控事件:用于指示用户在第一电子设备上进行的触控操作,所述用户触控事件中包含触控操作的触控点坐标点或触控点坐标序列。例如,若触控操作为点击操作,则用户触控事件包含的为触控点坐标点。若触控操作为滑动操作,则用户触控事件包含为触控点坐标序列(所述触控点坐标序列中至少包括滑动起始位置坐标、滑动结束坐标,或者进一步包括滑动距离及滑动方向等)。其中,触控操作可以包括但不限于点击操作、滑动操作、长按操作、双击操作、点击屏幕指定控件操作等,本申请这里不做限定。
(2)虚拟触控事件：用于指示第一电子设备将用户触控事件转换为针对第二电子设备的虚拟触控操作，以实现第二电子设备根据所述虚拟触控事件执行相应操作。所述虚拟触控事件中包含虚拟触控操作的相对坐标点或相对坐标序列。其中，所述相对坐标点(或序列)为第一电子设备根据在第一电子设备二维投影界面中第二电子设备的屏幕边框的四个角点坐标位置将触控点坐标点(或序列)进行坐标转换之后得到的，具体的坐标转换的实施方式在后文中介绍，在此暂不详述。
(3)二维投影界面:用于指示第二电子设备在第一电子设备的显示界面上的二维显示效果的界面。所述二维投影界面为根据三维空间中第二电子设备的位置关系进行二维投影之后得到的界面。
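为便于理解上述(1)和(2)中两类事件所包含的信息，下面给出一个示意性的数据结构草图。该草图为便于说明而作的假设性表示，其中的类名、字段名(如action、points等)均为本文虚构，并非某一既有实现的接口：

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UserTouchEvent:
    """示意：用户触控事件，记录用户在第一电子设备触摸面板上的触控信息。"""
    action: str                                   # 例如"click"、"slide"、"long_press"等触控操作类型
    points: List[Tuple[float, float]]             # 触控点坐标点或触控点坐标序列
    timestamp_ms: int                             # 触控操作的时间戳

@dataclass
class VirtualTouchEvent:
    """示意：虚拟触控事件，仅携带换算到第二电子设备当前显示界面中的相对坐标。"""
    action: str
    relative_points: List[Tuple[float, float]]    # 相对坐标点或相对坐标序列
    timestamp_ms: int
```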
应理解,本申请实施例中,“终端”、“第一电子设备”、“设备”、“电子设备”、“手机”等可以混用,即指可以用于实现本申请实施例的各种设备。
为了便于理解,以下介绍到的实施例中,以第一电子设备为智能手机为例,但本申请并不限定于智能手机,可以实现触控操作的任一电子设备均可作为本申请中的第一电子设备;并且,以第二电子设备为智能电视为例,但本申请并不限定于智能电视,需要进行遥控的任一电子设备均可作为本申请中的第二电子设备,例如第二电子设备还可为智能投影仪等。
下面介绍几个本申请实施例的场景情况,以便于更好理解本申请实施例。
为了便于理解本申请提供的遥控方法，以下通过结合图4a至4c中所示的用户界面，对采用本申请提供的遥控方法可以达到的界面处理效果进行说明。包括如下几种可能的场景：
场景1：参阅图4a中所示的界面处理效果示意图。其中图4a中(1-1)所示的第二电子设备(图4a中以智能电视作为示例)的显示界面显示的为节目首页，且为电视剧分类下的选择界面。基于前述介绍的实施方式，用户可以按照图1a或图1b所示的方式打开智能遥控App。然后智能遥控App触发手机摄像头打开，用户基于手机的摄像头摄取当前智能电视显示界面显示的内容，作为手机界面显示的实时影像。具体如图4a中(1-2)所示的第一电子设备(图4a中以手机作为示例)的显示界面所显示的内容。
此时,用户对“电视剧2”执行点击操作,手机在接收到该点击操作之后,确定该点击位置在手机显示界面中的触控点坐标。并对该触控点坐标进行坐标转换,得到对应的智能电视上的相对坐标,并生成包括得到的智能电视上的相对坐标的虚拟触控事件发送给智能电视。
智能电视接收所述虚拟触控事件之后,解析所述虚拟触控事件得到所述相对坐标之后,确定该相对坐标在智能电视显示界面上对应的位置为“电视剧2”的海报画面所在的位置。因此可以确定用户触控播放电视剧2,则智能电视响应于该虚拟触控事件,播放电视剧2。并将显示界面变更为如图4a中(2-1)中智能电视所显示的界面,该显示界面即为电视剧2的开始播放画面。
基于介绍的场景1的实施过程,以下介绍的场景2至3中触发进入生成虚拟触控事件的界面的实施方式类似,故此在以下介绍内容中不再赘述。
场景2:参阅图4b中所示的界面处理效果示意图。其中图4b中(1-1)为智能电视的显示界面,图4b中(1-2)为手机的显示界面。此时,用户在手机的显示界面上右半侧区域自下向上滑动屏幕,手机通过触摸面板接收到用户的滑动操作后,获取用户滑动操作对应的多个触控点坐标。多个触控点坐标包括滑动起始坐标和滑动结束坐标,并对滑动起始坐标和滑动结束坐标分别进行坐标转换,得到对应的智能电视上的相对滑动起始坐标和相对滑动结束坐标。并生成包括得到的对应的智能电视上的相对滑动起始坐标和相对滑动结束坐标的虚拟触控事件发送给智能电视。
智能电视接收所述虚拟触控事件之后,解析所述虚拟触控事件得到对应的智能电视上的相对滑动起始坐标和相对滑动结束坐标之后,计算对应的相对智能电视上的滑动起始坐标和相对滑动结束坐标之间的滑动距离。然后根据预先存储的不同滑动距离和音量大小的对应关系,确定对应的音量大小。
智能电视如果确定出对应的智能电视上的相对滑动起始坐标和相对滑动结束坐标是从智能电视显示屏幕自下而上滑动得到的两个坐标,且位于显示界面的右半侧显示区域,则可确定该虚拟触控事件是为了增大音量,则智能电视通过调用相应的调节音量控件将当前音量增大确定出的音量大小。反之,智能电视如果确定出对应的智能电视上的相对滑动起始坐标和相对滑动结束坐标是从智能电视显示屏幕自上而下滑动得到的两个坐标,则可确定该虚拟触控事件是为了降低音量,则智能电视通过调用相应的调节音量控件将当前音量降低确定出的音量大小。
此外,智能电视在确定该滑动操作是为了调整音量之后,还可以在智能电视显示界面上显示音量调节情况。比如可以为图4b中(1-1)所显示的音量弹窗,该音量弹窗可以为音量调节显示条。这样手机的显示界面上也会同步显示该音量调节显示条,以使用户可以通过手机显示界面上显示的该音量调节显示条调整当前音量,则手机也可以将用户对音量 调节显示条拖动的距离信息和拖动方向通过虚拟触控事件发送给智能电视。智能电视解析虚拟触控事件中包括的距离信息和拖动方向,然后根据不同距离与不同音量大小的关系确定解析出的距离信息对应的音量大小,再根据确定的音量大小以及拖动方向,确定是增加还是降低当前播放节目的音量。其中增大或降低的音量大小可以为确定出的音量大小。
场景3:沿用场景1所述的场景,参阅图4c中所示的界面处理效果示意图。图4c中(1-1)智能电视显示的为电视剧2的播放画面。此时,当用户在如图4c中(1-2)所示的手机显示界面上进行点击操作之后,通过与场景1相同原理的手机与智能电视的处理,智能电视也可以确定用户在手机上的点击操作是为了调取播放信息控件,则智能电视根据虚拟触控操作进行响应之后显示如图4c中(2-1)所示的智能电视的显示界面。即智能电视显示界面中显示用于返回的“<”控件、暂停的“||”控件、表示播放进度条的控件等。
进一步的,用户还可以继续在如图4c中(2-2)所示的手机显示界面中执行对用于返回的“<”控件的点击。通过与场景1或2中相同原理的手机与智能电视的处理,当智能电视通过对虚拟触控事件的解析之后,确定手机执行的是对“<”控件的点击操作,则回调该控件,以使智能电视执行退出电视剧2的播放画面的操作。并将智能电视显示界面变更为节目首页显示界面,也即为图4a中(1-1)所示的智能电视的显示界面。同理,用户还可以在手机显示界面上点击“||”控件,则智能电视会执行暂停当前播放画面的操作;或者,用户在手机显示界面上可以进行拖动播放进度条的操作,则智能电视会根据用户的拖动方向和拖动距离控制当前节目的播放进程,并将当前显示界面显示为播放进度条调整到的位置所对应的播放画面。
需要说明的是,以上介绍到的内容为本申请提供的几种可能的场景,但本申请并不限定于以上涉及到的几种场景,第二电子设备通过对其自身显示屏执行的触控操作可以实现的虚拟触控事件均可通过用户在第一电子设备的显示界面上执行用户操作来实现,从而方便用户进行遥控操作进而生成对第二电子设备的虚拟触控事件,提升用户体验。例如,除了以上涉及到的点击操作、滑动操作外,还可以包含点击长按操作,例如可以用以实现对第二电子设备的当前播放节目的倍速播放;双击操作,例如可以用以实现对第二电子设备的当前播放节目的暂停/重新播放操作;多指操作事件,例如可以实现对第二电子设备的显示界面的放大/缩小操作等其它可以实现的操作。
基于前述对采用本申请提供的方法可以达到的界面处理效果进行说明的内容,以下介绍本申请提供的遥控方法的实现过程,以用来说明如何采用本申请提供的方法达到前述介绍到的图4a至4c的界面处理效果,从而可以达到通过第一电子设备生成各种虚拟触控事件,满足第二电子设备的各种操控场景的需要。参阅图5a所示,为本申请实施例提供的一种遥控方法的处理流程示意图,包括以下步骤:
S501:第一电子设备通过摄像装置获取包含第二电子设备的显示界面区域的实时影像信息。
具体实施为,第一电子设备中的处理器可以在检测到第一电子设备中安装的智能遥控App被用户触发后,控制第一电子设备的摄像装置打开,用户可以操作摄像装置控制第一电子设备来摄取画面,该画面包含第二电子设备的显示界面。
也就是说，首先需要处理器确定第一电子设备处于用于生成虚拟触控事件的用户界面的场景下，处理器在确定第一电子设备处于该场景下后，可以产生用于驱动摄像装置的调用指令，并将所述调用指令发送给摄像装置，以使第一电子设备的摄像装置在接收到调用指令之后打开并处于工作状态。
其中,一种确定处于用于生成虚拟触控事件的用户界面的场景的可能实施方式为:若第一电子设备中的处理器通过触摸传感器280K检测到用户在触摸面板上针对指定应用图标(所述指定应用为用于实现遥控功能的App,例如可以为图1a中所包含的智能遥控、智慧生活等App)的点击操作,则可以确定第一电子设备处于用于生成虚拟触控事件的用户界面的场景下,进而触发摄像装置打开,并确定通过摄像装置拍摄到的实时影像信息的作用是为了进行虚拟触控事件的生成。其中,点击操作还可实施为用户点击第一电子设备通知栏下拉显示界面中的遥控控件(例如可以为图1b中所包含的智能电视、智能投影中的遥控图标控件)。因此,为了提供多种确定虚拟触控事件生成场景的实施方式,第一电子设备的各种显示界面中均可预先设置用于触发处于用于生成虚拟触控事件的用户界面的场景的触发入口,从而便于用户进行遥控操作。
此外,另一种确定处于用于生成虚拟触控事件的用户界面的场景的可能实施方式为:第一电子设备中的处理器还可以通过麦克风270C接收用户用于启动实现遥控功能应用的语音控制指令之后,确定处于用于生成虚拟触控事件的用户界面的场景下。例如,处理器通过麦克风接收到用户发出的“打开智能遥控”的语音控制指令后,则触发对智能遥控App编辑界面的显示,从而使第一电子设备处于用于生成虚拟触控事件的用户界面的场景下。
为了实现基于第一电子设备对第二电子设备的遥控操作,在确定第一电子设备处于用于生成虚拟触控事件的用户界面的场景下之后,用户将第一电子设备的摄像装置对准第二电子设备进行拍摄,以使第一电子设备接收摄像装置摄取的实时影像信息中包含的所述第二电子设备的显示界面,并将所述实时影像信息在第一电子设备的界面上进行实时同步显示。例如,如图5b中(1)显示的为手机的摄取区域范围,在图5b中(1-2)显示的为对智能手机拍摄到的区域范围的显示界面,在智能手机的界面上显示的实时影像信息中包含有智能电视的正面外观和智能电视的显示界面。
需要说明的是,上述实施方式中,由于本申请通过第一电子设备进行拍摄的作用是为了实现对第二电子设备的遥控,因此无需将摄像装置摄取的实时影像信息进行存储,而是可以实施为通过第一电子设备的预览能力实现对摄像装置摄取的实时影像信息进行实时同步显示,从而节省第一电子设备的存储空间,并提高第一电子设备处理效率,减少虚拟触控事件生成过程中的时延,进而避免由于时延导致跟手性差(即针对用户的触控操作,第二电子设备执行相应操作的处理时间长,以使用户对第二电子设备的操作反应是有感知的)的问题。实施时,第一电子设备将摄像装置摄取的实时影像信息直接传输给显示驱动,从而通过显示驱动实现将实时影像信息在应用程序层上的用户界面进行实时的同步显示,即显示为实时影像。
S502:第一电子设备从所述实时影像信息中识别出属于所述第二电子设备的显示界面区域。
由于第一电子设备摄取的实时影像信息一般包含比第二电子设备显示界面所处范围更大的区域范围,且用户对第二电子设备显示界面区域之外的触控操作与虚拟触控事件的生成无关。因此,第一电子设备为了实现对第二电子设备更准确的虚拟触控操作的监控,实施时,可以首先采用目标检测(object detection)和目标跟踪(object tracking)技术对摄取的实时影像信息中的每一帧图像进行分析,从而实现从所述实时影像信息的每一帧图像中识别出第二电子设备的屏幕边框并进行跟踪。
其中,所述屏幕边框用于确定属于第二电子设备的显示界面区域,即所述屏幕边框内部的区域为所述第二电子设备的显示界面区域。然后,第一电子设备再将属于所述屏幕边框内的触控操作筛选出来,而将不属于所述屏幕边框内的触控操作过滤掉。通过识别出第二电子设备的屏幕边框,用来作为第一电子设备对在触摸面板上接收到的用户触控事件的筛选条件,可以较好提高生成虚拟触控事件的准确性及数据处理效率。
此外,通过对第二电子设备的屏幕边框的识别和跟踪,还可进一步实现在第一电子设备上的防抖动功能,实施时可以对摄取的实时影像信息中的每一帧图像进行分析识别,基于识别出的屏幕边框,实现对属于屏幕边框中的显示区域内容进行锁定,从而避免由于第一电子设备的抖动导致的实时影像信息中显示的第二电子设备的显示区域模糊的问题产生。
而且,为了提升用户第一电子设备中显示的第二电子设备的显示界面的整体感,避免用户手持第一电子设备对第二电子设备进行摄取时,由于距离过远导致的在第一电子设备上的触控操作不便捷,或者由于距离过近导致第一电子设备的显示界面无法全部涵盖第二电子设备的显示界面的问题;实施时,可以在第一电子设备的处理器接收到摄像装置摄取的实时影像信息之后,在显示屏294上显示为实时影像,基于显示的实时影像以识别到的第二电子设备的屏幕边框为基准,根据预先配置的显示范围比例,智能调整显示比例。具体可实施为,若第一电子设备判断到所述实时影像中所述第二电子设备的显示界面区域的大小小于第一阈值,则调整所述第一电子设备的焦距。其中,预先配置的显示范围比例,例如可以是第二电子设备的显示界面所占区域范围是所述第一电子设备的显示屏区域范围的三分之二。
示例性的，若由于距离过远，导致第二电子设备的显示界面在所述第一电子设备的显示界面上的显示比例过小、不利于用户操作，则可以基于预先配置的显示范围比例三分之二，在确定当前显示比例为二分之一时，以所述第二电子设备的屏幕边框为基准对显示的实时影像的范围大小进行放大，以使第二电子设备的显示界面在第一电子设备的显示屏幕中的显示比例达到三分之二。同理，若由于距离过近，导致第二电子设备的显示界面在所述第一电子设备的屏幕上无法显示完全，则处理器可通过调用摄像装置的广角镜头来拍摄范围更广的实时影像，以满足第二电子设备的显示界面在所述第一电子设备的屏幕上显示完全、且可以进一步显示至预先配置的显示比例。
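下面给出一个与上述显示比例调整逻辑对应的示意代码草图。其中的函数名、返回值以及三分之二的目标比例均沿用上文示例或为本文假设，实际实现取决于具体设备的取景与变焦能力：

```python
TARGET_RATIO = 2.0 / 3.0   # 预先配置的显示范围比例，沿用上文"三分之二"的示例

def adjust_preview(screen_area: float, frame_area: float, border_fully_visible: bool) -> str:
    """示意：根据第二电子设备显示界面区域在实时影像中的占比，决定取景调整策略。"""
    if not border_fully_visible:
        return "use_wide_angle"            # 距离过近，显示界面未完整入镜：改用广角镜头
    if screen_area / frame_area < TARGET_RATIO:
        return "zoom_in"                   # 距离过远，占比过小：调整焦距进行放大
    return "keep"                          # 占比合适：保持当前取景
```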
其中,第一电子设备识别第二电子设备的屏幕边框的实施方式可以有如下几种:
一种可能的实施方式为,第一电子设备中的应用程序框架层中的所述目标检测模块根据预先训练的目标检测模型识别第二电子设备的屏幕边框。其中,第一电子设备训练目标检测模型的实施方式可以为:第一电子设备通过大量实时影像信息的帧图像作为训练样本,并且以第二电子设备的屏幕边框作为训练目标,学习第二电子设备的屏幕边框的特性,最后以第二电子设备的屏幕边框作为输出,得到所述目标检测模型。
实施时,目标检测模块在接收到实时影像信息之后,将实时影像信息作为所述预先训练的目标检测模型的输入,然后通过所述目标检测模型进行第二电子设备的屏幕边框的识别,最后输出识别到的第二电子设备的屏幕边框,进而根据第二电子设备的屏幕边框确定第二电子设备的显示界面区域。
另一种可能的实施方式,若通过目标检测模型进行屏幕边框的识别结果较差时,例如, 在光线不好的暗光环境下,则从实时影像信息中识别出第二电子设备的屏幕边框较困难;或者,若第二电子设备为智能投影时,则智能投影的投影界面由于不存在明显的屏幕边框,因此可能很难通过预先训练的目标检测模型准确地识别出屏幕边框。在这种场景下,实施时,第一电子设备可通过与第二电子设备之间的交互,向第二电子设备发送生成锚点指令,以使第二电子设备在显示界面(或智能投影的投影界面)的四个角上生成便于第一电子设备进行检测的锚点。在第二电子设备生成锚点之后,第一电子设备通过目标检测模块检测到实时影像信息中包含的四个锚点后(例如图5b(1-1)所示的第二电子设备的显示界面中四个屏幕边框角点位置的A、B、C、D点),则根据A、B、C、D锚点也能确定第二电子设备的屏幕边框。
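作为屏幕边框识别的一个直观参考，下面给出一段基于边缘与轮廓分析的示意代码(使用OpenCV，假设为4.x版本)。它只是检测近似四边形边框的一种可能做法，并不代表上文所述目标检测模型或锚点方案本身的实现：

```python
import cv2
import numpy as np

def detect_screen_border(frame: np.ndarray):
    """示意：从一帧实时影像中检测近似矩形的屏幕边框，返回四个角点(对应上文的A、B、C、D)。"""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        # 仅保留面积足够大的四边形轮廓，视作候选屏幕边框
        if len(approx) == 4 and cv2.contourArea(approx) > 0.05 * h * w:
            if best is None or cv2.contourArea(approx) > cv2.contourArea(best):
                best = approx
    return None if best is None else best.reshape(4, 2)
```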
S503:第一电子设备接收用户在触摸面板上的触控操作,并进行筛选后得到第一触控坐标点(或序列),得到用户触控事件。
实施时,第一电子设备中的处理器通过触摸传感器280K接收到用户在触摸面板上的触控操作之后,获取所述触控操作对应的触控坐标点(或序列)。一种可能的实施方式中,若触控操作为点击操作,则处理器根据该点击操作的触控点坐标、时间戳等信息加工成用户触控事件。另一种可能的实施方式中,若触控操作为滑动操作,则获取所述滑动操作的多个触控坐标点,也即触控坐标序列,触控坐标序列至少包括触控滑动起始坐标、触控滑动结束坐标,以及确定滑动操作的滑动距离及滑动方向,进一步的根据所述触控滑动起始坐标、触控滑动结束坐标、滑动距离、滑动方向、时间戳等信息加工成用户触控事件。其他可能的实施方式中,若触控操作为点击长按操作,则处理器根据该点击长按操作的触控点坐标、长按时长等信息加工成用户触控事件;或者,若触控操作为多指操作事件,则处理器根据各手指的触控点坐标、时间戳等信息加工成用户触控事件;其他可能的用户操作基于相同原理进行加工得到用户触控事件,本申请在此不再赘述。
基于S502中识别出的第二电子设备的屏幕边框,筛选用户触控操作中属于该屏幕边框区域内的第一触控坐标点(或序列),从而筛选出用户在第一电子设备上针对第二电子设备的显示区域内执行的虚拟遥控操作,进而根据所述第一触控坐标点(或序列)生成虚拟触控事件;而第一电子设备对不属于该屏幕边框区域内的第二触控坐标点(或序列)进行忽略,进而不根据所述第二触控坐标点(或序列)生成虚拟触控事件。通过检测第二电子设备的屏幕边框,然后对用户触控事件对应的触控坐标点(或序列)根据检测到的屏幕边框进行筛选,从而可以实现对第二电子设备的屏幕边框区域之外的用户触控事件进行忽略,以减少虚拟触控事件生成时的计算数据量,并且可以提高虚拟触控事件生成的准确性。
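针对上述“筛选屏幕边框区域内触控坐标点”的步骤，下面给出一个示意性的判断与过滤草图。其中假设屏幕边框为凸四边形、角点按顺时针或逆时针顺序给出，函数名为本文虚构：

```python
def inside_quad(point, quad) -> bool:
    """示意：判断触控点是否落在屏幕边框四边形内部(凸四边形假设)。"""
    x, y = point
    sign = 0
    for i in range(4):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % 4]
        cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False     # 位于某条边的另一侧，说明在四边形外
    return True

def filter_touch_points(points, quad):
    """保留属于屏幕边框区域内的第一触控坐标点，忽略边框之外的坐标点。"""
    return [p for p in points if inside_quad(p, quad)]
```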
S504:第一电子设备将所述第一触控坐标点(或序列)转换为所述第二电子设备的相对坐标点(或序列),并生成虚拟触控事件。
具体地,第一电子设备的界面上显示的实时影像中包含的第二电子设备的显示界面,本质上是通过摄像装置摄取得到的三维空间中的第二电子设备的二维投影,由于第一电子设备拍摄角度存在任意性,因此在第一电子设备的界面上显示的第二电子设备的二维投影可能是一个不规则四边形,例如图6a中(1)、图6b中(1)所示的第二电子设备的屏幕边框的示意图。可以理解的是,在第一电子设备上接收到的用户触控操作的触控坐标与第二电子设备上的坐标无法一一对应,因此,为了保障第一电子设备生成虚拟触控事件的准确性,第一电子设备得到第一触控坐标点(或序列)之后,可以通过增强现实技术的原理,将所述第一触控坐标点(或序列)转换为相对应的第二电子设备上的触控相对坐标点(或 序列),并且第一电子设备根据所述触控相对坐标点(或序列)加工生成虚拟触控事件,从而使得第二电子设备在接收并解析所述虚拟触控事件之后,可以根据所述相对坐标点(或序列)得到第二电子设备上的虚拟触控操作,从而得到第二电子设备上准确的触控坐标。
在进行坐标转换的实施过程中，假设将第一电子设备的显示界面中的第二电子设备的屏幕边框的四个角点坐标分别用(x1,y1)、(x2,y2)、(x3,y3)、(x4,y4)来表示，且将第一电子设备的显示界面上的用户触控事件(假设为点击操作)的触控坐标点用(x,y)来表示(倘若用户触控事件为滑动操作，则可以通过触控坐标序列来表示。该实施方式中仅以一个坐标点为例，坐标序列中的其他坐标点的坐标转换方式同理，后续不再赘述)，而将转换后的第二电子设备的显示界面上的虚拟触控操作的相对坐标点以(x',y')来表示。以及，通过线L1、L3表示第二电子设备的一组垂直边框，通过线L2、L4表示第二电子设备的一组水平边框，基于第二电子设备的屏幕边框是一个矩形，则可以得到三维空间中一组垂直边框互相平行，且一组水平边框互相平行。以下结合图6a至6b所示的内容，示例性的介绍第一电子设备通过坐标转换模块进行坐标转换的实施方式，包括以下几种可能的场景：
场景1、在第一电子设备摄取的实时影像信息中,第二电子设备的屏幕边框在二维投影界面上显示为一组垂直边框互相平行。示例性的,参阅图6a中(1)所示的线L1、L3,线L1为所述第二电子设备的屏幕边框的左垂直边框,而线L3为右垂直边框,且线L1、L3在第一电子设备的显示界面上为两条平行线。
第一电子设备在确定虚拟触控事件的相对坐标(x’,y’)的实施过程中,分别确定x’和y’。
其中,虚拟触控事件坐标x’的取值取决于在三维空间中第二电子设备上的(x’,y’)与任一垂直边框的相对距离。示例性的,参阅图6a所示的内容,在二维投影平面中第二电子设备的一组垂直边框,如线L1、L3显示为互相平行的,因此可以通过(x,y)在第一电子设备的显示界面(即第二电子设备的二维投影界面)上与任一垂直边框的相对距离关系,可以采用类推方式得到在三维空间中(x’,y’)与相同位置上的垂直边框的相对距离。一种可能的实施方式为,以触控点与左垂直边框的相对距离为例,第二电子设备上的虚拟触控事件坐标x’可以根据以下公式1得到:
x'=w×(x-x1)/(x2-x1)    （公式1）
其中,式中的w为第二电子设备的屏幕边框的宽度,式中的x为第一电子设备上的用户触控事件坐标中的横坐标,x1为第一电子设备的显示界面中的第二电子设备的左屏幕边框的横坐标,而x2则为右屏幕边框的横坐标。
需要说明的是,第二电子设备的尺寸信息(例如包括第二电子设备的屏幕边框的宽度信息和后续实施例中涉及到的高度信息)可以是在第一电子设备与第二电子设备首次建立通信连接的时候,第一电子设备向第二电子设备请求的;或者是第二电子设备主动发送给第一电子设备的。然后,第一电子设备获取到第二电子设备的尺寸信息之后,可以将该第二电子设备的尺寸信息存储在本地,以便于后续使用。此外,除了第二电子设备的尺寸信息之外,第一电子设备还可以获取第二电子设备的型号信息等其他第二电子设备的相关信息。示例性的,在第一电子设备进行生成虚拟触控事件之前,第一电子设备也可以根据获 取到的第二电子设备的型号信息确定第二电子设备的尺寸信息。其中第一电子设备本地存储有第二电子设备的型号信息与尺寸信息的对应关系;或者第一电子设备可以在本地进行网络查询确定第二电子设备的尺寸信息。
其中，虚拟触控事件坐标y'的取值取决于在三维空间中第二电子设备上的(x',y')与任一水平边框的相对距离，由于在场景1下第一电子设备的显示界面上显示的屏幕边框中的一组水平边框不互相平行，例如图6a的(1)中的线L2、L4显示为互相不平行，因此无法通过(x,y)在第一电子设备的显示界面(即第二电子设备的二维投影界面)上与任一水平边框的相对距离关系，类推得到(x',y')在三维空间中与相同位置上的水平边框的相对距离。所以，无法根据以上确定x'的实施方式确定y'。
虽然,在二维投影界面上第二电子设备的一组水平边框显示为互相不平行,但在三维空间中第二电子设备水平方向上的一组屏幕边框本质上是平行的。在此基础上,实施时,可以通过在二维投影平面上的点(x,y)和线L2(二维投影界面中的上水平边框)、L4(二维投影界面中的下屏幕边框)反推出三维空间中(x’,y’)与第二电子设备的屏幕边框的距离关系。具体地,可以采用三维投影原理,对线L2、L4进行延长,从而得到两条水平边框在二维投影平面中的交点(x5,y5),如图6a中(3)所示;然后,连接交点(x5,y5)与(x,y),得到该连接线与线L1(二维投影界面中的左垂直边框)的交点(x6,y6)。可以理解的是,根据三维投影原理,通过二维投影界面中以交点(x6,y6)为分割点将L1划分为子线段L1a、L1b之后的任一子线段相对于L1所占比例,可以类推得到在三维空间中点(x’,y’)与相同位置上的水平边框的相对距离。一种可能的实施方式为,以子线段L1b相对于L1所占比例为例,第二电子设备上的虚拟触控事件坐标y’可实施为根据以下公式2得到:
y'=h×(y6-y1)/(y3-y1)    （公式2）
其中，式中的h为第二电子设备的屏幕边框的高度，式中的y6为第一电子设备上根据上述实施方式在左屏幕边框L1上得到的交点的纵坐标，y3为左屏幕边框L1与下屏幕边框L4交点的纵坐标，y1为左屏幕边框L1与上屏幕边框L2交点的纵坐标。
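结合场景1的上述描述，下面给出一个坐标转换的示意代码草图。其中角点顺序假设为(x1,y1)左上、(x2,y2)右上、(x3,y3)左下、(x4,y4)右下，公式1、公式2按上文文字说明重构，仅代表一种理解，并非对原公式的权威复现：

```python
def line_intersection(p1, p2, p3, p4):
    """示意：求直线p1p2与直线p3p4的交点；两线平行时返回None。"""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def convert_scene1(touch, corners, w, h):
    """示意：场景1(垂直边框在投影中平行)下，把第一电子设备上的触控点换算为第二电子设备上的相对坐标。"""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corners
    x, y = touch
    x_rel = w * (x - x1) / (x2 - x1)                                     # 对应公式1：利用左右垂直边框的相对距离
    vanish = line_intersection((x1, y1), (x2, y2), (x3, y3), (x4, y4))   # 延长两条水平边框，求交点(x5,y5)
    if vanish is None:                                                   # 水平边框也平行(即场景4)：直接按线性比例
        y_rel = h * (y - y1) / (y3 - y1)
    else:
        x6, y6 = line_intersection(vanish, (x, y), (x1, y1), (x3, y3))   # 连线与左边框L1的交点(x6,y6)
        y_rel = h * (y6 - y1) / (y3 - y1)                                # 对应公式2：按交点在L1上的比例换算
    return x_rel, y_rel
```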
场景2、在第一电子设备摄取的实时影像信息中,第二电子设备的屏幕边框在二维投影界面上显示为一组水平边框互相平行。示例性的,参阅图6b中(1)所示的线L2、L4,线L2为所述第二电子设备的上屏幕边框,而线L4为下屏幕边框,且线L2、L4在第一电子设备的显示界面上为两条平行线。
第一电子设备在确定虚拟触控事件的相对坐标(x’,y’)的实施过程中,分别确定x’和y’。
其中,虚拟触控事件坐标x’的取值取决于在三维空间中第二电子设备上的(x’,y’)与任一垂直边框的相对距离。由于在场景2下第一电子设备的显示界面上显示的屏幕边框中的一组垂直边框互相不平行,例如图6b所示的线L1、L3显示为互相不平行,因此无法根据场景1中确定x’的实施方式确定场景2下的x’,可以基于场景1中确定y’的实施方式相同的原理确定场景2下的x’。
具体实施为，可以采用三维投影原理，对线L1、L3进行延长，从而得到两条垂直边框在二维投影平面中的交点(x5',y5')，如图6b中(3)所示；然后，连接交点(x5',y5')与(x,y)，得到该连接线与线L2(二维投影界面中的上水平边框)的交点(x6',y6')。可以理解的是，根据三维投影原理，通过二维投影界面中以交点(x6',y6')为分割点将L2划分为子线段L2a、L2b之后的任一子线段相对于L2所占比例，可以类推得到在三维空间中点(x',y')与相同位置上的垂直边框的相对距离。一种可能的实施方式为，以子线段L2b相对于L2所占比例为例，第二电子设备上的虚拟触控事件坐标x'可实施为根据以下公式3得到：
x'=w×(x6'-x1)/(x2-x1)    （公式3）
其中,式中的w为第二电子设备的屏幕边框的宽度,式中的x6’为第一电子设备上根据上述实施方式在上屏幕边框L2得到的交点,x2为上屏幕边框L2与右屏幕边框L3交点的横坐标,x1为左屏幕边框L1与上屏幕边框L2交点的横坐标。
其中,虚拟触控事件坐标y’的取值取决于在三维空间中第二电子设备上的(x’,y’)与任一水平边框的相对距离。示例性的,参阅图6b所示的内容,线L2、L4显示为互相平行,因此基于场景1中确定x’的实施方式相同的原理确定场景2下的y’。一种可能的实施方式为,以触控点与上水平边框的相对距离为例,第二电子设备上的虚拟触控事件坐标y’可以根据以下公式4得到:
y'=h×(y-y1)/(y2-y1)    （公式4）
其中,式中的h为第二电子设备的屏幕边框的高度,式中的y为第一电子设备上的用户触控事件坐标中的纵坐标,y1为第一电子设备的显示界面中的第二电子设备的上屏幕边框的纵坐标,而y2则为下屏幕边框的纵坐标。
场景3、在第一电子设备摄取的实时影像信息中,第二电子设备的屏幕边框在二维投影界面上显示为一组水平边框和一组垂直边框均互相不平行。
示例性的，结合图6a至6b所示的内容，其中一组水平边框如图6a中所示的线L2、L4，而一组垂直边框如图6b中所示的线L1、L3。若一组垂直边框和一组水平边框均显示为互相不平行，则根据与场景2下确定x'的实施方式相同的原理确定场景3下的第二电子设备上的虚拟触控事件坐标x'；以及，根据与场景1下确定y'的实施方式相同的原理确定场景3下的第二电子设备上的虚拟触控事件坐标y'，具体的实施方式在此不再赘述。
场景4:若在第一电子设备摄取的实时影像信息中,第二电子设备的屏幕边框在二维投影界面上显示为一组水平边框和一组垂直边框均互相平行。
示例性的，结合图6a至6b所示的内容，其中一组水平边框如图6b中所示的线L2、L4，而一组垂直边框如图6a中所示的线L1、L3。若一组垂直边框和一组水平边框均显示为互相平行，则根据与场景1下确定x'的实施方式相同的原理确定场景4下的x'，具体实施方式在此不再赘述；同理，根据与场景2下确定y'的实施方式相同的原理确定场景4下的y'，具体实施方式在此不再赘述。
S505:第一电子设备建立与第二电子设备之间的通信连接。
为了保证可以实现第一电子设备对第二电子设备的遥控操作,实施时,第一电子设备建立与第二电子设备之间的通信连接。一种可能的实施方式中,第一电子设备可与第二电子设备接入同一局域网内,实现建立通信连接。示例性的,可通过Wi-Fi P2P技术建立无线通信通道,该无线通信通道具有低延时的特点。另一种可能的实施方式为,第一电子设 备可与第二电子设备之间通过建立近距离通信连接。
需要说明的是,本申请中不限定S505的执行时机,例如,第一电子设备建立与第二电子设备之间的通信连接也可以是在确定处于虚拟触控事件生成场景之前执行,比如,第一电子设备与第二电子设备一直接入到同一局域网内,即一直维持着通信连接。或者,在前述实施例中介绍到的第一电子设备需要向第二电子设备发送锚点生成指令时建立第一电子设备与第二电子设备之间的通信连接。可以理解的是,在第一电子设备需要与第二电子设备进行交互时,则便可以建立第一电子设备与第二电子设备的通信连接。
S506:第一电子设备发送所述虚拟触控事件给第二电子设备;其中,所述虚拟触控事件中携带所述相对坐标点(或序列)。
通过本申请采用的仅将触控事件中包含的触控坐标序列传输给第二电子设备的实施方式,由于传输数据量小的特点,所以数据传输所产生的时延低,因此可较好地减轻将第二电子设备的整个显示界面采用编解码进行传输的技术方案所导致的跟手性差问题。
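下面给出一个与“仅传输触控坐标”思路对应的发送端示意代码。报文格式(JSON加长度前缀)与字段名均为本文为说明而假设，并非既有协议：

```python
import json
import socket

def send_virtual_touch_event(sock: socket.socket, action: str, relative_points, timestamp_ms: int) -> None:
    """示意：把虚拟触控事件(动作类型+相对坐标序列)序列化后发送给第二电子设备。
    相比对整个显示画面进行编解码传输，数据量小、时延低。"""
    payload = json.dumps({
        "action": action,
        "points": relative_points,
        "ts": timestamp_ms,
    }).encode("utf-8")
    sock.sendall(len(payload).to_bytes(4, "big") + payload)   # 先发4字节长度，再发内容
```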
S507:第二电子设备解析所述虚拟触控事件,并根据所述相对坐标点(或序列)执行相应操作。
第二电子设备在接收到第一电子设备发送的虚拟触控事件之后，通过自身的操作系统进行相应的处理。由于第二电子设备也是一种电子设备，因此第二电子设备的硬件架构也可以如图2所示，软件架构如图3所示。其中，第二电子设备中的软件架构对应的软件程序和/或模块可以存储在内部存储器221中，处理器210可以运行内部存储器221中存储的软件程序和应用以执行本申请实施例提供的遥控方法的流程。
实施时,第二电子设备在接收并解析第一电子设备发送的虚拟触控事件中的相对坐标点(或序列)之后,确定所述相对坐标点(或序列)在第二电子设备的显示界面上的触控位置,并且第二电子设备识别该触控位置涉及到的控件;然后,操作相应控件,以使第二电子设备根据所述虚拟触控事件执行相应操作。
示例性的,若用户的触控事件为用户在第一电子设备的显示界面中显示的第二电子设备的屏幕边框中的右半侧区域执行自下向上滑动的操作,第二电子设备在从接收到的虚拟触控事件解析出相对应的坐标序列之后,确定该相对应的坐标序列包含滑动起始坐标和滑动结束坐标,且滑动起始坐标和滑动结束坐标的位置区域为显示界面的右半侧区域,以及确定滑动方向为y轴的正方向(假设根据横屏显示界面建立坐标轴,比如图6a、6b中显示的建立坐标轴方式),则确定第一电子设备发送的所述触控事件是为了实现增大当前播放节目的音量。第二电子设备确定所述虚拟触控事件对应要调用的控件为音量控件之后,通过音量控件执行对所述虚拟触控事件的回调操作,以实现“增大音量”的设置。
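下面给出第二电子设备侧处理逻辑的一个示意草图，用于说明“相对坐标映射为本机像素坐标、再判断操作意图”的流程。其中坐标系假设原点在左上角、y轴向下，音量判断规则沿用上文示例，函数与返回值均为本文假设：

```python
def handle_virtual_touch_event(event: dict, screen_w: int, screen_h: int, ref_w: float, ref_h: float):
    """示意：第二电子设备解析虚拟触控事件后，映射坐标并判断应执行的操作。
    ref_w、ref_h为生成相对坐标时使用的屏幕边框宽高，screen_w、screen_h为本机实际分辨率。"""
    pts = [(x / ref_w * screen_w, y / ref_h * screen_h) for x, y in event["points"]]
    if event["action"] == "click":
        return ("click", pts[0])                    # 后续由系统定位该位置涉及的控件并进行回调
    (x0, y0), (xn, yn) = pts[0], pts[-1]
    if event["action"] == "slide" and x0 > screen_w / 2:
        # 右半侧区域的滑动用于调节音量：自下向上增大，自上而下降低
        return ("volume_up" if yn < y0 else "volume_down", abs(yn - y0))
    return ("ignore", None)
```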
根据前述介绍的实施方式,可见本申请实现了通过第一电子设备生成对第二电子设备的虚拟触控事件。并且,本申请采用的实施方式中,第一电子设备与第二电子设备建立通信连接之后,传输的数据仅包含根据用户触控事件的触控坐标点(或序列)得到的虚拟触控事件,因此具有传输数据量小的特性,进而使得采用本申请实施方式存在时延低的特点,从而可以较好提高第一电子设备对第二电子设备的操控准确性,并且可以满足第二电子设备的多种遥控场景的需求。
上述本申请提供的实施例中,从电子设备作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,第一电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实 现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
基于以上实施例,本申请实施例提供了一种遥控装置,该装置应用于第一电子设备中,用于实现本申请实施例提供的一种遥控方法。参阅图7,所述装置700包括:收发单元701、处理单元702。所述收发单元701用于接收第一操作;所述处理单元702用于响应于接收到所述第一操作,启动所述摄像装置;利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;显示第一用户界面,所述第一用户界面中包括所述第一图像;所述收发单元701用于接收针对于所述第一用户界面的触控操作;所述处理单元702用于响应于接收到所述针对于所述第一用户界面的触控操作,获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;通过所述收发单元701向所述第二电子设备发送所述虚拟触控事件,以使所述第二电子设备接收所述虚拟触控事件之后,响应于所述虚拟触控事件,执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。
一种可能的设计中,所述收发单元701用于接收第一操作时,具体用于:在所述第一电子设备显示有第一应用图标,所述收发单元701接收针对于所述第一应用图标的操作;或者,所述收发单元701接收第一语音操作;或者,所述收发单元701接收第一手势操作。
一种可能的设计中，在所述收发单元701接收针对于所述第一用户界面的触控操作之前，所述处理单元702还用于确定所述第一用户界面中的所述第二电子设备屏幕边框内部的区域为所述第二电子设备的显示界面区域。
一种可能的设计中,所述处理单元702用于确定所述第二电子设备屏幕边框内部的区域时,还用于:通过收发单元701向所述第二电子设备发送生成锚点指令,以使所述第二电子设备接收所述生成锚点指令之后,响应于所述生成锚点指令,在显示界面上生成锚点;所述处理单元702用于根据获取到的所述第一图像中的所述锚点的信息确定所述第二电子设备屏幕边框内部的区域。
一种可能的设计中,所述处理单元702用于识别所述第二电子设备的显示界面区域后,还用于:判断所述第二电子设备的显示界面区域的大小是否小于第一阈值;若所述第二电子设备的显示界面区域的大小小于第一阈值,所述第一电子设备调整所述摄像装置的焦距至第一焦距。
一种可能的设计中,所述收发单元701接收到针对于所述第一用户界面的触控操作后,生成所述虚拟触控事件之前,所述处理单元702还用于:获取至少一个触控点坐标;确定所述至少一个触控点坐标是否在所述第二电子设备的显示界面区域内;响应于所述第一电子设备确定所述至少一个触控点坐标在所述第二电子设备的显示界面区域内,所述第一电子设备生成所述虚拟触控事件。
一种可能的设计中,所述处理单元702基于所述触控点坐标生成虚拟触控事件,具体用于:将所述获取到的针对于所述第一用户界面的触控操作对应的触控点坐标转换成所述第二电子设备当前的显示界面中的相对坐标;根据所述第二电子设备当前的显示界面中的相对坐标生成所述虚拟触控事件。
一种可能的设计中，所述触控操作包括点击操作和/或滑动操作，所述针对于所述第一用户界面的触控操作对应的触控点坐标包括单个坐标和/或多个坐标。
一种可能的设计中,所述第一电子设备为手机,所述第二电子设备为智能电视,所述摄像装置为所述手机的后置摄像头,其中所述第二电子设备当前的显示界面为所述智能电视的菜单界面;所述第一用户界面为第一电子设备进入遥控模式后的显示界面;所述第一图像为包括所述智能电视的菜单界面的图像,所述智能电视的菜单界面中包括多个控件,所述多个控件对应于不同的功能;所述第二电子设备的显示界面区域为所述手机获取的所述智能电视的菜单界面的图像区域;所述针对于所述第一用户界面的触控操作为针对于所述第一用户界面中所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件的点击操作;所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作为所述第二电子设备执行所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件对应的功能。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种遥控方法,适用于第一电子设备,所述第一电子设备包括摄像装置,所述第一电子设备和第二电子设备建立无线连接,其特征在于,包括:
    所述第一电子设备接收第一操作;
    响应于接收到所述第一操作,所述第一电子设备启动所述摄像装置;
    所述第一电子设备利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;
    所述第一电子设备显示第一用户界面,所述第一用户界面中包括所述第一图像;
    所述第一电子设备接收针对于所述第一用户界面的触控操作;
    响应于接收到所述针对于所述第一用户界面的触控操作,所述第一电子设备获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;
    所述第一电子设备向所述第二电子设备发送所述虚拟触控事件,以使所述第二电子设备接收所述虚拟触控事件之后,响应于所述虚拟触控事件,执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。
  2. 根据权利要求1所述的方法,其特征在于,所述第一电子设备接收第一操作,包括:
    所述第一电子设备显示第一应用图标,所述第一电子设备接收针对于所述第一应用图标的操作;或者,
    所述第一电子设备接收第一语音操作;或者,
    所述第一电子设备接收第一手势操作。
  3. 根据权利要求1所述的方法,其特征在于,在所述第一电子设备接收针对于所述第一用户界面的触控操作之前,所述方法还包括:所述第一电子设备确定所述第一用户界面中的所述第二电子设备屏幕边框内部的区域为所述第二电子设备的显示界面区域。
  4. 根据权利要求3所述的方法,其特征在于,所述第一电子设备确定所述第二电子设备屏幕边框内部的区域,包括:
    所述第一电子设备向所述第二电子设备发送生成锚点指令,以使所述第二电子设备接收所述生成锚点指令之后,响应于所述生成锚点指令,在显示界面上生成锚点;
    所述第一电子设备根据获取到的所述第一图像中的所述锚点的信息确定所述第二电子设备屏幕边框内部的区域。
  5. 根据权利要求3所述的方法,其特征在于,所述第一电子设备识别所述第二电子设备的显示界面区域后,所述方法还包括:
    判断所述第二电子设备的显示界面区域的大小是否小于第一阈值;
    若所述第二电子设备的显示界面区域的大小小于第一阈值,所述第一电子设备调整所述摄像装置的焦距至第一焦距。
  6. 根据权利要求3或4所述的方法,其特征在于,所述第一电子设备接收到针对于所述第一用户界面的触控操作后,生成所述虚拟触控事件之前,所述方法还包括:
    所述第一电子设备获取至少一个触控点坐标;
    所述第一电子设备确定所述至少一个触控点坐标是否在所述第二电子设备的显示界面区域内；
    响应于所述第一电子设备确定所述至少一个触控点坐标在所述第二电子设备的显示界面区域内,所述第一电子设备生成所述虚拟触控事件。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述基于所述触控点坐标生成虚拟触控事件,包括:
    所述第一电子设备将所述获取到的针对于所述第一用户界面的触控操作对应的触控点坐标转换成所述第二电子设备当前的显示界面中的相对坐标;
    所述第一电子设备根据所述第二电子设备当前的显示界面中的相对坐标生成所述虚拟触控事件。
  8. 根据权利要求1至7任一项所述的方法,其特征在于,所述触控操作包括点击操作和/或滑动操作,所述针对于所述第一用户界面的触控操作对应的触控点坐标包括单个坐标和/或多个坐标。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,包括:
    所述第一电子设备为手机,所述第二电子设备为智能电视,所述摄像装置为所述手机的后置摄像头,其中所述第二电子设备当前的显示界面为所述智能电视的菜单界面;
    所述第一用户界面为第一电子设备进入遥控模式后的显示界面;
    所述第一图像为包括所述智能电视的菜单界面的图像,所述智能电视的菜单界面中包括多个控件,所述多个控件对应于不同的功能;
    所述第二电子设备的显示界面区域为所述手机获取的所述智能电视的菜单界面的图像区域;
    所述针对于所述第一用户界面的触控操作为针对于所述第一用户界面中所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件的点击操作;
    所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作为所述第二电子设备执行所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件对应的功能。
  10. 一种电子设备,对应于第一电子设备,所述第一电子设备和第二电子设备建立无线连接,其特征在于,所述第一电子设备包括:
    摄像装置;
    触摸屏,其中,所述触摸屏包括触控面板和显示屏;
    一个或多个处理器;
    存储器;
    多个应用程序;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行以下步骤:
    接收第一操作;
    响应于接收到所述第一操作,启动所述摄像装置;
    利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;
    显示第一用户界面,所述第一用户界面中包括所述第一图像;
    接收针对于所述第一用户界面的触控操作;
    响应于接收到所述针对于所述第一用户界面的触控操作,获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;
    向所述第二电子设备发送所述虚拟触控事件,以使所述第二电子设备接收所述虚拟触控事件之后,响应于所述虚拟触控事件,执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。
  11. 根据权利要求10所述的电子设备,其特征在于,所述第一电子设备接收第一操作,包括:
    显示第一应用图标,接收针对于所述第一应用图标的操作;或者,
    接收第一语音操作;或者,
    接收第一手势操作。
  12. 根据权利要求10所述的电子设备,其特征在于,当所述指令被所述第一电子设备执行时,使得所述第一电子设备还执行:在接收针对于所述第一用户界面的触控操作之前确定所述第一用户界面中的所述第二电子设备屏幕边框内部的区域为所述第二电子设备的显示界面区域。
  13. 根据权利要求12所述的电子设备,其特征在于,所述第一电子设备确定所述第二电子设备屏幕边框内部的区域,具体包括:
    所述第一电子设备向所述第二电子设备发送生成锚点指令,以使所述第二电子设备接收所述生成锚点指令之后,响应于所述生成锚点指令,在显示界面上生成锚点;
    所述第一电子设备根据获取到的所述第一图像中的所述锚点的信息确定所述第二电子设备屏幕边框内部的区域。
  14. 根据权利要求12所述的电子设备,其特征在于,当所述指令被所述第一电子设备执行时,使得所述第一电子设备识别所述第二电子设备的显示界面区域后还执行:
    判断所述第二电子设备的显示界面区域的大小是否小于第一阈值;
    若所述第二电子设备的显示界面区域的大小小于第一阈值,调整所述摄像装置的焦距至第一焦距。
  15. 根据权利要求12或13所述的电子设备,其特征在于,当所述指令被所述第一电子设备执行时,使得所述第一电子设备接收到针对于所述第一用户界面的触控操作后,生成所述虚拟触控事件之前,还执行:
    获取至少一个触控点坐标;
    确定所述至少一个触控点坐标是否在所述第二电子设备的显示界面区域内;
    响应于所述第一电子设备确定所述至少一个触控点坐标在所述第二电子设备的显示界面区域内,生成所述虚拟触控事件。
  16. 根据权利要求10至15任一项所述的电子设备,其特征在于,当所述指令被所述第一电子设备执行时,使得所述第一电子设备执行基于所述触控点坐标生成虚拟触控事件时,具体执行:
    将所述获取到的针对于所述第一用户界面的触控操作对应的触控点坐标转换成所述第二电子设备当前的显示界面中的相对坐标;
    根据所述第二电子设备当前的显示界面中的相对坐标生成所述虚拟触控事件。
  17. 根据权利要求10至16任一项所述的电子设备,其特征在于,所述触控操作包括点击操作和/或滑动操作,所述针对于所述第一用户界面的触控操作对应的触控点坐标包括单个坐标和/或多个坐标。
  18. 根据权利要求10至17任一项所述的电子设备,其特征在于,包括:
    所述第一电子设备为手机,所述第二电子设备为智能电视,所述摄像装置为所述手机的后置摄像头,其中所述第二电子设备当前的显示界面为所述智能电视的菜单界面;
    所述第一用户界面为第一电子设备进入遥控模式后的显示界面;
    所述第一图像为包括所述智能电视的菜单界面的图像,所述智能电视的菜单界面中包括多个控件,所述多个控件对应于不同的功能;
    所述第二电子设备的显示界面区域为所述手机获取的所述智能电视的菜单界面的图像区域;
    所述针对于所述第一用户界面的触控操作为针对于所述第一用户界面中所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件的点击操作;
    所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作为所述第二电子设备执行所述智能电视的菜单界面的图像中所述多个控件中的其中一个控件对应的功能。
  19. 一种遥控系统,包括第一电子设备和第二电子设备,所述第一电子设备包括摄像装置,所述第一电子设备和第二电子设备建立无线连接,其特征在于,包括:
    所述第一电子设备用于接收第一操作;
    响应于接收到所述第一操作,所述第一电子设备启动所述摄像装置;
    所述第一电子设备利用所述摄像装置获取第一图像,所述第一图像包括所述第二电子设备的显示界面区域,所述第二电子设备的显示界面区域中的内容为所述第二电子设备当前的显示界面;
    所述第一电子设备用于显示第一用户界面,所述第一用户界面中包括所述第一图像;
    所述第一电子设备用于接收针对于所述第一用户界面的触控操作;
    响应于接收到所述针对于所述第一用户界面的触控操作,所述第一电子设备获取针对于所述第一用户界面的触控操作对应的触控点坐标,并基于所述触控点坐标生成虚拟触控事件,其中,所述虚拟触控事件包括所述第二电子设备当前的显示界面中的相对坐标;
    所述第一电子设备向所述第二电子设备发送所述虚拟触控事件;
    所述第二电子设备接收所述虚拟触控事件;
    响应于接收到所述虚拟触控事件,所述第二电子设备执行所述第二电子设备当前的显示界面中的相对坐标对应的操作。