
CN114079730B - Shooting method and shooting system - Google Patents

Shooting method and shooting system Download PDF

Info

Publication number
CN114079730B
Authority
CN
China
Prior art keywords
electronic device
image
information
shooting
wearable device
Prior art date
Legal status
Active
Application number
CN202010839365.9A
Other languages
Chinese (zh)
Other versions
CN114079730A (en)
Inventor
Liu Liang (刘亮)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010839365.9A
Priority to PCT/CN2021/112362 (published as WO2022037479A1)
Publication of CN114079730A
Application granted
Publication of CN114079730B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method and a shooting system. The shooting method is applied to the shooting system, which comprises an electronic device and a first wearable device connected to it. The electronic device receives a first operation and, in response, obtains a multimedia file (for example, a picture file or a video file) captured by a camera. The first wearable device detects first sensor data through at least one sensor. The electronic device obtains first information corresponding to the first sensor data and stores the multimedia file in association with the first information. Because the first information comprises biometric information of the user, associating it with the information of the picture/video enables more accurate feature identification of the picture/video and makes it convenient to classify the stored pictures/videos according to the biometric information.

Description

Shooting method and shooting system
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a shooting method and a shooting system.
Background
In the age of networked digital information, consumers' media resources (such as pictures and videos) grow exponentially, which poses challenges for storing and using them. A typical task is searching for a specified type of resource. For example, a user may search for pictures with preset features through a search engine, or search the resource library stored on an electronic device for browsing and sharing, such as looking through a mobile phone album for photos with specified features, e.g. pictures of a child dancing or kicking.
However, feature extraction and labeling rely on a large amount of hardware resources, and when the amount of resources is large the computation time grows geometrically. Moreover, existing image recognition technology can only recognize and extract external features; there is no technique for classifying images according to a person's psychological and physiological characteristics.
Disclosure of Invention
The application provides a shooting method and a shooting system that associate biometric information with the information of a picture/video while the picture/video is being generated, enabling more accurate feature identification of the picture/video.
In a first aspect, the present application provides a shooting system comprising an electronic device and a first wearable device, where the electronic device comprises a camera. The electronic device is configured to establish a connection with the first wearable device, to receive a first operation, and, in response to the first operation, to obtain a multimedia file captured by the camera. The first wearable device is configured to detect first sensor data through at least one sensor. The electronic device is further configured to obtain first information corresponding to the first sensor data and to store the multimedia file in association with the first information. The multimedia file includes a picture file, a video file, or the like. The first operation may include, but is not limited to, a click, double click, long press, or slide, and is used to trigger the electronic device to take a picture/video. After the connection between the electronic device and the first wearable device is established, the electronic device responds to the first operation by capturing a picture/video through the camera. The electronic device obtains the biometric information of the user (e.g., heart rate, blood pressure, movement posture) detected by the first wearable device through its sensors; this biometric information is also referred to as the first information. The electronic device associates the biometric information with the captured picture/video, or with the person in the captured picture/video to whom the biometric information corresponds, and generates a picture/video comprising biometric information indicative of the user's biometric characteristics. In this shooting system, the electronic device exchanges information with the first wearable device while the picture/video is being generated, obtains the biometric information, and associates it with the information of the picture/video, so more accurate feature identification is performed on the picture/video; the electronic device stores the picture/video with the biometric information, making it convenient to classify the stored pictures/videos according to the biometric information.
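As a concrete, non-authoritative illustration of the association this aspect describes, the following Kotlin sketch models a captured multimedia file linked to the first information. Every identifier in it (FirstInformation, MultimediaFile, the field names) is a hypothetical choice; the patent does not prescribe a data model.

```kotlin
// Minimal sketch of the entities named in the first aspect. All identifiers
// are illustrative assumptions, not terminology from the patent.

// "First information": biometric information derived from wearable sensor data.
data class FirstInformation(
    val heartRateBpm: Int? = null,   // health state information
    val motionState: String? = null, // movement state, e.g. "running"
    val emotionState: String? = null // emotional state, e.g. "excited"
)

// A captured picture/video associated with the first information.
data class MultimediaFile(
    val path: String,                       // where the camera output is stored
    val mimeType: String,                   // e.g. "image/jpeg" or "video/mp4"
    val firstInformation: FirstInformation? // null if no wearable was connected
)

fun main() {
    // The electronic device captures a picture and stores it in association
    // with the biometric information obtained from the first wearable device.
    val photo = MultimediaFile(
        path = "/storage/DCIM/IMG_0001.jpg",
        mimeType = "image/jpeg",
        firstInformation = FirstInformation(heartRateBpm = 128, motionState = "dancing")
    )
    println(photo)
}
```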
In one possible implementation, the first wearable device is further configured to determine the first information according to the first sensor data and to send the first information to the electronic device. The first information is biometric information. This describes one way for the electronic device to obtain the first information: the first wearable device detects sensor data (the first sensor data) through one or more sensors, determines the first information based on it, and sends the first information to the electronic device. Because the first wearable device processes the first sensor data, the electronic device's time and resources for processing it are saved.
In one possible implementation, the first wearable device is further configured to send the first sensor data to the electronic device, and the electronic device is further configured to determine the first information according to the first sensor data. The first information is biometric information. This describes another way for the electronic device to obtain the first information: the first wearable device detects sensor data (the first sensor data) through one or more sensors and sends it to the electronic device, which determines the biometric information from the data it receives.
In one possible implementation, the electronic device is specifically configured to obtain, in response to the first operation, the first information corresponding to the first sensor data. This describes the timing at which the electronic device obtains the first information. The first operation includes a user operation that triggers the electronic device to take a picture/video; after receiving it, the electronic device obtains the first information based on the first operation. Because the electronic device processes the first sensor data, the first wearable device's time and resources for processing it are saved.
In one possible implementation, the electronic device is further configured to send a first request message to the first wearable device in response to the first operation, and the first wearable device is specifically configured to send the first information to the electronic device in response to the first request message. This describes one way for the electronic device to obtain the first information in response to the first operation: after receiving the first operation, the electronic device requests the biometric information (the first information) from the first wearable device, the first wearable device sends the first information back, and the electronic device obtains it.
In one possible implementation, the electronic device is further configured to send a first request message to the first wearable device in response to the first operation, and the first wearable device is specifically configured to send the first sensor data to the electronic device in response to the first request message. This describes another way for the electronic device to obtain the first information in response to the first operation: after receiving the first operation, the electronic device requests the sensor data (the first sensor data) from the first wearable device and determines the biometric information (the first information) from the data it receives.
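The two request/response variants above differ only in where the first sensor data is turned into first information. Below is a minimal Kotlin sketch of both paths, assuming a hypothetical WearableLink interface standing in for the Bluetooth transport; none of these names come from the patent.

```kotlin
// Sketch of the two acquisition variants. The transport is abstracted away.

data class SensorData(val heartRateSamples: List<Int>)
data class FirstInformation(val summary: String)

interface WearableLink {
    // Variant 1: the wearable derives the first information itself.
    fun requestFirstInformation(): FirstInformation
    // Variant 2: the wearable returns raw sensor data for the phone to process.
    fun requestSensorData(): SensorData
}

// Hypothetical derivation performed on the electronic device (variant 2).
fun deriveFirstInformation(data: SensorData): FirstInformation {
    val avg = data.heartRateSamples.average().toInt()
    return FirstInformation("average heart rate $avg bpm")
}

// Invoked when the first operation (e.g. a tap on the shutter button) arrives.
fun onFirstOperation(link: WearableLink, wearableComputes: Boolean): FirstInformation =
    if (wearableComputes)
        link.requestFirstInformation() // saves the phone's processing time
    else
        deriveFirstInformation(link.requestSensorData()) // saves the wearable's processing time
```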
In one possible implementation, the electronic device is further configured to display a shooting preview interface that includes a shooting button, and the first operation includes an input operation acting on the shooting button. This provides an application scenario: the electronic device receives the first operation on the shooting preview interface, which triggers it to take a picture/video.
In one possible implementation, the attribute information of the multimedia file includes the first information. The electronic device stores the multimedia file with the first information fusion-encoded into it, so the user can view the first information by viewing the attribute information of the multimedia file.
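The patent does not specify how the fusion coding is realized. One plausible realization on Android, shown here purely as an assumption, is to write the first information into the JPEG's EXIF user-comment tag with the androidx ExifInterface API, so that it travels inside the file's attribute information:

```kotlin
import androidx.exifinterface.media.ExifInterface

// One possible "fusion coding": store the first information inside the image
// file's EXIF attributes. This encoding is an illustrative assumption only.
fun attachFirstInformation(jpegPath: String, firstInformation: String) {
    val exif = ExifInterface(jpegPath)
    exif.setAttribute(ExifInterface.TAG_USER_COMMENT, firstInformation)
    exif.saveAttributes() // rewrites the file with the added attribute
}

// The user (or a gallery app) can later read it back from the attributes.
fun readFirstInformation(jpegPath: String): String? =
    ExifInterface(jpegPath).getAttribute(ExifInterface.TAG_USER_COMMENT)
```

A video container would need a different carrier (for example a metadata track), which the patent likewise leaves open.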
In one possible implementation, establishing the connection with the first wearable device includes: establishing the connection between the electronic device and the first wearable device in response to the electronic device entering a preset shooting mode. This describes when the electronic device connects to the first wearable device; in the preset shooting mode, the electronic device displays a shooting preview interface. When the electronic device detects a user operation for entering the preset shooting mode, it automatically turns on Bluetooth and automatically establishes a Bluetooth connection with the first wearable device.
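A rough sketch of this step using Android's classic Bluetooth API follows. The wearable's name and the serial-port UUID are placeholders, and a production app would also need to handle runtime permissions (BLUETOOTH_CONNECT on recent Android) and the deprecation of getDefaultAdapter() and enable().

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.BluetoothSocket
import java.util.UUID

// Standard serial-port-profile UUID; the actual service UUID of a real
// wearable is an assumption a concrete product would define.
private val SPP_UUID: UUID = UUID.fromString("00001101-0000-1000-8000-00805F9B34FB")

// Called when the user enters the preset shooting mode: turn Bluetooth on and
// connect to the (already bonded) first wearable device automatically.
fun connectFirstWearableOnModeEntry(wearableName: String): BluetoothSocket? {
    val adapter = BluetoothAdapter.getDefaultAdapter() ?: return null // no Bluetooth hardware
    if (!adapter.isEnabled) adapter.enable() // "automatically starts Bluetooth"
    val device = adapter.bondedDevices.firstOrNull { it.name == wearableName }
        ?: return null // wearable not paired yet
    return device.createRfcommSocketToServiceRecord(SPP_UUID).apply { connect() }
}
```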
In one possible implementation, the first wearable device is further configured to receive a second operation and, in response, to instruct the electronic device to turn on the camera; the electronic device is further configured to display a shooting preview interface showing the preview image captured by the camera; and the first operation includes an operation on the shooting preview interface. This describes a way of triggering picture/video capture from the first wearable device side. The second operation may include, but is not limited to, a click, double click, long press, or slide acting on the first wearable device; it triggers the first wearable device to instruct the electronic device to turn on the camera so that a picture/video can be taken.
In a possible implementation, the electronic device is further configured to display the multimedia file and at least part of the first information. The user may enter a gallery to view the multimedia file; the electronic device displays the multimedia file together with at least part of the first information, which indicates the user's biometric information. Optionally, the multimedia file and part of the first information are displayed on the display interface of the electronic device, and the user can view all of the first information by viewing the detailed information, etc.
In one possible implementation, displaying the multimedia file and at least part of the first information includes: displaying them in response to the multimedia file including a preset facial image, where the preset facial image corresponds to the first wearable device. The electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device and performs similarity matching between the preset facial image and one or more persons in the multimedia file. If the match with one of the persons succeeds, that person in the multimedia file is the user of the first wearable device, and the electronic device displays at least part of the first information; otherwise the first information is not displayed, which protects the user's privacy.
The preset facial image has an association relationship with the first wearable device. The facial information may be preset in the electronic device by the user, uploaded to the electronic device as an image or a video, or preset in the first wearable device by the user and provided to the electronic device by the first wearable device; the application is not limited in this respect.
In one possible implementation, displaying the multimedia file and at least part of the first information includes: in response to the multimedia file including a first facial image and a second facial image, where the first facial image matches the preset facial image, displaying at least part of the first information in a first area of the multimedia file. The preset facial image corresponds to the first wearable device, and the distance between the first area and the display area of the first facial image is smaller than the distance between the first area and the display area of the second facial image. This describes a way of determining where at least part of the first information is displayed on the multimedia file: similarity matching is performed between the preset facial image and one or more persons in the multimedia file, and if the match with one of the persons succeeds, the shooting device displays at least part of the first information near that person. This strengthens the visual correspondence between the user and the first information, so the user corresponding to the first information can be seen intuitively, improving the user experience.
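The placement rule (the first area lies closer to the matched face than to any other face) can be sketched geometrically. In the Kotlin sketch below, the Rect type, the matching stub, and the above-the-face placement are all illustrative assumptions:

```kotlin
import kotlin.math.hypot

// Simplified bounding box for a detected face or an overlay region.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val centerX get() = (left + right) / 2
    val centerY get() = (top + bottom) / 2
}

// Stand-in for similarity matching against the preset facial image; a real
// system would use a face-recognition model and its own threshold.
fun matchesPresetFace(similarity: Float, threshold: Float = 0.8f) = similarity >= threshold

// Place the first-information overlay just above the matched face, so it is
// nearer to the first facial image than to any other facial image.
fun overlayRegionFor(matchedFace: Rect, height: Float = 40f) =
    Rect(matchedFace.left, matchedFace.top - height, matchedFace.right, matchedFace.top)

fun centerDistance(a: Rect, b: Rect) = hypot(a.centerX - b.centerX, a.centerY - b.centerY)

fun main() {
    val firstFace = Rect(100f, 100f, 200f, 200f)  // matches the preset image
    val secondFace = Rect(400f, 100f, 500f, 200f) // another person
    val area = overlayRegionFor(firstFace)
    check(centerDistance(area, firstFace) < centerDistance(area, secondFace))
    println("first information displayed at $area, next to the matched face")
}
```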
In one possible implementation, the electronic device is further configured to display, before receiving the first operation, a shooting preview interface comprising the preview image captured by the camera, and to display at least part of second information on the preview image, where the second information corresponds to second sensor data detected by the first wearable device. The second information is biometric information. By displaying at least part of the second information on the preview image, the biometric information is shown on the preview interface in real time and the user can view it in real time. In this way, the image or video file acquired by the electronic device may include more biometric information, derived from sensor data acquired by different sensors.
In one possible implementation, displaying at least part of the second information on the preview image includes: displaying the preview image and at least part of the second information in response to the preview image including the preset facial image, where the preset facial image corresponds to the first wearable device. The electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device and performs similarity matching between it and one or more persons on the preview image. If the match with one of the persons succeeds, that person on the preview image is the user of the first wearable device, and the electronic device displays at least part of the second information; otherwise the second information is not displayed, which protects the user's privacy.
In one possible implementation, displaying at least part of the second information on the preview image includes: in response to the preview image including a third facial image and a fourth facial image, where the third facial image matches the preset facial image, displaying at least part of the second information in a second area of the preview image. The preset facial image corresponds to the first wearable device, and the distance between the second area and the display area of the third facial image is smaller than the distance between the second area and the display area of the fourth facial image. This describes a way of determining where at least part of the second information is displayed on the preview image: similarity matching is performed between the preset facial image and one or more persons in the preview image, and if the match with one of the persons succeeds, the shooting device displays at least part of the biometric information near that person. This strengthens the visual correspondence between the user and the second information on the preview image, improving the user experience.
In one possible implementation, the electronic device is further configured to output a first prompt, which prompts the user to aim at a face, in response to the preview image not including the preset facial image. The electronic device performs similarity matching between the preset facial image and one or more persons in the frame; if the matching fails, the preview image does not include the user of the first wearable device, and the electronic device outputs prompt information asking the user to aim the shooting angle at the face. This avoids capturing a multimedia file that does not include the user of the first wearable device and improves the user experience.
In one possible implementation, the first information includes at least one of: health state information, exercise state information, or emotional state information.
In one possible implementation, the first sensor data comprises data detected by at least one sensor comprising at least one of: an acceleration sensor, gyroscope sensor, geomagnetic sensor, barometric pressure sensor, heart rate sensor, blood pressure sensor, electrocardiogram sensor, electromyography sensor, body temperature sensor, galvanic skin sensor, air temperature and humidity sensor, illumination sensor, or bone conduction sensor.
In one possible implementation, the system further comprises a second wearable device; the electronic device is further configured to establish a connection with the second wearable device; the second wearable device is configured to detect fourth sensor data through at least one sensor; and the first information also corresponds to the fourth sensor data. This describes the case in which the electronic device establishes connections with two wearable devices (the first wearable device and the second wearable device): the electronic device obtains first information corresponding to the sensor data of both devices (the first sensor data and the fourth sensor data). The electronic device may likewise establish connections with more than two wearable devices.
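A tiny sketch of how the first information could correspond to sensor data from both wearables at once follows; the per-device reading type and the "first device wins" merge rule are assumptions for illustration only.

```kotlin
// Hypothetical per-wearable reading; fields are illustrative.
data class WearableReading(val deviceId: String, val heartRateBpm: Int?, val stepCadence: Int?)

// Fuse the first and fourth sensor data into one set of first information,
// taking each signal from whichever wearable provides it (first device wins).
fun fuse(first: WearableReading, second: WearableReading): Map<String, Any> = buildMap {
    (first.heartRateBpm ?: second.heartRateBpm)?.let { put("heartRateBpm", it) }
    (first.stepCadence ?: second.stepCadence)?.let { put("stepCadence", it) }
}

fun main() {
    val watch = WearableReading("watch", heartRateBpm = 126, stepCadence = null)
    val shoe = WearableReading("shoe", heartRateBpm = null, stepCadence = 170)
    println(fuse(watch, shoe)) // {heartRateBpm=126, stepCadence=170}
}
```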
In a second aspect, the present application provides a shooting method applied to an electronic device comprising a camera. The method includes: the electronic device establishes a connection with a first wearable device; the electronic device receives a first operation; in response to the first operation, the electronic device obtains a multimedia file captured by the camera; the electronic device obtains first information corresponding to first sensor data detected by at least one sensor of the first wearable device; and the electronic device stores the multimedia file in association with the first information. The multimedia file includes a picture file, a video file, or the like. The first operation may include, but is not limited to, a click, double click, long press, or slide, and triggers the electronic device to capture a picture/video through the camera. The electronic device obtains the biometric information of the user (e.g., heart rate, blood pressure, movement posture) detected by the first wearable device through its sensors, and associates that information, also referred to as the first information, with the captured picture/video or with the person in the picture/video to whom it corresponds. The electronic device generates a picture/video comprising biometric information indicative of the user's biometric characteristics. By associating biometric information with the information of the picture/video during its generation, the method performs more accurate feature identification on the picture/video and provides a new picture/video format, so the electronic device stores the picture/video with the biometric information and the stored pictures/videos can conveniently be classified according to it.
In one possible implementation, obtaining the first information includes: the electronic device obtains the first sensor data and determines the first information based on it. This describes one way for the electronic device to obtain the first information (biometric information): the first wearable device detects sensor data (the first sensor data) through one or more sensors and sends it to the electronic device, which determines the biometric information from the data it receives. Because the electronic device processes the first sensor data, the first wearable device's time and resources for processing it are saved.
In one possible implementation, obtaining the first information includes: the electronic device obtains the first information determined by the first wearable device based on the first sensor data. This describes another way for the electronic device to obtain the first information (biometric information): the first wearable device detects sensor data (the first sensor data) through one or more sensors, determines the first information from it, and sends the first information to the electronic device. Because the first wearable device processes the first sensor data, the electronic device's time and resources for processing it are saved.
In one possible implementation, obtaining the first information includes: the electronic device obtains the first information in response to the first operation. This describes the timing at which the electronic device obtains the first information: the first operation includes a user operation triggering the electronic device to take a picture/video, and after receiving it the electronic device obtains the first information based on the first operation.
In one possible implementation, obtaining the first information in response to the first operation includes: in response to the first operation, the electronic device sends the first wearable device a first request for the first sensor data detected by at least one sensor of the first wearable device, and the electronic device then obtains first information corresponding to the first sensor data. This describes one way for the electronic device to obtain the first information in response to the first operation: after receiving the first operation, the electronic device requests the sensor data (the first sensor data) from the first wearable device and determines the biometric information (the first information) from the data it receives.
In one possible implementation, obtaining the first information in response to the first operation includes: in response to the first operation, the electronic device sends the first wearable device a first request for the first information determined by the first wearable device based on the first sensor data, and the electronic device then obtains first information corresponding to the first sensor data. This describes another way for the electronic device to obtain the first information in response to the first operation: after receiving the first operation, the electronic device requests the biometric information (the first information) from the first wearable device, which sends it back to the electronic device.
In one possible implementation, receiving the first operation includes: the electronic device displays a shooting preview interface that includes a shooting button, and the electronic device receives a first operation including an input operation on the shooting button. This provides an application scenario: the electronic device receives the first operation on the shooting preview interface, which triggers it to take a picture/video.
In one possible implementation, the attribute information of the multimedia file includes the first information. The electronic device stores the multimedia file with the first information fusion-encoded into it, so the user can view the first information by viewing the attribute information of the multimedia file.
In one possible implementation, establishing the connection with the first wearable device includes: establishing the connection between the electronic device and the first wearable device in response to the electronic device entering a preset shooting mode. This describes when the electronic device connects to the first wearable device; in the preset shooting mode, the electronic device displays a shooting preview interface. When the electronic device detects a user operation for entering the preset shooting mode, it automatically turns on Bluetooth and automatically establishes a Bluetooth connection with the first wearable device.
In one possible implementation, the method further includes: the electronic device displays the multimedia file and at least part of the first information. The user may enter a gallery to view the multimedia file; the electronic device displays the multimedia file together with at least part of the first information, which indicates the user's biometric information. Optionally, the multimedia file and part of the first information are displayed on the display interface of the electronic device, and the user can view all of the first information by viewing the detailed information, etc.
In one possible implementation, displaying the multimedia file and at least part of the first information specifically includes: in response to the multimedia file including the preset facial image, the electronic device displays the multimedia file and at least part of the first information, where the preset facial image corresponds to the first wearable device. The electronic device determines the preset facial image through the identity information of the first wearable device and performs similarity matching between it and one or more persons in the multimedia file. If the match with one of the persons succeeds, that person in the multimedia file is the user of the first wearable device, and the electronic device displays at least part of the first information; otherwise the first information is not displayed, which protects the user's privacy.
The preset facial image has an association relationship with the first wearable device. The facial information may be preset in the electronic device by the user, uploaded to the electronic device as an image or a video, or preset in the first wearable device by the user and provided to the electronic device by the first wearable device; the application is not limited in this respect.
In one possible implementation, displaying the multimedia file and at least part of the first information specifically includes: in response to the multimedia file including a first facial image and a second facial image, where the first facial image matches the preset facial image, the electronic device displays at least part of the first information in a first area of the multimedia file. The preset facial image corresponds to the first wearable device, and the distance between the first area and the display area of the first facial image is smaller than the distance between the first area and the display area of the second facial image. This describes a way of determining where at least part of the first information is displayed on the multimedia file: similarity matching is performed between the preset facial image and one or more persons in the multimedia file, and if the match with one of the persons succeeds, the shooting device displays at least part of the first information near that person. This strengthens the visual correspondence between the user and the first information, so the user corresponding to the first information can be seen intuitively, improving the user experience.
In one possible implementation, before the electronic device receives the first operation, the method further includes: the electronic device displays a shooting preview interface comprising the preview image captured by the camera, and the electronic device displays at least part of second information on the preview image, where the second information corresponds to second sensor data detected by the first wearable device. The second information is biometric information displayed on the preview interface. By displaying at least part of the second information on the preview image, the biometric information is shown on the preview interface in real time and the user can view it in real time.
In one possible implementation, displaying at least part of the second information on the preview image specifically includes: in response to the preview image including the preset facial image, the electronic device displays the preview image and at least part of the second information, where the preset facial image corresponds to the first wearable device. The electronic device determines the preset facial image through the identity information of the first wearable device and performs similarity matching between it and one or more persons on the preview image. If the match with one of the persons succeeds, that person on the preview image is the user of the first wearable device, and the electronic device displays at least part of the second information; otherwise the second information is not displayed, which protects the user's privacy.
In one possible implementation, displaying at least part of the second information on the preview image specifically includes: in response to the preview image including a third facial image and a fourth facial image, where the third facial image matches the preset facial image, the electronic device displays at least part of the second information in a second area of the preview image. The preset facial image corresponds to the first wearable device, and the distance between the second area and the display area of the third facial image is smaller than the distance between the second area and the display area of the fourth facial image. This describes a way of determining where at least part of the second information is displayed on the preview image: similarity matching is performed between the preset facial image and one or more persons in the preview image, and if the match with one of the persons succeeds, the shooting device displays at least part of the biometric information near that person. This strengthens the visual correspondence between the user and the second information on the preview image, improving the user experience.
In one possible implementation, the method further includes: in response to the preview image not including the preset facial image, the electronic device outputs a first prompt for prompting the user to aim at a face. The electronic device performs similarity matching between the preset facial image and one or more persons in the frame; if the matching fails, the preview image does not include the user of the first wearable device, and the electronic device outputs prompt information asking the user to aim the shooting angle at the face. This avoids capturing a multimedia file that does not include the user of the first wearable device and improves the user experience.
In one possible implementation, the first information includes at least one of: health state information, exercise state information, or emotional state information.
In one possible implementation, the first sensor data comprises data detected by at least one sensor comprising at least one of: an acceleration sensor, gyroscope sensor, geomagnetic sensor, barometric pressure sensor, heart rate sensor, blood pressure sensor, electrocardiogram sensor, electromyography sensor, body temperature sensor, galvanic skin sensor, air temperature and humidity sensor, illumination sensor, or bone conduction sensor.
In one possible implementation, the method further includes: the electronic device establishes a connection with a second wearable device; the second wearable device detects fourth sensor data through at least one sensor; and the first information further corresponds to the fourth sensor data. This describes the case in which the electronic device establishes connections with two wearable devices (the first wearable device and the second wearable device): the electronic device obtains first information corresponding to the sensor data of both devices (the first sensor data and the fourth sensor data). The electronic device may likewise establish connections with more than two wearable devices.
In a third aspect, the present application provides an electronic device, comprising: one or more processors, one or more memories; the one or more memories are coupled to the one or more processors; the one or more memories are used to store computer program code, including computer instructions; the computer instructions, when executed on the processor, cause the electronic device to perform the shooting method in any one of the possible implementation manners of the above aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the shooting method in any one of the possible implementation manners of the above aspects.
In a fifth aspect, the present application provides a chip system for use in an electronic device comprising a memory, a display screen, and a sensor. The chip system includes one or more interface circuits and one or more processors, which are interconnected through lines. The interface circuit is configured to receive signals from the memory and send them to the processor, the signals comprising the computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the shooting method in the first aspect and any possible implementation manner of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to perform the shooting method in any one of the possible implementation manners of the above aspect.
Drawings
Fig. 1 is a system diagram provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a software architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present application;
Fig. 5a to Fig. 5e are a group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 6a to Fig. 6b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 7a to Fig. 7b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 8a to Fig. 8d are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 9a to Fig. 9c are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 10 is another schematic interface diagram of a shooting method according to an embodiment of the present application;
Fig. 11a to Fig. 11c are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 12 is another schematic interface diagram of a shooting method according to an embodiment of the present application;
Fig. 13a to Fig. 13b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 14a to Fig. 14b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 15a to Fig. 15b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 16a to Fig. 16b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 17a to Fig. 17b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 18a to Fig. 18b are another group of schematic interface diagrams of a shooting method according to an embodiment of the present application;
Fig. 19a to Fig. 19b are technical schematic diagrams of a shooting method according to an embodiment of the present application;
Fig. 20a to Fig. 20b are method flowcharts of a shooting method according to an embodiment of the present application;
Fig. 21 is a method flowchart of still another shooting method according to an embodiment of the present application;
Fig. 22 is a system diagram of yet another embodiment of the present application;
Fig. 23 to Fig. 24 are flowcharts of a shooting method according to another embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, "A and/or B" may indicate the three cases where only A exists, both A and B exist, and only B exists. Furthermore, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The electronic device/user device involved in embodiments of the present application may be a mobile phone, tablet, desktop computer, laptop, notebook computer, ultra-mobile personal computer (UMPC), handheld computer, netbook, personal digital assistant (PDA, also known as a palmtop), virtual reality device, portable internet device, data storage device, camera, or wearable device (e.g., wireless headset, smart watch, smart bracelet, smart glasses, head-mounted display (HMD), electronic clothing, electronic bracelet, electronic necklace, electronic accessory, electronic tattoo, or smart mirror), and so on.
The embodiment of the application provides a shooting method applied to a system comprising at least an electronic device 100 and a wearable device 201, where the electronic device 100 is connected to the wearable device 201. When the electronic device 100 takes a picture/video, it obtains the user's biometric characteristics (such as heart rate, blood pressure, and movement posture) detected by the wearable device 201 through its sensors, and associates those characteristics with the captured picture/video, or with the person in the picture/video to whom they correspond. The electronic device 100 generates a picture/video that includes biometric information indicating the user's biometric characteristics. In this method, the biometric information is associated with the information of the picture/video during its generation, more accurate feature identification is performed on the picture/video, and a new picture/video format is provided, so the electronic device 100 stores the picture/video with the biometric information and the stored pictures/videos can conveniently be classified according to it. To implement the solution of the embodiments of the present application, the number of electronic devices and the number of wearable devices may each be one or more; the application is not limited in this respect.
Fig. 1 illustrates a system diagram provided by the present application.
As shown in Fig. 1, the system may include an electronic device 100 and one or more wearable devices (e.g., wearable device 201, wearable device 202). The electronic device 100 and the wearable device 201 (or wearable device 202) may be connected by wireless communication, for example through at least one of the following wireless connections: Bluetooth (BT), near field communication (NFC), wireless fidelity (WiFi), or WiFi direct.
Optionally, the electronic device 100 may be connected with a plurality of different types of wearable devices; for example, the electronic device 100 may connect to a smart watch and a wireless headset simultaneously through Bluetooth.
In the embodiments of the present application, the connection between the electronic device 100 and the wearable device 201 is exemplified by a Bluetooth connection.
The electronic device 100 is an electronic device having an image capturing function, such as a mobile phone, tablet, or camera.
The wearable device 201 may be a wireless headset, smart watch, smart bracelet, smart glasses, smart ring, smart sports shoe, virtual reality display device, smart headband, electronic garment, electronic bracelet, electronic necklace, electronic accessory, electronic tattoo, smart mirror, or the like.
The wearable device 201 may detect the user's health state information, movement state information, emotional state information, and so on through its sensors. The health state information includes heart rate, blood pressure, blood glucose, electroencephalogram, electrocardiogram, electromyography, body temperature, and similar information. The movement state information includes common movement postures such as walking, running, riding, swimming, playing badminton, skating, surfing, and dancing, and may include some finer-grained movement postures such as a forehand stroke, a backhand stroke, Latin dance, or a mechanical (robot) dance. The emotional state information includes tension, anxiety, sadness, stress, excitement, pleasure, and the like.
The electronic device 100 is connected to the wearable device 201 via Bluetooth. When taking a picture/video, the electronic device 100 obtains biometric information of the user that corresponds to the health state information, movement state information, emotional state information, etc. detected by the wearable device 201 through its sensors. For example, the wearable device 201 detects heart rate data through a heart rate sensor and blood pressure data through a blood pressure sensor; the biometric information may be the heart rate data or blood pressure data themselves, or derived information obtained from them, such as whether the heart rate or blood pressure is normal.
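As a worked example of turning raw sensor data into such derived information, the following sketch classifies a heart-rate sample stream; the 60 to 100 bpm resting band is a common textbook range used purely for illustration, not a threshold from the patent.

```kotlin
// Derive a human-readable biometric judgement from raw heart-rate samples.
fun heartRateStatus(samplesBpm: List<Int>): String {
    val avg = samplesBpm.average()
    return when {
        avg < 60.0  -> "heart rate low (avg %.0f bpm)".format(avg)
        avg > 100.0 -> "heart rate high (avg %.0f bpm)".format(avg)
        else        -> "heart rate normal (avg %.0f bpm)".format(avg)
    }
}

fun main() {
    println(heartRateStatus(listOf(72, 75, 71, 74))) // heart rate normal (avg 73 bpm)
}
```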
The electronic device associates the biometric information with the information of the picture/video and saves the picture/video with the biometric information. By classifying pictures/videos according to their biometric information, the electronic device can quickly and accurately query and filter pictures/videos with different characteristics. Moreover, the sensors of the wearable device can provide more intrinsic characteristics, making the feature identification of the picture/video more accurate.
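Once each saved picture/video carries such information, the query-and-filter step reduces to a predicate over stored tags instead of re-running image recognition. A sketch with assumed field names:

```kotlin
// A saved picture tagged with biometric information (illustrative fields).
data class SavedPicture(val path: String, val motionState: String?, val emotionState: String?)

// Filter the gallery by any combination of biometric tags.
fun findPictures(gallery: List<SavedPicture>, motion: String? = null, emotion: String? = null) =
    gallery.filter { p ->
        (motion == null || p.motionState == motion) &&
        (emotion == null || p.emotionState == emotion)
    }

fun main() {
    val gallery = listOf(
        SavedPicture("IMG_1.jpg", motionState = "dancing", emotionState = "pleasure"),
        SavedPicture("IMG_2.jpg", motionState = "running", emotionState = null)
    )
    println(findPictures(gallery, motion = "dancing")) // only IMG_1.jpg
}
```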
An exemplary electronic device 100 provided in the following embodiments of the present application will first be described.
Fig. 2 shows a schematic structural diagram of the electronic device 100.
The embodiment will be specifically described below taking the electronic device 100 as an example. It should be understood that the electronic device 100 shown in fig. 2 is only one example, and that the electronic device 100 may have more or fewer components than shown in fig. 2, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed signal to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signal, and sends the processed signal to the processor 110. The wireless communication module 160 may also receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on it, and convert it into an electromagnetic wave for radiation via the antenna 2. Illustratively, the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, or the like.
In some embodiments, antenna 1 and the mobile communication module 150 of the electronic device 100 are coupled, and antenna 2 and the wireless communication module 160 are coupled, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the fifth generation mobile communication (the 5th Generation, 5G) system, the new radio (new radio, NR) system, the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division-synchronous code division multiple access (time division-synchronous code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include the global positioning system (global positioning system, GPS), the global navigation satellite system (global navigation satellite system, GLONASS), the beidou navigation satellite system (beidou navigation satellite system, BDS), the quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or the satellite based augmentation systems (satellite based augmentation systems, SBAS).
Optionally, in some embodiments, a Bluetooth (BT) module or WLAN module included in the wireless communication module 160 may transmit signals to detect or scan for devices near the electronic device 100, so that the electronic device 100 can discover nearby devices using wireless communication technologies such as Bluetooth or WLAN, establish a wireless communication connection with a nearby device, and share data with it over the connection. Here, the Bluetooth (BT) module may provide a solution including one or more of classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (Bluetooth low energy, BLE) communication. The WLAN module may provide a solution including one or more of Wi-Fi direct, Wi-Fi LAN, or Wi-Fi softAP communication.
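As an illustration of the discovery flow described above, the following minimal sketch scans for nearby BLE devices through the Android framework API. It is not the patent's implementation: the class name and log tag are assumptions, and Bluetooth is assumed to be enabled with scan permission granted.

```java
import android.bluetooth.BluetoothManager;
import android.bluetooth.le.BluetoothLeScanner;
import android.bluetooth.le.ScanCallback;
import android.bluetooth.le.ScanResult;
import android.content.Context;
import android.util.Log;

public class NearbyDeviceScanner {
    // Starts a BLE scan; each result is a nearby device the electronic
    // device could connect to and share data with.
    public static void startDiscovery(Context context) {
        BluetoothManager manager =
                (BluetoothManager) context.getSystemService(Context.BLUETOOTH_SERVICE);
        BluetoothLeScanner scanner = manager.getAdapter().getBluetoothLeScanner();
        scanner.startScan(new ScanCallback() {
            @Override
            public void onScanResult(int callbackType, ScanResult result) {
                Log.d("NearbyDeviceScanner",
                        "found device: " + result.getDevice().getAddress());
            }
        });
    }
}
```

A connection would then be established to a chosen result (for example, over GATT for a BLE wearable) before any data is shared.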
Optionally, in some embodiments, the solution for wireless communication provided by the mobile communication module 150 may enable the electronic device to communicate with a device (e.g., a server) in a network, and the solution for WLAN wireless communication provided by the wireless communication module 160 may likewise enable the electronic device to communicate with a device (e.g., a server) in a network, and to communicate with a cloud device through that device. In this way, the electronic device can discover the cloud device and transmit data to it.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 may be a flat display, a curved display, or a foldable display. When the display 194 is a foldable screen, the foldable screen in the folded state includes at least a first display area and a second display area, whose light emitting surfaces differ. The first display area is located in a first region of the foldable screen, and the second display area is located in a second region; when the foldable screen is in the folded state, the included angle between the first region and the second region is greater than or equal to 0 degrees and less than 180 degrees.
The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. When the electronic device 100 includes two or more displays, the form of each display may differ. For example, one of the displays may be a foldable display and another a flat display; or one may be a color display and another a black-and-white display.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, to be converted into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The camera 193 may be a 3D camera, and the electronic device 100 may implement the shooting function through the 3D camera, the ISP, the video codec, the GPU, the display 194, the application processor (AP), the neural network processor (NPU), and the like.
The 3D camera may be used to acquire color image data as well as depth data of a photographed object. The ISP may be used to process the color image data acquired by the 3D camera. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, to be converted into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the 3D camera.
Alternatively, in some embodiments, the 3D camera may be composed of a color camera module and a 3D sensing module.
Alternatively, in some embodiments, the photosensitive element of the camera of the color camera module may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format.
Alternatively, in some embodiments, the 3D sensing module may be a time of flight (time of flight, TOF) 3D sensing module or a structured light (structured light) 3D sensing module. Structured light 3D sensing is an active depth sensing technology, and the basic components of a structured light 3D sensing module may include an infrared (Infrared) emitter, an IR camera module, and the like. The working principle of the structured light 3D sensing module is to project a light spot (pattern) with a specific pattern onto the photographed object, receive the light coding of the spot pattern on the object surface, compare it with the originally projected spot, and calculate the three-dimensional coordinates of the object by triangulation. The three-dimensional coordinates include the distance from the electronic device 100 to the subject. TOF 3D sensing is likewise an active depth sensing technology, and the basic components of a TOF 3D sensing module may include an infrared (IR) emitter, an IR camera module, and the like. The working principle of the TOF 3D sensing module is to calculate the distance (i.e., depth) between the module and the photographed object from the round-trip time of the infrared light, so as to obtain a 3D depth map.
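As a worked illustration of the TOF principle just described (not part of the patent's disclosure; names are illustrative), the sketch below converts a measured round-trip time into a depth value:

```java
public class TofDepth {
    private static final double SPEED_OF_LIGHT_M_PER_S = 299_792_458.0;

    // The emitted infrared pulse travels to the subject and back, so the
    // one-way distance (the depth) is (c * roundTripTime) / 2.
    public static double depthMeters(double roundTripSeconds) {
        return SPEED_OF_LIGHT_M_PER_S * roundTripSeconds / 2.0;
    }
}
```

For example, a round-trip time of about 6.7 nanoseconds corresponds to a subject roughly one meter away, which is why TOF modules require very precise timing.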
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (neural-network, NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it rapidly processes input information and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, may be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, data such as music, photos, videos, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A, also referred to as a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 may listen to music, or to a hands-free call, through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 is answering a call or a voice message, voice can be heard by placing the receiver 170B close to the human ear. The microphone 170C, also referred to as a "mic" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100.
The air pressure sensor 180C is used to measure air pressure.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip cover using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear, and then automatically turn off the screen to save power.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 may use the collected fingerprint features to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J is for detecting temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and together the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part acquired by the bone conduction sensor 180M, to implement a voice function. The application processor may parse out heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate the charging state and battery level changes, and to indicate messages, missed calls, notifications, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated. The Android system is only one system example of the electronic device 100 in the embodiment of the present application, and the present application may also be applicable to other types of operating systems, such as IOS, windows, etc., which is not limited in this aspect of the present application. The following will only take the Android system as an example of the operating system of the electronic device 100.
Fig. 3 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
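As a rough sketch of what posting such a notification looks like from the application side (illustrative only; the channel id, texts, and stock icon are assumptions, and a notification channel is required on Android 8.0 and later):

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;

public class DownloadNotifier {
    public static void notifyDownloadComplete(Context context) {
        NotificationManager nm =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        nm.createNotificationChannel(new NotificationChannel(
                "downloads", "Downloads", NotificationManager.IMPORTANCE_DEFAULT));
        Notification n = new Notification.Builder(context, "downloads")
                .setSmallIcon(android.R.drawable.stat_sys_download_done)
                .setContentTitle("Download complete")
                .setContentText("The file has been saved.")
                .build();
        // Appears in the status bar and needs no user interaction to dismiss.
        nm.notify(1, n);
    }
}
```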
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: an image processing module, a video processing module, a surface manager (surface manager), a Media library (Media Libraries), a three-dimensional graphics processing library (e.g., openGL embedded system edition (OpenGL for Embedded Systems, openGL ES)), a 2D graphics engine (e.g., skia graphics library (Skia Graphics Library, SGL)), etc.
The image processing module is used for encoding, decoding, and rendering images, so that an application can display images on the display screen. It can also implement conversion of image formats, generation of image files, and the like.
The video processing module is used for encoding, decoding, and rendering video frames, so that an application can display video on the display screen. It can also implement conversion of video formats, generation of video files, and the like.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as: moving picture experts group 4 (Moving Picture Experts Group, MPEG4), advanced video coding (MPEG-4 Part 10 Advanced Video Coding, MPEG-4 AVC/H.264), MPEG audio layer 3 (MPEG Audio Layer 3, MP3), advanced audio coding (Advanced Audio Coding, AAC), adaptive multi-rate (Adaptive Multi-Rate, AMR), joint photographic experts group (Joint Photographic Experts Group, JPEG/JPG), and portable network graphics (Portable Network Graphics, PNG).
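To make the codec-support point concrete, the sketch below queries which encoders and decoders the device exposes through the Android media framework; it is an illustration, not part of the patent:

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public class CodecSupport {
    // Prints every codec the device offers and the MIME types it handles,
    // e.g. video/avc (H.264) or audio/mp4a-latm (AAC).
    public static void printCodecs() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            for (String type : info.getSupportedTypes()) {
                System.out.println(info.getName() + " supports " + type
                        + (info.isEncoder() ? " (encoder)" : " (decoder)"));
            }
        }
    }
}
```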
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The software system shown in fig. 3 involves applications (e.g., gallery, file manager) that use the shooting capability; the application framework layer provides WLAN and Bluetooth services, and the kernel and underlying layers provide WLAN and Bluetooth capabilities and basic communication protocols.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touched control being the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera service corresponding to the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or video through the camera 193.
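The application-side step of this workflow can be approximated with the Camera2 API, as in the simplified sketch below; camera permission is assumed to be granted, and capture-session setup is omitted:

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class CameraStarter {
    // Asks the framework's camera service to open a camera; the service in
    // turn drives the camera through the kernel-layer camera driver.
    public static void open(Context context, Handler handler) throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // first available camera
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice camera) {
                // A capture session for still images or video is created here.
            }
            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, handler);
    }
}
```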
The camera service synchronously calls the kernel layer to start the Bluetooth driver, sends a request message to the connected wearable device through the Bluetooth antenna, and receives the sensor data or biometric information sent by the wearable device based on the request message. The camera service then calls the image processing module or the video processing module to write the biometric information into the image frames of the image or video, generating a picture file or video file with biometric information.
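The patent does not specify the container used to write the biometric information into the file. One plausible minimal sketch, shown below, stores it as EXIF metadata in a JPEG; the choice of tag and the JSON payload are assumptions made purely for illustration:

```java
import androidx.exifinterface.media.ExifInterface;
import java.io.IOException;

public class BiometricTagger {
    // Attaches biometric information to an existing JPEG by writing it
    // into the user-comment EXIF tag and saving the file in place.
    public static void tag(String jpegPath, String biometricJson) throws IOException {
        ExifInterface exif = new ExifInterface(jpegPath);
        exif.setAttribute(ExifInterface.TAG_USER_COMMENT, biometricJson);
        exif.saveAttributes();
    }
}
```

A gallery application could later read the same tag back to display the biometric information alongside the picture.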
In the present application, the picture file and the video file may be referred to as multimedia files.
Fig. 4 schematically illustrates a structural diagram of the wearable device 201 provided by the present application.
As shown in fig. 4, the wearable device 201 may include a processor 102, a memory 103, a wireless communication processing module 104, a mobile communication processing module 105, a touch display 106, and a sensor module 107. These components may be connected by a bus. Wherein:
The processor 102 may be used to read and execute computer readable instructions. In a specific implementation, the processor 102 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for decoding instructions and sending out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, logic operations, and the like, and may also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In a specific implementation, the hardware architecture of the processor 102 may be an application specific integrated circuit (Application Specific Integrated Circuit, ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 102 may be configured to parse signals received by the wireless communication processing module 104 and the wired LAN communication processing module 116, such as a request message sent by the electronic device 100 to obtain sensor data or biometric information, and a request message sent by the electronic device 100 to stop obtaining sensor data or biometric information. The processor 102 may also be configured to perform corresponding analysis based on the information collected by the sensor module 107, such as analyzing the health status, movement status, or emotional status of the user.
The memory 103 is coupled to the processor 102 and is used to store various software programs and/or sets of instructions. In a specific implementation, the memory 103 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 103 may store an operating system, such as an embedded operating system, for example uCOS, VxWorks, or RTLinux. The memory 103 may also store communication programs that may be used to communicate with the electronic device 100, one or more servers, or additional devices.
The wireless communication processing module 104 may include one or more of a Bluetooth (BT) communication processing module 104A and a WLAN communication processing module 104B.
In some embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may listen for signals transmitted by other devices (e.g., the electronic device 100), such as probe requests and scan signals, and may send response signals, such as probe responses and scan responses, so that the other devices (e.g., the electronic device 100) can discover the wearable device 201 and establish a wireless communication connection with it, communicating through one or more wireless communication technologies such as Bluetooth or WLAN.
In other embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may also transmit signals, such as broadcast Bluetooth signals or beacon signals, so that other devices (e.g., the electronic device 100) can discover the wearable device 201 and establish a wireless communication connection with it, communicating through one or more wireless communication technologies such as Bluetooth or WLAN.
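For illustration, a minimal sketch of the broadcast side on an Android-based wearable: the device advertises over BLE so that a phone can discover it. The class name is illustrative, and Bluetooth advertise permission is assumed.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.AdvertiseCallback;
import android.bluetooth.le.AdvertiseData;
import android.bluetooth.le.AdvertiseSettings;
import android.bluetooth.le.BluetoothLeAdvertiser;

public class WearableAdvertiser {
    // Broadcasts a connectable BLE advertisement carrying the device name,
    // so a nearby phone can discover and connect to the wearable.
    public static void startAdvertising() {
        BluetoothLeAdvertiser advertiser =
                BluetoothAdapter.getDefaultAdapter().getBluetoothLeAdvertiser();
        AdvertiseSettings settings = new AdvertiseSettings.Builder()
                .setConnectable(true)
                .build();
        AdvertiseData data = new AdvertiseData.Builder()
                .setIncludeDeviceName(true)
                .build();
        advertiser.startAdvertising(settings, data, new AdvertiseCallback() {});
    }
}
```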
The mobile communication processing module 105 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the wearable device 201, for performing cellular communication and/or data communication. For example, the mobile communication processing module 105 may include a circuit-switched module ("CS" module) for performing cellular communication and a packet-switched module ("PS" module) for performing data communication. In the present application, the mobile communication processing module 105 may communicate with other devices (e.g., servers) through the fourth generation mobile communication technology (4th generation mobile networks) or the fifth generation mobile communication technology (5th generation mobile networks).
The touch screen 106, also known as a touch panel, is an inductive liquid crystal display device that can receive input signals such as touches, and is used to display images, videos, and the like. The touch screen 106 may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED) display, or the like.
The sensor module 107 may include a motion sensor 107A, a biosensor 107B, an environmental sensor 107C, and the like. Wherein:
The motion sensor 107A is a device that converts a change in a non-electric quantity (e.g., speed, pressure) into a change in an electric quantity. It may include at least one of the following: an acceleration sensor, a gyro sensor, a geomagnetic sensor (also called an electronic compass sensor), or a barometric pressure sensor. The acceleration sensor can detect the magnitude of acceleration in all directions (typically three axes, i.e., the x, y, and z axes). A gyro sensor may be used to determine a motion posture. An electronic compass sensor may be used to measure direction and to implement or assist navigation. The barometric pressure sensor is used to measure air pressure; in some embodiments, the change in the height of the position can be calculated from weak air pressure changes during movement, with an accuracy within 10 cm over the height of a 10-story building, so that activities such as climbing rocks or stairs can be monitored.
In the present application, the motion sensor 107A can measure the activity of the user, such as running steps, speed, number of swimming turns, riding distance, exercise posture (e.g., playing ball, swimming, running), etc.
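A minimal sketch of the altitude-from-pressure computation mentioned above, assuming an Android-based device; SensorManager applies the standard barometric formula, under which roughly 0.12 hPa corresponds to one meter near sea level:

```java
import android.hardware.SensorManager;

public class AltitudeFromPressure {
    // Converts a barometric reading (in hPa) into an altitude estimate
    // relative to standard sea-level pressure.
    public static float altitudeMeters(float pressureHpa) {
        return SensorManager.getAltitude(
                SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressureHpa);
    }
}
```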
The biosensor 107B is an instrument that is sensitive to biological substances and converts their concentration into an electrical signal for detection. It is an analysis tool or system composed of an immobilized, biologically sensitive material as the recognition element (including bioactive substances such as enzymes, antibodies, antigens, microorganisms, cells, tissues, and nucleic acids), an appropriate physicochemical transducer (such as an oxygen electrode, photosensitive tube, field effect tube, or piezoelectric crystal), and a signal amplifying device. The biosensor has the functions of both a receiver and a transducer. The biosensor 107B may include at least one of: a blood glucose sensor, a blood pressure sensor, an electrocardiogram sensor, an electromyography sensor, a body temperature sensor, a brain wave sensor, and the like. The functions mainly implemented by these sensors include health and medical monitoring, entertainment, and the like.
The blood glucose sensor is used to measure blood glucose, and the blood pressure sensor is used to measure blood pressure. The electrocardiogram sensor monitors electrophysiological signals, such as an electrocardiogram, for example using silver nanowires. The electromyography sensor is used to monitor electromyography, the body temperature sensor is used to measure body temperature, and the brain wave sensor is used to monitor brain waves. In the present application, various physiological indexes of the user (such as blood glucose, blood pressure, body temperature, and electrocardiogram) can be measured through the biosensor 107B, and the wearable device 201 can evaluate the health condition of the user according to these physiological indexes.
The biosensor 107B may also include a heart rate sensor and a piezoelectric sensor. Wherein:
The heart rate sensor can track the exercise intensity and different exercise training modes of the user by detecting the user's heart rate, and can also calculate health data such as the user's sleep periods and sleep quality. When light is emitted onto the skin, the light reflected by the skin tissue is received by a photosensitive sensor and converted into an electrical signal, which is then converted into a digital signal; the heart rate can be calculated from the absorbance of the blood.
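To make the last step concrete, the deliberately naive sketch below estimates beats per minute by counting local maxima in a sampled reflected-light signal; real heart rate sensors filter the signal and reject motion artifacts first, so this is an illustration rather than the wearable's actual algorithm:

```java
public class HeartRateEstimator {
    // Counts local maxima of the photoplethysmographic signal and scales
    // the count to beats per minute over the sampled window.
    public static double estimateBpm(float[] signal, double sampleRateHz) {
        int beats = 0;
        for (int i = 1; i < signal.length - 1; i++) {
            if (signal[i] > signal[i - 1] && signal[i] > signal[i + 1]) {
                beats++;
            }
        }
        double seconds = signal.length / sampleRateHz;
        return beats * 60.0 / seconds;
    }
}
```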
A piezoelectric sensor is used to measure the arousal level of a user, which is closely related to the user's attention and engagement; such devices usually also monitor sweat levels. The skin resistance and electrical conductance of the human body change with changes in the function of the skin's sweat glands; these measurable electrical changes of the skin are called electrodermal activity (EDA).
In an embodiment of the present application, the wearable device 201 measures psychologically induced sweat gland activity through the piezoelectric sensor to determine the psychological activity of the user, such as the user's mood index, for example happiness, tension, fear, or high stress.
The environmental sensor 107C may include at least one of: an air temperature and humidity sensor, a rainfall sensor, an illumination sensor, a wind speed and direction sensor, a particulate matter sensor, and the like. The environmental sensor 107C can enable detection of air quality, such as haze level, indoor formaldehyde concentration, and PM2.5 level. In the present application, weather changes, air humidity, air quality, and the like can be measured by the environmental sensor 107C.
It will be appreciated that the structure illustrated in fig. 4 does not constitute a specific limitation on the wearable device 201. Alternatively, in other embodiments of the application, the wearable device 201 may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In some specific scenarios, a user wears a wearable device, and an electronic device establishes a connection with the wearable device. The user triggers the electronic device to start the camera and take a picture, and the electronic device acquires picture information and biometric information. The biometric information corresponds to sensor data detected by the wearable device through at least one sensor. The sensor data involved in the present application includes, but is not limited to, data detected by at least one of the motion sensor 107A, the biosensor 107B, and the environmental sensor 107C described above.
Optionally, the biometric information may be obtained by the wearable device through analysis of the sensor data and then sent to the electronic device; alternatively, the electronic device may obtain the sensor data sent by the wearable device and then derive the biometric information by analyzing that data.
The electronic device associates the biometric information with the picture information, or associates the biometric information with the person in the captured picture to whom it corresponds. The electronic device generates a picture file with biometric information, and may display all or part of the biometric information in response to a user operation of viewing the picture file.
Taking a smartphone as an example, the following illustrates how the shooting method provided by the application appears on the display interface of the smartphone.
First, the ways of starting the annotation mode are described.
In the first way, the camera of the electronic device is started and the annotation mode is selected.
A camera application is application software an electronic device uses for taking pictures. When the user wants to capture an image or video, the camera application can be started, and the electronic device invokes at least one camera to capture the image. As shown in fig. 5a, fig. 5a illustrates an exemplary user interface on an electronic device for displaying a list of applications. Fig. 5a includes a status bar 201 and a display interface 202. The status bar 201 may include: one or more signal strength indicators 203 for mobile communication signals (also referred to as cellular signals), one or more signal strength indicators 207 for wireless fidelity (wireless fidelity, Wi-Fi) signals, a Bluetooth indicator 208, a battery status indicator 209, and a time indicator 211. When the Bluetooth module of the electronic device is in the on state (i.e., the electronic device is powering the Bluetooth module), the Bluetooth indicator 208 is displayed on the display interface of the electronic device.
The display interface 202 displays a plurality of application icons. Wherein the display interface 202 includes application icons of the camera 205. When the electronic device detects a user operation of the application icon acting on the camera 205, the electronic device displays an application interface provided by the camera application.
Referring to fig. 5b, fig. 5b shows a user interface provided by one possible camera application. The application interface of camera 205 is shown in fig. 5b, which may include: a display area 30, a flash icon 301, a setting icon 302, a mode selection area 303, a gallery icon 304, a shooting icon 305, and a switching icon 306.
The display area 30 shows a preview interface of the image currently collected by the camera in use. The camera currently used by the electronic device may be the default camera set by the camera application, or the camera that was in use when the camera application was last closed.
The flash icon 301 may be used to indicate the operating state of the flash.
The setting icon 302: when a user operation acting on the setting icon 302 is detected, in response to the operation, the electronic device may display other shortcut functions, such as adjusting the resolution, timed shooting (also referred to as delayed shooting; the time at which shooting starts can be controlled), silent shooting, voice-controlled shooting, smile capture (automatically focusing on a smiling face when the camera detects smile features), and the like.
The mode selection area 303 is used to provide different shooting modes; depending on the shooting mode selected by the user, the camera and shooting parameters activated by the electronic device differ. The area may include an annotation mode 303A, a night scene mode 303B, a photographing mode 303C, a video recording mode 303D, and More 303E. The icon of the photographing mode 303C in fig. 5b is marked to prompt the user that the current mode is the photographing mode. Wherein:
In the annotation mode 303A, when the electronic device detects a user operation of capturing a picture/video, the electronic device acquires the image information currently collected by the camera and acquires biometric information corresponding to the sensor data detected by the wearable device through its sensors (for example, heart rate data detected by a heart rate sensor and blood pressure data detected by a blood pressure sensor). The electronic device performs fusion encoding on the biometric information and the captured image information to generate a picture/video file with biometric information. For example, the biometric information may be the user's heart rate data, blood pressure data, etc.; it may also be descriptive information such as "heart rate normal" or "blood pressure normal".
Optionally, in some embodiments, if the electronic device has not turned on Bluetooth, when the electronic device detects a user operation 307 acting on the annotation mode 303A, the electronic device automatically turns on Bluetooth in response to the user operation 307, automatically searches for connectable Bluetooth devices and establishes a connection according to the user's selection, or automatically establishes a connection with a Bluetooth device that has previously been connected.
Optionally, in some embodiments, the electronic device may simultaneously receive sensor data or biometric information from the wearable devices of two users, perform fusion encoding on the two sets of biometric information and the captured image information, and associate the biometric information of the two users in one picture/video file.
When a user operation on the annotation mode 303A is detected, in response to the operation, the icon of the photographing mode 303C in the mode selection area 303 is no longer marked, and the annotation mode 303A is marked.
The night scene mode 303B can improve the rendering of details in bright and dark areas, control noise, and present more picture detail. The photographing mode 303C suits most shooting scenes and can automatically adjust shooting parameters according to the current environment. The video recording mode 303D is used to shoot video. More 303E: when a user operation acting on More 303E is detected, in response to the operation, the electronic device may display other selectable modes, such as a panorama mode (the electronic device automatically stitches several consecutively taken pictures into one picture, widening the field of view of the picture) and an HDR mode (three pictures are automatically taken in succession at under-exposure, normal exposure, and over-exposure, and the best parts are selected and combined into one picture).
When a user operation acting on the icon of any mode in the mode selection area 303 (e.g., the annotation mode 303A, the night scene mode 303B, the photographing mode 303C, the video recording mode 303D, the panorama mode, the HDR mode, etc.) is detected, the photographing device/electronic device may enter the corresponding mode in response to the operation. Accordingly, the image displayed in the display area 30 is the image processed in the current mode.
Mode selection is not limited to the virtual icons in the mode selection area 303; a physical key disposed on the photographing device/electronic device may also be used to select a mode, so that the photographing device enters the corresponding mode.
The gallery icon 304, when a user operation on the gallery icon 304 is detected, in response to the operation, the electronic device may enter a gallery of the electronic device, which may include captured photographs and videos therein. The gallery icon 304 may be displayed in a different form, for example, after the electronic device saves the image currently acquired by the camera, a thumbnail of the image is displayed in the gallery icon 304.
When a user operation (e.g., a touch operation, a voice operation, a gesture operation, etc.) acting on the photographing icon 305 is detected, the electronic device acquires an image currently displayed in the display area 30 in response to the operation, and saves the image in the gallery. Wherein the gallery may be entered by a user operation (e.g., touch operation, gesture operation, etc.) directed to gallery icon 304.
A switch icon 306 may be used to switch between front and rear cameras. The shooting direction of the front camera is the same as the display direction of the screen of the electronic equipment used by the user, and the shooting direction of the rear camera is opposite to the display direction of the screen of the electronic equipment used by the user. If the display area 30 currently displays an image captured by the rear camera, when a user operation on the switch icon 306 is detected, the display area 30 displays an image captured by the front camera in response to the operation. If the display area 30 currently displays an image captured by the front camera, when a user operation on the switch icon 306 is detected, the display area 30 displays an image captured by the rear camera in response to the operation.
As shown in fig. 5c, fig. 5c exemplarily shows the application interface corresponding to the annotation mode 303A. The icon of the annotation mode 303A in the mode selection area 303 is marked, indicating that the current mode is the annotation mode.
Optionally, the display area 30 of fig. 5c may also display a prompt message informing the user whether the current electronic device is connected to a wearable device. For example, the text "connected to a wearable device" displayed in the prompt area 308 indicates that the current electronic device has been connected to a wearable device, which may be headphones, a watch, a bracelet, glasses, or the like. As another example, the text "not connected to a wearable device" displayed in the prompt area 308 informs the user that the electronic device is not currently connected to a wearable device. The connection may be a short-range connection such as a Bluetooth or Wi-Fi connection.
Optionally, in some embodiments, the content in the prompt area 308 may remind the user operating the electronic device to ensure that the user wearing the wearable device is within the shooting area. For example, the text "please confirm that the user wearing the wearable device is within the shooting range of the lens" is displayed in the prompt area 308.
In the application, user operations include but are not limited to clicking, shortcut keys, gestures, floating touch, voice commands, and other operations.
Alternatively, referring to fig. 5d, fig. 5d shows a user interface provided by yet another possible camera application. The application interface of the camera 205 is shown in fig. 5d; unlike fig. 5b, an annotation icon 310 is included in the application interface 31. When the electronic device detects a user operation on the annotation icon 310, the electronic device starts the annotation function in response to the operation. In this way, the annotation function can be turned on in any shooting mode in the mode selection area 303.
When the electronic device is in the photographing mode 303C, a user operation on the annotation icon 310 is detected, and in response to the operation, the electronic device starts the annotation function in the photographing mode 303C. When the electronic device is in the night scene mode 303B, a user operation on the annotation icon 310 is detected, and in response to the operation, the electronic device starts the annotation function in the night scene mode 303B.
As shown in fig. 5e, fig. 5e illustrates the application interface after the annotation function is started. The annotation icon 310 is marked, indicating that the annotation function is currently enabled.
The embodiment of the application does not limit starting the camera 205 to the application icon. For example, the camera can also be started through short messages, the shooting function of social software, video calls, and the like, thereby starting the annotation function. With the annotation function started, the electronic device acquires pictures/videos with biometric information and can share them through short messages, social software, video calls, and the like.
In the second way, the camera is started through a first application icon, and the shooting mode of the camera defaults to the annotation mode.
Wherein "smart label" is merely an example, and may be other names. As shown in fig. 6a, the display interface 202 of fig. 6a presents a plurality of application icons. Wherein the display interface 202 includes application icons of the smart labels 212. If the user wants to start the annotation mode, the user operates the application icon triggering the intelligent annotation 212. The electronic device displays an application interface for the smart label 212 in response to a user operation.
In some embodiments, if the electronic device does not turn on bluetooth, when the electronic device detects a user operation on the smart label 212, the electronic device automatically turns on bluetooth, automatically searches for connectable bluetooth devices, establishes a connection according to a user selection, or automatically establishes a connection with a bluetooth device that has established a connection.
Referring to fig. 6b, fig. 6b illustrates an exemplary application interface provided by one possible smart label 212. The application interface may include: a display area 40, a flash icon 401, a setting icon 402, a gallery icon 403, a shooting icon 404, a switch icon 405, and a prompt area 406.
It should be noted that, based on the same inventive concept, the principles of the flash icon 401, the setting icon 402, the gallery icon 403, the shooting icon 404, the switch icon 405, and the prompt area 406 in the embodiment shown in fig. 6b are similar to those of the embodiment shown in fig. 5b. Therefore, for the implementation of the flash icon 401, the setting icon 402, the gallery icon 403, the shooting icon 404, the switch icon 405, and the prompt area 406 in fig. 6b, reference may be made to the corresponding descriptions of the flash icon 301, the setting icon 302, the gallery icon 304, the shooting icon 305, the switch icon 306, and the prompt area 308 in fig. 5b, which are not repeated herein.
Optionally, in some embodiments, the annotation mode may also be started through a wearable device APP in the electronic device, such as the application icon of the smart wear 214 in the display interface 202 of fig. 6a. The smart wear 214 is an application that manages and interacts with one or more types of wearable devices, including function management, rights management, and the like. The application interface of the smart wear 214 may include function controls for multiple wearable devices. After the electronic device is paired and connected with a wearable device, the user selects to enter the user interface of the corresponding wearable device; in that user interface, the electronic device detects a user operation for starting the annotation mode and starts the annotation mode, so that the electronic device can acquire the biometric information corresponding to the sensor data of the wearable device.
The above two modes describe different ways of starting the annotation mode of the electronic device and the corresponding display interfaces. Fig. 5c and 6b each schematically show an application interface in the annotation mode. Optionally, fig. 7a provides yet another possible application interface.
As shown in fig. 7a, in contrast to fig. 6b, the display area 40 may further comprise a preview icon 407, which is used to trigger the electronic device to obtain biometric information. When the electronic device detects a user operation on the preview icon 407, the electronic device sends a request message to the wearable device; after receiving the request message, the wearable device sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time according to the acquired sensor data or biometric information. Specifically, either the wearable device sends sensor data after receiving the request message, and the electronic device determines the biometric information based on the received sensor data and displays at least part of it on the display screen in real time; or the wearable device determines the biometric information based on its sensor data after receiving the request message and sends the biometric information to the electronic device, which displays at least part of it on the display screen in real time.
Optionally, in some embodiments, when the electronic device detects a user operation on the smart label 212 in fig. 6a, the electronic device displays the interface shown in fig. 7a and sends a request message to the wearable device; after receiving the request message, the wearable device sends sensor data or biometric information to the electronic device. The preview icon 407 is then used to trigger the electronic device to display at least part of the biometric information on the display screen in real time based on the acquired sensor data or biometric information.
For example, as shown in fig. 7b, when the electronic device detects a user operation on the preview icon 407, the electronic device displays the interface shown in fig. 7b. The display area 40 includes a preview area 408, and the preview area 408 displays the health status of the user small A, for example, that the heart rate of small A is normal; the movement status of small A is also shown, for example, that small A is running. The preview area 408 may further include information such as the blood pressure, blood glucose, whether the exercise posture is standard, and emotion (e.g., happy, tense, or sad) of small A.
Alternatively, in one possible embodiment, the electronic device may display the application interface of fig. 7b directly in response to a user operation, without being triggered by the preview icon 407 in fig. 7a. When the electronic device detects the user operation of starting the annotation mode, the electronic device sends a request message to the wearable device; after the wearable device receives the request message, it sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time according to the acquired sensor data or biometric information, as shown in fig. 7b.
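For illustration only, the preview exchange described above can be sketched in Python as follows. The message fields, the FakeLink stand-in, and all helper names are assumptions made for this sketch rather than an actual device API; the two branches mirror the two alternatives above (the wearable device sends raw sensor data, or already-derived biometric information).

    import json

    class FakeLink:
        """Stand-in for the short-range (e.g., Bluetooth) link to the wearable device."""
        def send(self, message):
            self.last_sent = message                      # the request message
        def receive(self):
            # Here the wearable device answers with raw sensor data (path 1).
            return json.dumps({"kind": "sensor_data",
                               "data": {"heart_rate_bpm": 72, "motion": "running"}})

    def derive_biometric_info(sensor_data):
        """Electronic-device-side derivation of display-ready biometric information."""
        info = {}
        if "heart_rate_bpm" in sensor_data:
            bpm = sensor_data["heart_rate_bpm"]
            info["heart rate"] = "normal" if 60 <= bpm <= 100 else "abnormal"
        if "motion" in sensor_data:
            info["motion state"] = sensor_data["motion"]
        return info

    def on_preview_icon_tapped(link):
        """Triggered by the preview icon 407: request, receive, then display."""
        link.send(json.dumps({"type": "request", "mode": "preview"}))
        payload = json.loads(link.receive())
        if payload["kind"] == "sensor_data":              # wearable sent raw data
            info = derive_biometric_info(payload["data"])
        else:                                             # wearable derived it already
            info = payload["data"]
        print("preview area 408:", info)                  # e.g., running, heart rate normal

    on_preview_icon_tapped(FakeLink())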
The above embodiments provide possible application interfaces for smart annotation: the electronic device displays the application interface of the smart label 303A in response to the user operation 307 (as shown in fig. 5c); or the electronic device displays the application interface of the smart label 212 in response to a user operation (as shown in fig. 6b); or it displays an application interface such as that of fig. 7a or 7b, and so on.
Optionally, before entering the annotation mode or while in the annotation mode, the user may configure the biometric information on the electronic device side and select the specific biometric information to be obtained as required.
When the user clicks the setting icon in the application interface, the electronic device displays the setting interface. For example, in the above application interface, the electronic device may display the setting interface 60 of the annotation mode shown in fig. 8a in response to a user operation on the setting icon 302 (or the setting icon 402).
As shown in fig. 8a, the setting interface 60 includes functions such as setting the resolution, timed shooting, mute shooting, voice-controlled shooting, and smile snapshot. The setting interface 60 also includes a wearable device column 601, in which the wearable device can be specifically configured.
The electronic device displays the interface 70 of the wearable device shown in fig. 8b in response to a user operation on option 602. The interface 70 includes my device 701 and other devices 702. My device 701 includes the devices currently connected to the electronic device and previously connected devices; for example, the watch of small A is currently connected to the electronic device, while the watch of small B has been connected to the electronic device before but is not currently connected. Other devices 702 are connectable devices that the electronic device has discovered via Bluetooth but has not connected to, such as the headset of small A and the watch of small C.
The user may click on the area 703 to enter the specific configuration interface of the watch of small A on the electronic device; in response to a user operation on the area 703, the electronic device enters the specific configuration interface 80 of the watch of small A shown in fig. 8c. As shown in fig. 8c, the configuration interface 80 includes a plurality of configuration options, such as heart rate, blood pressure, blood glucose, exercise state, and emotional state. The icon 801 can be described as a switch: when the circle in the icon 801 is on the left, the switch is off; when the circle is on the right, the switch is on; the off state and the on state can be toggled by a click operation. For example, as shown in fig. 8c, the circle of the icon 801 in the heart rate column is on the left, which indicates that the biometric information of the pictures/videos currently shot by the electronic device does not include heart rate information; when the electronic device detects a click operation on the icon 801, the circle of the icon 801 moves to the right, which indicates that the biometric information of the pictures/videos currently shot by the electronic device includes heart rate information.
The configuration interface 80 also includes an icon 802 for associating a facial image. The electronic device obtains facial image information of the user wearing the watch of small A in response to a user operation on the icon 802 and associates the watch of small A with the facial image information. The facial image information may be an image uploaded by the user: the user clicks the icon 802 for associating a facial image and uploads the facial image of small A; alternatively, after clicking the icon 802, the facial image of small A may be acquired by the camera of the electronic device, and the facial image of small A is then associated with the watch of small A.
As shown in fig. 8d, after the electronic device successfully associates the watch of small A with the facial image, the facial image 7031 is displayed in the area 703. The facial image 7031 indicates the facial image of the user wearing the watch of small A. The electronic device can then recognize small A in a picture through face recognition technology.
It will be appreciated that when the watch of small A is connected to the electronic device again, the electronic device may automatically display the facial image 7031 associated with the watch of small A in fig. 8d; that is, the electronic device may directly display the interface of fig. 8d after detecting the user operation on option 602 in fig. 8a. Optionally, even before the electronic device establishes a connection with the watch of small A again, the electronic device may automatically display the facial image associated with the watch of small A as soon as it discovers the watch.
When the user clicks on the area 703 in fig. 8d, the electronic device displays the interface 80 shown in fig. 8c in response to the click operation. At this point, the icon 802 may be used to change the facial image associated with the watch of small A: the electronic device associates the watch of small A with the most recently received facial image information in response to a user operation on the icon 802.
Optionally, in some embodiments, the icon 802 may also be used to add another facial image associated with the watch of small A. The electronic device associates the watch of small A with the received facial image information in response to a user operation on the icon 802. The watch of small A may be associated with multiple pieces of facial image information without affecting previously added associated facial images.
The above embodiments describe the configuration process of the annotation mode on the electronic device and the corresponding user interfaces. In another implementation of the present application, the annotation mode may also be configured through the wearable device, i.e., a wearable device that has established a connection with the electronic device.
As shown in fig. 9a, fig. 9a illustrates a main interface 90 of a wearable device, which lists a plurality of application icons, including the application icon of the camera 901. When the wearable device detects a user operation acting on the application icon of the camera 901, the wearable device displays the application interface provided by the camera application.
Referring to fig. 9b, fig. 9b shows an application interface 1000 provided by the camera application. As shown in fig. 9b, the application interface 1000 of the camera 901 may include: an annotation icon 1001, a gallery icon 1002, a shooting icon 1003, and a switch icon 1004. The principles of the gallery icon 1002, the shooting icon 1003, and the switch icon 1004 in the embodiment shown in fig. 9b are similar to those of the embodiment in fig. 5b, so for the implementation of the gallery icon 1002, the shooting icon 1003, and the switch icon 1004 in fig. 9b, reference may be made to the corresponding descriptions of the gallery icon 304, the shooting icon 305, and the switch icon 306 in fig. 5b, which are not repeated herein.
The display content of the display area of the application interface 1000 is the image currently acquired by the camera in use by the electronic device. The camera currently in use may be the default camera set by the camera application, or the camera used when the camera application was last closed.
The annotation icon 1001 indicates that the user can start the annotation mode and configure it. When the wearable device detects a user operation on the annotation icon 1001, a configuration interface as in fig. 9c is displayed. Fig. 9c is used to configure the annotation mode, and the configurable options include heart rate, blood pressure, blood glucose, exercise state, emotional state, and the like. The icon 1005 identifies whether the electronic device acquires heart rate information: as shown in fig. 9c, the circle in the icon 1005 is on the left, indicating that the biometric information of the pictures/videos currently shot by the electronic device does not include heart rate information; when the wearable device detects a click operation on the icon 1005, the circle in the icon 1005 moves to the right, indicating that the biometric information of the pictures/videos currently shot by the electronic device includes heart rate information. The icon 1007 identifies whether the electronic device obtains blood pressure information; in fig. 9c, the circle in the icon 1007 is on the right, indicating that the biometric information of the pictures/videos currently shot by the electronic device includes blood pressure information.
When the user completes the configuration, the user clicks the OK icon 1006. The wearable device detects the user operation on the OK icon 1006, completes the configuration of the annotation mode, and turns the annotation mode on. After the annotation mode is turned on, the user may trigger the shooting icon 1003 on the application interface 1000 to shoot a picture or video, and the wearable device sends sensor data to the electronic device. In some embodiments, if the user triggers shooting on the wearable device side, the type of data sent is determined by the configuration on the wearable device side; if the user triggers shooting on the electronic device side, the type of data sent follows the configuration on the electronic device side. Taking the configuration of fig. 9c as an example, the wearable device sends the blood pressure data detected by the blood pressure sensor to the electronic device.
Optionally, in some embodiments, the wearable device detects sensor data through one or more sensors and, based on the configuration on the wearable device side, sends biometric information to the electronic device. Taking the configuration of fig. 9c as an example, the wearable device sends the electronic device biometric information that corresponds to the blood pressure data and includes blood pressure information (e.g., blood pressure normal).
Optionally, in some embodiments, when the wearable device detects a click operation on the annotation icon 1001, it directly starts the annotation mode, and the wearable device sends sensor data or biometric information to the electronic device according to the configuration information; when the wearable device detects a long-press or double-click operation on the annotation icon 1001, a configuration interface as in fig. 9c is displayed.
The above embodiments provide possible setting interfaces for the annotation mode: the electronic device displays the setting interface of the annotation mode in response to a user operation on the setting icon 302 (or the setting icon 402) (as shown in fig. 8a); or the wearable device displays the setting interface of the annotation mode in response to a user operation on the annotation icon 1001 (as shown in fig. 9c); and so on.
After the configuration is completed, the electronic device shoots in the annotation mode, and in the interface shown in fig. 7b, the display content of the preview area 408 changes accordingly with different configurations. For example, if the user turns on heart rate and exercise state in the configuration interface 80 shown in fig. 8c, the display of the preview area 408 may include heart rate information (e.g., heart rate data or "heart rate normal") and exercise state information (e.g., running).
When the electronic device detects a user operation triggering the shooting of a picture, the electronic device fusion-encodes the currently acquired picture information with the biometric information to generate a picture with biometric information, where the biometric information corresponds to the sensor data, such as the user's heart rate, blood pressure, blood glucose, movement state, and emotional state, acquired by the sensors of the wearable device.
Unlike taking a picture, shooting a video is a process over a period of time. For the case where the electronic device detects a user operation triggering the shooting of a video, the present application provides an exemplary user interface. As shown in fig. 10, fig. 10 shows the user interface 45 of an electronic device capturing video, where the icon 411 in the user interface 45 indicates that video capture is in progress. The display area of the user interface 45 may include a timing area 410, a preview area 408, and an annotation icon 412. The timing area 410 counts the shooting duration of the video in real time. The preview area 408 displays information such as the health status, exercise status, and emotional status of the user in real time; for example, the health status of the user small A (the user of the wearable device connected to the electronic device) shows a normal heart rate, and the movement status shows that small A is running. The display content of the preview area 408 may follow the configuration made by the user on the electronic device side or the wearable device side, and may be updated in real time with the biometric information acquired by the electronic device.
The annotation icon 412 is used to control the turning on and off of the annotation mode, where the annotation icon 412 may indicate the on and off states by its display brightness, color, or the like.
For example, when the annotation icon 412 is in a bright state during video shooting, indicating that the current shooting mode is the annotation mode, the user interface 45 displays the preview area 408, and the electronic device fusion-encodes the acquired biometric information with the captured image frames; when the annotation icon 412 is in a dark state, indicating that the current shooting mode is not the annotation mode (e.g., normal mode), the user interface 45 does not display the preview area 408, and the electronic device does not fusion-encode the acquired biometric information with the captured image frames, or no longer acquires biometric information at all. That is, the user can control the turning on and off of the annotation mode through user operations on the annotation icon 412 during video recording. With the annotation mode on, the electronic device fusion-encodes the continuously acquired picture information with the biometric information to generate image frames with biometric information, and a series of such image frames are combined into a continuous video.
Alternatively, in some embodiments, the annotation icon 412 can be used to control the display and hiding of the preview area 408, where the annotation icon 412 can indicate the displayed and hidden states by its display brightness, color, or the like. In such embodiments, whether the preview area 408 is displayed or hidden, the electronic device may acquire biometric information and fusion-encode the captured image frames with the acquired biometric information.
In the annotation mode, the electronic device shoots pictures/videos and generates pictures/videos with biometric information. The shot pictures/videos can be viewed in the gallery: when the electronic device detects a user operation on the gallery icon, the electronic device displays the application interface of the gallery.
As shown in fig. 11a, fig. 11a illustrates a gallery interface 1100, which displays a plurality of pictures. The user clicks on a picture 1101 taken in the annotation mode and enters the picture viewing interface. As shown in fig. 11b, fig. 11b illustrates a picture viewing interface 1200 including a display area 1201, a share icon 1202, an edit icon 1203, a delete icon 1204, a more icon 1205, and a labeling area 1206.
The display area 1201 displays the picture 1101. The share icon 1202 may be used to trigger the picture sharing function to share the picture 1101 to other devices or applications. The edit icon 1203 may be used to trigger editing functions on the picture 1101 such as rotation, cropping, adding filters, and blurring. The delete icon 1204 may be used to trigger deletion of the picture 1101. The more icon 1205 may be used to trigger the opening of more functions associated with the picture 1101.
Alternatively, in some embodiments, the display areas of the share icon 1202, the edit icon 1203, the delete icon 1204, and the more icon 1205 may be collectively referred to as a menu area. The menu area is optional and may be hidden in the picture viewing interface 1200; for example, the user may hide the menu area by clicking on the display area 1201 and display it again by clicking on the display area 1201 once more, which is not limited in the present application.
The labeling area 1206 displays the biometric information of the picture 1101. The text content displayed in the labeling area 1206 includes, but is not limited to, heart rate, blood pressure, blood glucose, motion status, emotional status, and the like. For example, the text content displayed in the labeling area 1206 may be "small A is running" or "small A's heart rate is normal, the exercise state is running, and the emotional state is pleasant". The text content displayed in the labeling area 1206 may also include an assessment of the user's movement posture, e.g., posture standard or posture not standard; it may also include suggestions for the user, for example, when the user's mood index is low, an encouraging message wishing the user happiness every day; corresponding content may also be pushed according to the state of the user, for example, if the user's running posture is not standard, the electronic device searches the network for pictures or videos of the correct running posture and recommends that the user watch related learning videos or picture material. The present application is not limited in this regard. It will be appreciated that the labeling area 1206 can be displayed at any location and in any form in the picture viewing interface 1200.
The user can view the detailed information of the picture 1101 through the more icon 1205. As shown in fig. 11c, when the electronic device detects a user operation on the more icon 1205, the user interface 1300 is displayed. The user interface 1300 in fig. 11c shows the details of small A, including information such as normal blood glucose, running state, happy emotional state, and geographic location, as well as specific numerical information such as blood pressure data, blood glucose data, heart rate data, and mood index.
In some embodiments, fig. 12 illustrates yet another gallery interface 1400. As shown in fig. 12, the gallery interface 1400 divides different types of pictures into different picture sets, and the division may be made according to the biometric information of the pictures. For example, fig. 12 includes a smart-labeled picture set, in which all pictures/videos carry biometric information; picture sets distinguished by person, for example, a picture set of small A containing pictures/videos of small A and a picture set of small B containing pictures/videos of small B; and picture sets distinguished by motion state, such as a running picture set and a badminton picture set. Optionally, picture sets distinguished by emotional state, such as a pleasant picture set and a sad picture set, may also be included. In this way, the user can select the picture set to view, with the biometric characteristics as the distinction.
In some embodiments, fig. 13a illustrates yet another picture viewing interface 1201. As shown in fig. 13a, the display area of the picture viewing interface 1201 displays the picture 1101. In contrast to the picture viewing interface 1200, the picture viewing interface 1201 does not include a labeling area but includes a cursor 1210, which prompts the user to select to view the biometric information of the picture. As shown in fig. 13a and 13b, when the electronic device detects a user operation on the cursor 1210, the labeling area 1211 of the picture 1101 is displayed, and the text content in the labeling area 1211 may include at least part of the biometric information, such as the exercise state, emotional state, and health state; "small A is running" in fig. 13a indicates the exercise state of small A.
Since the cursor 1210 is displayed in the vicinity of small A, the biometric information represented by the cursor 1210 is information of small A. Likewise, the labeling area 1211 is displayed in the vicinity of small A, indicating that the person described by the text content in the labeling area 1211 is small A. When two or more persons are included in a picture, this approach can accurately indicate the specific person described by the biometric information.
Optionally, the cursor 1210 may be hidden in the picture viewing interface 1201. The user can view the biometric information of small A in the picture 1101 by clicking on the display area of small A in the picture 1101, and hide the biometric information of small A by clicking on the display area of small A again.
As shown in fig. 14a, the picture displayed in the display area 1201 in fig. 14a includes two persons, and the labeling area 1206 includes the biometric information of the two users ("small A is running" and "small B is running"); that is, the electronic device may simultaneously receive the sensor data or biometric information of the wearable devices of two users, and the biometric information of both users is displayed in one picture.
In some embodiments, the biometric information may be displayed in the vicinity of the corresponding person. As shown in fig. 14b, the biometric information "small A is running" is displayed near the person wearing jersey number 12, and the biometric information "small B is running" is displayed near the person wearing jersey number 8. If the wearable device is associated with a facial image, the electronic device can identify the user in the image according to the facial image information through face recognition technology, match the user corresponding to the wearable device with a person in the image, and display the biometric information in the vicinity of the corresponding person in the image.
The above embodiments describe a picture viewing interface in an electronic device, and next, an application interface for viewing video with biometric information will be described.
As shown in fig. 15a, fig. 15a illustrates an exemplary video viewing interface 1500. One or more videos are included in the video viewing interface 1500, where the title of each video may be automatically generated by the electronic device according to the biometric information of the video. For example, "small A running" indicates that the user in the video is small A and that the motion state of small A is running. The electronic device detects a user operation on the video "small A running" and plays that video.
As shown in fig. 15b, fig. 15b schematically illustrates a video playback interface 1600. The video playback interface 1600 includes a progress bar 1601 for indicating the progress of video playback. The title "small a running" of a video indicates the motion state of a user in the video, wherein the title of the video may further include information of the health state, emotional state, etc. of the user.
In some embodiments, the video playback interface 1600 displays at least part of the biometric information in real time during video playback. Since a video segment is made up of a plurality of image frames, each with its own biometric information, the electronic device can display at least part of the biometric information of each image frame in real time according to the playback progress. As shown in fig. 16a and 16b, compared with fig. 15b, fig. 16a and 16b further include a labeling area 1603 and a labeling area 1604, where the text content in the labeling area 1603 and the labeling area 1604 may include descriptions of the user's health status, exercise status, emotional status, and the like; an assessment of the user's movement posture, such as posture standard or posture not standard; suggestions for the user, for example, when the user's mood index is low, an encouraging message wishing the user happiness every day; and corresponding pushed content according to the state of the user, for example, if the user's running posture is not standard, a recommendation to watch the linked learning video below; and so on.
As can be seen in fig. 16a and 16b, in the same video, the content in the annotation region 1603 and the annotation region 1604 can change as the video playback progress changes. That is, video playback interface 1600 may display at least a portion of the biometric information in real-time.
In some embodiments, the biometric information displayed in real time in the video playback interface is only part of the complete biometric information. If the user wants to view the complete biometric information, the user can pause the video to view the detailed information. As shown in fig. 17a, the progress bar 1605 currently indicates a paused state (paused at 1 minute 41 seconds); when the electronic device detects a user operation on the more icon 1205, the complete biometric information of the current image frame (the image frame at 1 minute 41 seconds) is displayed, as shown in fig. 17b. The detailed information in fig. 17b is the complete biometric information of the image frame at 1 minute 41 seconds of the video in fig. 17a.
In some embodiments, the biometric information may be displayed in real time in the vicinity of the corresponding person. As shown in fig. 18a, the video played in the video playing interface 1600 in fig. 18a is the video "small A and small B running", which includes two persons and the biometric information of two users. The biometric information "small A posture standard" is displayed near the person wearing jersey number 12, and the biometric information "small B posture standard" is displayed near the person wearing jersey number 8. If the wearable device is associated with a facial image, the electronic device can identify the user in the image according to the facial image information through face recognition technology, match the user corresponding to the wearable device with a person in the image, and display the biometric information in the vicinity of the corresponding person in the image.
As with fig. 16a and 16b, in the same video the display content in the labeling areas may change as the video playback progresses; that is, the video playback interface 1600 can display biometric information in real time even in the case of multiple users.
The foregoing describes the display interfaces and method flows related to the present application. It can be understood that associating the biometric information with a picture/video depends on fusion-encoding the biometric information with the picture/video information to obtain a new picture/video carrying the biometric information. The following describes the specific technical implementation and principle of fusion-encoding biometric information with picture/video information to generate a picture/video file with biometric information.
(1) Picture file format with biometric information.
The embodiment of the application provides a picture file format with biometric information. The electronic device fusion-encodes the biometric information with the picture information to generate this picture file format. As shown in fig. 19a, fig. 19a exemplarily shows a picture file format with biometric information. The basic data structure of the picture file format includes two major types: "segments" and compression-encoded image data. The "segments" include a field identifying the start of image (SOI), fields carrying image identification information (e.g., APP1, APP2, etc.), a field defining a quantization table (define quantization table, DQT), a field defining a Huffman table (define Huffman table, DHT), a field identifying the start of an image frame (SOF), a field identifying the start of scan (SOS), a field identifying the end of image (EOI), and the like.
There can be one or more image identification information fields, such as APP1 and APP2, and each image identification information field defines attribute information of the image. The attribute information of the image includes various information related to the shooting conditions at the time, such as the aperture, shutter, and date and time at shooting, as well as the camera brand and model, color coding, sound recorded at the time of shooting, and Global Positioning System (GPS) information.
Specifically, the APP1 field includes a segment identifier, segment characters, a segment length, and tag image file format (tag image file format, TIFF) data. The TIFF data includes the attribute information. A TIFF file may contain multiple images, each with its own image file directory (image file directory, IFD) and a series of tags, and various compression algorithms may be employed.
Taking IFD0 as an example, the data field includes a plurality of data entries (e.g., DE1, DE2, etc.), each of which has an identification tag, and different tags indicate different attribute information. For example, the DE1 field may indicate the shooting time of the image, the DE2 field may indicate the camera model, the field with tag=0x8825 may indicate the GPS information of the image, and the field with tag=0x8769 may indicate the biometric information of the image (motion posture, heart rate, blood pressure, etc.).
Illustratively, in the field with tag=0x8769, the values 0x00-0x7F are defined to represent motion gestures, so at most 128 motion gestures may be included; for example, 0x00 represents running, 0x01 represents walking, 0x02 represents swimming, and so on. The values 0x80-0x9F represent vital signs, so at most 32 vital signs may be included; for example, 0x80 represents heart rate, 0x81 represents blood pressure, 0x82 represents blood glucose, and so on. The values 0xA0-0xAF represent personal basic information, of which at most 16 kinds may be included; for example, 0xA0 represents height, 0xA1 represents age, 0xA2 represents gender, and so on.
If the sensor data or biometric information acquired by the electronic device indicates that the user is running, the field 0x00 is written into the field indicating the biometric information of the image; if the sensor data or biometric information indicates that the user's heart rate is 60 beats/minute, the field 0x80 0x3C is written (0x3C is hexadecimal for 60); and so on.
In some embodiments, the IFD in the picture file format further includes fields such as identity information and scene information, where the identity information includes the device name of the wearable device, a device account (for example, a Huawei account), a custom user name, and the like; the scene information includes the geographic location of the electronic device when the picture was shot, from which the electronic device comprehensively determines the scene in the picture, such as a park, a bar, a lakeside, or a museum.
It will be appreciated that the picture format shown in fig. 19a is an exemplary picture format provided by the present application, and the location of the biometric information field within the picture format does not limit the present application.
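For illustration only, the byte layout just described can be serialized with a short Python sketch. The code tables below repeat the example values from the text (0x00 = running, 0x80 = heart rate, 0xA1 = age, and so on); the helper name is an assumption, and a real implementation would embed the resulting bytes in the biometric information field (e.g., tag=0x8769) of the TIFF data in the APP1 segment.

    MOTION_CODES = {"running": 0x00, "walking": 0x01, "swimming": 0x02}       # 0x00-0x7F
    VITAL_CODES = {"heart_rate": 0x80, "blood_pressure": 0x81,
                   "blood_glucose": 0x82}                                     # 0x80-0x9F
    PERSONAL_CODES = {"height": 0xA0, "age": 0xA1, "gender": 0xA2}            # 0xA0-0xAF

    def encode_biometric_field(motion=None, vitals=None, personal=None):
        """Serialize biometric information into the bytes of the tagged field."""
        out = bytearray()
        if motion is not None:
            out.append(MOTION_CODES[motion])              # e.g., 0x00 for running
        for name, value in (vitals or {}).items():
            out.append(VITAL_CODES[name])                 # code byte ...
            out.append(value & 0xFF)                      # ... followed by its value
        for name, value in (personal or {}).items():
            out.append(PERSONAL_CODES[name])
            out.append(value & 0xFF)
        return bytes(out)

    # A user running with a heart rate of 60 beats/minute -> 0x00 0x80 0x3C.
    assert encode_biometric_field("running", {"heart_rate": 60}) == b"\x00\x80\x3c"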
(2) Video frame format with biometric information.
The embodiment of the application also provides a video frame format with biometric information. A complete video is made up of multiple video frames, one video frame corresponding to one picture. The electronic device fusion-encodes the biometric information with the video frame information to generate this video frame format. As shown in fig. 19b, fig. 19b illustrates an exemplary video frame format with biometric information. The video frame format includes supplemental enhancement information (supplemental enhancement information, SEI), a sequence parameter set (sequence parameter set, SPS), a picture parameter set (picture parameter set, PPS), and a compression-encoded video data sequence (VCL data). The SPS stores a set of global parameters of the coded video sequence (coded video sequence), i.e., the sequence of structures formed after the pixel data of the original video frames has been encoded. The PPS stores the parameters on which the encoded data of each frame depends.
The SEI belongs to the bitstream and provides a way to add additional information to the video stream. SEI information may be inserted during the generation and transmission of video content, and this inserted information is passed, along with the other video content, over a transmission link to the electronic device. The SEI includes fields such as the network abstraction layer (network abstraction layer, NAL) unit type and the SEI length.
The SEI also includes fields indicating biometric information, including motion posture, heart rate, blood pressure, and the like. Illustratively, the values 0x00-0x7F are defined to represent motion gestures, so at most 128 motion gestures may be included; for example, 0x00 represents running, 0x01 represents walking, 0x02 represents swimming, and so on. The values 0x80-0x9F represent vital signs, so at most 32 vital signs may be included; for example, 0x80 represents heart rate, 0x81 represents blood pressure, 0x82 represents blood glucose, and so on. The values 0xA0-0xAF represent personal basic information, of which at most 16 kinds may be included; for example, 0xA0 represents height, 0xA1 represents age, 0xA2 represents gender, and so on.
If the sensor data or biometric information acquired by the electronic device indicates that the user is running, the field 0x00 is written into the field indicating the biometric information of the image; if the sensor data or biometric information indicates that the user's heart rate is 60 beats/minute, the field 0x80 0x3C is written (0x3C is hexadecimal for 60); and so on.
In some embodiments, the SEI further includes fields such as identity information and scene information, where the identity information includes the device name of the wearable device, a device account (for example, a Huawei account), a custom user name, and the like; the scene information includes the geographic location of the electronic device when the video was shot, from which the electronic device comprehensively determines the scene in the picture, such as a park, a bar, a lakeside, or a museum.
It will be appreciated that the video frame format shown in fig. 19b is an exemplary video frame format provided by the present application, and the location of the biometric information field within the video frame format does not limit the present application.
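For illustration only, the following Python sketch packs the same biometric payload into an H.264 SEI message of type user_data_unregistered (payload type 5), whose payload begins with a 16-byte UUID identifying the data. The UUID value here is an arbitrary assumption, emulation-prevention bytes are omitted for brevity, and the payload bytes reuse the code tables of the picture-format sketch above.

    SEI_NAL_HEADER = b"\x06"                  # nal_unit_type = 6 (SEI)
    USER_DATA_UNREGISTERED = 5                # SEI payload type for user data
    BIOMETRIC_UUID = bytes(range(16))         # hypothetical UUID marking this payload

    def build_biometric_sei(biometric_payload):
        """Wrap biometric bytes in an SEI NAL unit (no emulation prevention)."""
        body = BIOMETRIC_UUID + biometric_payload
        sei = bytearray(SEI_NAL_HEADER)
        sei.append(USER_DATA_UNREGISTERED)    # payload type
        size = len(body)
        while size >= 255:                    # 0xFF-prefixed length coding
            sei.append(255)
            size -= 255
        sei.append(size)
        sei += body
        sei.append(0x80)                      # rbsp_trailing_bits
        return bytes(sei)

    # Running, heart rate 60/min (0x00 0x80 0x3C) wrapped as an SEI message.
    sei = build_biometric_sei(b"\x00\x80\x3c")
    assert sei[0] == 0x06 and sei[1] == 5 and sei[2] == 19   # 16-byte UUID + 3 payload bytes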
It should be understood that the foregoing embodiments, the method flows, the related technical principles and the like may be organically combined to obtain other new embodiments, which are not limited in this disclosure.
Based on the above technical principles and the feature labeling system shown in fig. 3, the following describes, with reference to an example, the method flow for shooting a picture according to the present application. Referring to fig. 20a, fig. 20a shows a flowchart of a method for shooting a picture. The devices involved in the method flowchart include wearable devices and a shooting device. The shooting device is an electronic device 100 with a shooting function, and the wearable devices include the wearable device 201 and the wearable device 202, and may further include more devices. The method comprises the following steps:
Step S101: the wearable device and the shooting device are connected.
The wearable device and the shooting device may be connected through wireless communication such as Bluetooth (BT), near field communication (near field communication, NFC), wireless fidelity (wireless fidelity, WiFi), WiFi direct connection, or a network, without limitation. In the embodiment of the present application, pairing via Bluetooth is described as an example.
Optionally, in the process of establishing the connection, the wearable device and the shooting device acquire each other's connection information (such as hardware information, interface information, and identity information) via Bluetooth. The shooting device can acquire the sensor data of the wearable device after the shooting function is started. The wearable device can synchronize some functions of the shooting device: for example, the wearable device can synchronously output notification reminders from the shooting device (such as incoming call reminders and new message reminders), can actively trigger the shooting device to start the shooting function, and can view the pictures/video files in the shooting device, and so on.
Optionally, in some embodiments, when the photographing device detects a user operation to turn on the annotation mode, the photographing device automatically turns on bluetooth and automatically establishes a bluetooth connection with the wearable device.
Step S102: the photographing apparatus detects a user operation triggering a function of photographing a picture.
The shooting device detects a user operation triggering the picture shooting function, triggers the picture shooting function, and acquires the picture information currently collected by the camera. The user operation may be a touch operation, a voice operation, a hover gesture operation, or the like, which is not limited herein. In the present application, the shooting device detects the user operation triggering the picture shooting function in the annotation mode; for the specific content, reference may be made to the UI embodiments. For example, the user may trigger the shooting device to take a picture by a user operation on the icon 305 in fig. 5c or the icon 404 in fig. 6b; the shooting device detects the user operation and triggers the picture shooting function.
Alternatively, in some embodiments, the user operation may be a user operation on the wearable device: the wearable device detects the user operation triggering the picture shooting function and sends it to the shooting device via Bluetooth, so that the shooting device detects the user operation triggering the picture shooting function. Referring to fig. 9a, the user starts the camera application by clicking the application icon 901 in fig. 9a and triggers the picture shooting function by clicking the icon 1003 in fig. 9b; the wearable device sends a picture shooting instruction to the shooting device via Bluetooth, so that the shooting device triggers shooting; the specific content is not repeated herein. In the present application, the user operation triggering the picture shooting function may also be referred to as a first operation.
Step S103: the photographing apparatus transmits a request message for requesting acquisition of sensor data or biometric information.
After detecting the user operation triggering the picture shooting function, the shooting device sends a request message to the wearable device, where the request message is used to request the acquisition of sensor data or biometric information.
Optionally, the request message includes the data types requested, the data collection manner, and the data collection interval. The data types requested may be: health status, exercise status, emotional status, and the like. The health status includes heart rate, blood pressure, blood glucose, electroencephalogram, electrocardiogram, electromyogram, body temperature, and so on; the exercise status includes common exercise postures such as walking, running, riding, swimming, playing badminton, skating, surfing, and dancing, and may also include some finer-grained postures, for example: forehand stroke, backhand stroke, Latin dance, mechanical dance, and the like; the emotional status includes tension, anxiety, sadness, stress, excitement, pleasure, and the like.
Optionally, the data collection manner may be divided into single acquisition and continuous acquisition. For taking pictures, the data collection manner can generally be single acquisition.
The data collection interval is related to the data collection manner; when the data collection manner is single acquisition, the data collection interval is an invalid value. That is, the shooting device only needs to acquire the data sent by the wearable device once. When the data collection manner is continuous acquisition, the data collection interval is a preset interval, and the shooting device obtains the data sent by the wearable device at the preset interval.
The data types requested by the shooting device can be configured by the user on the shooting device side; for the specific configuration content, reference may be made to the UI embodiments. For example, as shown in fig. 8c, the configuration interface 80 includes a plurality of configuration options, each corresponding to one data type, and the shooting device sends the request message to the wearable device according to the data types selected by the user. For example, the motion posture, heart rate, blood pressure, blood glucose, and emotional state are each represented by one byte; if the shooting device wants to acquire the motion posture, the byte representing the motion posture is set to 1, and otherwise to 0; the other data types work the same way.
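For illustration only, the request message of step S103 can be laid out as in the following Python sketch, which follows the one-byte-per-data-type convention just described; the field order, the two-byte interval, and the sentinel value for "invalid" are assumptions.

    import struct

    SINGLE, CONTINUOUS = 0, 1                 # data collection manner
    INVALID_INTERVAL = 0xFFFF                 # used when the manner is single acquisition

    def build_request(requested_types, manner=SINGLE, interval_ms=INVALID_INTERVAL):
        """One flag byte per data type (1 = requested), then manner and interval."""
        flags = [1 if t in requested_types else 0
                 for t in ("motion", "heart_rate", "blood_pressure",
                           "blood_glucose", "emotion")]
        return struct.pack(">5BBH", *flags, manner, interval_ms)

    # Taking a picture: single acquisition of motion posture and heart rate.
    msg = build_request({"motion", "heart_rate"})
    assert msg == b"\x01\x01\x00\x00\x00\x00\xff\xff"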
Optionally, in some embodiments, the user may trigger the picture shooting function on the wearable device: the wearable device detects the user operation triggering the picture shooting function and sends a picture shooting instruction to the shooting device. The shooting device receives the picture shooting instruction, acquires the picture information currently collected by the camera, and sends the request message to the wearable device. In the present application, the user operation triggering the picture shooting function detected by the wearable device may be referred to as a second operation.
Step S104: the wearable device transmits the sensor data or the biometric information to the photographing device.
The wearable device parses the received request message to obtain the data types, data collection manner, and data collection interval required by the shooting device. According to the request message, the wearable device sends sensor data or biometric information to the shooting device. For example, for taking pictures, the data collection manner is single acquisition and the data collection interval is set to an invalid value, so the wearable device sends data to the shooting device once. If the data types in the request message include motion posture and heart rate, the wearable device sends the sensor data or biometric information acquired through the motion sensor and the heart rate sensor to the shooting device.
Optionally, the sensor data is raw data, i.e., data directly detected by the sensors. For example, the sensor data may include the user's heart rate data obtained by the wearable device through a heart rate sensor; data such as the user's motion amplitude, angle, and speed obtained through a motion sensor; data such as the user's skin resistance and conductivity obtained through a galvanic skin sensor; and data such as the user's blood glucose, blood pressure, and body temperature obtained through biosensors.
Optionally, the biometric information is data processed based on the sensor data, i.e., further data calculated or inferred from the raw data. For example, the biometric information may be that the wearable device calculates, from the heart rate data, that the user's current heart rate is within the normal range; calculates the user's current motion posture, such as walking, running, riding, swimming, or playing badminton, from the data acquired by the motion sensor; calculates the user's current mood index, stress index, and the like from the data acquired by the piezoelectric sensor; and calculates the user's current health status from the data acquired by the biosensors.
By way of example, the values 0x00-0x7F represent motion gestures, of which at most 128 can be included; for example, 0x00 represents running, 0x01 represents walking, 0x02 represents swimming, and so on. The values 0x80-0x9F represent vital signs, of which at most 32 can be included; for example, 0x80 represents heart rate, 0x81 represents blood pressure, 0x82 represents blood glucose, and so on. The values 0xA0-0xAF represent personal basic information, of which at most 16 kinds can be included; for example, 0xA0 represents height, 0xA1 represents age, 0xA2 represents gender, and so on. If the shooting device wants to acquire the motion gesture, and the wearable device detects that the data collected by the motion sensor indicates the user is running, the wearable device writes 0x00 into the data packet to be sent, indicating that the user is running. If the shooting device wants to acquire the heart rate, and the wearable device detects that the data acquired by the heart rate sensor is 60 beats/minute, the wearable device writes data bytes representing a heart rate of 60 beats/minute into the data packet to be sent, for example 0x80 0x3C (0x3C is hexadecimal for 60); optionally, since a heart rate of 60 beats/minute is within the normal range, the wearable device may instead write data bytes representing "heart rate normal" into the data packet.
Optionally, in some embodiments, the wearable device parses the received request message to obtain the data types, data collection manner, and data collection interval required by the shooting device. According to the request message, the wearable device takes the sensor data from a preset time period earlier and sends that sensor data, or the biometric information derived from it, to the shooting device. The preset time period may be 1 second or 0.5 second, without limitation. Because the moment at which the wearable device collects the sensor data is always later than the moment at which the electronic device detects the user operation triggering the picture shooting function, this delay can be measured experimentally and set as the preset time period, so that the wearable device can provide the shooting device with accurate sensor data or biometric information and reduce the error.
Optionally, in some embodiments, the wearable device parses the received request message and, according to the request message, sends the sensor data or biometric information to the shooting device according to a timestamp. The timestamp is the moment at which the electronic device detected the user operation triggering the picture shooting function; the wearable device takes the sensor data at that moment and sends the sensor data or the corresponding biometric information to the shooting device. In this way, the wearable device can provide more accurate information to the shooting device.
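For illustration only, the wearable-device side of step S104 might keep a short history of samples so that it can answer with the data closest to the capture moment (the timestamp and preset-time-period variants above), encoded with the illustrative byte codes. The history class and its granularity are assumptions made for this sketch.

    import bisect

    class SensorHistory:
        """Keeps (timestamp_ms, value) samples so a past moment can be served."""
        def __init__(self):
            self.times, self.values = [], []
        def add(self, t_ms, value):
            self.times.append(t_ms)
            self.values.append(value)
        def at(self, t_ms):
            i = bisect.bisect_right(self.times, t_ms) - 1   # last sample at or before t_ms
            return self.values[max(i, 0)]

    def answer_request(capture_time_ms, heart_rate_hist, motion_hist):
        """Encode the samples nearest the capture moment into the reply packet."""
        out = bytearray()
        if motion_hist.at(capture_time_ms) == "running":
            out.append(0x00)                                # motion-gesture code for running
        out += bytes([0x80, heart_rate_hist.at(capture_time_ms) & 0xFF])
        return bytes(out)

    hr, motion = SensorHistory(), SensorHistory()
    hr.add(1000, 60)
    motion.add(1000, "running")
    assert answer_request(1500, hr, motion) == b"\x00\x80\x3c"   # running, 60 beats/minute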
Step S105: and fusing and encoding the shot picture information and the biological characteristic information to generate a picture with the biological characteristic information.
After receiving the sensor data or the biometric information, the shooting device fusion-encodes the shot picture information with the biometric information to generate a picture with biometric information, where the format of the picture may be the picture format shown in fig. 19a and the biometric information corresponds to the sensor data. The displayed content of the biometric information may be a simple sentence or detailed, complete information. The present application does not limit the display content, display position, or display form of the biometric information in the picture.
Optionally, the electronic device receives the sensor data, determines the biometric information based on the sensor data, and fusion-encodes the shot picture information with the biometric information; alternatively, the electronic device receives the biometric information directly and fusion-encodes the shot picture information with the received biometric information.
For example, if the sensor data acquired by the shooting device is the user's heart rate data, the biometric information may be whether the user's heart rate is within the normal range; if the sensor data is the user's current motion posture, such as walking, running, riding, swimming, or playing badminton, the biometric information may be whether the user's motion posture is standard; if the sensor data is information such as the user's current mood index and stress index, the biometric information may be whether the user's mood is pleasant; and so on.
The generated picture may be as shown in fig. 11b above. When the user views a picture carrying biometric information, the biometric information may be displayed at a fixed location in the picture, such as its upper portion. The biometric information may also be hidden when viewing begins and revealed through a trigger control: in fig. 13a, for example, the biometric information is displayed by triggering cursor 1210; likewise, for fig. 11b, the biometric information is viewed in user interface 1300 of fig. 11c by triggering icon 1205.
Optionally, in some embodiments, the electronic device may further obtain identity information of the wearable device, such as a device name, a device account (for example, a Huawei account), a custom user name, and so on, and perform fusion encoding on the identity information of the wearable device and the picture information to generate a picture file carrying the identity information. The electronic device may also obtain scene information when taking the picture, where the scene information describes the scene in the picture, comprehensively determined by the electronic device from scene recognition on the image and the geographic location at the time of shooting, such as a park, a bar, a lake or a museum, and perform fusion encoding on the scene information and the picture information to generate a picture file carrying the scene information.
Optionally, in some embodiments, the biometric information is derived from a combination of the sensor data and the picture information. The user's health state, motion state, emotional state and the like can be obtained from the sensor data, while scene information, the user's motion gesture and the like can be obtained from the picture information; the photographing device combines the two to derive the picture's biometric information. For example, the sensor data received by the photographing device includes a heart rate of 60 beats/minute and a motion gesture of running; image analysis of the captured picture shows that the user in the picture is running and that the shooting location is a park. Combining the sensor data and the picture information, the photographing device obtains the final biometric information: the user is running in a park with a heart rate of 60 beats/minute.
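A minimal sketch of this combination step, with the sensor reading and the image-analysis result as assumed inputs and the output being the caption-style biometric information of the example above:

```kotlin
data class SensorReading(val gesture: String, val heartRateBpm: Int)
data class ImageAnalysis(val gesture: String?, val scene: String?)

fun deriveBiometricInfo(sensor: SensorReading, image: ImageAnalysis): String {
    // The sensor reading is kept as the primary source; the image result
    // corroborates the gesture and contributes the scene.
    val scenePart = image.scene?.let { " in a $it" } ?: ""
    return "User is ${sensor.gesture}$scenePart, heart rate ${sensor.heartRateBpm} beats/minute"
}

fun main() {
    println(deriveBiometricInfo(SensorReading("running", 60), ImageAnalysis("running", "park")))
    // -> User is running in a park, heart rate 60 beats/minute
}
```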
Optionally, in some embodiments, information of one or more face images is stored in the photographing device. The photographing device determines, according to the identity information of the wearable device, a preset face image corresponding to the wearable device, performs similarity matching between the preset face image and the one or more persons in the picture, and, if the preset face image successfully matches one of the persons, displays at least part of the biometric information near that person.
The preset face image has an association relationship with the wearable device. It may be preset in the photographing device by the user, uploaded to the photographing device as an image or video, or preset in the wearable device by the user and then provided to the photographing device by the wearable device; the present application does not limit this.
For example, the wearable device is a Huawei watch 1 whose bound user name is small A, and the photographing device determines the face information of small A. The photographing device may determine small A's face information by looking up the binding between users and faces in the address book, or through the binding between the wearable device and face information (for example, small A's face information is uploaded through the face-image icon 802 in fig. 8c, and small A's face information is face image 7031); and so on. The photographing device performs image recognition on the picture, recognizes one or more faces in it, and performs similarity matching between small A's face information and those faces. If the similarity between small A's face information and one of the faces in the picture is greater than a threshold, the biometric information is displayed near that face. Referring to fig. 13b, the biometric information of the picture in fig. 13b is displayed near the person, indicating the person described by the biometric information.
Optionally, if the preset face image does not successfully match any person in the picture, that is, the photographing device detects that the preset face image is not present in the picture currently acquired by the photographing device, the photographing device may output a prompt message (a first prompt), for example, "user small A is not in the picture".
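The patent does not name a particular face-recognition algorithm, so the matching step can be sketched generically as embedding comparison by cosine similarity, with the threshold test deciding between displaying the information near the matched face and emitting the first prompt. All types and the threshold value below are assumptions:

```kotlin
import kotlin.math.sqrt

data class DetectedFace(val centerX: Int, val centerY: Int, val embedding: FloatArray)

fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

// Returns the face most similar to the preset face image, provided the
// similarity exceeds the threshold; a null result corresponds to the
// first prompt ("user small A is not in the picture").
fun matchPresetFace(
    presetEmbedding: FloatArray,
    faces: List<DetectedFace>,
    threshold: Float = 0.8f
): DetectedFace? =
    faces.map { it to cosine(presetEmbedding, it.embedding) }
        .filter { it.second > threshold }
        .maxByOrNull { it.second }
        ?.first
```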
Optionally, in some embodiments, in addition to the method flow shown in fig. 20a, the present application further provides a method flow for taking a picture, as shown in fig. 20b.
Step S101a: the wearable device and the shooting device are connected. The specific description may refer to the description of step S101, and will not be repeated here.
Step S102a: the photographing apparatus transmits a request message for requesting acquisition of sensor data or biometric information.
After the wearable device and the shooting device are connected, the shooting device sends a request message to the wearable device, wherein the request message is used for requesting to acquire sensor data or biological characteristic information.
Optionally, when the photographing device detects a user operation of turning on the annotation mode, the photographing device sends a request message to the wearable device.
Optionally, in the labeling mode, when the photographing device detects a user operation triggering acquisition of the sensor data or the biometric information, it sends a request message to the wearable device. Referring to fig. 7a, the photographing device detects a user operation on icon 407 and sends a request message to the wearable device.
For a specific description of the request message, reference may be made to the description of step S103, which is not repeated here.
Step S103a: the wearable device transmits the sensor data or the biometric information to the photographing device.
For a specific description of this step, reference may be made to the description of step S104. In addition,
in some embodiments, after the photographing device obtains the sensor data or the biometric information sent by the wearable device, it displays at least part of the biometric information on the shooting interface. Referring to fig. 7b, the image acquired by the camera in real time is displayed in display area 40, together with preview area 408, whose content is at least part of the biometric information for the user to view in real time. The biometric information corresponds to the sensor data. In this embodiment of the application, the shooting interface may be called a shooting preview interface, which includes the preview image acquired by the camera. The photographing device displays at least part of the biometric information on the preview image; this information may be called second information, corresponding to the second sensor data.
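The real-time preview overlay can be sketched as a single mutable text field updated whenever new biometric information arrives and drawn over each preview frame; the drawing callback below is an assumed placeholder, not an API from the patent:

```kotlin
// The latest biometric text is kept in one field and drawn over each
// preview frame; `drawText` stands in for the actual UI call.
class PreviewAnnotator {
    @Volatile
    var overlayText: String = ""
        private set

    // Called whenever the wearable device delivers new data.
    fun onBiometricUpdate(text: String) { overlayText = text }

    // Called once per rendered preview frame.
    fun renderOverlay(drawText: (String) -> Unit) {
        if (overlayText.isNotEmpty()) drawText(overlayText)
    }
}
```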
In some embodiments, the photographing device holds information of one or more face images and a correspondence between each face image and a wearable device. Through the identity information of the wearable device, the photographing device determines the preset face image corresponding to the wearable device from the one or more face images, performs similarity matching between the preset face image and the one or more persons in the picture, and, if the preset face image successfully matches one of the persons, displays at least part of the biometric information near that person. For example, preview area 408 in fig. 7b may be displayed near the matched person.
Optionally, if the preset face image does not successfully match any person in the picture, that is, the photographing device detects that the preset face image is not present in the picture currently acquired by the photographing device, the photographing device may output a prompt message (a first prompt), for example, "user small A is not in the picture; please aim the camera at user small A".
Step S104a: the photographing apparatus detects a user operation triggering a function of photographing a picture. The specific description may refer to the description of step S102, and will not be repeated here.
Step S105a: fusion-encode the captured picture information and the biometric information to generate a picture carrying the biometric information, where the biometric information corresponds to the sensor data. The specific description may refer to the description of step S105, and will not be repeated here.
In the embodiment described in fig. 20a, the photographing device acquires the sensor data or the biometric information from the wearable device after detecting the user operation triggering the picture-taking function. In the method shown in fig. 20b, by contrast, the photographing device may acquire the sensor data or the biometric information from the wearable device as soon as the labeling mode is turned on, and display at least part of the biometric information on the shooting preview interface in real time, achieving a preview effect for the biometric information and improving user experience. Alternatively, after the labeling mode is turned on, the photographing device may acquire the sensor data or the biometric information from the wearable device only upon receiving a user operation, and then display at least part of the biometric information on the preview interface in real time, so that the user can freely control whether the biometric information is shown or hidden, improving user experience.
Optionally, in some embodiments, the user may trigger the picture-taking function on the wearable device: the wearable device detects the user operation triggering the picture-taking function and sends a picture-taking instruction together with the sensor data (or the biometric information) to the photographing device. After receiving the picture-taking instruction and the sensor data (or the biometric information), the photographing device acquires the picture information currently captured by the camera and performs fusion encoding on the captured picture information and the biometric information to generate a picture carrying the biometric information, where the biometric information corresponds to the sensor data. For a specific description of the sensor data and the biometric information, refer to step S104 above.
For example, referring to fig. 9c, the user selects the desired data types in the configuration interface shown in fig. 9c, and after configuration is completed the wearable device turns on the labeling mode. In response to a user operation on icon 1003 in interface 1000, the wearable device sends a picture-taking instruction to the photographing device and provides sensor data or biometric information according to the data types selected in fig. 9c. The photographing device receives the picture-taking instruction, acquires the picture information currently captured by the camera, and performs fusion encoding on the captured picture information and the biometric information to generate a picture carrying the biometric information.
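A sketch of the watch-side trigger just described, in which one message bundles the picture-taking instruction with the freshest sample so the photographing device can fuse them without a further round trip; the transport call and all names are assumptions:

```kotlin
data class SensorSample(val timestampMs: Long, val heartRateBpm: Int)
data class CaptureCommand(val takenAtMs: Long, val sample: SensorSample)

// `readSample` and `send` are assumed placeholders for the sensor read
// and the Bluetooth transport, respectively.
fun onWatchShutterTapped(readSample: () -> SensorSample, send: (CaptureCommand) -> Unit) {
    // One message carries both the instruction and the sample.
    send(CaptureCommand(System.currentTimeMillis(), readSample()))
}
```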
Optionally, the application further provides a method flow for shooting the video. Referring to fig. 21, fig. 21 shows a flowchart of a method for capturing video.
Step S201: the wearable device and the shooting device are connected. The specific description may refer to the description of step S101, and will not be repeated here.
Step S202: the photographing apparatus detects a user operation triggering a photographing video function.
The photographing device detects a user operation triggering the video-capture function, triggers that function, and acquires the video frame information currently captured by the camera. The user operation may be a touch operation, a voice operation, a hover gesture operation, or the like, which is not limited here. In the present application, the photographing device detects the user operation triggering the video-capture function in the labeling mode; for details, refer to the UI embodiments.
Alternatively, in some embodiments, the user operation may be performed on the wearable device: the wearable device detects the user operation triggering the video-capture function and relays it via Bluetooth to the photographing device, which thereby detects the user operation triggering the video-capture function.
Step S203: the photographing apparatus transmits a request message for requesting acquisition of sensor data or biometric information.
After detecting the user operation triggering the video shooting function, the shooting device sends a request message to the wearable device, wherein the request message is used for requesting to acquire sensor data or biological characteristic information.
The request message includes the data type, the data collection manner and the data collection interval supported by the photographing device. For a specific description of this part, reference may be made to the description of step S103 above. Unlike step S103, for video capture the data collection manner is generally continuous collection, and the data collection interval may be set to 1 second; that is, the photographing device acquires sensor data or biometric information every 1 second while the video is being captured.
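The patent does not fix a wire format for the request message, so its three fields can be sketched as a plain structure; the field names and the example values below are assumptions:

```kotlin
enum class CollectionManner { SINGLE_SHOT, CONTINUOUS }

data class SensorRequest(
    val dataTypes: Set<String>,    // e.g. "heartRate", "motionGesture"
    val manner: CollectionManner,  // continuous collection for video capture
    val intervalMs: Long           // e.g. 1000 ms between samples
)

// The request a photographing device might send when video capture starts.
val videoRequest = SensorRequest(setOf("heartRate", "motionGesture"), CollectionManner.CONTINUOUS, 1000L)
```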
Optionally, in some embodiments, the user may trigger a video capturing function on the wearable device, and the wearable device detects a user operation that triggers the video capturing function and sends a video capturing instruction to the capturing device. The shooting device receives the shooting video instruction and sends a request message to the wearable device.
In some embodiments, the user triggers the video-capture function on the wearable device: the wearable device detects the user operation triggering the video-capture function, sends a video-capture instruction to the photographing device, and sends the sensor data or the biometric information to the photographing device. In this case step S203 does not need to be performed; the wearable device sends the sensor data or the biometric information based on the configuration on the wearable device side.
Step S204: the wearable device periodically transmits sensor data or biometric information.
The wearable device parses the received request message to obtain the data type, the data collection manner and the data collection interval required by the photographing device. According to the request message, the wearable device periodically sends sensor data or biometric information to the photographing device. For example, for video capture the data collection manner is continuous collection and the data collection interval is set to 1 second, so the wearable device sends the sensor data or the biometric information to the photographing device every 1 second. For a specific description of the sensor data and the biometric information, refer to step S104 above.
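The periodic transmission of step S204 reduces to one timer tick per collection interval; in the sketch below, `send` stands in for whatever transport (for example, Bluetooth) actually carries the data, and the sample type is an assumption:

```kotlin
import java.util.Timer
import kotlin.concurrent.timer

data class SensorSample(val timestampMs: Long, val heartRateBpm: Int)

// One timer tick per collection interval: read the newest sample and
// hand it to the transport.
fun startPeriodicUpload(
    intervalMs: Long,
    read: () -> SensorSample,
    send: (SensorSample) -> Unit
): Timer = timer(period = intervalMs) { send(read()) }
```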
Step S205: each time the photographing device receives sensor data or biometric information, it performs fusion encoding on the captured video information and the biometric information to generate image frames carrying the biometric information, where the biometric information corresponds to the sensor data.
Each time the photographing device receives the sensor data or the biometric information, it performs fusion encoding on the captured video information and the biometric information. Since a video consists of multiple image frames, each image frame generated by the photographing device has corresponding biometric information, and the biometric information of an image frame corresponds to sensor data. For example, the photographing device receives sensor data once every second and generates 24 image frames per second; the biometric information of the 24 frames within a given second is obtained from the sensor data received at the end of that second. Thus, when the photographing device acquires sensor data at the 5th second, it performs fusion encoding on that data and the 24 frames of the video information between the 4th and 5th seconds to generate image frames carrying biometric information; when it acquires sensor data at the 6th second, it performs fusion encoding on that data and the 24 frames between the 5th and 6th seconds; and so on.
Optionally, in some embodiments, the photographing device performs fusion encoding on the biometric information and the video information according to a timestamp. For example, the wearable device periodically sends sensor data or biometric information carrying a timestamp that indicates the moment in the video information to which the data corresponds. In this way, the wearable device provides more accurate information to the photographing device, avoiding mismatches between the biometric information and the video content caused by transmission delay.
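Under the timestamp variant, the per-frame fusion of step S205 amounts to pairing each frame with the sample covering its presentation time, so that at 24 fps and a 1-second interval one sample annotates roughly 24 frames, as in the example above. A sketch with assumed names:

```kotlin
data class SensorSample(val timestampMs: Long, val heartRateBpm: Int)
data class Frame(val ptsMs: Long)
data class TaggedFrame(val frame: Frame, val sample: SensorSample?)

fun tagFrames(frames: List<Frame>, samples: List<SensorSample>): List<TaggedFrame> {
    val sorted = samples.sortedBy { it.timestampMs }
    return frames.map { f ->
        // Frames between the 4th and 5th second are fused with the sample
        // collected at the 5th second, matching the example above.
        TaggedFrame(f, sorted.firstOrNull { it.timestampMs >= f.ptsMs })
    }
}
```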
Optionally, in some embodiments, the biometric information is derived from a combination of the sensor data and the video information. The user's health state, motion state, emotional state and the like can be obtained from the sensor data, while scene information, the user's motion gesture and the like can be obtained from the video information; the photographing device combines the two to derive the biometric information. For example, the sensor data received by the photographing device includes a heart rate of 60 beats/minute and a motion gesture of running; image analysis of the captured video shows that the user in the video is running and that the shooting location is a park. Combining the sensor data and the video information, the photographing device obtains the final biometric information of the image frames in the video: the user is running in a park with a heart rate of 60 beats/minute.
Step S206: the photographing apparatus detects a user operation triggering stopping photographing of a video.
The photographing device detects a user operation triggering the stop of video capture and stops capturing video. The user operation may be a touch operation, a voice operation, a hover gesture operation, or the like, which is not limited here. In the present application, the photographing device detects the user operation triggering the stop of video capture in the labeling mode; for details, refer to the UI embodiments.
Optionally, in some embodiments, the user may trigger to stop shooting the video on the wearable device, and the wearable device detects a user operation triggering to stop shooting the video and sends an instruction to stop shooting the video to the shooting device. The shooting device receives the video shooting stopping instruction and stops shooting video.
Step S207: the photographing device sends a request message to the wearable device, the request message being used to stop the acquisition of sensor data or biometric information.
After detecting the user operation triggering the stop of video capture, the photographing device sends a request message to the wearable device, the request message being used to stop the acquisition of sensor data or biometric information. The wearable device receives the request message and stops sending sensor data or biometric information to the photographing device.
Alternatively, in some embodiments, the user may trigger the stop of video capture on the wearable device: the wearable device detects the user operation triggering the stop of video capture, sends a stop-capture instruction to the photographing device, and stops sending sensor data or biometric information to the photographing device.
Step S208: the photographing apparatus generates and saves a video.
After the photographing device detects the user operation triggering the stop of video capture, it generates and saves the captured video. The video format may be as shown in fig. 19b above, with the biometric information corresponding to the sensor data.
The generated video may be as shown in fig. 16a or fig. 16b. When the user views the video, it also contains biometric information, whose displayed content may be a brief sentence or detailed, complete information. The present application does not limit the display content, display position or display form of the biometric information.
Optionally, in some embodiments, the electronic device may further obtain identity information of the wearable device, such as a device name, a device account (for example, a Huawei account), a custom user name, and so on, and perform fusion encoding on the identity information of the wearable device and the video frame information to generate a video file carrying the identity information. The electronic device may also obtain scene information when capturing the video, where the scene information describes the scene in the video frames, comprehensively determined by the electronic device from scene recognition on the video frames and the geographic location at the time of capture, such as a park, a bar, a lake or a museum, and perform fusion encoding on the scene information and the video frame information to generate a video file carrying the scene information.
Optionally, in some embodiments, information of one or more face images is stored in the photographing device. The photographing device determines, according to the identity information of the wearable device, a preset face image corresponding to the wearable device, performs similarity matching between the preset face image and the one or more persons in the image frames of the video, and, if the preset face image successfully matches one of the persons, displays at least part of the biometric information near that person. Referring to fig. 18a or fig. 18b above, the biometric information of the video is displayed above the person, indicating the person described by the biometric information.
Optionally, the preset facial image has an association relationship with the wearable device, where the preset facial image may be preset in the shooting device by the user, or may be uploaded to the shooting device in an image or video manner, or may be preset in the wearable device by the user, and then provided to the shooting device by the wearable device.
It should be noted that the photographing device may acquire the sensor data or the biometric information from the wearable device once the labeling mode is turned on, and display at least part of the biometric information on the shooting preview interface in real time, achieving a preview effect for the biometric information and improving user experience. Alternatively, after the labeling mode is turned on, the photographing device may acquire the sensor data or the biometric information from the wearable device only upon receiving a user operation, and then display at least part of the biometric information on the preview interface in real time, so that the user can freely control whether the biometric information is shown or hidden, improving user experience.
Optionally, in some embodiments, during the process of capturing video, the biometric information may be displayed on the capturing interface in real time for the user to view in real time.
The methods for taking pictures and videos described above are applicable to the system shown in fig. 1. Optionally, the present application further provides a photographing system in which one photographing device connects to multiple wearable devices of the same type, obtains their sensor data or biometric information, and performs feature recognition on multiple users in the captured pictures/videos. As shown in fig. 22, the photographing system includes a photographing device 101, multiple wearable devices 201 of the same type, and a third device 301. The photographing device 101 and the multiple wearable devices 201 of the same type establish connections through the third device 301. Specifically:
the photographing apparatus 101 is an electronic apparatus having an image capturing function, such as a cellular phone, a tablet, a camera, or the like. The wearable device 201 includes a wireless earphone, a smart watch, a smart bracelet, smart glasses, an electronic article of clothing, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo, a smart mirror, and the like.
The third device 301 may be a relay device, for example a Bluetooth relay hub that connects to the photographing device 101 and the multiple wearable devices 201 of the same type via Bluetooth. The third device 301 may also be a cloud server, in which case it connects to the photographing device 101 and the multiple wearable devices 201 through a mobile communication module. Through the third device 301, the photographing device 101 can establish connections with multiple wearable devices 201 of the same type. Optionally, in one possible implementation, the wearable devices and the photographing device may also connect to the third device 301 through other wireless communication modes such as WiFi, in which case the third device 301 may also be a router with data processing and computing functions.
Based on the above technical principles and the photographing system shown in fig. 22, the following describes a method flow for photographing a picture according to the present application with reference to an example. Referring to fig. 23, fig. 23 shows a flowchart of a method for capturing a picture. The equipment involved in the flow chart of the method comprises n wearable equipment, shooting equipment and third equipment, wherein n is a positive integer. The method comprises the following steps:
step S301: and the n wearable devices are connected with the third device, and the shooting device is connected with the third device.
The connection mode includes, but is not limited to, Bluetooth (BT), near field communication (NFC), wireless fidelity (WiFi), WiFi direct connection, a network, and other wireless communication modes. In this embodiment of the application, Bluetooth pairing is described as an example.
In the process of establishing the connection, the n wearable devices and the photographing device mutually acquire connection information (such as hardware information, interface information, identity information and the like) of each other through the third device. The photographing device may acquire sensor data or biometric information of the wearable device, the wearable device may synchronize part of functions of the photographing device, for example, the wearable device may synchronously output notification reminders (e.g., incoming call reminders, new message reminders, etc.) in the photographing device, the wearable device may actively trigger the photographing device to start the photographing function, the wearable device may view pictures/video files in the photographing device, and so on.
Step S302: the photographing apparatus detects a user operation triggering a function of photographing a picture. The specific description may refer to the description of step S102, and will not be repeated here.
Step S303: the photographing apparatus transmits a request message for requesting acquisition of sensor data or biometric information to the third apparatus.
The specific description of this step may refer to the description of step S103; unlike step S103, the photographing device sends the request message to the third device.
Step S304: the third device forwards the request message to the n wearable devices.
After receiving the request message sent by the shooting device, the third device forwards the request message to the n wearable devices.
Step S305: the wearable device sends sensor data or biological characteristic information, wherein the sensor data or biological characteristic information comprises identity information of the wearable device.
The specific description of this step may refer to the description of step S104, and unlike step S104, the wearable device sends sensor data or biometric information to the third device, where the sensor data or biometric information further includes identity information of the wearable device, where the identity information may uniquely represent one wearable device.
Step S306: after determining that the sensor data or the biometric information of all connected wearable devices has been fully received, the third device sends it to the photographing device.
The third device receives the sensor data or the biometric information sent by the wearable devices and, once it has received the sensor data or the biometric information of all n wearable devices, sends them to the photographing device. The sensor data or the biometric information of the n wearable devices includes the identity information of each wearable device, and the third device judges from the acquired identity information whether the sensor data or the biometric information of all connected wearable devices has been received.
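Step S306 on the third device can be sketched as an aggregator that buffers one report per wearable identity and forwards the batch only once every expected device has reported; all names below are assumptions:

```kotlin
data class Report(val deviceId: String, val heartRateBpm: Int)

class RelayAggregator(
    private val expectedIds: Set<String>,
    private val forward: (Map<String, Report>) -> Unit
) {
    private val pending = mutableMapOf<String, Report>()

    // Called for each report a wearable device sends; the batch is
    // forwarded to the photographing device only when every expected
    // identity has reported, then the buffer is cleared for the next round.
    @Synchronized
    fun onReport(report: Report) {
        pending[report.deviceId] = report
        if (pending.keys.containsAll(expectedIds)) {
            forward(pending.toMap())
            pending.clear()
        }
    }
}
```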
Step S307: the shooting device carries out fusion coding on shot picture information and biological characteristic information to generate a picture with the biological characteristic information, wherein the biological characteristic information corresponds to the sensor data.
The specific description of this step may refer to the description of step S105; unlike step S105, since the sensor data or the biometric information received by the photographing device comes from n wearable devices, the biometric information correspondingly includes the biometric information of n users.
The generated picture may be as shown in fig. 14a, which shows a picture-viewing interface in which the viewed picture contains two persons. When the user views the picture, it also contains biometric information, in this case the biometric information of both persons: small A is running and small B is running. The displayed biometric information may be a brief sentence or detailed, complete information. The present application does not limit the display content, display position or display form of the biometric information.
Optionally, in some embodiments, information of one or more face images is stored in the photographing device. The photographing device determines, according to the identity information of a wearable device, the preset face image corresponding to that wearable device, performs similarity matching between the preset face image and the one or more persons in the picture, and, if the preset face image successfully matches one of the persons, displays at least part of the biometric information near that person. The photographing device obtains the identity information of n wearable devices, so the matching process may be performed up to n times. Referring to fig. 14b, fig. 14b includes two pieces of biometric information, each displayed near a different person ("small A is running" above the person in the No. 12 jersey, "small B is running" above the person in the No. 8 jersey); the display position of each piece of biometric information indicates the person it describes.
The preset face image has an association relationship with the wearable device. It may be preset in the photographing device by the user, uploaded to the photographing device as an image or video, or preset in the wearable device by the user and then provided to the photographing device by the wearable device; the present application does not limit this.
It should be noted that the photographing device may acquire the sensor data or the biometric information from the wearable devices once the labeling mode is turned on, and display at least part of the biometric information on the shooting preview interface in real time, achieving a preview effect for the biometric information and improving user experience. Alternatively, after the labeling mode is turned on, the photographing device may acquire the sensor data or the biometric information from the wearable devices only upon receiving a user operation, and then display at least part of the biometric information on the preview interface in real time, so that the user can freely control whether the biometric information is shown or hidden, improving user experience.
In this embodiment of the application, the photographing device and the n wearable devices establish connections through the third device. In the labeling mode, the photographing device detects the user operation triggering the picture-taking function and, while acquiring the picture information, sends a request message to the wearable devices through the third device. The photographing device receives the sensor data or the biometric information of the n wearable devices and performs fusion encoding on the n pieces of biometric information and the picture information to generate a picture carrying the biometric information, where the biometric information corresponds to the n pieces of sensor data. It can be seen that the method provided by the present application completes the labeling of the picture during picture generation, with no need to later perform feature extraction on the stored picture, saving hardware resources; and it can combine the sensor data of the wearable devices to label the picture with more accurate and richer features.
The application further provides a method flow for shooting the video. Referring to fig. 24, fig. 24 shows a flowchart of a method for capturing video.
Step S401: and the n wearable devices are connected with the third device, and the shooting device is connected with the third device. The specific description may refer to the description of step S301, and will not be repeated here.
Step S402: the photographing apparatus detects a user operation triggering a photographing video function. The specific description may refer to the description of step S202, and will not be repeated here.
Step S403: the photographing apparatus transmits a request message for requesting acquisition of sensor data or biometric information to the third apparatus.
The specific description of this step may refer to the description of step S203; unlike step S203, the photographing device sends the request message to the third device.
Step S404: the third device forwards the request message to the n wearable devices respectively.
After receiving the request message sent by the shooting device, the third device forwards the request message to the n wearable devices.
Step S405: the wearable device periodically sends sensor data or biological characteristic information, wherein the sensor data or biological characteristic information comprises identity information of the wearable device.
The specific description of this step may refer to the description of step S204, unlike step S204, n wearable devices send sensor data or biometric information to the third device, where the sensor data or biometric information further includes identity information of the wearable device, where the identity information may uniquely represent one wearable device.
Step S406: after determining that the sensor data or the biometric information of all connected wearable devices within one period has been fully received, the third device sends it to the photographing device.
Since the n wearable devices periodically send sensor data or biometric information to the third device, the third device receives n pieces of sensor data or biometric information within one period. The sensor data or the biometric information of the n wearable devices includes the identity information of each wearable device, and the third device judges from the acquired identity information whether the sensor data or the biometric information of all connected wearable devices has been received. Once the sensor data or the biometric information of all n wearable devices has been received, it is sent to the photographing device.
Step S407: each time the photographing device receives sensor data or biometric information, it performs fusion encoding on the captured video information and the biometric information to generate image frames carrying the biometric information, where the biometric information corresponds to the sensor data.
The specific description may refer to the description of step S205, and will not be repeated here.
Step S408: the photographing apparatus detects a user operation triggering stopping photographing of a video. The specific description may refer to the description of step S206, and will not be repeated here.
Step S409: the photographing apparatus transmits a request message for stopping acquisition of the sensor data or the biometric information to the third apparatus. For the specific description, reference may be made to the description of step S207, and unlike step S207, the photographing apparatus transmits a request message to the third apparatus.
Step S410: the third device forwards the request message to the n wearable devices respectively.
After receiving the request message sent by the shooting device, the third device forwards the request message to the n wearable devices. The request message is used for stopping acquiring the sensor data or the biological characteristic information, and the wearable device receives the request message and stops transmitting the sensor data or the biological characteristic information.
Step S411: the photographing apparatus generates and saves a video.
The specific description of this step may refer to the description of step S208; unlike step S208, the generated video may be as shown in fig. 18a and fig. 18b. When the user views the video, the biometric information may be displayed along with the playback progress of the video, and its displayed content may be a brief sentence or detailed, complete information. The present application does not limit the display content, display position or display form of the biometric information.
In this embodiment of the application, the photographing device and the n wearable devices establish connections through the third device. In the labeling mode, the photographing device detects the user operation triggering the video-capture function and, while acquiring the video information, sends a request message to the wearable devices through the third device. The photographing device receives the sensor data or the biometric information of the n wearable devices through the third device and performs fusion encoding on the n pieces of biometric information and the image frames, generating image frames carrying the biometric information as the video is captured, where the biometric information corresponds to the sensor data. It can be seen that the method provided by the present application completes feature recognition of the image frames during video generation, with no need to later perform feature recognition and extraction on the stored video, saving hardware resources; and it can combine the sensor data of the wearable devices to perform more accurate and richer feature recognition on the video.
User operations mentioned in embodiments of the present application include, but are not limited to, click, double click, long press, swipe, hover gesture, voice command, etc.
The first operation mentioned in the embodiment of the present application may be a user operation triggering a function of taking a picture, or a user operation triggering a function of taking a video. For example, the user operation for triggering the function of capturing a picture described in the foregoing step S102, step S302, and the user operation for triggering the function of capturing a video described in the foregoing step S202, step S402.
The first information mentioned in the embodiment of the present application may be biometric information fusion-encoded with a picture/video file.
The second information mentioned in the embodiment of the present application may be biometric information displayed on the preview interface.
For the above method flows, three application scenarios to which the embodiments of the present application are applicable are briefly described below by way of example.
Scenario one: a home environment, shooting dance pictures/videos, yoga pictures/videos and the like of a specific user.
The first user wants to use a mobile phone to shoot a dance video of the second user and can select the labeling mode to do so. The second user wears a smart watch, and the mobile phone establishes a connection with the second user's smart watch. After the connection succeeds, the first user selects the labeling mode and shoots the second user with the mobile phone; the captured picture/video carries biometric information. The biometric information can indicate the second user's health state while dancing, from which the second user's fatigue level, physical stress and the like can be judged; it can also indicate the second user's motion gesture while dancing, from which it can be judged whether the gesture is standard; and so on.
Before photographing, a user can configure on a mobile phone or a smart watch according to requirements, and the configured content determines the content of the biological characteristic information. For example, when the user configures the options of starting heart rate information, exercise state information, emotion state information and the like on the mobile phone, the biological feature information of the shot dance video comprises information of heart rate information, exercise state information, emotion state information and the like of the user II.
Because the home scene is relatively simple, the video most likely contains only the second user, and the biometric information describes the second user; the biometric information of the video can therefore be displayed at a fixed position on the mobile phone's display screen.
Scenario two: an outdoor activity scene, shooting play pictures/videos of a specific user among multiple people.
The first user wants to use a mobile phone to shoot a picture of the second user playing in a park and can select the labeling mode to do so. The second user wears a smart watch, and the mobile phone establishes a connection with the second user's smart watch. After the connection succeeds, the first user selects the labeling mode and shoots the second user with the mobile phone; the captured picture/video carries biometric information. The biometric information may indicate the second user's motion gesture, emotional state (e.g., pleasure, excitement) and the like, and may also indicate the current environmental state (e.g., air quality, air temperature and humidity).
Because the park has many visitors and the scene is complex, other visitors may appear in the picture the first user shoots of the second user. Since the biometric information describes the second user, the biometric information of the picture can be displayed near the second user, matching the biometric information to the second user in the picture.
Scenario three: professional training venues and gyms, shooting training pictures/videos of multiple users among many people.
In a scenario such as a gym, training room, etc., which typically includes multiple trainees moving together, a trainer needs to grasp the movement status of each trainee. The photographing apparatus needs to connect with a plurality of wearable apparatuses, such as the system architecture shown in fig. 22, and the photographing apparatus and the plurality of wearable apparatuses establish connection through a third apparatus.
Multiple users each wear a smart watch; the smart watches establish connections with the third device, and the photographing device establishes a connection with the third device. After the connections succeed, the photographing device shoots the multiple users, and the captured pictures/videos carry biometric information. The biometric information can indicate each user's heart rate, blood pressure, blood sugar, motion gesture and other information, from which it can be judged whether the motion gesture is standard, whether the user is fit to increase training intensity, and so on. It can be used by coaches for motion guidance and for monitoring athletes' fatigue.
Since scenario three involves multiple users, the biometric information describes multiple users. The biometric information of different trainees can each be displayed near the corresponding user in the picture/video, matching each user's biometric information to that user in the image. In this way, the biometric information of different users can be viewed in a targeted manner.
The embodiment of the application also provides a computer readable storage medium. The methods described in the above method embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. A storage media may be any available media that can be accessed by a computer.
The embodiment of the application also provides a computer program product. The methods described in the above method embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, may be embodied in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions described above are loaded and executed on a computer, the processes or functions described in the method embodiments described above are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user device, or other programmable apparatus.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (33)

1. A photographing system, comprising an electronic device and a first wearable device, wherein the electronic device comprises a camera; wherein:
the electronic device is used for establishing connection with the first wearable device;
the electronic device is further used for displaying, on a shooting preview interface, a preview image acquired by the camera and a first motion gesture of a shooting object, wherein the preview image comprises the shooting object in motion;
the electronic device is further configured to receive a first operation;
the electronic device is further used for acquiring, in response to the first operation, an image acquired by the camera;
the first wearable device is used for detecting first sensor data through at least one sensor;
the electronic device is further used for acquiring a first motion gesture of the shooting object, wherein the first motion gesture is determined based on the first sensor data;
the electronic device is further used for storing the image into a first picture set, wherein the first picture set is a picture set corresponding to the first motion gesture; the electronic device further comprises a second picture set corresponding to a second motion gesture;
the electronic device is further configured to display the image in the first picture set and the first motion gesture on a picture viewing interface.
2. The system of claim 1, wherein the first wearable device is further configured to:
determine the first motion gesture according to the first sensor data;
and send the first motion gesture to the electronic device.
3. The system of claim 1, wherein the first wearable device is further configured to send the first sensor data to the electronic device;
the electronic equipment is further used for determining the first motion gesture according to the first sensor data.
4. The system of claim 1, wherein the electronic device is specifically configured to: acquire the first motion gesture in response to the first operation.
5. The system of claim 2, wherein the electronic device is further configured to: send a first request message to the first wearable device in response to the first operation;
the first wearable device is specifically configured to: send the first motion gesture to the electronic device in response to the first request message.
6. The system of claim 3, wherein the electronic device is further configured to: send a second request message to the first wearable device in response to the first operation;
the first wearable device is specifically configured to: send the first sensor data to the electronic device in response to the second request message.
7. The system of claim 1, wherein the capture preview interface includes a capture button therein, and wherein the first operation includes an input operation on the capture button.
8. The system of claim 1, wherein the first motion gesture is included in attribute information of the image.
9. The system of claim 1, wherein the electronic device being used for establishing a connection with the first wearable device comprises: in response to the electronic device entering a preset shooting mode, the electronic device establishing a connection with the first wearable device.
10. The system of claim 1, wherein the first wearable device is further configured to: receive a second operation;
the first wearable device is further configured to instruct the electronic device to turn on the camera in response to the second operation.
11. The system of claim 1, wherein the electronic device being further configured to display the image and the first motion gesture in the first picture set comprises:
the electronic equipment is further used for displaying the image and the first motion gesture in response to the image comprising a preset facial image; the preset facial image corresponds to the first wearable device.
12. The system of claim 1, wherein the electronic device being further configured to display the image and the first motion gesture in the first picture set comprises:
the electronic device is further configured to display the first motion gesture in a first area of the image in response to the image including a first facial image and a second facial image, where the first facial image matches a preset facial image; wherein the preset facial image corresponds to the first wearable device; the distance between the first area and the display area of the first facial image is smaller than the distance between the first area and the display area of the second facial image.
13. The system of claim 1, wherein the electronic device being further configured to display, on a shooting preview interface, the preview image acquired by the camera and the first motion gesture of the shooting object comprises:
the electronic device is further configured to display, in response to the preview image including a preset face image, the preview image collected by the camera and a first motion gesture of the shooting object on a shooting preview interface, where the preset face image corresponds to the first wearable device.
14. The system of claim 1, wherein the electronic device being further configured to display, on a shooting preview interface, a preview image collected by the camera and a first motion gesture of the shooting object comprises:
the electronic device is further configured to display the first motion gesture in a second area of the preview image on the shooting preview interface in response to the preview image including a third facial image and a fourth facial image, wherein the third facial image matches a preset facial image and the preset facial image corresponds to the first wearable device; a distance between the second area and a display area of the third facial image is smaller than a distance between the second area and a display area of the fourth facial image.
15. The system of claim 13, wherein the electronic device is further configured to output a first prompt in response to the preview image not including the preset facial image, the first prompt being used to prompt a user to aim the camera at a face.
16. The system of claim 1, further comprising a second wearable device;
the electronic device is further configured to establish a connection with the second wearable device;
the second wearable device is configured to detect fourth sensor data by at least one sensor;
wherein the first motion gesture is determined based on the first sensor data and the fourth sensor data.
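Claim 16 determines the first motion gesture from the readings of both wearables, for example a wristband plus an ankle band. The claims do not say how the two streams are combined; the concatenation-plus-threshold classifier below is purely an illustrative placeholder for that fusion step.

    // Sketch of claim 16: fuse readings from two wearables before classifying.
    data class Sample(val accel: List<Float>)

    fun fuseAndClassify(firstData: Sample, fourthData: Sample): String {
        val features = firstData.accel + fourthData.accel    // naive fusion: concatenation
        val energy = features.sumOf { (it * it).toDouble() } // crude motion-intensity measure
        return if (energy > 50.0) "running" else "walking"   // placeholder decision rule
    }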
17. A shooting method, applied to an electronic device including a camera, the method comprising:
the electronic device establishes a connection with a first wearable device;
the electronic device displays, on a shooting preview interface, a preview image collected by the camera and a first motion gesture of a shooting object, wherein the preview image includes the shooting object in motion;
the electronic device receives a first operation;
in response to the first operation, the electronic device obtains an image collected by the camera;
the electronic device obtains the first motion gesture of the shooting object, wherein the first motion gesture is determined based on first sensor data detected by at least one sensor of the first wearable device;
the electronic device stores the image into a first picture set, wherein the first picture set is a picture set corresponding to the first motion gesture, and the electronic device further includes a second picture set corresponding to a second motion gesture;
the electronic device displays the image and the first motion gesture in the first picture set on a picture viewing interface.
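The storing and viewing steps of claim 17 keep one picture set per motion gesture and file each new image into the set matching the gesture captured with it. A minimal sketch under that reading; the Gallery type, gesture labels, and path strings are illustrative.

    // Sketch of claim 17's storage step: one picture set per gesture label.
    class Gallery {
        private val sets = mutableMapOf<String, MutableList<String>>()  // gesture -> image paths

        fun store(imagePath: String, gesture: String) {
            sets.getOrPut(gesture) { mutableListOf() }.add(imagePath)   // picture sets arise on demand
        }

        fun view(gesture: String): List<String> = sets[gesture].orEmpty()
    }

    fun main() {
        val g = Gallery()
        g.store("IMG_0001.jpg", "running")  // lands in the first picture set
        g.store("IMG_0002.jpg", "jumping")  // lands in the second picture set
        println(g.view("running"))          // [IMG_0001.jpg]
    }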
18. The method of claim 17, wherein the electronic device obtaining the first motion gesture of the shooting object comprises:
the electronic device obtains the first sensor data;
the electronic device determines the first motion gesture of the shooting object based on the first sensor data.
19. The method of claim 17, wherein the electronic device obtaining the first motion gesture of the shooting object comprises:
the electronic device obtains the first motion gesture of the shooting object, the first motion gesture being determined by the first wearable device based on the first sensor data.
20. The method of claim 17, wherein the electronic device obtaining the first motion gesture of the shooting object comprises:
in response to the first operation, the electronic device obtains the first motion gesture.
21. The method of claim 18, wherein, in response to the first operation, the electronic device obtaining the first motion gesture of the shooting object comprises:
in response to the first operation, the electronic device sends a first request message to the first wearable device, wherein the first request message is used to request the first sensor data detected by at least one sensor of the first wearable device;
the electronic device obtains the first motion gesture of the shooting object, the first motion gesture being determined based on the first sensor data.
22. The method of claim 19, wherein, in response to the first operation, the electronic device obtaining the first motion gesture of the shooting object comprises:
in response to the first operation, the electronic device sends a second request message to the first wearable device, wherein the second request message is used to request the first motion gesture of the shooting object determined by the first wearable device based on the first sensor data;
the electronic device obtains the first motion gesture, the first motion gesture being determined based on the first sensor data.
23. The method of claim 17, wherein the shooting preview interface includes a shooting button; the first operation includes an input operation acting on the shooting button.
24. The method of claim 17, wherein the first motion gesture is included in attribute information of the image.
25. The method of claim 17, wherein the electronic device establishing a connection with the first wearable device comprises:
the electronic device establishes a connection with the first wearable device in response to the electronic device entering a preset shooting mode.
26. The method of claim 17, wherein the electronic device displaying the image and the first motion gesture in the first picture set specifically comprises:
in response to the image including a preset facial image, the electronic device displays the image and the first motion gesture, wherein the preset facial image corresponds to the first wearable device.
27. The method of claim 17, wherein the electronic device displaying the image and the first motion gesture in the first picture set specifically comprises:
in response to the image including a first facial image and a second facial image, wherein the first facial image matches a preset facial image and the preset facial image corresponds to the first wearable device, the electronic device displays the first motion gesture in a first area of the image; a distance between the first area and a display area of the first facial image is smaller than a distance between the first area and a display area of the second facial image.
28. The method of claim 17, wherein the electronic device displaying, on the shooting preview interface, the preview image collected by the camera and the first motion gesture of the shooting object specifically comprises:
in response to the preview image including a preset facial image, the electronic device displays, on the shooting preview interface, the preview image collected by the camera and the first motion gesture of the shooting object, wherein the preset facial image corresponds to the first wearable device.
29. The method of claim 17, wherein the electronic device displaying, on the shooting preview interface, the preview image collected by the camera and the first motion gesture of the shooting object specifically comprises:
in response to the preview image including a third facial image and a fourth facial image, wherein the third facial image matches a preset facial image and the preset facial image corresponds to the first wearable device, the electronic device displays the first motion gesture in a second area of the preview image on the shooting preview interface; a distance between the second area and a display area of the third facial image is smaller than a distance between the second area and a display area of the fourth facial image.
30. The method of claim 28, wherein the method further comprises:
in response to the preview image not including the preset facial image, the electronic device outputs a first prompt for prompting a user to aim the camera at a face.
31. The method of claim 17, wherein the method further comprises:
the electronic device establishes a connection with a second wearable device, wherein the second wearable device is configured to detect fourth sensor data by at least one sensor, and the first motion gesture is determined based on the first sensor data and the fourth sensor data.
32. An electronic device, comprising: one or more processors, one or more memories, and at least one camera; the one or more memories are coupled to the one or more processors; the one or more memories are configured to store a computer program that, when run on the one or more processors, causes the electronic device to perform the shooting method of any one of claims 17-31.
33. A computer-readable medium storing a computer program, wherein the computer program is executable by a processor to implement the shooting method of any one of claims 17-31.
CN202010839365.9A 2020-08-19 2020-08-19 Shooting method and shooting system Active CN114079730B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010839365.9A CN114079730B (en) 2020-08-19 2020-08-19 Shooting method and shooting system
PCT/CN2021/112362 WO2022037479A1 (en) 2020-08-19 2021-08-12 Photographing method and photographing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010839365.9A CN114079730B (en) 2020-08-19 2020-08-19 Shooting method and shooting system

Publications (2)

Publication Number Publication Date
CN114079730A CN114079730A (en) 2022-02-22
CN114079730B true CN114079730B (en) 2023-09-12

Family

ID=80281788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010839365.9A Active CN114079730B (en) 2020-08-19 2020-08-19 Shooting method and shooting system

Country Status (2)

Country Link
CN (1) CN114079730B (en)
WO (1) WO2022037479A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116955662A * 2022-04-14 2023-10-27 Huawei Technologies Co Ltd Media file management method, device, equipment and storage medium
CN115439307B * 2022-08-08 2023-06-27 Honor Device Co Ltd Style conversion method, style conversion model generation method, and style conversion system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1510903A * 2002-11-25 2004-07-07 Eastman Kodak Co Image method and system
JP2016082482A * 2014-10-20 2016-05-16 Sharp Corp Image recorder
CN105830066A * 2013-12-19 2016-08-03 Microsoft Technology Licensing LLC Tagging images with emotional state information
CN106464287A * 2014-05-05 2017-02-22 Sony Corp Embedding biometric data from a wearable computing device in metadata of a recorded image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2370709A (en) * 2000-12-28 2002-07-03 Nokia Mobile Phones Ltd Displaying an image and associated visual effect
US20070124292A1 (en) * 2001-10-30 2007-05-31 Evan Kirshenbaum Autobiographical and other data collection system
KR100828371B1 * 2006-10-27 2008-05-08 Samsung Electronics Co Ltd Method and Apparatus of generating meta data of content
US9189682B2 (en) * 2014-02-13 2015-11-17 Apple Inc. Systems and methods for sending digital images
CN107320114B * 2017-06-29 2020-12-25 BOE Technology Group Co Ltd Shooting processing method, system and equipment based on brain wave detection

Also Published As

Publication number Publication date
WO2022037479A1 (en) 2022-02-24
CN114079730A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
US20220176200A1 (en) Method for Assisting Fitness and Electronic Apparatus
WO2021244457A1 (en) Video generation method and related apparatus
WO2020078299A1 (en) Method for processing video file, and electronic device
EP3893129A1 (en) Recommendation method based on user exercise state, and electronic device
CN114466128B (en) Target user focus tracking shooting method, electronic equipment and storage medium
WO2021104485A1 (en) Photographing method and electronic device
CN111466112A (en) Image shooting method and electronic equipment
WO2022042766A1 (en) Information display method, terminal device, and computer readable storage medium
CN114111704A (en) Method and device for measuring distance, electronic equipment and readable storage medium
CN114079730B (en) Shooting method and shooting system
CN115147451A (en) Target tracking method and device thereof
EP4195073A1 (en) Content recommendation method, electronic device and server
CN113996046B (en) Warming-up judgment method and device and electronic equipment
CN114080258B (en) Motion model generation method and related equipment
CN111339513B (en) Data sharing method and device
CN115273216A (en) Target motion mode identification method and related equipment
CN115730091A (en) Comment display method and device, terminal device and readable storage medium
CN114860178A (en) Screen projection method and electronic equipment
CN114721614A (en) Method, electronic equipment and system for calling capabilities of other equipment
CN114496155A (en) Motion adaptive evaluation method, electronic device, and storage medium
CN113693556A (en) Method and device for detecting muscle fatigue degree after exercise and electronic equipment
CN113472996B (en) Picture transmission method and device
CN115223236A (en) Device control method and electronic device
CN118000660A (en) Vestibular function risk detection method and electronic equipment
CN115482900A (en) Electronic scale-based detection report generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant