
CN111556358B - Display method and device and electronic equipment - Google Patents


Info

Publication number
CN111556358B
CN111556358B (application CN202010429797.2A)
Authority
CN
China
Prior art keywords
picture
video
target
frame image
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010429797.2A
Other languages
Chinese (zh)
Other versions
CN111556358A (en)
Inventor
胡侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202010429797.2A priority Critical patent/CN111556358B/en
Publication of CN111556358A publication Critical patent/CN111556358A/en
Application granted granted Critical
Publication of CN111556358B publication Critical patent/CN111556358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a display method, a display apparatus, and an electronic device, and relates to the field of communications technology. The method comprises the following steps: receiving a first input from a user directed at a picture and a video; in response to the first input, acquiring a target video associated with the picture according to the degree of matching between the object posture in the picture and the object posture in the video, where the frame images contained in the target video carry first playing information comprising at least one of a first playing order and a first playing speed; and displaying the picture in a first display area while playing the frame images of the target video in a second display area according to the first playing information, the first and second display areas being different display areas. With this scheme, the first playing information of the target video can be determined and adjusted according to how well the object posture in the picture matches the object posture in the video, so that the picture and the video can be viewed and compared efficiently, improving the user experience.

Description

Display method and device and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a display method and apparatus, and an electronic device.
Background
At present, when a user compares different outfit combinations, the same outfit is usually photographed in several postures, such as pictures and videos taken from the front, from the side, at the abdomen, at the waist, and so on, and these pictures and videos may be shot in succession. When the user then wants to compare another outfit, he or she has to slide left and right through the pictures and videos to compare them. The operation is cumbersome, and it is difficult to view and compare the pictures and videos efficiently.
Disclosure of Invention
The embodiment of the invention provides a display method, a display device and electronic equipment, and aims to solve the problem that pictures and videos are difficult to watch and compare efficiently in the prior art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a display method applied to a first electronic device, including:
receiving a first input of a user for pictures and videos;
in response to the first input, acquiring a target video associated with the picture according to the matching degree of the object posture in the picture and the object posture in the video, wherein a frame image contained in the target video has first playing information, and the first playing information comprises: at least one of a first playback order and a first playback speed;
and displaying the picture in a first display area, and playing the frame image in the target video in a second display area according to the first playing information.
In a second aspect, an embodiment of the present invention further provides a display device, including:
the first receiving module is used for receiving first input of a user for pictures and videos;
a first response module, configured to, in response to the first input, obtain a target video associated with the picture according to a degree of matching between an object pose in the picture and an object pose in the video, where a frame image included in the target video has first play information, and the first play information includes: at least one of a first playback order and a first playback speed;
and the first processing module is used for displaying the picture in a first display area and playing the frame image in the target video in a second display area according to the first playing information.
In a third aspect, an embodiment of the present invention further provides an electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the display method described above are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the display method described above are implemented.
In this way, in the embodiment of the present invention, a target video that is associated with the picture and carries first playing information is obtained according to the degree of matching between the object posture in the picture and the object posture in the video; the picture is displayed in the first display area, and the frame images of the target video are played in the second display area according to the first playing information. The first playing information of the target video can therefore be determined and adjusted according to the matching degree between the object postures, so that the picture and the video can be viewed and compared efficiently, improving the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a display method according to an embodiment of the present invention;
FIG. 2 is the first schematic diagram of a display according to an embodiment of the present invention;
FIG. 3 is the second schematic diagram of a display according to an embodiment of the present invention;
FIG. 4 is the third schematic diagram of a display according to an embodiment of the present invention;
FIG. 5 is the fourth schematic diagram of a display according to an embodiment of the present invention;
FIG. 6 is the fifth schematic diagram of a display according to an embodiment of the present invention;
FIG. 7 is the sixth schematic diagram of a display according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a display device according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, when a user takes self-portraits while choosing an outfit, he or she typically first puts on different outfits and takes several pictures and videos in fixed postures. To compare the effect of different outfits, the user generally has to flip back and forth through the pictures and videos, and to share them with another electronic device, the pictures and videos must be transmitted one by one; if the pictures are then to be compared, both electronic devices must flip through them. The operation is very cumbersome, adds considerable communication cost, and makes it difficult to compare and share different outfit combinations efficiently.
Therefore, the embodiment of the invention provides a display method, a display device and electronic equipment, which can determine and adjust the first playing information of a target video according to the matching degree of the object posture in the picture and the object posture in the video, can efficiently watch and compare the picture and the video, and improve the user experience.
Specifically, as shown in fig. 1, an embodiment of the present invention provides a display method applied to a first electronic device, including:
step 11, receiving a first input of a picture and a video from a user.
Specifically, the first input may be a click operation of a user, and is not specifically limited herein.
It should be noted that the first input is not limited to the input of at least one picture and at least one video, and may be the input of multiple pictures or multiple videos.
Step 12, in response to the first input, according to the matching degree between the object posture in the picture and the object posture in the video, acquiring a target video associated with the picture, where the target video includes a frame image having first playing information, where the first playing information includes, but is not limited to: at least one of a first playback order and a first playback speed.
Specifically, when the first input is directed by the user at at least one picture and at least one video, the video is processed according to the degree of matching (for example, the similarity) between the object posture in the picture and the object posture in each frame of the video, a target video associated with the picture is acquired, and at least one of a first playing order and a first playing speed of the frame images contained in the target video is determined. The frame images in the target video are a subset of the frame images in the original video, so the target video is effectively a dynamically playing picture assembled from several frame images.
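The patent does not specify how the matching degree between two object postures is computed. A minimal sketch, assuming each posture is represented as a flattened keypoint vector and using cosine similarity as the matching degree (both choices are illustrative, not from the patent):

```python
import math

def pose_matching_degree(pose_a, pose_b):
    """Cosine similarity between two flattened keypoint vectors, in [0, 1]
    for non-negative coordinates; a stand-in for the patent's 'matching degree'."""
    dot = sum(a * b for a, b in zip(pose_a, pose_b))
    norm_a = math.sqrt(sum(a * a for a in pose_a))
    norm_b = math.sqrt(sum(b * b for b in pose_b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def select_target_frames(picture_pose, frame_poses, threshold=0.9):
    """Indices of video frames whose posture matches the picture's posture
    above the threshold; these become the target video's frame images."""
    return [i for i, pose in enumerate(frame_poses)
            if pose_matching_degree(picture_pose, pose) > threshold]
```

Any real implementation would obtain the keypoints from a pose estimator; the threshold value mirrors the 90% example given later in the description.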
And step 13, displaying the picture in a first display area, and playing the frame image in the target video in a second display area according to the first playing information.
Specifically, the picture and the target video are displayed side by side on the screen of the first electronic device, and the target video is played according to at least one of the first playing order and the first playing speed of its frame images. The first display area and the second display area may be different display areas on the same screen, or display areas on the two screens of an unfolded foldable screen; this is not specifically limited here.
In the embodiment of the present invention, the target video that is associated with the picture and carries the first playing information is obtained according to the degree of matching between the object posture in the picture and the object posture in the video; the picture is displayed in the first display area, and the frame images of the target video are played in the second display area according to the first playing information, so that the picture and the video can be viewed and compared efficiently.
Optionally, the method further includes:
and sharing the picture and the target video to a second electronic device in an incremental updating mode.
Specifically, the first electronic device shares the picture and the target video with the second electronic device by incremental updating, so that the picture and the target video can be displayed side by side on the screen of the second electronic device without the entire target video having to be transmitted. This reduces traffic consumption as well as the delay and instability of screen sharing, giving the user a better screen-sharing experience.
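The patent does not detail the incremental-update protocol. One plausible minimal sketch is to send only the frame identifiers the peer does not already hold (the function name and frame-id representation are assumptions for illustration):

```python
def incremental_update(peer_known_frames, current_frames):
    """Return only the frames the second device does not already have,
    so the whole target video never needs to be retransmitted."""
    known = set(peer_known_frames)
    return [frame for frame in current_frames if frame not in known]
```

In practice the identifiers could be frame hashes or sequence numbers; the point is that the delta, not the full video, crosses the network.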
Optionally, the step 12 may specifically include:
extracting a target frame image in the video according to the object posture in the picture, wherein the matching degree of the object posture in the target frame image and the object posture in the picture exceeds a first threshold value;
and forming the target video by the target frame image according to the first playing information.
Specifically, the object posture in the picture is obtained first, and key-frame extraction is then performed on the video according to that posture (that is, certain frames of the video are extracted as frame images). The video can be split into many frame images, and in a typical self-portrait video each posture is held for at least on the order of a second, so key frames can be extracted: a group of key posture frame images is obtained, where a run of consecutive frame images whose object-posture similarity exceeds a second threshold is collapsed into one key frame. The matching degree between the object posture in each key frame and the object posture in the picture is then compared, and the frame images whose matching degree exceeds a first threshold (for example, 90%), i.e., the target frame images, are kept in a list to be stitched. If there are several videos, the frame images kept in each video's list are combined according to the first playing information to form the target video, i.e., a dynamically playing video containing several frame images. The first threshold is greater than the second threshold.
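The two-threshold pipeline above can be sketched as follows. Postures are simplified to scalars and the similarity function is injected, since the patent fixes neither representation (all names here are hypothetical):

```python
def extract_key_frames(frame_poses, similarity, second_threshold=0.8):
    """Collapse runs of consecutive frames whose postures are near-identical
    (similarity above the second threshold) into one representative key frame."""
    key_frames = []
    for i, pose in enumerate(frame_poses):
        if not key_frames or similarity(frame_poses[key_frames[-1]], pose) <= second_threshold:
            key_frames.append(i)  # posture changed enough: start a new key frame
    return key_frames

def build_target_video(key_frames, picture_pose, frame_poses, similarity,
                       first_threshold=0.9):
    """Keep only key frames whose posture matches the picture above the first
    threshold; these form the list to be stitched into the target video."""
    return [i for i in key_frames
            if similarity(picture_pose, frame_poses[i]) > first_threshold]
```

Note that `first_threshold > second_threshold`, matching the constraint stated in the description.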
Further, the step 12 includes:
determining first playing information of frame images contained in the target video according to at least one of the following items:
a degree of matching of the object pose in the picture and the object pose in the video;
the position of the object in the picture;
the proportion of the objects in the picture;
the position of an object in the video;
the proportion of objects in the video.
Specifically, the display area of each picture and each target video may be determined according to the number of pictures, the number of target videos, and the screen display area. That is, when the first input is directed at pictures and videos, the display area of each picture and each target video on the screen is determined from their total number and the screen display area. For example, the total number of pictures and target videos suitable for display on one screen (for example 2, 3, 6, or 9) is computed first, and the display area of each item is then derived from it (for example, the screen display area divided by that number gives the display area of one picture or one target video).
For example, if the number of pictures and target videos suitable for display on one screen is 3 and the total number is 6, the items can be split across two pages with 3 items per page. As shown in FIG. 2 and FIG. 3, A and B are pictures and C is the target video associated with A and B, while D and E are pictures and F is the target video associated with D and E; the first page shows A, B, and C (FIG. 2), the second page shows D, E, and F (FIG. 3), and the user can switch between the pages by sliding left and right or the like.
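The page-splitting and area-division arithmetic above amounts to a ceiling division and an equal share of the screen; a minimal sketch (the per-page capacity of 3 is taken from the example, not fixed by the patent):

```python
def layout(total_items, screen_area, per_page_capacity=3):
    """Split pictures/target videos across pages and give each item an equal
    share of the screen display area, as in the FIG. 2 / FIG. 3 example."""
    pages = -(-total_items // per_page_capacity)          # ceiling division
    items_per_page = min(total_items, per_page_capacity)  # a short last page still shares evenly
    item_area = screen_area / items_per_page
    return pages, item_area
```

With 6 items and a capacity of 3 this yields 2 pages, each item occupying one third of the screen, matching the worked example.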
Specifically, after the display areas of the picture and of the target video have been determined, the display size of the picture on the screen may be determined from at least one of: the matching degree between the object posture in the picture and the object posture in the video, the position of the object in the picture, the proportion of the object in the picture, and the display area of the picture; the picture is then displayed on the screen at that size. Likewise, the playing size of each frame image of the target video on the screen is determined from at least one of: the matching degree between the object postures, the position of the object in the video, the proportion of the object in each frame image, and the display area of the target video; the target video is then played at the playing size of each frame image. It should be noted that these schemes are only examples and are not limiting.
For example: the method comprises the steps of placing feet of an object in a picture at the lower edge of a display area of the picture, then scaling the posture of the object in the picture according to the area of the display area of the picture to the height of the display area 2/3, then cutting the rest edges, and the size of the cut picture is the display size of the picture. Similarly, the cropping mode of each frame of image in the target video is similar to the cropping mode of the image, and is not repeated here.
It should be noted that, when the first input is directed by the user at several pictures, the display area of each picture on the screen is determined from the number of pictures and the screen display area; similarly, if several videos are displayed on the same screen, the display area of each video is determined from the number of videos and the screen display area.
Further, in the case that the number of the pictures is multiple, after the step 11, the method further includes:
and responding to the first input, and adjusting the display position of the picture on the screen according to the matching degree of the object gestures in the plurality of pictures.
Specifically, when there are several pictures, the pictures whose object postures match each other more closely can be given adjacent display positions on the screen; that is, pictures with a high posture matching degree are preferentially placed together, which makes it convenient for the user to compare them visually.
It should be noted that the first playing order of the frame images in the target video may also be sorted according to the object pose matching degree of each frame image, which is not described herein again.
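The adjacency rule above, which applies to both picture positions and the frame ordering just noted, can be sketched as a greedy ordering: start anywhere and repeatedly append the remaining item whose posture best matches the one just placed (the greedy strategy is an illustrative assumption; the patent only requires that well-matched items end up adjacent):

```python
def arrange_by_matching(poses, similarity):
    """Greedy ordering so that items with high posture matching degree
    are placed next to each other."""
    order = [0]
    remaining = list(range(1, len(poses)))
    while remaining:
        last = order[-1]
        best = max(remaining, key=lambda i: similarity(poses[last], poses[i]))
        order.append(best)
        remaining.remove(best)
    return order
```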
Optionally, in a case that the first playing information includes a first playing order and the number of the target videos is multiple, after the step 12, the method further includes:
and adjusting a first playing sequence of the frame images contained in the target video according to the matching degree of the object postures in the frame images contained in different target videos.
Specifically, the target videos are displayed in adjacent positions, and the first playing order and/or the first playing speed of the target videos can be adjusted according to the matching degree of the object posture of each frame image, so that the frame images of different target videos whose postures match most closely are played at the same time, which is convenient for comparison. Moreover, the playing order of the frame images in a target video can be adjusted according to the displayed picture, so that the frame image played first is the one whose object posture, compared with the other frame images, matches the posture in the picture best. The first playing speed of each frame image can also be set individually, i.e., the dwell time may differ from frame image to frame image; for example, the dwell time of the frame image that best matches the object posture in the picture can be increased, making it easier for the user to compare the target video with the picture.
It should be noted that one or more target videos may be adjusted, so as to ensure that frame images with higher matching degree in different target videos are played simultaneously, which is convenient for users to compare.
For example, as shown in FIG. 4 and FIG. 5, A, B, and C are all videos: video A contains two frame images, a front view A1 and a side view A2; video B contains a front view B1 and a side view B2; and video C contains a front view C1 and a side view C2. The three videos are played in a loop in the order shown in FIG. 4 and FIG. 5. If video A contains only front-view frame images, then B1 and C1 must be front views (shown with priority), and their playback dwell time is extended.
It should be noted that the user may replace one of the target videos with a picture; in that case, the playing speed and playing order of the frame images in the other target videos can be adjusted according to the replacing picture, in a way similar to the method above, which is not described in detail here.
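The cross-video alignment described above can be sketched as reordering one video's frames against another's so that the i-th frames shown together match best (a greedy one-to-one assignment; the patent does not prescribe the assignment algorithm, and the names below are hypothetical):

```python
def align_play_order(frames_a, frames_b, similarity):
    """Reorder video B's frame indices so the frame shown alongside each of
    video A's frames is the best-matching remaining one."""
    remaining = list(range(len(frames_b)))
    order = []
    for fa in frames_a:
        if not remaining:
            break
        best = max(remaining, key=lambda j: similarity(fa, frames_b[j]))
        order.append(best)
        remaining.remove(best)
    order.extend(remaining)  # any leftover frames keep their relative order
    return order
```

The same routine applies when a picture replaces one target video: `frames_a` collapses to that single picture's posture.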
Optionally, when the first playing information includes a first playing order, before step 12, the method further includes:
receiving a second input of the target video from the user;
in response to the second input, adjusting a first playback order of frame images included in the target video.
Specifically, the second input is an operation by which the user manually adjusts the first playing order of the frame images in the target video. For example, the user drags the third frame image in front of the first frame image, i.e., makes it the first frame image to be played; after the first playing order is changed, the dwell time of the first frame image may be extended, for example to 10 s.
It should be noted that, when the first playing order and/or the first playing speed of the target video changes, the first electronic device may send only the changed information to the second electronic device, and the second electronic device adjusts the target video according to that information. This shifts a small amount of computation onto the Central Processing Unit (CPU) of the peer electronic device (the CPUs of typical electronic devices have capacity to spare) while reducing outgoing network traffic, which gives a noticeably better sharing experience, particularly in weak-signal scenarios.
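A minimal sketch of such a change-only message, assuming the play order is a list of frame indices and JSON is the wire format (both assumptions; the patent specifies neither):

```python
import json

def play_order_delta(old_order, new_order):
    """Encode only the positions whose frame index changed, instead of
    resending the whole target video or the whole order."""
    changes = {i: new for i, (old, new) in enumerate(zip(old_order, new_order))
               if old != new}
    return json.dumps({"type": "play_order_update", "changes": changes})

def apply_delta(order, message):
    """Peer side: patch the local play order from the delta message."""
    changes = json.loads(message)["changes"]
    for pos, frame in changes.items():
        order[int(pos)] = frame  # JSON object keys arrive as strings
    return order
```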
Optionally, in a case that the first display area includes a first object, the second display area includes a second object, and feature information of the first object is different from feature information of the second object, the method further includes:
receiving a third input of the feature information of the first object and/or the feature information of the second object by a user;
in response to the third input, replacing the feature information of the second object with the feature information of the first object.
Specifically, the feature information includes a person's clothing, clothing color, and the like. For example, as shown in FIG. 6, the body parts of the person in the picture (e.g., legs, waist, upper body, eyes, head) are first identified through person recognition. The user then selects a rectangular region in picture A (for example, a two-finger outward slide triggers cropping of that rectangular region of picture A), selects a picture B or a video B, and replaces the corresponding rectangular region in B with the region cropped from A (for example, by a two-finger inward slide on that region of B; an inward slide at any position on B can also automatically determine the rectangular region whose posture is most similar), as shown in FIG. 7. This realizes automatic outfit changing and reduces the user's cost of changing clothes for comparison.
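The rectangular-region replacement itself is a straightforward pixel copy once the regions are determined; a minimal sketch over images represented as 2-D lists (real code would operate on decoded image buffers, and the function name is an assumption):

```python
def replace_region(img_a, img_b, top, left, height, width):
    """Copy the rectangle (top, left, height, width) from picture A over the
    matching rectangle in picture B, returning a new image."""
    out = [row[:] for row in img_b]  # leave the original picture B intact
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = img_a[r][c]
    return out
```

Locating the two rectangles (the cropped part of A and the posture-matched part of B) is the harder step and relies on the body-part recognition described above.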
In summary, in the above embodiments of the present invention, after key frames are extracted from the video, a picture and its associated dynamically playing video can be displayed on the same screen in a uniform format, so that the user can compare them quickly without excessive operations. Furthermore, by adjusting the first playing information of the target video and sending only the adjustment information to the second electronic device, network traffic can be reduced. Finally, automatic outfit changing can be realized, reducing the user's cost of changing clothes and comparing, and helping the user choose a favorite outfit combination.
As shown in fig. 8, an embodiment of the present invention further provides a display device 80, including:
a first receiving module 81, configured to receive a first input of a picture and a video from a user;
a first response module 82, configured to, in response to the first input, obtain a target video associated with the picture according to a degree of matching between the object pose in the picture and the object pose in the video, where a frame image included in the target video has first play information, and the first play information includes: at least one of a first playback order and a first playback speed;
and the first processing module 83 is configured to display the picture in a first display area, and play a frame image in the target video in a second display area according to the first play information.
Optionally, the display device 80 further includes:
and the first sharing module is used for sharing the picture and the target video to second electronic equipment in an incremental updating mode.
Optionally, the first response module 82 includes:
a first extraction unit, configured to extract a target frame image in the video according to the object posture in the picture, where a degree of matching between the object posture in the target frame image and the object posture in the picture exceeds a first threshold;
and the second processing unit is used for forming the target video by the target frame image according to the first playing information.
Optionally, the first response module 82 includes:
a first determining module, configured to determine first playing information of a frame image included in the target video according to at least one of:
a degree of matching of the object pose in the picture and the object pose in the video;
the position of the object in the picture;
the proportion of the objects in the picture;
the position of an object in the video;
the proportion of objects in the video.
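One hedged reading of how the first determining module might map these factors to first playing information is sketched below. The patent leaves the exact mapping open; the ordering rule and the speed threshold are invented for this sketch.

```python
# Hypothetical mapping: frames that match the picture's pose more closely are
# played first (first playback order), and a high average match slows playback
# (first playback speed) so the user can compare similar poses more carefully.
def determine_play_info(frame_scores):
    """frame_scores: list of (frame_id, match_degree) pairs for the target video."""
    order = [fid for fid, _ in sorted(frame_scores, key=lambda s: -s[1])]
    avg = sum(d for _, d in frame_scores) / len(frame_scores)
    speed = 0.5 if avg > 0.95 else 1.0  # slow down near-identical poses (assumed rule)
    return {"order": order, "speed": speed}
```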
Optionally, in a case that the number of the pictures is multiple, the apparatus further includes:
and the first adjusting module is used for responding to the first input and adjusting the display position of the picture on the screen according to the matching degree of the object gestures in the plurality of pictures.
Optionally, when the first playing information includes a first playing order and the number of the target videos is multiple, the apparatus further includes:
and the second adjusting module is used for adjusting the first playing sequence of the frame images contained in the target video according to the matching degree of the object postures in the frame images contained in different target videos.
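The second adjusting module's cross-video reordering could, for example, greedily pair each frame of one target video with the best-matching frame of another, so that corresponding poses play side by side. This greedy alignment is an assumption for illustration, not the patent's stated algorithm.

```python
# Illustrative only: reorder the second target video so that, at each playback
# step, its frame best matches the pose of the first video's current frame.
def align_play_order(video_a_poses, video_b_poses, similarity):
    order = []
    remaining = list(range(len(video_b_poses)))
    for pose_a in video_a_poses:
        # Pick the not-yet-used frame of video B with the highest match degree.
        best = max(remaining, key=lambda j: similarity(pose_a, video_b_poses[j]))
        order.append(best)
        remaining.remove(best)
    return order
```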
Optionally, when the first playing information includes a first playing sequence, the apparatus further includes:
the second receiving module is used for receiving a second input of the target video from the user;
a second response module, configured to adjust a first playing order of frame images included in the target video in response to the second input.
Optionally, in a case that the first display area includes a first object, the second display area includes a second object, and feature information of the first object is different from feature information of the second object, the apparatus further includes:
a third receiving module, configured to receive a third input of feature information of the first object and/or feature information of the second object by a user;
a third response module, configured to replace, in response to the third input, the feature information of the second object with the feature information of the first object.
The display device 80 can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
According to the embodiment of the present invention, after key frames are extracted from the video, a picture and a dynamically playing video associated with the picture can be displayed on the same screen in a uniform format, so that the user can quickly compare them without excessive operations. Moreover, the first playing information of the target video is adjusted and the adjustment information is sent to the second electronic device, which reduces network traffic. In addition, automatic outfit changing can be realized, reducing the user's cost of changing and comparing clothes and helping the user select a preferred clothing match.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, where the electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 910 is configured to implement the following steps:
receiving a first input of a user for pictures and videos;
in response to the first input, acquiring a target video associated with the picture according to the matching degree of the object posture in the picture and the object posture in the video, wherein a frame image contained in the target video has first playing information, and the first playing information comprises: at least one of a first playback order and a first playback speed;
and displaying the picture in a first display area, and playing the frame image in the target video in a second display area according to the first playing information.
The electronic device 900 can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 7, and details are not repeated here to avoid repetition.
Therefore, according to the electronic device, a target video that is associated with the picture and has first playing information is obtained according to the degree of matching between the object posture in the picture and the object posture in the video; the picture is displayed in a first display area, and frame images in the target video are played in a second display area according to the first playing information. Because the first playing information of the target video can be determined and adjusted according to this matching degree, the picture and the video can be efficiently viewed and compared, improving the user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used to receive and send signals during message transmission and reception or during a call. Specifically, downlink data received from a base station is delivered to the processor 910 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 902, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may provide audio output related to a specific function performed by the electronic device 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042. The graphics processor 9041 processes image data of a still picture or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 901.
The electronic device 900 also includes at least one sensor 905, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 9061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 9061 and/or the backlight when the electronic device 900 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations by a user on or near it (for example, operations performed on or near the touch panel 9071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 910, and receives and executes commands from the processor 910. In addition, the touch panel 9071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 9071, the user input unit 907 may include other input devices 9072, which may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick; details are not described here again.
Further, the touch panel 9071 may be overlaid on the display panel 9061. When the touch panel 9071 detects a touch operation on or near it, the touch operation is transmitted to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 908 is an interface for connecting an external device to the electronic apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 900 or may be used to transmit data between the electronic device 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 910 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the electronic device. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The electronic device 900 may further include a power supply 911 (e.g., a battery) for supplying power to various components, and preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the electronic device 900 includes some functional modules that are not shown, and thus are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 910, a memory 909, and a computer program that is stored in the memory 909 and can be run on the processor 910. When the computer program is executed by the processor 910, the processes of the display method embodiment are implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (15)

1. A display method is applied to first electronic equipment and is characterized by comprising the following steps:
receiving a first input of a user for pictures and videos;
in response to the first input, extracting a target frame image in the video according to the object posture in the picture, wherein the matching degree of the object posture in the target frame image and the object posture in the picture exceeds a first threshold value;
forming a target video by the target frame image according to first playing information;
the frame image contained in the target video has first playing information, and the first playing information comprises: at least one of a first playback order and a first playback speed;
and displaying the picture in a first display area, and playing the frame image in the target video in a second display area according to the first playing information.
2. The method of claim 1, further comprising:
and sharing the picture and the target video to a second electronic device in an incremental updating mode.
3. The method according to claim 1, wherein obtaining the target video associated with the picture according to the matching degree of the object posture in the picture and the object posture in the video comprises:
extracting a target frame image in the video according to the object posture in the picture, wherein the matching degree of the object posture in the target frame image and the object posture in the picture exceeds a first threshold value;
and forming the target video by the target frame image according to the first playing information.
4. The method according to claim 1, wherein the obtaining the target video associated with the picture according to the matching degree of the object posture in the picture and the object posture in the video comprises:
determining first playing information of frame images contained in the target video according to at least one of the following items:
a degree of matching of the object pose in the picture and the object pose in the video;
the position of the object in the picture;
the proportion of the objects in the picture;
the position of an object in the video;
the proportion of objects in the video.
5. The method of claim 1, wherein in the case that the number of pictures is multiple, after the receiving of the first input of the user for the pictures and the video, the method further comprises:
and responding to the first input, and adjusting the display position of the picture on the screen according to the matching degree of the object gestures in the plurality of pictures.
6. The method according to claim 1, wherein in a case where the first playback information includes a first playback order and the number of target videos is plural, after acquiring a target video associated with the picture, the method further comprises:
and adjusting a first playing sequence of the frame images contained in the target video according to the matching degree of the object postures in the frame images contained in different target videos.
7. The method of claim 1, wherein after acquiring the target video associated with the picture when the first playback information comprises a first playback order, the method further comprises:
receiving a second input of the target video from the user;
in response to the second input, adjusting a first playback order of frame images included in the target video.
8. The method according to claim 1, wherein in a case where the first display region contains a first object, the second display region contains a second object, and feature information of the first object is different from feature information of the second object, the method further comprises:
receiving a third input of the feature information of the first object and/or the feature information of the second object by a user;
in response to the third input, replacing the feature information of the second object with the feature information of the first object.
9. A display device, comprising:
the first receiving module is used for receiving first input of a user for pictures and videos;
a first response module, configured to, in response to the first input, extract a target frame image in the video according to the object posture in the picture, where a degree of matching between the object posture in the target frame image and the object posture in the picture exceeds a first threshold;
forming a target video by the target frame image according to first playing information;
the frame image contained in the target video has first playing information, and the first playing information comprises: at least one of a first playback order and a first playback speed;
and the first processing module is used for displaying the picture in a first display area and playing the frame image in the target video in a second display area according to the first playing information.
10. The display device according to claim 9, further comprising:
and the first sharing module is used for sharing the picture and the target video to second electronic equipment in an incremental updating mode.
11. The display device according to claim 9, wherein the first response module comprises:
a first extraction unit, configured to extract a target frame image in the video according to the object posture in the picture, where a degree of matching between the object posture in the target frame image and the object posture in the picture exceeds a first threshold;
and the second processing unit is used for forming the target video by the target frame image according to the first playing information.
12. The display device according to claim 9, wherein the first response module comprises:
a first determining module, configured to determine first playing information of a frame image included in the target video according to at least one of:
a degree of matching of the object pose in the picture and the object pose in the video;
the position of the object in the picture;
the proportion of the objects in the picture;
the position of an object in the video;
the proportion of objects in the video.
13. The display device according to claim 9, wherein in a case where the first display region includes a first object, the second display region includes a second object, and feature information of the first object is different from feature information of the second object, the device further comprises:
a third receiving module, configured to receive a third input of feature information of the first object and/or feature information of the second object by a user;
a third response module, configured to replace, in response to the third input, the feature information of the second object with the feature information of the first object.
14. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the display method according to any one of claims 1 to 8.
15. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the display method according to any one of claims 1 to 8.
CN202010429797.2A 2020-05-20 2020-05-20 Display method and device and electronic equipment Active CN111556358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010429797.2A CN111556358B (en) 2020-05-20 2020-05-20 Display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010429797.2A CN111556358B (en) 2020-05-20 2020-05-20 Display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111556358A CN111556358A (en) 2020-08-18
CN111556358B true CN111556358B (en) 2022-03-01

Family

ID=72008332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010429797.2A Active CN111556358B (en) 2020-05-20 2020-05-20 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111556358B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113891141B (en) * 2021-10-25 2024-01-26 抖音视界有限公司 Video processing method, device and equipment
CN116028657B (en) * 2022-12-30 2024-06-14 翱瑞(深圳)科技有限公司 Analysis system of intelligent cloud photo frame based on motion detection technology

Citations (9)

Publication number Priority date Publication date Assignee Title
CN104254001A (en) * 2013-06-28 2014-12-31 广州华多网络科技有限公司 Remote sharing method, device and terminal
CN104899910A (en) * 2014-03-03 2015-09-09 株式会社东芝 Image processing apparatus, image processing system, image processing method, and computer program product
CN107730529A (en) * 2017-10-10 2018-02-23 上海魔迅信息科技有限公司 A kind of video actions methods of marking and system
CN108985262A (en) * 2018-08-06 2018-12-11 百度在线网络技术(北京)有限公司 Limb motion guidance method, device, server and storage medium
CN109685040A (en) * 2019-01-15 2019-04-26 广州唯品会研究院有限公司 Measurement method, device and the computer readable storage medium of body data
CN109740543A (en) * 2019-01-07 2019-05-10 深圳前海默比优斯科技有限公司 A kind of the user's specific behavior analysis method and self-medicine terminal of view-based access control model
CN110298309A (en) * 2019-06-28 2019-10-01 腾讯科技(深圳)有限公司 Motion characteristic processing method, device, terminal and storage medium based on image
CN110458235A (en) * 2019-08-14 2019-11-15 广州大学 Movement posture similarity comparison method in a kind of video
CN111046715A (en) * 2019-08-29 2020-04-21 郑州大学 Human body action comparison analysis method based on image retrieval

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10368010B2 (en) * 2012-07-31 2019-07-30 Nec Corporation Image processing system, image processing method, and program
KR101729195B1 (en) * 2014-10-16 2017-04-21 한국전자통신연구원 System and Method for Searching Choreography Database based on Motion Inquiry
US9953217B2 (en) * 2015-11-30 2018-04-24 International Business Machines Corporation System and method for pose-aware feature learning
CN108156385A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 Image acquiring method and image acquiring device


Non-Patent Citations (2)

Title
Extracting key postures in a human action video sequence; Y. Chen et al.; 2008 IEEE 10th Workshop on Multimedia Signal Processing; 2008-11-05; full text *
Single-shot learning action recognition based on key postures; Zou Wuhe et al.; Semiconductor Optoelectronics; 2015-12-31; full text *

Also Published As

Publication number Publication date
CN111556358A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN108495029B (en) Photographing method and mobile terminal
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN110557683B (en) Video playing control method and electronic equipment
CN107977652B (en) Method for extracting screen display content and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN109874038B (en) Terminal display method and terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN108182019A (en) A kind of suspension control display processing method and mobile terminal
CN110928407B (en) Information display method and device
CN110174993B (en) Display control method, terminal equipment and computer readable storage medium
CN109683777B (en) Image processing method and terminal equipment
CN108898555B (en) Image processing method and terminal equipment
CN108600544B (en) Single-hand control method and terminal
CN108646960B (en) File processing method and flexible screen terminal
CN111669503A (en) Photographing method and device, electronic equipment and medium
CN111461985A (en) Picture processing method and electronic equipment
CN109542321B (en) Control method and device for screen display content
CN108804628B (en) Picture display method and terminal
CN107728877B (en) Application recommendation method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN111556358B (en) Display method and device and electronic equipment
CN107729100B (en) Interface display control method and mobile terminal
CN110086998B (en) Shooting method and terminal
CN110321449B (en) Picture display method and terminal
CN110007821B (en) Operation method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210420

Address after: 311121 Room 305, Building 20, Longquan Road, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: VIVO MOBILE COMMUNICATION (HANGZHOU) Co.,Ltd.

Address before: 283 No. 523860 Guangdong province Dongguan city Changan town usha BBK Avenue

Applicant before: VIVO MOBILE COMMUNICATION Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant