
CN114785949A - Video object processing method and device and electronic equipment - Google Patents

Video object processing method and device and electronic equipment

Info

Publication number
CN114785949A
Authority
CN
China
Prior art keywords
video
target
target object
input
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210394065.3A
Other languages
Chinese (zh)
Inventor
梁磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210394065.3A priority Critical patent/CN114785949A/en
Publication of CN114785949A publication Critical patent/CN114785949A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4333 Processing operations in response to a pause request

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video object processing method, a video object processing apparatus, and an electronic device. The method includes: displaying a video playing interface corresponding to a target video while the target video is playing; pausing playback of the target video in response to a first input; and displaying object information of a target object in response to a second input on the target object in the video playing interface.

Description

Video object processing method and device and electronic equipment
Technical Field
The present application relates to the field of video technology, and in particular to a video object processing method and apparatus, and an electronic device.
Background
With the popularization of smartphones and the improvement of network technology, video has become one of the important channels through which people obtain information, and video formats are increasingly diverse, such as live video, e-commerce video, and self-media video. Generally, when a user watches a video, the user may be interested in an item shown in it. For example, when the user wants to purchase an item in the video, the user has to take a screenshot of the video, exit the video application, enter a shopping application, and search for the item using the screenshot, which makes the operation cumbersome.
Disclosure of Invention
Embodiments of the present application aim to provide a video object processing method and apparatus, and an electronic device, to solve the prior-art problem that when a user is interested in an item in a video, switching among multiple applications makes the operation cumbersome.
In a first aspect, an embodiment of the present application provides a video object processing method, where the method includes:
under the condition of playing a target video, displaying a video playing interface corresponding to the target video;
in response to a first input, pausing the playing of the target video;
and responding to a second input of a target object in the video playing interface, and displaying the object information of the target object.
In a second aspect, an embodiment of the present application provides a video object processing apparatus, where the apparatus includes:
the display module is used for displaying a video playing interface corresponding to a target video under the condition of playing the target video;
a pause module for pausing playback of the target video in response to a first input;
the display module is further used for responding to a second input of a target object in the video playing interface and displaying the object information of the target object.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, during playback of the target video, when a first input is received, playback of the target video is paused; when a second input to a target object in the video playing interface corresponding to the target video is received, the object information of the target object is displayed. In this way, the user can view the object information of a target object in the target video simply by operating the video playing interface, without jumping out of it, which avoids the cumbersome operation of switching and searching among multiple applications when the user is interested in an item in the video.
Drawings
Fig. 1 is a flowchart of a video object processing method according to an embodiment of the present application;
fig. 2 is a first schematic interface diagram of a video playing interface provided in an embodiment of the present application;
fig. 3 is a second schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 4 is a third schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 5 is a fourth schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 6 is a fifth schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 7 is a sixth schematic interface view of a video playing interface provided in the embodiment of the present application;
fig. 8 is a seventh schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 9 is an eighth schematic interface diagram of a video playing interface according to an embodiment of the present application;
fig. 10 is a ninth schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 11 is a tenth schematic interface diagram of a video playing interface provided in the embodiment of the present application;
fig. 12 is an eleventh schematic interface diagram of a video playing interface according to an embodiment of the present application;
fig. 13 is a twelfth schematic interface view of a video playing interface according to an embodiment of the present application;
fig. 14 is a thirteenth schematic interface diagram of a video playing interface according to an embodiment of the present application;
fig. 15 is a fourteenth schematic interface diagram of a video playing interface according to an embodiment of the present application;
fig. 16 is a fifteenth schematic interface diagram of a video playing interface provided in an embodiment of the present application;
fig. 17 is a schematic structural diagram of a video object processing apparatus according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so termed may be interchanged where appropriate, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The video object processing method can be applied to an application environment comprising electronic equipment and a server. The electronic device and the server are connected through a network. The electronic device may be, but is not limited to, various smart phones, personal computers, laptops, tablets, portable wearable devices, and the like.
In one embodiment, the electronic device plays a target video, displays a video playing interface corresponding to the target video, pauses playing the target video in response to a first input, displays a target image of a target object in response to a second input to the target object in the video playing interface, sends the target image to a server, the server searches for object information of the target object according to the target image, sends the object information of the target object to the electronic device, and the electronic device displays the object information of the target object.
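As an illustration only (not part of the claimed method), the device-and-server interaction described above might be sketched as follows; the class and method names are hypothetical, and the stub catalogue stands in for the server-side image search:

```python
class Server:
    """Stand-in for the server side that searches object information by image."""

    def __init__(self, catalogue):
        self.catalogue = catalogue  # maps an image key to object information

    def search(self, target_image):
        # A real server would run visual search; here we just look up the key.
        return self.catalogue.get(target_image, {})


class ElectronicDevice:
    """Client side: pause on the first input, send the target image on the second."""

    def __init__(self, server):
        self.server = server
        self.playing = False
        self.displayed_info = None

    def play_target_video(self):
        self.playing = True

    def on_first_input(self):
        self.playing = False  # pause playback of the target video

    def on_second_input(self, target_image):
        # Send the target image to the server and display the returned info.
        self.displayed_info = self.server.search(target_image)
        return self.displayed_info
```

A real implementation would transmit the image over the network; the flow of play, pause, send image, display info is the part this sketch illustrates.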
In the video object processing method provided by the embodiments of the present application, the execution subject may be the video object processing apparatus provided by the embodiments, or an electronic device integrated with the video object processing apparatus; the video object processing apparatus may be implemented in hardware or in software.
By the video object processing method provided by the embodiment of the application, a user can select an object which is interested in the user from a video and display related information of the object in a video playing interface.
For example, in a shopping scene, through the video object processing method provided by the embodiment of the application, a user can select an object which the user wants to purchase from a video, and purchase information of the object is displayed in a video playing interface.
For another example, in a search scene, through the video object processing method provided by the embodiment of the present application, a user may select an object that the user wants to search from a video, and display search information of the object in a video playing interface.
The following describes in detail a video object processing method provided in the embodiments of the present application with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows a video object processing method provided in an embodiment of the present application; this embodiment mainly illustrates the method as applied to an electronic device. As shown in fig. 1, the video object processing method may include the following steps 1100 to 1300, which are described in detail below.
Step 1100, displaying a video playing interface corresponding to a target video under the condition of playing the target video.
The target video is a video which is played by a user by using a video application. The video can be a short video played based on a short video playing platform, and can also be a long video played based on a video playing platform. Moreover, the video can be live video or recorded video.
The video playing interface is the playing interface displayed when the target video is played. Illustratively, as shown in fig. 2, in the case of playing the target video, the electronic device displays a video playing interface 202 corresponding to the target video.
In an embodiment, after the video playing interface corresponding to the target video is displayed in step 1100 while the target video is playing, the method proceeds to the following step:
in response to the first input, pausing the playing of the target video, step 1200.
The first input may be an input of a user in a video playing interface of the target video, or an input of the user outside the video playing interface.
Optionally, the first input may be a click input in a video playing interface of the target video by the user, or a specific gesture input by the user, which may be specifically determined according to actual usage requirements. For example, in a case where the user slides three fingers across the video playing interface 202 shown in fig. 2, the electronic device may pause playing the target video.
In this embodiment, in response to the first input in step 1200, the pausing the playing of the target video may further include: in response to a first input to the object handling control, pausing playback of the target video.
Optionally, the first input may be a click input for the object processing control, or a specific gesture input by the user, which may be specifically determined according to actual usage requirements, and this embodiment of the present application does not limit this. For example, when the user clicks on the object handling control 201 shown in fig. 2, the electronic device may pause playing the target video.
In one embodiment, the object handling controls are displayed in a video playback interface. The electronic equipment receives a first input of a user through the object processing control, the playing of the target video is paused in response to the first input, the video playing interface is adjusted to be in a selectable state, and the user can select an interested object in the video playing interface.
For example, for a shopping scene, the object processing control may be a control corresponding to a "purchase-as-you-go" Application Program embedded in a system of the electronic device, where the purchase-as-you-go Application Program is equivalent to a system Application of the electronic device and can communicate with an Application Program Interface (API) to detect whether the electronic device is currently playing a video, and if the electronic device is currently playing a video, pause video playing. Referring to fig. 2, an object handling control 201 is displayed in the video playback interface. It should be noted that the user may configure whether the object handling control is displayed, e.g., the user may configure the object handling control not to be displayed. For another example, the user may configure the object handling control to be displayed when a preset condition is met, where the preset condition may be when the target video is played. That is, in the case of playing the target video, the object handling control may be displayed, a first input to the object handling control may be received, and the playing of the target video may be paused in response to the first input to the object handling control.
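A minimal sketch of the pause step triggered by the object handling control (all names here are illustrative assumptions, not from the patent):

```python
class VideoPlayer:
    """Toy model of the player state affected by the first input."""

    def __init__(self):
        self.playing = False
        self.selection_mode = False  # the "selectable state" described above

    def play(self):
        self.playing = True

    def on_object_control_input(self):
        # First input on the object handling control: pause playback and
        # switch the video playing interface into a selectable state.
        if self.playing:
            self.playing = False
            self.selection_mode = True
```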
After the step 1200 is executed and the target video is paused to be played in response to the first input, the following steps are entered:
step 1300, responding to a second input of a target object in the video playing interface, and displaying object information of the target object.
The object information of the target object may be attribute information of the target object, such as the name of the target object, the model of the target object, and the manufacturer of the target object. In addition, the object information of the target object may include other information for different scenes. For example, for a search scene, the object information includes search information, which includes at least a search link; for a shopping scene, the object information includes shopping information, which includes at least a shopping link.
In one embodiment, before step 1300 (displaying the object information of the target object in response to the second input to the target object in the video playing interface), the method further includes overlaying a transparent cover layer on the video playing interface.
The transparent cover layer is a view container that serves as a mask layer; clicking or touching it does not trigger functions of the target video such as fast forward, pause, or brightness adjustment.
Continuing with the above example, the user may click on the object handling control 201 shown in fig. 2, and the electronic device may pause playing the target video and superimpose the transparent cover layer on the current video playing interface.
Referring to fig. 3, a text prompt is displayed on the video playing interface, "Try circling what you like!", to inform the user that the transparent cover layer has been overlaid and that the user may autonomously select an object in the video playing interface as the target object.
Referring to fig. 4, a filter layer is added on the video playing interface (shown in gray in the figure) to inform the user that the transparent cover layer has been overlaid, and the user can autonomously select an object in the video playing interface as the target object.
In this embodiment, displaying the object information of the target object in response to the second input to the target object in the video playing interface in step 1300 may further include: in response to a second input on a first area of the transparent cover layer, displaying the object information of the target object in a second area of the video playing interface, where the second area is the area at the position corresponding to the first area.
The first area is an area on the transparent cover layer; the second area is an area on the video playing interface, and the position of the second area on the video playing interface corresponds to the position of the first area on the transparent cover layer. The second input may be a circling, long-press, double-click, or other input performed by the user on the first area of the transparent cover layer, or another specific gesture input by the user, which may be determined according to actual use requirements and is not limited in this embodiment of the application.
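The mask behaviour can be illustrated with a small sketch: the cover layer consumes touch events so that player gestures are not triggered, and, because it is overlaid full screen, a first area on the mask maps one-to-one to the corresponding second area of the playing interface underneath (the names here are hypothetical):

```python
class TransparentMask:
    """Sketch of the transparent cover layer as an event-consuming overlay."""

    def __init__(self):
        self.consumed = []  # touch points swallowed by the mask

    def on_touch(self, x, y):
        # The mask handles the event itself, so the player's fast-forward,
        # pause, and brightness gestures underneath are never triggered.
        self.consumed.append((x, y))
        return True  # True: event consumed, not passed to the player

    def second_area(self, first_area):
        # The mask covers the whole interface, so a region on the mask
        # corresponds to the region at the same coordinates underneath.
        return first_area
```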
Referring to fig. 5, the user draws on the transparent cover layer; after the stroke closes into a loop, a circle 502 is formed, and the area inside circle 502 is the first area. The target object is the bag 501 in the video playing interface.
In one embodiment, the video object processing method according to the embodiment of the present disclosure further includes: responding to a drawing input of the video playing interface, and determining a drawing track of the drawing input; and taking at least one object selected by the drawing track as the target object.
The drawing input on the video playing interface may be a drawing input performed by the user on the first area of the transparent cover layer.
Optionally, the user implements a drawing input in the first region of the transparent cover layer, and the electronic device determines a drawing track of the drawing input in response to the drawing input, and takes at least one object selected by the drawing track as a target object.
For example, referring to fig. 9, the user draws on the transparent cover layer; after the strokes close into loops, a circle 903 and a circle 902 are formed, and the areas inside circle 903 and circle 902 are both first areas. The target objects are the bag 901 and the bag 904 in the video playing interface. Optionally, when there are multiple target objects, the multiple drawing inputs need to be completed within a preset time.
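One plausible way to decide which objects a closed drawing track selects is a point-in-polygon test on each object's centre. The patent does not specify the algorithm; the ray-casting sketch below is an illustrative assumption:

```python
def point_in_track(point, track):
    """Ray-casting test: is `point` inside the closed drawing track?

    `track` is the drawn loop as a list of (x, y) vertices.
    """
    x, y = point
    inside = False
    n = len(track)
    for i in range(n):
        x1, y1 = track[i]
        x2, y2 = track[(i + 1) % n]
        # Count edges that a horizontal ray from `point` crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def select_targets(objects, track):
    """Objects whose centre falls inside the drawn loop become target objects.

    `objects` maps an object name to its centre point (x, y).
    """
    return [name for name, centre in objects.items()
            if point_in_track(centre, track)]
```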
For another example, referring to fig. 10, a confirmation control 1001 is displayed in the video playing interface. After selecting the target object, the user may click the confirmation control 1001 to display the object information of the target object, such as the name, model, manufacturer, and purchase link of the bag.
In one embodiment, in this step 1300, in response to the second input to the target object in the video playing interface, displaying the object information of the target object may further include the following steps 1310 to 1330:
step 1310, in response to a second input to a target object in the video playing interface, displaying a target image of the target object.
The target image may be an image including a target object, for example, an image in which a display angle of the target object satisfies a forward display condition. In the case where the display angle of the target object satisfies the forward display condition, forward image information of the target object should be relatively completely displayed in the target image. The forward image information may be understood as image information of the main presentation side of the target object. For example, for apparel, the primary display surface is the front surface of the apparel.
In this step 1310, the electronic device may display a target image of the target object in the video playing interface. For example, for a single screen, the electronic device may display a target image of the target object on the video playback interface. For a double-sided screen or a folding screen, the electronic device may also display a target image of a target object on another display screen.
Optionally, before displaying the target image of the target object, the electronic device first obtains the video frame displayed in the video playing interface; when the display angle of the target object in that frame satisfies the forward display condition, the device directly captures a screenshot of the target object in the frame and takes the screenshot as the target image. In this case, displaying the target image of the target object in step 1310 includes displaying the screenshot of the target object.
It can be understood that, when the display angle of the target object in the video frame satisfies the forward display condition, the screenshot generated by capturing the target object in that frame is a forward image of the target object. That the display angle in a forward image satisfies the forward display condition means that the forward image information of the target object included in a forward image is more comprehensive than that included in a non-forward image.
Alternatively, before displaying the target image of the target object, the video frame displayed in the video playing interface is obtained; when the display angle of the target object in that frame does not satisfy the forward display condition, a forward image of the target object is searched for in the target video. In this case, displaying the target image of the target object in step 1310 includes displaying the forward image of the target object.
Optionally, in a case that a display angle of the target object in the video frame does not satisfy the forward condition, the electronic device may capture a screenshot of the target object in the video frame to generate a screenshot picture, and track the target object in the target video based on the screenshot picture, so as to obtain a forward image of the target object.
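The choice between the direct screenshot and a tracked forward image could be sketched as follows, assuming the tracker yields an estimated display angle (yaw) for the object in each frame; the threshold and all names are illustrative assumptions, not from the patent:

```python
def pick_forward_frame(frames, threshold_deg=30.0):
    """Return the id of the frame whose display angle best satisfies the
    forward display condition (|yaw| <= threshold), or None if no frame
    qualifies. `frames` is a list of (frame_id, estimated_yaw_degrees)."""
    candidates = [(abs(yaw), frame_id) for frame_id, yaw in frames
                  if abs(yaw) <= threshold_deg]
    if not candidates:
        # No frame shows the object frontally enough; the caller would fall
        # back to the plain screenshot or keep tracking further frames.
        return None
    return min(candidates)[1]  # frame with the most frontal display angle
```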
Referring to fig. 5, since the display angle of the target object 501 in the video frame does not satisfy the forward display condition, a forward image of the target object 501 can be found in the target video. Referring to fig. 6, a forward image 601 of the target object 501 is displayed in the video play interface.
Referring to fig. 9, since neither the target object 904 nor the target object 901 in the video frame satisfies the forward display condition, the forward image of the target object 904 and the forward image of the target object 901 can be searched in the target video. Referring to fig. 11, a forward image 1102 of a target object 904 and a forward image 1101 of a target object 901 are displayed in a video play interface.
Step 1320, searching the object information of the target object according to the target image.
Referring to fig. 6, in the case of displaying a forward image 601 of a target object, information such as the name, model, manufacturer, etc. of the target object can be searched for based on the forward image 601.
Referring to fig. 11, in the case where the forward image 1101 of the target object 904 and the forward image 1102 of the target object 901 are displayed, information of the name, model, manufacturer, etc. of the target object 904 is searched for based on the forward image 1102, and information of the name, model, manufacturer, etc. of the target object 901 is searched for based on the forward image 1101.
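Searching by the target image can be sketched as a thin client call that normalises a backend reply into the attribute fields named above (name, model, manufacturer). The backend here is a stand-in for the server-side image search; the field names are assumptions:

```python
def search_object_info(image_bytes, search_backend):
    """Send the target image to a search backend and normalise the reply
    into the attribute fields of the object information."""
    raw = search_backend(image_bytes)  # backend returns a dict of raw fields
    return {
        "name": raw.get("name", "unknown"),
        "model": raw.get("model", "unknown"),
        "manufacturer": raw.get("manufacturer", "unknown"),
    }
```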
Step 1330 of displaying the searched object information of the target object.
According to the embodiment of the disclosure, during playback of the target video, when a first input is received, playback of the target video is paused; when a second input to the target object in the video playing interface corresponding to the target video is received, the object information of the target object is displayed. In this way, the user can view the object information of the target object in the target video simply by operating the video playing interface, without jumping out of it, which avoids the complex operation of switching and searching among multiple applications when the user is interested in an item in the video.
In one embodiment, for a shopping scene, the object information includes shopping information. In the case that the object information includes shopping information of the target object, the video object processing method of an embodiment of the present disclosure further includes: in response to a fifth input on the shopping information, displaying a shopping interface corresponding to a shopping link contained in the shopping information.
The shopping information of the target object includes a shopping link of the target object. Moreover, the shopping links of the target object may be displayed ranked by parameters such as price, sales volume, and rating.
The fifth input for the shopping information may be a click input for a shopping link of the target object, or a voice instruction input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements, and is not limited in this embodiment of the application.
Referring to fig. 5, for a shopping scene, the object information includes shopping information, that is, the shopping links of the target object 501 are obtained. The shopping links corresponding to the target object 501 include application 1, application 2, application 3, and application 4, and ranking them from high to low by rating gives: application 1, application 2, application 3, application 4. Referring to fig. 7, the links of application 1, application 2, application 3, and application 4 may be displayed as a list according to this ranking. The user may click the link of application 1 to display the shops in application 1 that can provide the target object 501, ranked from high to low by rating as: shop 1, shop 2, shop 3. Referring to fig. 8, the links of shop 1, shop 2, and shop 3 may be displayed as a list according to this ranking. The user may then click the link of shop 1 to display the shopping interface corresponding to shop 1, so that the user can purchase the target object 501 through the shopping interface.
Referring to fig. 9, for a shopping scenario, the object information includes shopping information, that is, shopping links of the target object 901 and the target object 904 are obtained. The shopping links corresponding to the target object 901 cover application 1, application 2, application 3, and application 4, and the shopping links corresponding to the target object 904 cover application 1 and application 2. Ranking each set of applications from high to low by rating score yields, for the target object 901: application 1, application 2, application 3, application 4, and for the target object 904: application 1, application 2. Referring to fig. 12, the links of application 1 to application 4 corresponding to the target object 901, and the links of application 1 and application 2 corresponding to the target object 904, may each be displayed in list form according to the ranking results. Here, the user may click the link of application 1 corresponding to the target object 904 to display the shops in application 1 that can provide the target object 904; ranking shop 1, shop 2, and shop 3 from high to low by rating score yields: shop 1, shop 2, shop 3. Referring to fig. 13, the links of shop 1, shop 2, and shop 3 may be displayed in list form according to the ranking result. Here, the user may click the link of shop 1 to display a shopping interface corresponding to shop 1, so that the user can purchase the target object 904 through the shopping interface.
According to this embodiment, the user can accurately select one or more specific items in the video to be identified and purchased, and can complete the purchase through the recommended shopping link with the best cost-performance ratio, so that shopping is more efficient.
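The ranking behavior described in this embodiment can be sketched as follows. This is an illustrative sketch only: the function, field names, and sample data are hypothetical and are not part of the disclosed method.

```python
# Hypothetical sketch of ranking shopping links by a chosen parameter,
# as described in the embodiment above (rating, sales volume, or price).

def rank_links(links, key="rating"):
    """Return shopping links sorted for display: lower price ranks first,
    while a higher rating or sales volume ranks first."""
    reverse = key != "price"
    return sorted(links, key=lambda link: link[key], reverse=reverse)

links = [
    {"app": "application 2", "rating": 4.6, "price": 19.9, "sales": 1200},
    {"app": "application 1", "rating": 4.8, "price": 21.5, "sales": 900},
    {"app": "application 3", "rating": 4.1, "price": 18.0, "sales": 3000},
]

# Ranking from high to low by rating score puts application 1 first.
by_rating = rank_links(links, key="rating")
```

The same helper could then be reused to rank the shops returned for a single application before displaying them in list form.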
In one embodiment, the video object processing method according to the embodiment of the present disclosure further includes steps 2100 to 2300 of:
step 2100, store a target image of the target object.
Referring to fig. 5, in a case where a user selects a target object 501 and obtains a target image 601 shown in fig. 6, the electronic device automatically stores the target image 601 shown in fig. 6.
Alternatively, referring to fig. 14, a storage control 1401 is displayed on the video playing interface. When the user selects the target object 501 shown in fig. 5 and obtains the target image 601 shown in fig. 6, the user may click the storage control 1401 to store the target image 601 shown in fig. 6, referring to fig. 16. Referring to fig. 15, a prompt message "stored in casual purchase" is displayed.
It is understood that, referring to fig. 14, a viewing control 1402 is displayed on the video playing interface, and by clicking the viewing control 1402, shopping information of the target object 501 shown in fig. 5 can be viewed.
Step 2200, displaying the stored target image in response to a third input.
The third input may be a click input on the object handling control, a voice instruction input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements and is not limited in the embodiments of this application.
Referring to fig. 15, the user may click the object handling control; referring to fig. 16, the stored target image shown in fig. 6 is displayed, together with any other target images that the user triggered to store while watching the target video. Each target image corresponds to a generation time, which is the system time at which the corresponding picture was generated.
Step 2300, in response to a fourth input for searching the target image, displaying object information of the target object.
The fourth input may be a click input for the displayed target image, or a voice instruction input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements, and is not limited in this embodiment of the application.
Referring to fig. 16, the user may click on the target image, and referring to fig. 8, shopping information of the target object may be displayed.
According to the embodiment, the target image of the target object can be stored so as to be convenient for a user to check, and the user experience is improved.
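Steps 2100 to 2300 above can be illustrated with a minimal sketch. The class, its methods, and the image identifiers are hypothetical; a real implementation would store actual image data rather than string identifiers.

```python
import time

class TargetImageStore:
    """Minimal sketch of steps 2100-2300: store target images together
    with a generation time (the system time when each picture was
    generated), then list them for display."""

    def __init__(self):
        self._images = []

    def store(self, image_id):
        # step 2100: record the image with its generation time
        entry = {"image": image_id, "generated_at": time.time()}
        self._images.append(entry)
        return entry

    def list_images(self):
        # step 2200: return stored images, newest first, for display
        return sorted(self._images, key=lambda e: e["generated_at"], reverse=True)

store = TargetImageStore()
store.store("target_image_601")
store.store("target_image_602")
```

Step 2300 (displaying object information in response to a search input on a stored image) would then operate on an entry returned by `list_images`.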
It should be noted that, in one aspect, the user may set the style of the object handling control, for example adding personalized styles such as, but not limited to, an animal shape or a dish shape. In another aspect, the user may set the number of objects that the current video frame can analyze simultaneously, thereby supporting simultaneous analysis of multiple objects. In yet another aspect, the user may set the time interval after which a selected target image is submitted for picture analysis.
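The three user settings described in this note could be grouped as in the following sketch; the setting names and default values are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ObjectAnalysisSettings:
    """Hypothetical container for the user settings described above."""
    control_style: str = "default"        # e.g. "animal", "dish"
    max_objects_per_frame: int = 1        # objects analyzed simultaneously
    analysis_delay_seconds: float = 0.5   # interval before picture analysis

# a user who enables an animal-shaped control and multi-object analysis
settings = ObjectAnalysisSettings(control_style="animal", max_objects_per_frame=3)
```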
Corresponding to the above embodiments, referring to fig. 17, an embodiment of the present application further provides a video object processing apparatus 1700, where the apparatus 1700 includes a display module 1710 and a play module 1720.
A display module 1710, configured to display a video playing interface corresponding to a target video when the target video is played.
The play module 1720 is configured to pause playing the target video in response to a first input.
The display module 1710 is further configured to display object information of a target object in the video playing interface in response to a second input to the target object.
In one embodiment, the apparatus 1700 further comprises a superposition module (not shown).
And the superposition module is used for superposing a transparent covering layer on the video playing interface.
A display module 1710, configured to display, in response to a second input to a first area of the transparent cover layer, object information of a target object in a second area of the video playback interface, where the second area is an area corresponding to the first area.
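The correspondence between the first area of the transparent covering layer and the second area of the video playing interface can be sketched as a simple coordinate mapping; the function signature and coordinate convention are assumptions. When the covering layer exactly covers the interface, the second area coincides with the first area.

```python
def map_overlay_region(region, overlay_size, video_size):
    """Map a region (x, y, w, h) on the transparent covering layer to the
    corresponding region of the video playing interface, assuming both
    share the same origin and the overlay covers the whole interface."""
    sx = video_size[0] / overlay_size[0]
    sy = video_size[1] / overlay_size[1]
    x, y, w, h = region
    return (x * sx, y * sy, w * sx, h * sy)

# overlay and interface are the same size, so the regions coincide
same = map_overlay_region((100, 50, 80, 40), (1920, 1080), (1920, 1080))
```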
In one embodiment, the video playback interface includes an object handling control.
The play module 1720 is configured to pause playing the target video in response to a first input to the object handling control.
In one embodiment, the apparatus 1700 further includes a determining module (not shown in the figure) configured to determine, in response to a drawing input to the video playing interface, a drawing track of the drawing input; and taking at least one object selected by the drawing track as the target object.
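The determining module's behavior, taking the objects selected by the drawing track as target objects, can be sketched as follows. The track is approximated here by its bounding box, and the object records and coordinates are hypothetical.

```python
def objects_selected_by_track(track_points, objects):
    """Treat the drawing track as a closed region (approximated by its
    bounding box) and return the objects whose centers fall inside it."""
    xs = [p[0] for p in track_points]
    ys = [p[1] for p in track_points]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    selected = []
    for obj in objects:
        cx, cy = obj["center"]
        if left <= cx <= right and top <= cy <= bottom:
            selected.append(obj["name"])
    return selected

objects = [
    {"name": "target object 501", "center": (120, 140)},
    {"name": "background object", "center": (400, 60)},
]
track = [(100, 100), (200, 100), (200, 200), (100, 200)]
```

A production implementation would instead test containment against the actual closed curve and the detected object masks, but the selection principle is the same.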
In one embodiment, the apparatus 1700 further comprises a search module (not shown).
A display module 1710, configured to display a target image of a target object in the video playing interface in response to a second input to the target object.
And the searching module is used for searching the object information of the target object according to the target image.
A display module 1710, configured to display the searched object information of the target object.
In one embodiment, the apparatus 1700 further includes an acquisition module (not shown).
And the acquisition module is used for acquiring the video frames displayed in the video playing interface.
In one embodiment, the apparatus 1700 further comprises a storage module (not shown).
And the storage module is used for storing the target image of the target object.
A display module 1710 for displaying the stored target image in response to a third input; in response to a fourth input for searching the target image, object information of the target object is displayed.
In one embodiment, the object information includes shopping information for the target object.
A display module 1710, configured to, in response to a fifth input of the shopping information, display a shopping interface corresponding to a shopping link included in the shopping information.
In the embodiment of the application, in the process of playing the target video, the playing of the target video is paused when a first input is received, and the object information of a target object is displayed when a second input to the target object in the video playing interface corresponding to the target video is received. In this way, the user can view the object information of the target object in the target video by operating only on the video playing interface corresponding to the target video, without jumping out of that interface, which avoids the cumbersome operation of switching and searching among multiple applications when the user is interested in an item in a video.
The video object processing apparatus in the embodiment of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), or may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited thereto.
The video object processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video object processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 18, an embodiment of the present application further provides an electronic device 1800, including a processor 1801 and a memory 1802, where the memory 1802 stores a program or instructions executable on the processor 1801. When the program or instructions are executed by the processor 1801, the steps of the video object processing method embodiments are implemented with the same technical effects; to avoid repetition, details are not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 19 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 1900 includes, but is not limited to: a radio frequency unit 1901, a network module 1902, an audio output unit 1903, an input unit 1904, a sensor 1905, a display unit 1906, a user input unit 1907, an interface unit 1908, a memory 1909, a processor 1910, and the like.
Those skilled in the art will appreciate that the electronic device 1900 may further include a power supply (e.g., a battery) for supplying power to the components; the power supply may be logically connected to the processor 1910 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 19 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not described here again.
The display unit 1906 is configured to display a video playing interface corresponding to a target video when the target video is played.
A user input unit 1907, configured to receive a first input.
A processor 1910 configured to pause playing the target video in response to the first input.
A user input unit 1907, configured to receive a second input to a target object in the video playing interface.
A processor 1910 configured to, in response to a second input to a target object in the video playing interface, control the display unit 1906 to display object information of the target object.
According to this embodiment, in the process of playing the target video, the playing of the target video is paused when the first input is received, and the object information of the target object is displayed when the second input to the target object in the video playing interface corresponding to the target video is received. Thus the user can view the object information of the target object in the target video by operating only on the video playing interface corresponding to the target video, without jumping out of the video playing interface of the target video, which avoids the cumbersome operation of switching and searching among multiple applications when the user is interested in an item in a video.
In one embodiment, the processor 1910 is further configured to overlay a transparent overlay on the video playback interface.
A user input unit 1907, configured to receive a second input to the first region of the transparent cover layer.
The processor 1910 is further configured to, in response to a second input to the first area of the transparent masking layer, control the display unit 1906 to display object information of a target object in a second area of the video playing interface, where the second area is an area in a position corresponding to the first area.
In one embodiment, the video playback interface includes an object handling control.
A user input unit 1907, configured to receive a first input to the object handling control.
Processor 1910 is further configured to pause playing the target video in response to a first input to the object handling control.
In an embodiment, the user input unit 1907 is configured to receive a drawing input for the video playing interface.
The processor 1910 is further configured to determine, in response to a drawing input to the video playing interface, a drawing track of the drawing input; and taking at least one object selected by the drawing track as the target object.
In an embodiment, the user input unit 1907 is configured to receive a second input for a target object in the video playing interface.
The processor 1910 is further configured to control the display unit 1906 to display a target image of a target object in the video playing interface in response to a second input to the target object.
The processor 1910 is further configured to search for object information of the target object according to the target image.
The display unit 1906 is further configured to display the searched object information of the target object.
In an embodiment, the processor 1910 is further configured to obtain a video frame displayed in the video playing interface; and under the condition that the display angle of the target object in the video frame does not meet a forward display condition, searching a forward image of the target object in the target video.
A display unit 1906, further configured to display a forward image of the target object.
Wherein the display angle of the target object in the forward image satisfies the forward display condition.
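The forward display condition described above can be sketched as an angle-threshold check over the video frames; the angle representation (0 degrees meaning a frontal view), the threshold value, and the frame records are all assumptions for illustration.

```python
def choose_display_image(current_angle, frames, threshold=30.0):
    """If the target object's display angle in the current frame deviates
    from frontal (0 degrees) by more than `threshold`, search the target
    video's frames for one whose angle satisfies the forward display
    condition; otherwise keep the current frame."""
    if abs(current_angle) <= threshold:
        return "current_frame"
    for frame in frames:
        if abs(frame["angle"]) <= threshold:
            return frame["id"]
    return "current_frame"  # fall back if no forward image exists

frames = [{"id": "f1", "angle": 75}, {"id": "f2", "angle": 10}]
# the current frame shows the object at 80 degrees, so a forward image is searched
chosen = choose_display_image(80, frames)
```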
In one embodiment, a memory 1909 for storing a target image of the target object.
A user input unit 1907, configured to receive a third input.
The processor 1910 is further configured to control the display unit 1906 to display the stored target image in response to a third input.
A user input unit 1907, configured to receive a fourth input to search for the target image.
A processor 1910 configured to control the display unit 1906 to display object information of the target object in response to a fourth input for searching for the target image.
In one embodiment, the object information includes shopping information for the target object.
A user input unit 1907, configured to receive a fifth input of the shopping information.
The processor 1910 is further configured to, in response to a fifth input to the shopping information, control the display unit 1906 to display a shopping interface corresponding to a shopping link included in the shopping information.
It should be understood that, in the embodiment of the present application, the input unit 1904 may include a Graphics Processing Unit (GPU) 19041 and a microphone 19042, where the graphics processing unit 19041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1906 may include a display panel 19061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1907 includes at least one of a touch panel 19071 and other input devices 19072. The touch panel 19071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 19072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1909 may be used to store software programs as well as various data. The memory 1909 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 1909 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1909 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1910 may include one or more processing units; optionally, processor 1910 integrates an application processor, which primarily handles operations related to the operating system, user interface, applications, etc., and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1910.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned video object processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned video object processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing video object processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element identified by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions recited, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for video object processing, the method comprising:
under the condition of playing a target video, displaying a video playing interface corresponding to the target video;
pausing the playing of the target video in response to a first input;
and responding to a second input of a target object in the video playing interface, and displaying the object information of the target object.
2. The method of claim 1, wherein before the displaying the object information of the target object in response to the second input of the target object in the video playing interface, further comprising:
a transparent covering layer is superposed on the video playing interface;
the responding to the second input of the target object in the video playing interface, and displaying the object information of the target object, including:
and responding to a second input of a first area of the transparent covering layer, and displaying object information of a target object in a second area of the video playing interface, wherein the second area is an area at a position corresponding to the first area.
3. The method of claim 1, wherein the video playback interface comprises an object handling control;
the pausing the playing of the target video in response to the first input comprises:
in response to a first input to the object handling control, pausing playback of the target video.
4. The method of claim 1, further comprising:
responding to a drawing input of the video playing interface, and determining a drawing track of the drawing input;
and taking at least one object selected by the drawing track as the target object.
5. The method of claim 1, wherein the displaying object information corresponding to a target object in the video playing interface in response to a second input to the target object comprises:
responding to a second input of a target object in the video playing interface, and displaying a target image of the target object;
searching object information of the target object according to the target image;
and displaying the searched object information of the target object.
6. The method of claim 5, wherein prior to displaying the target image of the target object, further comprising:
acquiring a video frame displayed in the video playing interface;
under the condition that the display angle of the target object in the video frame does not meet a forward display condition, searching a forward image of the target object in the target video;
the displaying a target image of the target object includes:
displaying a forward image of the target object;
wherein the display angle of the target object in the forward image satisfies the forward display condition.
7. The method of claim 1, further comprising:
storing a target image of the target object;
displaying the stored target image in response to a third input;
in response to a fourth input for searching the target image, object information of the target object is displayed.
8. The method of claim 1, wherein the object information includes shopping information of the target object;
and responding to a fifth input of the shopping information, and displaying a shopping interface corresponding to a shopping link contained in the shopping information.
9. A video object processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying a video playing interface corresponding to a target video under the condition of playing the target video;
a pause module for pausing playback of the target video in response to a first input;
the display module is further used for responding to a second input of a target object in the video playing interface and displaying object information of the target object.
10. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the video object processing method according to any one of claims 1 to 8.
CN202210394065.3A 2022-04-07 2022-04-07 Video object processing method and device and electronic equipment Pending CN114785949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210394065.3A CN114785949A (en) 2022-04-07 2022-04-07 Video object processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210394065.3A CN114785949A (en) 2022-04-07 2022-04-07 Video object processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114785949A true CN114785949A (en) 2022-07-22

Family

ID=82430024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210394065.3A Pending CN114785949A (en) 2022-04-07 2022-04-07 Video object processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114785949A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767871A (en) * 2014-01-08 2015-07-08 Lg电子株式会社 Mobile terminal and controlling method thereof
CN106598998A (en) * 2015-10-20 2017-04-26 北京奇虎科技有限公司 Information acquisition method and information acquisition device
CN106792006A (en) * 2017-01-17 2017-05-31 南通同洲电子有限责任公司 Commodity supplying system and method based on video
US20170264973A1 (en) * 2016-03-14 2017-09-14 Le Holdings (Beijing) Co., Ltd. Video playing method and electronic device
US20170318355A1 (en) * 2015-03-23 2017-11-02 Tencent Technology (Shenzhen) Company Limited Information processing method and apparatus, terminal and storage medium
CN110458130A (en) * 2019-08-16 2019-11-15 百度在线网络技术(北京)有限公司 Character recognition method, device, electronic equipment and storage medium
CN113031842A (en) * 2021-04-12 2021-06-25 北京有竹居网络技术有限公司 Video-based interaction method and device, storage medium and electronic equipment
CN113761360A (en) * 2021-05-27 2021-12-07 腾讯科技(深圳)有限公司 Video-based article searching method, device, equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118192863A (en) * 2024-05-16 2024-06-14 深圳市万里眼技术有限公司 Interaction method of measuring instrument interface and measuring instrument
CN118192863B (en) * 2024-05-16 2024-08-23 深圳市万里眼技术有限公司 Interaction method of measuring instrument interface and measuring instrument

Similar Documents

Publication Publication Date Title
CN112788178B (en) Message display method and device
CN110322305A (en) Data object information providing method, device and electronic equipment
CN113570609A (en) Image display method and device and electronic equipment
CN114785949A (en) Video object processing method and device and electronic equipment
CN112860921A (en) Information searching method and device
CN112328829A (en) Video content retrieval method and device
CN113485621B (en) Image capturing method, device, electronic equipment and storage medium
CN112202958B (en) Screenshot method and device and electronic equipment
CN112367487B (en) Video recording method and electronic equipment
CN116596611A (en) Commodity object information display method and electronic equipment
CN115756275A (en) Screen capture method, screen capture device, electronic equipment and readable storage medium
CN115550741A (en) Video management method and device, electronic equipment and readable storage medium
CN113034213B (en) Cartoon content display method, device, equipment and readable storage medium
CN114090896A (en) Information display method and device and electronic equipment
CN113709565A (en) Method and device for recording facial expressions of watching videos
CN115086774B (en) Resource display method and device, electronic equipment and storage medium
CN114339073B (en) Video generation method and video generation device
CN118349149A (en) Image display method and device and electronic equipment
CN118277602A (en) List display method and device
CN115589459A (en) Video recording method and device
CN118741296A (en) Image recommendation and management method and device
CN115469789A (en) Object adding method, device, equipment and medium
CN116055444A (en) Object sharing method and device
CN117041650A (en) Display method, display device, electronic equipment and readable storage medium
CN114546200A (en) Interface display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination