WO2022266878A1 - Scene type determination method and device, and computer-readable storage medium - Google Patents
Scene type determination method and device, and computer-readable storage medium
- Publication number
- WO2022266878A1 (PCT/CN2021/101798)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- reference image
- target image
- target
- scene
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
Description
- The present application relates to the technical field of image processing, and in particular to a scene type determination method and device, and a computer-readable storage medium.
- In the field of image processing, the proper use of scene types (shot scales) can effectively improve the quality of the finished footage.
- For example, during video editing, alternating between video frames of different scene types can effectively improve the spatial layering, sense of direction, and artistic expressiveness of the video.
- In the related art, however, images or video frames are usually analyzed by manual observation and their scene types classified subjectively. This approach suffers from low recognition efficiency, a low pass rate, long processing times, and high labor costs.
- In view of this, one of the objectives of the present application is to provide a scene type determination method and device, and a computer-readable storage medium.
- In a first aspect, an embodiment of the present application provides a scene type determination method, including: acquiring a reference image and a target image from a preset video segment; and, if the reference image and the target image both include the same part of the same object, determining the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- In a second aspect, an embodiment of the present application provides a scene type determination device, including: a memory for storing executable instructions; and one or more processors, where the one or more processors, when executing the executable instructions, are individually or jointly configured to: acquire a reference image and a target image from a preset video segment; and, if the reference image and the target image both include the same part of the same object, determine the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method described in the first aspect.
- In the scene type determination method provided by the embodiments of the present application, a reference image and a target image are acquired from a preset video segment, and when it is determined that the reference image and the target image both include the same part of the same object, the scene type of the target image is determined according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- In other words, an image that shares the same part of the same object with the target image serves as a reference, and the scene type of the target image is determined automatically from the difference between that part of the object as it appears in the reference image and as it appears in the target image, which helps reduce user operations and improves the efficiency of scene type determination.
- FIG. 1 is a schematic diagram of a scene type determination method provided by an embodiment of the present application;
- FIG. 2 and FIG. 3 are different schematic diagrams of the first matching area of a target image and the second matching area of a reference image provided by an embodiment of the present application;
- FIG. 4 is a schematic diagram of a user annotating the scene type of a reference image, provided by an embodiment of the present application;
- FIG. 5 is a schematic structural diagram of a scene type determination device provided by an embodiment of the present application.
- To address the problems in the related art, the present application provides a scene type determination method that acquires a reference image and a target image from a preset video segment and, upon determining that the reference image and the target image both include the same part of the same object, determines the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- In this method, the reference image sharing the same part of the same object with the target image serves as a reference, and the scene type of the target image is determined automatically from how that part of the object differs between the reference image and the target image, which helps reduce user operations and improves the efficiency of scene type determination.
- In some embodiments, the scene type determination method provided in the embodiments of the present application may be applied to a scene type determination device.
- In one aspect, the scene type determination device may be an electronic device with data processing capability, including but not limited to computing devices such as movable platforms, terminal devices, and servers.
- Examples of the movable platform include but are not limited to unmanned aerial vehicles, unmanned vehicles, gimbals, unmanned ships, and mobile robots.
- Examples of the terminal device include, but are not limited to: smartphones/cell phones, tablet computers, personal digital assistants (PDAs), laptop computers, desktop computers, media content players, video game stations/systems, virtual reality systems, augmented reality systems, wearable devices (e.g., watches, glasses, gloves, headgear such as hats, helmets, virtual reality headsets, augmented reality headsets, head-mounted devices (HMDs), and headbands, as well as pendants, armbands, leg rings, shoes, and vests), remote controls, or any other type of device.
- Illustratively, the scene type determination device may be a computer software product integrated in the electronic device, and the computer software product may include an application program capable of executing the scene type determination method provided in the embodiments of the present application.
- In another aspect, the scene type determination device may also be a chip or integrated circuit with data processing capability, including but not limited to a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
- The scene type determination device may be installed in an electronic device.
- In an exemplary application scenario, the scene type determination method may be applied to video editing, where a scene type (shot scale) refers to the range of the picture captured by a camera at different distances from the subject or with a zoom lens, such as a long shot, full shot, medium shot, close shot, or close-up.
- During video editing, video frames of different scene types can be used alternately, which effectively improves the spatial layering, sense of direction, and artistic expressiveness of the video; before editing, the scene type determination method provided by the present application can therefore be used to determine the scene type of each video frame in the video to be edited.
- For example, a user may select one video frame from the video to be edited as the reference image; the remaining video frames are taken as target images, and the scene type determination method provided in the embodiments of the present application is used to determine the scene type of each target image (that is, of each video frame other than the reference image); the target images are then combined based on their scene types to obtain the edited video.
- The scene type determination method provided by the embodiments of the present application is described next. Referring to FIG. 1, FIG. 1 is a schematic flowchart of a scene type determination method provided by this embodiment.
- The method may be implemented by a scene type determination device.
- The method includes:
- Step S101: acquiring a reference image and a target image from a preset video segment.
- Step S102: if the reference image and the target image both include the same part of the same object, determining the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- The number of preset video segments may be one or more, and each video segment includes at least two image frames; the video segments may be captured by the user with an imaging device or obtained from a network platform, and this embodiment imposes no limitation in this regard.
- In a first possible implementation, the reference image may be selected by the user from the preset video segment according to the actual situation; for example, the reference image is determined from the preset video segment based on a user instruction (which may be generated by the user's selection operation).
- In a second possible implementation, the reference image may be randomly selected by the scene type determination device from the preset video segment.
- In a third possible implementation, the scene type determination device may, based on recognition of a specified object, select an image including the specified object from the preset video segment as the reference image; it can be understood that this embodiment imposes no restriction on the specified object, which can be set according to the actual application scenario.
- For example, the specified object may be a person, an animal, or a building.
- In a fourth possible implementation, the scene type determination device may extract the image information of the images or image frames in the preset video segment, and take an image whose image information is greater than a preset threshold, or the image with the largest image information, as the reference image.
- The image information can be used to reflect the sharpness of an image, and includes but is not limited to: signal-to-noise ratio, image gradient, local variance, and mean square error (MSE).
- For example, the larger the signal-to-noise ratio of an image, the smaller the noise mixed into the image signal and the sharper the image; the image or image frame with the largest signal-to-noise ratio in the image set can therefore be selected as the reference image.
- It can be understood that the embodiments of the present application impose no restriction on the specific way of obtaining the image information, which can be selected according to the actual application scenario.
- For example, when the image information is image gradient information, the image gradient information of an image can be obtained using the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, or the energy gradient function.
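As a rough, minimal sketch of how such sharpness measures could drive reference-image selection (not part of the patent text; the use of OpenCV, the Brenner/Laplacian choice, and the BGR frame format are assumptions):

```python
import cv2
import numpy as np

def brenner_sharpness(gray: np.ndarray) -> float:
    # Brenner gradient: sum of squared differences between pixels two columns apart.
    diff = gray[:, 2:].astype(np.float64) - gray[:, :-2].astype(np.float64)
    return float(np.sum(diff ** 2))

def laplacian_sharpness(gray: np.ndarray) -> float:
    # Variance of the Laplacian response; a sharper image yields a larger variance.
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def pick_reference(frames: list) -> int:
    # Return the index of the sharpest frame, to be used as the reference image.
    scores = [laplacian_sharpness(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)) for f in frames]
    return int(np.argmax(scores))
```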
- In a fifth possible implementation, the scene type determination device may evaluate the aesthetic quality of the images in the preset video segment based on a preset aesthetic quality evaluation model, and take an image whose aesthetic quality meets a preset condition as the reference image,
- for example, an image whose aesthetic quality is greater than a preset quality threshold, or the image with the best aesthetic quality.
- Image aesthetic quality evaluation aims to use a computer to simulate human perception and cognition of beauty and to automatically evaluate the "beauty" of an image; that is, the computer evaluation of image aesthetic quality mainly targets the aesthetic impression formed by a photographed or painted image under the influence of aesthetic factors such as composition, color, light and shadow, depth of field, and the balance between the virtual and the real.
- Illustratively, the preset aesthetic quality evaluation model can be trained on a number of images carrying aesthetic quality labels; the aesthetic quality labels are used to evaluate the aesthetic quality of an image, and may, for example, be categories such as {good, medium, poor}, or scores for evaluating the aesthetic quality of the image, where better aesthetic quality corresponds to a higher score.
- After the scene type determination device obtains the reference image from the preset video segment, the other images of the preset video segment may be used as target images.
- In some embodiments, the scene type determination device performs object recognition on the reference image and the target image respectively, and judges whether the two images both include the same part of the same object. If they do, the scene type determination device may acquire the size of that part of the object in the reference image and in the target image respectively, and then determine the scene type of the target image according to the difference between the size of that part of the object in the reference image and its size in the target image.
- The object recognition process may be performed based on a preset object recognition algorithm or through a preset object recognition model, which is not limited in this embodiment.
- In one example, to simplify computation and improve the efficiency of scene type determination, the size of that part of the object in the reference image is taken as the size of the circumscribed rectangle enclosing that part, and the size of that part of the object in the target image is likewise taken as the size of its circumscribed rectangle, which effectively avoids the difficulty of determining a size when that part of the object has an irregular shape.
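For illustration, a hypothetical sketch of this size comparison (the (x, y, w, h) box format, the use of box area, and the decision rule are assumptions layered on the description above):

```python
def compare_part_sizes(ref_box, tgt_box):
    # ref_box / tgt_box: (x, y, w, h) circumscribed rectangles of the same part
    # of the same object in the reference image and the target image respectively.
    ref_area = ref_box[2] * ref_box[3]
    tgt_area = tgt_box[2] * tgt_box[3]
    if tgt_area > ref_area:
        return "target scene type is nearer than the reference"
    if tgt_area < ref_area:
        return "target scene type is farther than the reference"
    return "target has the same scene type as the reference"
```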
- In other embodiments, the scene type determination device performs feature matching on the target image and the reference image. If the target image and the reference image share the same features, this indicates that the two images both include the same part of the same object; the target image and the reference image can then be matched successfully, the first matching area of the target image and the second matching area of the reference image can be acquired, and the scene type of the target image can be determined automatically according to the size difference between the first matching area and the second matching area. If the features of the target image differ from those of the reference image, this indicates that the reference image and the target image do not include the same part of the same object; the matching between the target image and the reference image may then fail, and the scene type of the target image cannot be determined.
- In this embodiment, the reference image matched with the target image serves as a reference, and the scene type of the target image is determined automatically based on the size relationship between the first matching area and the second matching area obtained by matching the features of the two images, which helps reduce user operations and improves the efficiency of scene type determination.
- The features include but are not limited to corner points, edge points, high-curvature points, contours, intersection points, line segments, closed boundaries, and centroids.
- In some possible implementations, during feature matching the scene type determination device may extract the feature points of the target image and of the reference image respectively. As one example, the scene type determination device may perform feature extraction on the target image based on a pre-trained neural network model to obtain the feature points of the target image and their descriptors, and perform feature extraction on the reference image based on the same neural network model to obtain the feature points of the reference image and their descriptors; the neural network model is trained using a number of sample images with different brightness, so that it learns information about brightness changes and can accurately extract feature points even from images captured under different lighting conditions. As another example, feature extraction algorithms such as the SIFT (Scale-Invariant Feature Transform) algorithm, the SURF (Speeded-Up Robust Features) algorithm, or the HOG (Histogram of Oriented Gradients) algorithm may be used to extract the feature points of the target image and of the reference image respectively, obtaining the feature points of the target image and their descriptors, and the feature points of the reference image and their descriptors. This embodiment imposes no restriction on the specific feature extraction method, which can be set according to the actual application scenario.
- After extracting the feature points of the target image and of the reference image, the scene type determination device may match the feature points of the target image with the feature points of the reference image; for example, it may determine the distance in a vector space between a feature point of the target image and a feature point of the reference image (such as the Euclidean distance or the cosine distance), and if that distance is less than a preset distance, determine the two feature points to be a successfully matched feature point pair. The distance in vector space between a feature point of the target image and a feature point of the reference image can be computed from the descriptor of the target image's feature point and the descriptor of the reference image's feature point.
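A minimal sketch of this descriptor-distance matching, assuming OpenCV's SIFT implementation and an illustrative preset distance `max_dist` (the patent does not prescribe either):

```python
import cv2

def match_features(target_bgr, reference_bgr, max_dist=200.0):
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY), None)
    kp_r, des_r = sift.detectAndCompute(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), None)
    # The Euclidean (L2) distance between descriptors plays the role of the
    # distance in vector space described above.
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des_t, des_r)
    # Keep only the pairs whose descriptor distance is below the preset distance.
    return kp_t, kp_r, [m for m in matches if m.distance < max_dist]
```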
- In some optional implementations, owing to the high dimensionality of the vector space, similar distances may include a large number of false matches; therefore, to improve the accuracy of feature point matching, the successfully matched feature point pairs need to be screened after the feature points of the target image and of the reference image are matched. In one example, the classic random sample consensus (RANSAC) algorithm can be used to eliminate mismatched feature point pairs according to a fitted geometric relationship, where the geometric relationship represents the spatial mapping between the target image and the reference image, such as a transformation matrix between the two images; that is, a feature point pair is removed if it does not satisfy the geometric relationship and kept if it does.
- In another example, feature point pairs can be screened by comparing the nearest-neighbor distance with the second-nearest-neighbor distance in the vector space: for a feature point of the target image, the feature point nearest to it in vector space (the one it is matched with) and the second-nearest feature point are determined in the reference image, and the pair is kept if the ratio of the nearest distance to the second-nearest distance is less than a preset ratio threshold, and eliminated otherwise.
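The two screening strategies above could be combined roughly as follows (a sketch assuming OpenCV; the ratio threshold 0.75 and the RANSAC reprojection threshold 5.0 are illustrative values, and a homography stands in for the fitted geometric relationship):

```python
import cv2
import numpy as np

def screen_matches(kp_t, kp_r, des_t, des_r, ratio=0.75):
    # Nearest/second-nearest screening: keep a pair only when the nearest
    # descriptor is clearly closer than the second-nearest one.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_r, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return []  # fitting a homography requires at least 4 correspondences
    pts_t = np.float32([kp_t[m.queryIdx].pt for m in good])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good])
    # RANSAC fits the geometric relationship and flags the pairs violating it.
    _, mask = cv2.findHomography(pts_t, pts_r, cv2.RANSAC, 5.0)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```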
- In some optional implementations, to ensure the accuracy of the matching result, after the successfully matched feature point pairs are obtained or after they are screened, the scene type determination device counts the feature point pairs; if their number is greater than a preset number, it is determined that the target image and the reference image have matched successfully, and the scene type determination device may then determine the first matching area of the target image and the second matching area of the reference image respectively according to the successfully matched feature points. For example, for the target image, the scene type determination device may determine the first matching area according to the feature points in the target image that matched the reference image successfully; for the reference image, it may determine the second matching area according to the feature points in the reference image that matched the target image successfully.
- The preset number may be set according to the actual application scenario, which is not limited in this embodiment.
- In one example, referring to FIG. 2 and FIG. 3, the first matching area is determined according to the circumscribed figure (such as a circumscribed rectangle or circumscribed circle) of the successfully matched feature points in the target image, and the second matching area is determined according to the circumscribed figure (such as a circumscribed rectangle or circumscribed circle) of the successfully matched feature points in the reference image.
- In another example, the first matching area may be the set of blocks of a preset size centered on the successfully matched feature points in the target image, and the second matching area may be the set of blocks of a preset size centered on the successfully matched feature points in the reference image.
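A sketch of the circumscribed-rectangle variant of the matching area (the (x, y, w, h) return convention is an assumption):

```python
import numpy as np

def circumscribed_rect(points: np.ndarray):
    # points: (N, 2) coordinates of the successfully matched feature points in
    # one image; the axis-aligned circumscribed rectangle is the matching area.
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return float(x_min), float(y_min), float(x_max - x_min), float(y_max - y_min)
```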
- After acquiring the first matching area of the target image and the second matching area of the reference image, the scene type determination device may determine the scene type of the target image according to the difference between the size of the first matching area and the size of the second matching area.
- In this embodiment, the scene type of the target image is determined automatically based on the size difference between the first matching area and the second matching area, which reduces user operations and improves the efficiency of scene type determination.
- In some embodiments, when the scene type of the reference image is unknown, the scene type of the target image determined by the scene type determination device is the scene type of the target image relative to the reference image.
- For example, suppose the reference image is of a first scene type: if the size of the first matching area is the same as the size of the second matching area, the target image is also of the first scene type;
- if the size of the first matching area is larger than the size of the second matching area, the target image is of a second scene type; and if the size of the first matching area is smaller than the size of the second matching area, the target image is of a third scene type, where, ordered from far to near, the scene types are the third, the first, and the second.
- In this embodiment there is no need to determine the scene type of the reference image, nor to divide scene types strictly; it suffices to distinguish the relative nearness of scene types according to the size difference.
- In some embodiments, the scene type of the reference image can be obtained, for example through the user's annotation: after the user selects a reference image from the preset video segment on an interactive interface, or after the scene type determination device determines the reference image in one of the ways described above, referring to FIG. 4,
- the reference image and an input control related to the scene type of the reference image can be displayed on the interactive interface.
- The user can input the scene type of the reference image through the input control, where the input control includes but is not limited to an input box as shown in FIG. 4 (the input box allows the user to enter text describing the scene type) or a check box (the check box allows the user to select one scene type option from a limited number of options).
- After obtaining the user-annotated scene type of the reference image, the scene type determination device may determine the scene type of the target image based on the difference between the size of the first matching area and the size of the second matching area, together with the scene type of the reference image. For example, suppose the scene type of the reference image is a medium shot: if the size of the first matching area is the same as the size of the second matching area, the scene type of the target image is also a medium shot; if the size of the first matching area is larger than the size of the second matching area, the scene type of the target image is a close shot or close-up; and if the size of the first matching area is smaller than the size of the second matching area, the scene type of the target image is a long shot.
- In some possible implementations, when determining the scene type of the target image, the scene type determination device may determine the scale of the target image relative to the reference image according to the difference between the size of the first matching area and the size of the second matching area, and then determine the scene type of the target image according to that scale.
- As one example, the scale of the target image relative to the reference image may be the ratio between the size of the first matching area and the size of the second matching area.
- As another example, the scale of the target image relative to the reference image may also be the ratio between the area of the first matching area and the area of the second matching area, where the area of the first matching area is determined from the size of the first matching area and the area of the second matching area is determined from the size of the second matching area.
- As an example, the scene type of the target image can be determined based on the scale of the target image relative to the reference image and a pre-stored mapping relationship, where the mapping relationship indicates the scene types corresponding to different scales or to different scale ranges.
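For instance, a hypothetical pre-stored mapping from scale ranges to scene types might look like the following; the thresholds and labels are invented for illustration and are not specified by the patent:

```python
# Upper scale bound -> scene type, ordered from far to near.
SCALE_TO_SCENE = [
    (0.5, "long shot"),
    (0.8, "full shot"),
    (1.2, "medium shot"),
    (2.0, "close shot"),
]

def scene_from_scale(s: float) -> str:
    for upper, scene in SCALE_TO_SCENE:
        if s < upper:
            return scene
    return "close-up"
```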
- The scale of the reference image is a preset value.
- In one example, suppose the scale of the reference image is 1, let the scale of the target image relative to the reference image be S, let the size (width and height) of the first matching area be (w1, h1), and let the size (width and height) of the second matching area be (w0, h0); then S = max(w1/w0, h1/h0), where max() is the function that returns the larger of its arguments.
- Based on the above, the scales of target image 1 in FIG. 2 and target image 2 in FIG. 3 can be determined as shown in Table 1, where x and y denote the position of the second matching area in the reference image or of the first matching area in the target image.
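A direct transcription of the formula, with hypothetical matching-area sizes as a worked example:

```python
def relative_scale(first_wh, second_wh):
    # S = max(w1 / w0, h1 / h0); the reference image's own scale is the preset value 1.
    w1, h1 = first_wh
    w0, h0 = second_wh
    return max(w1 / w0, h1 / h0)

# Assumed sizes: first matching area 320x180 (target), second 160x90 (reference).
print(relative_scale((320, 180), (160, 90)))  # 2.0 -> a nearer scene type
```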
- In an exemplary embodiment, if the difference between the scale of the target image relative to the reference image and the scale of the reference image is less than a preset difference (in other words, if the sizes of the first matching area and the second matching area differ little), the target image and the reference image have the same scene type; otherwise, the target image and the reference image have different scene types. Assuming the scale of the reference image is 1: if the scale of the target image relative to the reference image is greater than 1 and the difference between the two scales exceeds the preset difference, the scene type of the target image is nearer than the scene type of the reference image; if the scale of the target image relative to the reference image is less than 1 and the difference between the two scales exceeds the preset difference, the scene type of the target image is farther than the scene type of the reference image.
- It can be understood that the preset difference may be set according to the actual application scenario, which is not limited in this embodiment.
- Similarly, when there are multiple target images, if the difference between the scales of any two target images is less than the preset difference, indicating that the sizes of their matching areas differ little, the two target images have the same scene type; otherwise, the two target images have different scene types.
- In an exemplary embodiment, when there are multiple target images, the scene type determination device may sort the multiple target images according to the scale of each target image relative to the reference image, and then determine the scene type of each target image according to the resulting order; for example, if the scale difference between two adjacent target images is less than the preset difference, the two adjacent target images have the same scene type; otherwise, the two adjacent target images have different scene types.
- In one example, the target images may be sorted in ascending order of scale (that is, scene types ordered from far to near) or in descending order of scale (that is, scene types ordered from near to far), and the relative nearness of each target image's scene type is then determined from its position in the order, as sketched below.
- Illustratively, when the scene type of the reference image is unknown, the scene type determination device may also sort the multiple target images together with the reference image, according to the scales of the target images relative to the reference image and the scale of the reference image itself, and then determine the scene type of each target image and of the reference image from their positions in the order. Strict scene type division is not needed here; it suffices to distinguish relative nearness from the sort order.
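A minimal sketch of that ordering (the frame identifiers and scale values are invented for illustration):

```python
def order_by_scene_depth(scales: dict) -> list:
    # scales: image id -> scale relative to the reference image (reference = 1.0).
    # Ascending scale corresponds to scene types ordered from far to near.
    return sorted(scales.items(), key=lambda kv: kv[1])

frames = {"reference": 1.0, "frame_12": 0.6, "frame_48": 2.1, "frame_30": 1.05}
for name, s in order_by_scene_depth(frames):
    print(f"{name}: scale={s}")  # printed from the farthest to the nearest
```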
- In other possible implementations, the scene type determination device may determine, according to the size of the first matching area, a first occupation ratio of the first matching area in the target image, and, according to the size of the second matching area, a second occupation ratio of the second matching area in the reference image; the scene type of the target image is then determined according to the difference between the first occupation ratio and the second occupation ratio.
- When the scene type of the reference image is unknown, the scene type determined in this way is the scene type of the target image relative to the reference image.
- When the scene type of the reference image can be obtained, the scene type determination device may determine the scene type of the target image according to the scene type of the reference image and the difference between the first occupation ratio and the second occupation ratio.
- In an exemplary embodiment, if the difference between the first occupation ratio and the second occupation ratio is less than a preset ratio difference (in other words, if the sizes of the first matching area and the second matching area differ little), the target image and the reference image have the same scene type; otherwise, the target image and the reference image have different scene types. For example, if the first occupation ratio is greater than the second occupation ratio and the difference between the two exceeds the preset ratio difference, the scene type of the target image is nearer than the scene type of the reference image; if the first occupation ratio is smaller than the second occupation ratio and the difference between the two exceeds the preset ratio difference, the scene type of the target image is farther than the scene type of the reference image. A sketch of this comparison follows below.
- It can be understood that the preset ratio difference may be set according to the actual application scenario, which is not limited in this embodiment.
- In this embodiment there is no need to divide scene types strictly; it suffices to distinguish the relative nearness of scene types according to the difference in occupation ratio.
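A sketch of the occupation-ratio comparison (the area-based ratio and the preset ratio difference `eps` are illustrative assumptions):

```python
def occupation_ratio(region_wh, image_wh) -> float:
    # Fraction of the image area occupied by a matching area.
    rw, rh = region_wh
    iw, ih = image_wh
    return (rw * rh) / (iw * ih)

def compare_by_occupation(first_ratio, second_ratio, eps=0.05):
    if abs(first_ratio - second_ratio) < eps:
        return "target has the same scene type as the reference"
    return "nearer" if first_ratio > second_ratio else "farther"
```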
- In some embodiments, after the scene type of the target image is determined, the target image and its scene type information can be displayed on an interactive interface for the user's confirmation; or the scene type of the target image can be marked in the target image; or the target image and its scene type can be stored in association, so that when the user views the target image its scene type is displayed synchronously, allowing the user to perform custom video editing with the target image based on its scene type.
- In some embodiments, when there are multiple target images, the scene type determination device may also combine the multiple target images according to a preset editing template based on the scene types of the target images, to generate a target video.
- The editing template indicates at least the combination order and/or presentation manner of target images of different scene types, which helps improve the spatial layering, sense of direction, and artistic expressiveness of the target video.
- Illustratively, the editing template may also indicate the display positions of the matching areas of the target images of different scene types, for example indicating that these display positions are the same across scene types, so that the pictures in the target video are aligned and a richer presentation effect is achieved.
- Further, since the scene type of the target image is determined based on the reference image, the relative scene type or the scene type of the reference image can also be determined, and the reference image likewise shares the same scene or object with the target images and can be combined and edited with them; the editing template may therefore also indicate the combination order and/or presentation manner of the multiple target images and the reference image across different scene types.
- Illustratively, after the editing template is used to edit the video segment whose scene types have been determined (including the reference image and the multiple target images) into the target video, the reference image can also be highlighted or annotated in the target video, so that when watching the target video the user can intuitively see the effect of scene type determination and video editing performed based on the reference image.
- Correspondingly, referring to FIG. 5, an embodiment of the present application further provides a scene type determination device 20, including:
- a memory 21 for storing executable instructions;
- one or more processors 22;
- where the one or more processors 22, when executing the executable instructions, are individually or jointly configured to:
- acquire a reference image and a target image from a preset video segment; and, if the reference image and the target image both include the same part of the same object, determine the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- In some embodiments, the processor 22 is further configured to:
- perform feature matching on the target image and the preset reference image; if the matching succeeds, respectively acquire a first matching area of the target image and a second matching area of the reference image, where a successful match indicates that the reference image and the target image both include the same part of the same object;
- and determine the scene type of the target image according to the relationship between the size of the first matching area and the size of the second matching area.
- In an embodiment, the processor 22 is further configured to: respectively extract the feature points of the target image and of the reference image, and match the feature points of the target image with the feature points of the reference image; and, if the matching succeeds, respectively determine the first matching area of the target image and the second matching area of the reference image according to the successfully matched feature points.
- In an embodiment, between the target image and the reference image, the number of successfully matched feature point pairs is greater than a preset number.
- In an embodiment, between the target image and the reference image, the distance in vector space between two successfully matched feature points is less than a preset distance.
- In an embodiment, between the target image and the reference image, two successfully matched feature points satisfy a preset geometric relationship.
- In an embodiment, the feature points of the target image are obtained by performing feature extraction on the target image based on a pre-trained neural network model, and the feature points of the reference image are obtained by performing feature extraction on the reference image based on the same neural network model; the neural network model is trained using a number of sample images with different brightness.
- In an embodiment, the first matching area is determined according to the circumscribed figure of the successfully matched feature points in the target image, and the second matching area is determined according to the circumscribed figure of the successfully matched feature points in the reference image.
- In an embodiment, the scene type of the target image is the scene type of the target image relative to the reference image.
- In an embodiment, the processor 22 is further configured to: obtain the scene type of the reference image; and determine the scene type of the target image according to the difference between the size of the first matching area and the size of the second matching area, together with the scene type of the reference image.
- In an embodiment, the processor 22 is further configured to: determine the scale of the target image relative to the reference image according to the difference between the size of the first matching area and the size of the second matching area; and determine the scene type of the target image according to the scale of the target image relative to the reference image.
- In an embodiment, the scale of the reference image is a preset value.
- In an embodiment, if the difference between the scale of the target image relative to the reference image and the scale of the reference image is less than a preset difference, the target image has the same scene type as the reference image.
- In an embodiment, when there are multiple target images, the processor 22 is further configured to: sort the multiple target images according to the scale of each target image relative to the reference image; and determine the scene type of each target image according to the resulting order.
- In an embodiment, the scale of the target image relative to the reference image is the ratio between the size of the first matching area and the size of the second matching area.
- In an embodiment, the scene type of the target image is determined based on the scale of the target image relative to the reference image and a pre-stored mapping relationship, where the mapping relationship indicates the scene types corresponding to different scales.
- In an embodiment, when there are multiple target images, if the difference between the scales of any two target images is less than a preset difference, the two target images have the same scene type; otherwise, the two target images have different scene types.
- In an embodiment, the processor 22 is further configured to: determine, according to the size of the first matching area, a first occupation ratio of the first matching area in the target image, and, according to the size of the second matching area, a second occupation ratio of the second matching area in the reference image; and determine the scene type of the target image according to the difference between the first occupation ratio and the second occupation ratio.
- In an embodiment, when there are multiple target images, if the difference between the first occupation ratios of any two target images is less than a preset ratio difference, the two target images have the same scene type; otherwise, the two target images have different scene types.
- In an embodiment, if the difference between the first occupation ratio and the second occupation ratio is less than a preset ratio difference, the target image and the reference image have the same scene type.
- In an embodiment, when there are multiple target images, the processor 22 is further configured to: combine the multiple target images according to a preset editing template, at least based on the scene types of the target images, to generate a target video.
- In an embodiment, the editing template indicates at least the combination order and/or presentation manner of target images of different scene types.
- In an embodiment, the reference image is randomly selected from the preset video segment; or the reference image is an image determined from the preset video segment that includes a specified object; or the reference image is an image determined from the preset video segment whose image information is greater than a preset threshold; or the reference image is an image determined from the preset video segment whose aesthetic quality meets a preset condition; or the reference image is determined from the preset video segment based on a user instruction.
- The various implementations described herein can be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof.
- For hardware implementation, the embodiments described herein can be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein.
- For software implementation, an embodiment such as a procedure or a function may be implemented with a separate software module that allows at least one function or operation to be performed.
- The software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in a memory and executed by a controller.
- In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is further provided, such as a memory including instructions, which can be executed by a processor of a device to perform the above method.
- For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
- A non-transitory computer-readable storage medium enables a terminal to execute the above method when the instructions in the storage medium are executed by a processor of the terminal.
Abstract
A scene type determination method and device, and a computer-readable storage medium. The method includes: acquiring a reference image and a target image from a preset video segment; and, if the reference image and the target image both include the same part of the same object, determining the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image. This embodiment achieves automatic determination of the scene type of the target image, which helps reduce user operations and improves the efficiency of scene type determination.
Claims (47)
- 1. A scene type determination method, comprising: acquiring a reference image and a target image from a preset video segment; and, if the reference image and the target image both include the same part of the same object, determining the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image.
- 2. The method according to claim 1, wherein, if the reference image and the target image both include the same part of the same object, determining the scene type of the target image according to the relationship between that part of the object in the reference image and that part of the object in the target image comprises: performing feature matching on the target image and the preset reference image; if the matching succeeds, respectively acquiring a first matching area of the target image and a second matching area of the reference image, wherein a successful match indicates that the reference image and the target image both include the same part of the same object; and determining the scene type of the target image according to the relationship between the size of the first matching area and the size of the second matching area.
- 3. The method according to claim 2, wherein performing feature matching on the target image and the preset reference image comprises: respectively extracting feature points of the target image and of the reference image, and matching the feature points of the target image with the feature points of the reference image; and respectively acquiring the first matching area of the target image and the second matching area of the reference image comprises: respectively determining the first matching area of the target image and the second matching area of the reference image according to the successfully matched feature points.
- 4. The method according to claim 3, wherein, between the target image and the reference image, the number of successfully matched feature point pairs is greater than a preset number.
- 5. The method according to claim 3, wherein, between the target image and the reference image, the distance in vector space between two successfully matched feature points is less than a preset distance.
- 6. The method according to claim 3, wherein, between the target image and the reference image, two successfully matched feature points satisfy a preset geometric relationship.
- 7. The method according to claim 3, wherein the feature points of the target image are obtained by performing feature extraction on the target image based on a pre-trained neural network model, and the feature points of the reference image are obtained by performing feature extraction on the reference image based on the neural network model, wherein the neural network model is trained using a number of sample images with different brightness.
- 8. The method according to any one of claims 3 to 7, wherein the first matching area is determined according to the circumscribed figure of the successfully matched feature points in the target image, and the second matching area is determined according to the circumscribed figure of the successfully matched feature points in the reference image.
- 9. The method according to claim 1, wherein determining the scene type of the target image comprises: determining the relative scene type of the target image with respect to the reference image.
- 10. The method according to claim 2, further comprising: obtaining the scene type of the reference image; wherein determining the scene type of the target image according to the relationship between the size of the first matching area and the size of the second matching area comprises: determining the scene type of the target image according to the difference between the size of the first matching area and the size of the second matching area, and the scene type of the reference image.
- 11. The method according to claim 2, wherein determining the scene type of the target image according to the relationship between the size of the first matching area and the size of the second matching area comprises: determining the scale of the target image relative to the reference image according to the difference between the size of the first matching area and the size of the second matching area; and determining the scene type of the target image according to the scale of the target image relative to the reference image.
- 12. The method according to claim 11, wherein the scale of the reference image is a preset value.
- 13. The method according to claim 12, wherein, if the difference between the scale of the target image relative to the reference image and the scale of the reference image is less than a preset difference, the target image has the same scene type as the reference image.
- 14. The method according to claim 11 or 12, wherein, when there are multiple target images, determining the scene type of the target image according to the scale of the target image relative to the reference image comprises: sorting the multiple target images according to the scale of each target image relative to the reference image; and determining the scene type of each target image according to the resulting order.
- 15. The method according to claim 11, wherein the scale of the target image relative to the reference image is the ratio between the size of the first matching area and the size of the second matching area.
- 16. The method according to claim 11, wherein the scene type of the target image is determined based on the scale of the target image relative to the reference image and a pre-stored mapping relationship, the mapping relationship indicating the scene types corresponding to different scales.
- 17. The method according to claim 11, wherein, when there are multiple target images, if the difference between the scales of any two target images is less than a preset difference, the two target images have the same scene type; otherwise, the two target images have different scene types.
- 根据权利要求2所述的方法,其特征在于,所述根据所述第一匹配区域的尺寸和第二匹配区域的尺寸之间的关系,确定所述目标图像的景别,包括:根据所述第一匹配区域的尺寸确定所述第一匹配区域在所述目标图像中的第一占据比例,以及,根据第二匹配区域的尺寸确定所述第二匹配区域在所述参考图像中的第二占据比例;根据所述第一占据比例和所述第二占据比例之间的差异,确定所述目标图像的景别。
- 根据权利要求18所述的方法,其特征在于,在所述目标图像有多个的情况下,如果任意两个目标图像中所述第一占据比例之差小于预设比例差值,则所述任意两个目标图像的景别相同,否则,所述任意两个目标图像的景别不同。
- 根据权利要求18所述的方法,其特征在于,如果所述第一占据比例和所述第二占据比例之间的差值小于预设比例差值,则所述目标图像和所述参考图像的景别相同。
- 根据权利要求1所述的方法,其特征在于,在所述目标图像有多个的情况下,所述方法还包括:至少根据所述目标图像的景别,按照预设的剪辑模板组合多个所述目标图像,生成目标视频。
- 根据权利要求21所述的方法,其特征在于,所述剪辑模板至少指示不同景别的目标图像的组合顺序和/或展示方式。
- 根据权利要求1所述的方法,其特征在于,所述参考图像从所述预设视频片段中随机选取得到;或者,所述参考图像为从所述预设视频片段确定的包括有指定对象的图像;或者,所述参考图像为从所述预设视频片段确定的包含的图像信息大于预设阈值的图像;或者,所述参考图像为从所述预设视频片段确定的美学质量满足预设条件的图像;或者,所述参考图像基于用户指令从所述预设视频片段中确定。
- A device for determining a scene type, comprising: a memory for storing executable instructions; and one or more processors; wherein the one or more processors, when executing the executable instructions, are individually or collectively configured to: acquire a reference image and a target image from a preset video clip; and if both the reference image and the target image include the same part of the same object, determine the scene type of the target image according to the relationship between the part of the object in the reference image and the part of the object in the target image.
- The device according to claim 24, wherein the processor is further configured to: perform feature matching on the target image and the preset reference image; if the matching succeeds, acquire a first matching region of the target image and a second matching region of the reference image respectively, wherein a successful match indicates that both the reference image and the target image include the same part of the same object; and determine the scene type of the target image according to the relationship between the size of the first matching region and the size of the second matching region.
- The device according to claim 25, wherein the processor is further configured to: extract feature points of the target image and of the reference image respectively, and match the feature points of the target image with the feature points of the reference image; and if the matching succeeds, determine the first matching region of the target image and the second matching region of the reference image respectively according to the successfully matched feature points.
- The device according to claim 26, wherein in the target image and the reference image, the number of successfully matched feature point pairs is greater than a preset number.
- The device according to claim 26, wherein in the target image and the reference image, the distance in vector space between two successfully matched feature points is less than a preset distance.
- The device according to claim 26, wherein in the target image and the reference image, two successfully matched feature points satisfy a preset geometric relationship.
- The device according to claim 26, wherein the feature points of the target image are obtained by performing feature extraction on the target image based on a pre-trained neural network model; and the feature points of the reference image are obtained by performing feature extraction on the reference image based on the neural network model; wherein the neural network model is trained using a number of sample images with different brightness.
- The device according to any one of claims 26 to 30, wherein the first matching region is determined according to a circumscribing figure of the successfully matched feature points in the target image; and the second matching region is determined according to a circumscribing figure of the successfully matched feature points in the reference image.
- The device according to claim 24, wherein the scene type of the target image is a relative scene type of the target image with respect to the reference image.
- The device according to claim 25, wherein the processor is further configured to: acquire the scene type of the reference image; and determine the scene type of the target image according to the difference between the size of the first matching region and the size of the second matching region, as well as the scene type of the reference image.
- The device according to claim 25, wherein the processor is further configured to: determine the scale of the target image relative to the reference image according to the difference between the size of the first matching region and the size of the second matching region; and determine the scene type of the target image according to the scale of the target image relative to the reference image.
- The device according to claim 34, wherein the scale of the reference image is a preset value.
- The device according to claim 35, wherein if the difference between the scale of the target image relative to the reference image and the scale of the reference image is less than a preset difference, the scene type of the target image is the same as that of the reference image.
- The device according to claim 34 or 35, wherein when there are multiple target images, the processor is further configured to: sort the multiple target images according to their scales relative to the reference image; and determine the scene types of the target images according to the order in which they are arranged.
- The device according to claim 34, wherein the scale of the target image relative to the reference image is the ratio between the size of the first matching region and the size of the second matching region.
- The device according to claim 34, wherein the scene type of the target image is determined based on the scale of the target image relative to the reference image and a pre-stored mapping relationship, the mapping relationship indicating the scene types corresponding to different scales.
- The device according to claim 34, wherein when there are multiple target images, if the difference between the scales of any two target images is less than a preset difference, the two target images have the same scene type; otherwise, their scene types are different.
- The device according to claim 25, wherein the processor is further configured to: determine a first occupation ratio of the first matching region in the target image according to the size of the first matching region, and determine a second occupation ratio of the second matching region in the reference image according to the size of the second matching region; and determine the scene type of the target image according to the difference between the first occupation ratio and the second occupation ratio.
- The device according to claim 41, wherein when there are multiple target images, if the difference between the first occupation ratios of any two target images is less than a preset ratio difference, the two target images have the same scene type; otherwise, their scene types are different.
- The device according to claim 41, wherein if the difference between the first occupation ratio and the second occupation ratio is less than a preset ratio difference, the target image and the reference image have the same scene type.
- The device according to claim 25, wherein when there are multiple target images, the processor is further configured to: combine the multiple target images according to a preset clip template, based at least on the scene types of the target images, to generate a target video.
- The device according to claim 44, wherein the clip template indicates at least the combination order and/or presentation manner of target images of different scene types.
- The device according to claim 24, wherein the reference image is randomly selected from the preset video clip; or the reference image is an image determined from the preset video clip that includes a specified object; or the reference image is an image determined from the preset video clip whose image information is greater than a preset threshold; or the reference image is an image determined from the preset video clip whose aesthetic quality satisfies a preset condition; or the reference image is determined from the preset video clip based on a user instruction.
- A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the method according to any one of claims 1 to 23.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/101798 WO2022266878A1 (zh) | 2021-06-23 | 2021-06-23 | Method and device for determining a scene type, and computer-readable storage medium |
CN202180098494.XA CN117561547A (zh) | 2021-06-23 | 2021-06-23 | Method and device for determining a scene type, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/101798 WO2022266878A1 (zh) | 2021-06-23 | 2021-06-23 | Method and device for determining a scene type, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022266878A1 true WO2022266878A1 (zh) | 2022-12-29 |
Family
ID=84545006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/101798 WO2022266878A1 (zh) | Method and device for determining a scene type, and computer-readable storage medium | 2021-06-23 | 2021-06-23 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117561547A (zh) |
WO (1) | WO2022266878A1 (zh) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130163874A1 * | 2010-08-16 | 2013-06-27 | Elya Shechtman | Determining Correspondence Between Image Regions |
CN107437076A (zh) * | 2017-08-02 | 2017-12-05 | 陈雷 | Method and system for scene classification based on video analysis |
CN108520263A (zh) * | 2018-03-29 | 2018-09-11 | Youku Network Technology (Beijing) Co., Ltd. | Panoramic image recognition method and system, and computer storage medium |
CN108960209A (zh) * | 2018-08-09 | 2018-12-07 | Tencent Technology (Shenzhen) Co., Ltd. | Identity recognition method and device, and computer-readable storage medium |
CN109712177A (zh) * | 2018-12-25 | 2019-05-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device, and computer-readable storage medium |
US10997426B1 * | 2019-03-05 | 2021-05-04 | Amazon Technologies, Inc. | Optimal fragmentation of video based on shot analysis |
CN111476780A (zh) * | 2020-04-07 | 2020-07-31 | Tencent Technology (Shenzhen) Co., Ltd. | Image detection method and device, electronic device, and storage medium |
CN111709296A (zh) * | 2020-05-18 | 2020-09-25 | Beijing QIYI Century Science & Technology Co., Ltd. | Scene recognition method and device, electronic device, and readable storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116311082A * | 2023-05-15 | 2023-06-23 | Zhanjiang Power Supply Bureau of Guangdong Power Grid Co., Ltd. | Wearing detection method and system based on key-part and image matching |
Also Published As
Publication number | Publication date |
---|---|
CN117561547A (zh) | 2024-02-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21946379; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 202180098494.X; Country of ref document: CN |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21946379; Country of ref document: EP; Kind code of ref document: A1 |