CN111833456A - Image processing method, device, equipment and computer readable storage medium
- Publication number: CN111833456A
- Application number: CN202010623484.0A
- Authority: CN (China)
- Prior art keywords: real object, display, virtual, real, virtual tag
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T19/006 — Mixed reality (G06T19/00: manipulating 3D models or images for computer graphics)
- G06F18/24 — Classification techniques (G06F18/00: pattern recognition; G06F18/20: analysing)
- G06T15/205 — Image-based rendering (G06T15/00: 3D image rendering; G06T15/10: geometric effects; G06T15/20: perspective computation)
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume (G06T7/00: image analysis; G06T7/60: analysis of geometric attributes)
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods (G06T7/00: image analysis)
- G06T7/90 — Determination of colour characteristics (G06T7/00: image analysis)
- G06V20/00 — Scenes; scene-specific elements
Abstract
The embodiment of the disclosure discloses an image processing method, apparatus, device, and computer storage medium. The method includes: acquiring a real scene image; identifying the real scene image to obtain attribute information of at least one real object in the real scene image; determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object; acquiring virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is to be displayed; and displaying, on a display device, the real object in the real scene image and the augmented reality effect of the virtual tag corresponding to the real object by using the virtual tag data. This reduces the display workload, avoids occluding other content that needs to be displayed, and facilitates the display of the augmented reality effect.
Description
Technical Field
The present disclosure relates to computer vision technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a computer-readable storage medium.
Background
Augmented Reality (AR) technology is a technology that fuses virtual information with the real world: simulated virtual information is superimposed onto the real world, so that the real environment and virtual objects are presented in the same interface in real time. How to improve the effect of the augmented reality scene presented by an augmented reality device is a technical problem to be solved urgently.
Disclosure of Invention
Embodiments of the present disclosure are intended to provide a technical solution for image processing.
In a first aspect, an embodiment of the present disclosure provides an image processing method, where the method includes:
acquiring a real scene image; identifying the real scene image to obtain attribute information of at least one real object in the real scene image; determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object; acquiring virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is to be displayed; and displaying, on a display device, the real object in the real scene image and the augmented reality effect of the virtual tag corresponding to the real object by using the virtual tag data.
In one embodiment, the displaying, on a display device, the real object in the real scene image and the augmented reality effect of the virtual tag corresponding to the real object by using the virtual tag data includes: determining a display position of the virtual tag in the real scene image according to the attribute information of the real object; and rendering with the virtual tag data at the display position of the virtual tag, so that the augmented reality effect of the virtual tag corresponding to the real object is displayed while the real object in the real scene image is displayed on the display device.
In this way, the display position of the virtual tag of the real object is determined according to the attribute information of the real object, so the virtual tag is displayed more reasonably, the augmented reality effect is better, and user requirements are better met.
In one embodiment, the attribute information of the real object includes at least one of: position information of the real object in the real scene image; identification information of the real object; picture proportion information of the real object in the real scene image; picture proportion information of the real object on the display device; display parameters of the real object.
It can be seen that, from the attribute information of the real object, the position information of the real object in the real scene image, the identification information of the real object, the picture proportion information of the real object in the real scene image, and the picture proportion information of the real object on the display device can be determined, which in turn facilitates determining the display position of the virtual tag, whether the virtual tag is displayed, and the subsequent rendering data of the virtual tag.
In one embodiment, the attribute information of the real object includes position information of the real object in the real scene image; the determining, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object includes: determining the display result of the virtual tag as displaying the virtual tag of the real object in the case that the position information of the real object in the real scene image is within a specific area range.
By setting a specific area range, the virtual tags of real objects within that range can be displayed, so the virtual tags of real objects in the specific area range can be highlighted according to user requirements, which helps improve user experience.
In one embodiment, in the case that the attribute information of the real object includes picture proportion information of the real object in the real scene image or picture proportion information of the real object on the display device, the determining, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object includes: determining the display result of the virtual tag as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold.
It can be seen that determining whether to display the virtual tag of a real object through picture proportion information helps to determine more accurately whether it is necessary to display the virtual tag of the real object.
In one embodiment, the attribute information of the real object includes identification information of the real object; the determining, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object includes: determining the display result of the virtual tag as displaying the virtual tag of the real object when the identification information of the real object satisfies a specific condition.
Therefore, determining whether to display the virtual tag of a real object through its identification information helps to exclude real objects that do not need to be displayed and to reduce the display workload.
In one embodiment, the obtaining of the virtual tag data corresponding to the real object includes at least one of: acquiring the set rendering parameters of the virtual label; determining rendering parameters of the virtual tag according to the display parameters of the real object; and determining rendering parameters of the virtual label according to the display position of the virtual label.
It can be seen that virtual tags with the same background color and text color can be obtained through preset rendering parameters of the virtual tags; determining the rendering parameters of the virtual tag according to the display parameters of the real object makes the display effect of the resulting virtual tag better and the virtual tag easier for a user to identify; and determining the rendering parameters of the virtual tag according to the display position of the virtual tag takes the color of the surroundings of the virtual tag into account, which likewise improves the display effect and makes the virtual tag easier for the user to recognize.
In one embodiment, in a case that the display parameter of the real object includes a display color of the real object and the rendering parameter of the virtual tag includes a background color of the virtual tag, the determining the rendering parameter of the virtual tag according to the display parameter of the real object includes: determining a first background color set which can be rendered by the virtual label according to the display color of the real object and a first specific rule; determining a background color of the virtual label from the first set of background colors.
It can be seen that the contrast between the display color of the real object and the background color of the virtual tag determined by the first specific rule can meet the requirement of a preset contrast threshold, so that the display effect of the virtual tag obtained from this background color is better and the virtual tag is easier for the user to identify.
In one embodiment, the rendering parameters of the virtual tag include a background color of the virtual tag; determining the rendering parameters of the virtual tag according to the display position of the virtual tag includes: determining the image color at the position of the virtual tag according to the display position of the virtual tag; determining a second background color set that the virtual tag can render according to the image color at the position of the virtual tag and a second specific rule; and determining the background color of the virtual tag from the second set of background colors.
It can be seen that the contrast formed by the image color at the position of the virtual tag and the background color of the virtual tag determined by the second specific rule can meet the requirement of the preset contrast threshold, so that the display effect of the virtual tag obtained according to the background color of the virtual tag is better, and the user can recognize the virtual tag more easily.
In one embodiment, in the case that the content of the virtual tag includes text and the rendering parameters of the virtual tag include the background color of the virtual tag, the rendering with the virtual tag data at the display position of the virtual tag, and displaying the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device, includes: determining the color of the text according to the background color of the virtual tag; and rendering according to the background color of the virtual tag and the color of the text, so that the augmented reality effect of the virtual tag corresponding to the real object is displayed while the real object in the real scene image is displayed on the display device.
Therefore, the optimal text color in the virtual tag is determined according to the background color of the virtual tag, and rendering is performed according to that background color and the optimal text color, so the resulting virtual tag has a better display effect and a better augmented reality effect, and can better meet user requirements.
In one embodiment, the display device comprises a display screen which is movable on a preset slide rail and is provided with an image acquisition unit; the image acquisition unit is used for acquiring real scene images in real time in the moving process of the display screen.
It can be seen that real-time acquisition of images of a real scene can be achieved through the display device.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: a real scene identification module, a determining module, an obtaining module, and a display module. The real scene identification module is configured to acquire a real scene image and identify the real scene image to obtain attribute information of at least one real object in the real scene image; the determining module is configured to determine, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object; the obtaining module is configured to obtain virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is displayed; and the display module is configured to display, on a display device, the real object in the real scene image and the augmented reality effect of the virtual tag corresponding to the real object by using the virtual tag data.
In an embodiment, the display module is specifically configured to determine a display position of the virtual tag in the real scene image according to the attribute information of the real object; rendering by using the virtual tag data at the display position of the virtual tag, and displaying the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device.
In one embodiment, the attribute information of the real object includes at least one of: position information of the real object in the real scene image; identification information of the real object; picture proportion information of the real object in the real scene image; picture proportion information of the real object on the display device; display parameters of the real object.
In one embodiment, the attribute information of the real object includes position information of the real object in the real scene image; the determining module is specifically configured to determine the display result of the virtual tag as the virtual tag for displaying the real object when the position information of the real object in the real scene image is within a specific area range.
In one embodiment, the attribute information of the real object includes picture proportion information of the real object in the real scene image or picture proportion information of the real object on the display device; the determining module is specifically configured to determine the display result of the virtual tag as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold.
In one embodiment, the attribute information of the real object includes identification information of the real object; the determining module is specifically configured to determine, when the identification information of the real object satisfies a specific condition, a display result of the virtual tag as a virtual tag on which the real object is displayed.
In an embodiment, the obtaining module is specifically configured to obtain virtual tag data corresponding to the real object, where the obtaining module includes at least one of: acquiring the set rendering parameters of the virtual label; determining rendering parameters of the virtual tag according to the display parameters of the real object; and determining rendering parameters of the virtual label according to the display position of the virtual label.
In an embodiment, in a case that the display parameter of the real object includes a display color of the real object, and the rendering parameter of the virtual tag includes a background color of the virtual tag, the obtaining module is specifically configured to determine, according to the display color of the real object and a first specific rule, a first background color set that the virtual tag can render; determining a background color of the virtual label from the first set of background colors.
In an embodiment, the rendering parameter of the virtual tag includes a background color of the virtual tag, and the obtaining module is specifically configured to determine an image color at a position where the virtual tag is located according to a display position of the virtual tag; determining a second background color set which can be rendered by the virtual label according to the image color at the position of the virtual label and a second specific rule; determining a background color of the virtual label from the second set of background colors.
In one embodiment, the content of the virtual tag includes text, the rendering parameter of the virtual tag includes a background color of the virtual tag, and the display module is specifically configured to determine a color of the text according to the background color of the virtual tag; rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display equipment, and simultaneously displaying the augmented reality effect of the virtual tag corresponding to the real object.
In one embodiment, the display device comprises a display screen which is movable on a preset slide rail and is provided with an image acquisition unit; the image acquisition unit is used for acquiring real scene images in real time in the moving process of the display screen.
In a third aspect, the disclosed embodiments provide an electronic device comprising a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to execute any one of the image processing methods when the computer program is executed.
In a fourth aspect, the disclosed embodiments also provide a computer storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method of any one of the above.
In the image processing method of the embodiment of the disclosure, whether to display the virtual tag of a real object identified in the real scene image (that is, which virtual tags are suitable for display) can be determined according to the attribute information of the real object. This not only reduces the display workload but also reduces the occlusion of other content that needs to be displayed, which is beneficial to displaying the augmented reality effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of another application scenario provided by the embodiment of the present disclosure;
FIG. 3 is a schematic display diagram of a label of a building provided by an embodiment of the disclosure;
fig. 4 is a flowchart of an image processing method provided in an embodiment of the present disclosure;
fig. 5 is a flowchart of another image processing method provided by the embodiment of the present disclosure;
fig. 6 is a flowchart of another image processing method provided by the embodiment of the present disclosure;
fig. 7 is a flowchart of still another image processing method provided by the embodiment of the disclosure;
fig. 8 is a flowchart of still another image processing method provided in the embodiment of the present disclosure;
fig. 9 is a flowchart of another image processing method provided by the embodiment of the present disclosure;
fig. 10 is a flowchart of still another image processing method provided by an embodiment of the present disclosure;
fig. 11 is a schematic diagram of the composition structure of an image processing apparatus provided by an embodiment of the present disclosure;
fig. 12 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure. In addition, the embodiments provided below are some embodiments for implementing the disclosure, not all embodiments for implementing the disclosure, and the technical solutions described in the embodiments of the disclosure may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, the phrase "comprising a/an ..." does not exclude the presence of other elements (e.g., steps in a method or units in a device, such as part of a circuit, a processor, a program, or software) in the method or device that includes the element.
The term "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, e.g., U and/or W, which may mean: u exists alone, U and W exist simultaneously, and W exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of U, W, V, and may mean including any one or more elements selected from the group consisting of U, W and V.
For example, the image processing method provided by the embodiments of the present disclosure includes a series of steps, but is not limited to the described steps; similarly, the image processing apparatus provided by the embodiments of the present disclosure includes a series of modules, but is not limited to the explicitly described modules and may also include modules required for acquiring relevant information or performing processing based on the information.
The embodiments of the present disclosure may be applied to an electronic device supporting augmented reality technology (such as a mobile phone, a tablet, or augmented reality glasses), to a server, or to a combination thereof. When an embodiment of the present disclosure is applied to a server, the server may be connected to other devices that have a communication function and a camera; the connection may be wired or wireless, and the wireless connection may be, for example, a Bluetooth connection or a Wireless Fidelity (Wi-Fi) connection.
In some embodiments of the present disclosure, the electronic device supporting the augmented reality technology may also be a relatively novel display device, that is, a display screen that moves on a slide rail and is provided with an image acquisition unit. When the display screen slides to a certain position, the real scene image acquired by the image acquisition unit in real time is processed, and finally the augmented reality effect of the real object and the virtual label corresponding to the real object in the real scene image is displayed on the display screen. The user can also trigger the relevant information on the augmented reality effect displayed by the display screen to acquire more detailed information or other relevant information.
In one implementation, the display screen may also be a display screen that rotates or moves in other manners, and during the movement of the display screen, the real scene image acquired in real time by the image acquisition unit is displayed in real time. The kind of display screen is not limited here; it may be a touch screen or a non-touch screen.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. As shown in fig. 1, a movable display screen 101 may be disposed in a building; in other embodiments, the movable display screen 101 may be disposed at the edge of the building or outside the building. The movable display screen 101 may be used to photograph the building and to display the building and tags related to the building. The building displayed by the movable display screen 101 may be the photographed building, a rendering model of the photographed building, or partly the photographed building and partly its rendering model. For example, when building H and building J are photographed, the movable display screen 101 may determine that the building model of building H is H' and the building model of building J is J', and may display building H together with building model J', or building model H' together with building J. The tag of a building can be at least one of building number information, company information, floor information, responsible person information, and the like.
Fig. 2 is a schematic diagram of another application scenario provided in the embodiment of the present disclosure, as shown in fig. 2, the display device in the embodiment of the present disclosure may further include a terminal device 201, and a user may hold or wear the terminal device 201 to enter between buildings and shoot the buildings to display at least one of the buildings, building models, and building labels on the terminal device 201.
A terminal device may refer to a terminal, an access terminal device, a subscriber unit, a subscriber station, a mobile station, a remote terminal device, a mobile device, a User Equipment (UE), a wireless communication device, a user agent, or user equipment. The terminal device may be a server, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a personal digital assistant, a portable media player, a smart speaker, a navigation device, a display device, a wearable device such as a smart bracelet, a Virtual Reality (VR) device, an augmented reality device, a pedometer, a digital television (TV), a desktop computer, or the like.
Fig. 3 is a schematic display diagram of a tag of a building according to an embodiment of the present disclosure. As shown in fig. 3, an image of a building and the tag corresponding to the building may be displayed on a display screen (a movable display screen or the display screen of a terminal device); the tag may point to the building, and related information of the building may be displayed in the tag. For example, in an embodiment of the present disclosure, the related information of the building may include the company name and a company introduction, e.g., the company name may be displayed as: XXXX group headquarters, with the introduction: registered capital: XXXXXX, annual business income over XXX, and XXX for 23 consecutive years. In other embodiments of the present disclosure, the related information of the building may further include at least one of a company identification, floor information of the building, the contact address of the person in charge of the building, and the like.
The related information of the building is displayed on the display screen, so that the user can directly know the information of the company where the building is located through the related information, the user can easily acquire the information of the building, and great convenience is provided for the user.
In an implementation, when at least two buildings are associated with each other, a single tag corresponding to the associated buildings can be displayed, pointing to each of the at least two buildings, so that a user can easily learn that the buildings are associated. Because the at least two buildings share one tag, the display screen stays simple and uncluttered, and the user can learn the information of the at least two buildings and read their related information conveniently through that single tag.
In one embodiment, the display style of the tag may also be constrained; for example, the display style of the tag may be made consistent with the style of the building it matches, or with the style of all buildings currently displayed on the display screen, where a consistent style may mean the same or similar display colors. For example, when the color of the building or building model corresponding to the tag is dark blue, the tag may also be displayed in dark blue. In another implementation, the display style of the tag may be inconsistent with the style of the matched building or of all buildings displayed on the screen; for example, when the color of the building or building model corresponding to the tag is dark blue, the tag may be displayed in yellow, white, etc.
Constraining the display style of the tag in this way gives the display screen a unified appearance and improves the user's visual comfort.
In an embodiment, when a currently scanned real scene is displayed on a display screen, virtual tags associated with the scanned real objects may be displayed in an overlapping manner. If virtual tags corresponding to all scanned real objects must be displayed, virtual tags are also shown for real objects that do not need to be identified (e.g., dogs, pedestrians), which not only increases the display workload but may also occlude other parts that need to be displayed, and is unfavorable for displaying the augmented reality scene.
In order to solve the above technical problem, some embodiments of the present disclosure provide an image processing method. The embodiments of the present disclosure may be applied to any image processing scene, for example, a scene such as the augmented reality (AR) effect display of a display screen.
Fig. 4 is a flowchart of an image processing method provided in an embodiment of the present disclosure, and as shown in fig. 4, the flowchart may include:
step 401: and acquiring a real scene image.
Here, the real scene may be a building indoor scene, a street scene, a specific object, or the like, which can be superimposed with a virtual object, and by superimposing the virtual object in the real scene, an augmented reality effect may be presented in the augmented reality device.
In the embodiment of the present disclosure, the manner of acquiring the real scene image may be to scan the real scene in real time through the display device to obtain an image of a Red, Green, Blue (RGB) color mode or other color modes of the real scene.
In one example, the display device may include a display screen movable on a preset slide rail and provided with an image capture unit; the image acquisition unit is used for acquiring real scene images in real time in the moving process of the display screen.
It can be seen that real-time acquisition of images of a real scene can be achieved through the display device.
Step 402: and identifying the real scene image to obtain the attribute information of at least one real object in the real scene image.
In one example, the attribute information of the real object includes at least one of: position information of the real object in the real scene image; identification information of the real object; picture proportion information of the real object in the real scene image; picture proportion information of the real object on the display device; display parameters of the real object.
The position information of the real object in the real scene image may be, for example, coordinate position information of the real object in the real scene image. For example, the position information of building P in the real scene image may be the abscissa and ordinate values of building P in a coordinate system whose origin is the lower left corner of the real scene image, i.e., (x, y), where x represents the horizontal distance of the position of building P in the real scene image from the origin and y represents the vertical distance of that position from the origin.
The identification information of the real object may be, for example, the name of the real object. For example, the identification information of building P may be "rose building", "star hotel", or the like.
The picture proportion information of the real object in the real scene image may be, for example, the ratio of the area of the real object in the real scene image to the area of the real scene image: if the area of the real scene image is S1 and the area of real object F in the real scene image is S2, the picture proportion information of the real object in the real scene image may be S2/S1.
The picture proportion information of the real object on the display device may be, for example, the ratio of the area of the real object in the real scene image to the area of the display screen of the display device: if the area of the display screen of the display device is S3, the picture proportion information of the real object on the display device may be S2/S3.
As for the display parameter of the real object, illustratively, it may be a display color of the real object, for example, a display color of a building, a display color of a dog, a display color of a tree, or the like.
In an embodiment, identifying the real scene image to obtain the attribute information of at least one real object in it may mean recognizing the real objects in the real scene image and obtaining the attribute information of all the real objects by analyzing them.
It can be seen that, from the attribute information of a real object, the position information of the real object in the real scene image, the identification information of the real object, the picture proportion information of the real object in the real scene image, and the picture proportion information of the real object on the display device can be determined, which in turn facilitates determining the display position of the virtual tag, whether the virtual tag is displayed, and the subsequent rendering data of the virtual tag.
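To make the picture proportion computations above concrete, here is a minimal Python sketch; the helper name picture_proportion and the pixel figures are illustrative assumptions, not values from the disclosure.

```python
def picture_proportion(object_area: float, reference_area: float) -> float:
    # S2/S1 when reference_area is the area of the real scene image,
    # S2/S3 when it is the area of the display screen.
    return object_area / reference_area

# Example: real object F covers 38_400 px^2 of a 1280x720 frame.
print(round(picture_proportion(38_400, 1280 * 720), 4))  # 0.0417
```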
Step 403: Determining, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object.
In an example, whether the virtual tag of each real object needs to be displayed may be determined from the attribute information of each real object according to a preset determination rule. For example, for real objects such as people, animals, trees, and stools, a person can tell directly from the real scene image that the object is a person, an animal, a tree, or a stool without a tag, so virtual tags for such real objects are not displayed. For buildings A, B, and C, however, people cannot determine their names and uses directly from the real scene image, so the virtual tags of buildings A, B, and C need to be displayed to clarify their names and uses.
Step 404: Acquiring virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is to be displayed.
Here, the virtual tag data may be a rendering parameter of the virtual tag, for example, the virtual tag data may include a background color of the virtual tag, and in the case where the content of the virtual tag includes text, the rendering parameter of the virtual tag further includes a color of the text.
As for obtaining the virtual tag data corresponding to the real object when the display result indicates that the virtual tag is to be displayed: for example, when the virtual tag of the real object needs to be displayed, the rendering parameters of the virtual tag may be obtained from preset virtual tag data, or the virtual tag data may be determined according to the display color of the real object or the display position of the virtual tag.
Step 405: Displaying, on a display device, the real object in the real scene image and the augmented reality effect of the virtual tag corresponding to the real object by using the virtual tag data.
In one possible implementation, the virtual tag may be rendered by using a background color and a text color of the virtual tag, and an augmented reality effect of the virtual tag corresponding to the real object is displayed while the real object in the real scene image is displayed on the display device.
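As a rough, non-authoritative illustration of how steps 401 to 405 fit together, the Python sketch below hard-codes the recognition result of step 402 and uses one possible rule (object type plus picture proportion) for step 403; the names RealObject and decide_display are hypothetical stand-ins, not functions defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class RealObject:
    name: str              # identification information
    kind: str              # recognized type, e.g. "building" or "tree"
    position: tuple        # (x, y) position in the real scene image
    image_ratio: float     # picture proportion in the real scene image

def decide_display(obj: RealObject, ratio_threshold: float = 0.05) -> bool:
    # Step 403: one possible rule combining identification information
    # (only buildings get tags) with picture proportion information.
    return obj.kind == "building" and obj.image_ratio >= ratio_threshold

# Step 402 would normally come from a recognition model; hard-coded here.
scene_objects = [
    RealObject("Building P", "building", (640, 360), 0.18),
    RealObject("tree", "tree", (100, 60), 0.02),
]
for obj in scene_objects:
    if decide_display(obj):
        # Steps 404-405: fetch tag data and render the AR effect.
        print(f"display virtual tag for {obj.name}")
```

In a real system the recognition result would come from a model, and steps 404 and 405 would build the rendering parameters and draw the tag, as described in the following embodiments.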
In practical applications, steps 401 to 405 may be implemented based on a processor of a cloud platform server or of an augmented reality device together with the display of the augmented reality device, where the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
In an embodiment, the augmented reality device may send the acquired real scene image to the cloud platform server, and the cloud platform server executes steps 401 to 405 to determine whether a display result of a virtual tag of a real object and virtual tag data need to be displayed, and then may send the display result and the virtual tag data to the augmented reality device, and then display augmented reality data including the virtual tag through the augmented reality device. In another embodiment, after the augmented reality device acquires the image of the real scene, steps 401 to 405 may be performed to obtain a display result and virtual tag data of whether the virtual tag of the real object needs to be displayed, and then the augmented reality data including the virtual tag may be directly displayed by the augmented reality device.
It can be seen that, in the image processing method of the embodiment of the present disclosure, whether to display the virtual tag of a real object identified in the real scene image (that is, which virtual tags are suitable for display) can be determined according to the attribute information of the real object, which not only reduces the display workload but also reduces the occlusion of other content that needs to be displayed, and is beneficial to displaying the augmented reality effect.
Fig. 5 is a flowchart of another image processing method provided in the embodiment of the present disclosure, and as shown in fig. 5, the flowchart may include:
step 501: and acquiring a real scene image.
Step 502: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes position information of the real object in the real scene image.
Step 503: determining the display result of the virtual tag as a virtual tag displaying the real object in the case that the position information of the real object in the real scene image is within a specific area range.
In one example, the specific area range may be an area range preset by the user according to a requirement, for example, the specific area range may be a middle area of the real scene image, or may be an area other than the middle area of the real scene image. Here, the shape of the specific area range is not limited, and may be a square, a rectangle, or a circle, and the user may set the shape as desired; meanwhile, the area value of the specific area range is not limited, but the area value of the specific area range is smaller than that of the real scene image.
As an embodiment, consider a real object tree whose picture proportion information in the real scene image is very small, or which belongs to a type of real object for which a virtual tag would not normally be displayed: because the tree is located within the specific area range in the real scene image, the display result of the virtual tag is determined as displaying the virtual tag of the tree.
In one embodiment, whether to display the virtual tag of a real object located within the specific area range may be further determined according to the picture proportion information of the real object in the real scene image. For example, if only one side of building P lies within the specific area range, that is, the picture proportion of building P in the real scene image is smaller than a preset area threshold, the virtual tag of building P is not displayed even though building P is within the specific area range.
In one example, the display result of the virtual tag may be determined as not displaying the virtual tag of the real object when the position information of the real object in the real scene image is not within the specific area range.
By setting a specific area range, the virtual tags of real objects within that range can be displayed, so the virtual tags of real objects in the specific area range can be highlighted according to user requirements, which helps improve user experience.
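A minimal sketch of the area-range test of step 503, assuming a rectangular specific area range (the disclosure does not limit the shape); the region coordinates below are arbitrary examples.

```python
def in_specific_area(position, area):
    # position: (x, y) of the real object in the real scene image;
    # area: (x_min, y_min, x_max, y_max) rectangle chosen by the user.
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

# Middle region of a 1280x720 real scene image as the specific area range.
middle = (320, 180, 960, 540)
print(in_specific_area((640, 360), middle))  # True  -> display the tag
print(in_specific_area((100, 620), middle))  # False -> do not display it
```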
Step 504: Acquiring virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is to be displayed.
Step 505: Determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
In one example, the display position of the virtual tag is determined in the real scene image according to the attribute information of the real object: it may be determined from the position information of the real object in the real scene image alone, or jointly from that position information and the picture proportion information of the real object in the real scene image. For example, for a real object F located at the center of the real scene image, a set of candidate display positions for its virtual tag may be determined from its position information; the display position may be, e.g., at the upper right, lower right, above, or below real object F. The picture proportion information of real object F in the real scene image may also be considered: when the picture proportion of real object F is large, the position most favorable to the display effect, i.e., the display position of the optimal virtual tag (for example, the upper right of real object F), may be selected from the set of candidate positions.
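As one hedged reading of step 505, the sketch below prefers the upper right of the object and falls back to the upper left when the tag would leave the frame; the fixed tag width and offset are assumptions.

```python
def tag_position(obj_box, frame_size, tag_width=200, offset=10):
    # obj_box: (x_min, y_min, x_max, y_max) of the real object;
    # prefer the upper right of the object, else fall back to upper left.
    x_min, y_min, x_max, y_max = obj_box
    frame_w, frame_h = frame_size
    if x_max + offset + tag_width <= frame_w:
        return (x_max + offset, y_min)                   # upper right
    return (max(0, x_min - offset - tag_width), y_min)   # upper left

print(tag_position((400, 100, 700, 500), (1280, 720)))  # (710, 100)
```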
Step 506: Rendering with the virtual tag data at the display position of the virtual tag, and displaying the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device.
In some possible embodiments, rendering may be performed at the display position of the optimal virtual tag by using the determined background color data and text color data of the virtual tag, displaying the real object in the real scene image and the virtual tag corresponding to the real object on the display device simultaneously, so as to achieve the augmented reality effect.
In this way, the display position of the virtual tag of the real object is determined according to the attribute information of the real object, so the virtual tag is displayed more reasonably, the augmented reality effect is better, and user requirements are better met.
Fig. 6 is a flowchart of another image processing method provided in the embodiment of the present disclosure, and as shown in fig. 6, the flowchart may include:
step 601: and acquiring a real scene image.
Step 602: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes picture proportion information of the real object in the real scene image or picture proportion information of the real object on the display device.
Step 603: Determining the display result of the virtual tag as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold.
In one embodiment, the specific threshold may be a percentage threshold set according to user requirements, for example, the specific threshold may be 10% or 5%.
Determining the display result of the virtual tag as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold may mean, for example, displaying the virtual tag of the real object when the picture proportion of the real object in the real scene image is greater than or equal to a first threshold of 5%, or when the picture proportion of the real object on the display device is greater than or equal to a second threshold of 10% (or, of course, 5%).
In one example, the display result of the virtual tag may be determined as not displaying the virtual tag of the real object when the picture proportion information is less than the specific threshold.
It can be seen that determining whether to display the virtual tag of a real object through picture proportion information helps to determine more accurately whether it is necessary to display the virtual tag of the real object.
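A sketch of the threshold test of step 603 using the example thresholds above; treating the two kinds of picture proportion information independently is an assumption.

```python
def display_by_proportion(image_ratio=None, screen_ratio=None,
                          image_threshold=0.05, screen_threshold=0.10):
    # Show the tag when whichever proportion is available meets its
    # threshold (5% of the image or 10% of the screen, as in the text).
    if image_ratio is not None and image_ratio >= image_threshold:
        return True
    if screen_ratio is not None and screen_ratio >= screen_threshold:
        return True
    return False

print(display_by_proportion(image_ratio=0.08))   # True
print(display_by_proportion(screen_ratio=0.04))  # False
```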
Step 604: Acquiring the set rendering parameters of the virtual tag when the display result indicates that the virtual tag of the real object is to be displayed.
In an example, the set rendering parameter of the virtual tag is obtained, which may be a fixed virtual tag rendering parameter preset by a user, and all virtual tags to be displayed are rendered according to the preset rendering parameter. For example, the rendering parameters for all virtual tags may be set such that the background color is white and the text color is black.
It can be seen that the virtual tags with the same background color and text color can be obtained through the preset rendering parameters of the virtual tags.
Step 605: Determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
Step 606: When the content of the virtual tag includes text and the rendering parameters of the virtual tag include the background color of the virtual tag, determining the color of the text according to the background color of the virtual tag.
In one example, determining the color of the text according to the background color of the virtual tag may mean determining a set of colors with relatively high contrast against the background color and selecting one of them as the text color. For example, if the background color of the virtual tag is white, the determined color set may include black, blue, and red; the color with the highest contrast in the set, i.e., the optimal text color, may be selected as the text color, or one color, e.g., black, may be chosen arbitrarily.
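One way to realize "the color with the highest contrast" is to compare relative luminance, as sketched below; the ITU-R BT.709 weights and the candidate palette are assumptions, not specified by the disclosure.

```python
def luminance(rgb):
    # Relative luminance with ITU-R BT.709 weights, in [0, 1].
    r, g, b = (c / 255 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def best_text_color(background,
                    candidates=((0, 0, 0), (0, 0, 255), (255, 0, 0))):
    # Pick the candidate whose luminance differs most from the
    # background, i.e. the highest-contrast ("optimal") text color.
    bg = luminance(background)
    return max(candidates, key=lambda c: abs(luminance(c) - bg))

print(best_text_color((255, 255, 255)))  # (0, 0, 0): black on white
```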
Step 607: Rendering according to the background color of the virtual tag and the color of the text, and displaying the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device.
In some possible embodiments, the rendering may be performed according to an optimal background color and an optimal text color of the virtual tag, and the real object in the real scene image and the virtual tag corresponding to the real object are displayed on the display device, so as to achieve an effect of augmented reality.
Therefore, the optimal text color in the virtual tag is determined according to the background color of the virtual tag, and rendering is performed according to that background color and the optimal text color, so the resulting virtual tag has a better display effect and a better augmented reality effect, and can better meet user requirements.
Fig. 7 is a flowchart of still another image processing method according to an embodiment of the present disclosure, and as shown in fig. 7, the flowchart may include:
step 701: and acquiring a real scene image.
Step 702: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes identification information of the real object.
Step 703: Determining the display result of the virtual tag as displaying the virtual tag of the real object when the identification information of the real object satisfies a specific condition.
Here, the specific condition may be that the type of the real object is a building class, for example.
In one possible embodiment, it may be determined whether the real object F is of a specific type according to the name of the real object F, for example, whether the real object F is a building, and if so, a virtual tag of the real object F is displayed.
In one embodiment, the display result of the virtual tag may be further determined as not displaying the virtual tag of the real object in a case where the identification information of the real object does not satisfy a specific condition.
Therefore, determining whether to display the virtual tag of a real object through its identification information helps to exclude real objects that do not need to be displayed and to reduce the display workload.
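A minimal sketch of the specific condition of step 703, taking "the type of the real object is a building" as the condition; the type lookup table is an illustrative assumption.

```python
TAGGED_TYPES = {"building"}  # the specific condition: object type

def satisfies_condition(name: str, type_of: dict) -> bool:
    # type_of maps identification information (a name) to a recognized
    # type; only objects of a tagged type get a virtual tag.
    return type_of.get(name) in TAGGED_TYPES

type_of = {"Building P": "building", "tree": "tree", "dog": "animal"}
print(satisfies_condition("Building P", type_of))  # True  -> show tag
print(satisfies_condition("dog", type_of))         # False -> no tag
```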
Step 704: When the display result indicates that the virtual tag of the real object is to be displayed, determining the rendering parameters of the virtual tag according to the display parameters of the real object.
Here, the display parameter of the real object may be the display color of the real object in the real scene image. For example, the display parameter of building P may be its display color in the real scene image, e.g., gray or light blue.
In one embodiment, in a case where a virtual tag of a real object needs to be displayed, a background color and a text color of the virtual tag may be determined according to a display color of the real object in an image of a real scene. For example, the background color of the virtual tag may be determined to be white and the text color may be determined to be black according to the display color of the real object building P, which is gray.
Therefore, the rendering parameters of the virtual tag are determined according to the display parameters of the real object, so that the virtual tag obtained from these rendering parameters has a better display effect and is easier for a user to identify.
Step 705: determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
Step 706: in a case where the content of the virtual tag comprises text and the rendering parameters of the virtual tag comprise the background color of the virtual tag, determining the color of the text according to the background color of the virtual tag.
Step 707: rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
Fig. 8 is a flowchart of a further image processing method provided in an embodiment of the present disclosure, and as shown in fig. 8, the flowchart may include:
Step 801: acquiring a real scene image.
Step 802: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes identification information of the real object.
Step 803: determining the display result to be that the virtual tag of the real object is displayed, in a case where the identification information of the real object satisfies a specific condition.
Step 804: in a case where the display result is that the virtual tag of the real object is displayed, determining the rendering parameters of the virtual tag according to the display position of the virtual tag.
In one embodiment, in a case where a virtual tag of a real object needs to be displayed, an image color at a display position of the virtual tag may be determined according to the display position of the virtual tag, and a background color and a text color of the virtual tag may be determined according to the image color at the display position of the virtual tag. For example, if the display position of the virtual tag is located in the sky, the background color and text color of the virtual tag may be determined according to the color of the sky.
Therefore, the display effect of the virtual label obtained according to the rendering parameters is better, and the user can identify the virtual label more easily.
Step 805: determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
Step 806: in a case where the content of the virtual tag comprises text and the rendering parameters of the virtual tag comprise the background color of the virtual tag, determining the color of the text according to the background color of the virtual tag.
Step 807: rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
Fig. 9 is a flowchart of another image processing method provided in an embodiment of the present disclosure, and as shown in fig. 9, the flowchart may include:
Step 901: acquiring a real scene image.
Step 902: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes identification information of the real object and display parameters of the real object.
Step 903: determining the display result to be that the virtual tag of the real object is displayed, in a case where the identification information of the real object satisfies a specific condition.
Step 904: in a case where the display result is that the virtual tag of the real object is displayed, the display parameters of the real object include the display color of the real object, and the rendering parameters of the virtual tag include the background color of the virtual tag, determining, according to the display color of the real object and a first specific rule, a first background color set that the virtual tag can render.
Here, the first specific rule may be a correspondence between the display color of the real object and the background color of the virtual tag, where the contrast between the two needs to be greater than a preset first contrast threshold; for example, the preset threshold may be 100:1.
In one example, one display color of the real object may correspond to the background colors of a plurality of virtual tags, as long as each contrast exceeds the first contrast threshold. For example, if the display color of the real object is light blue, the background colors of the corresponding virtual tag may be determined to include white and light purple.
In one embodiment, the background colors of a plurality of virtual tags corresponding to the display color of the real object may be determined according to the corresponding relationship between the display color of the real object and the background color of the virtual tags, and the corresponding background colors of the plurality of virtual tags may be determined as a first background color set renderable by the virtual tags. For example, in a case where the display color of the real object is white, it may be determined that the first background color set includes: black, blue and red.
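The following sketch shows how such a first background color set could be computed: every palette color whose contrast with the real object's display color exceeds the threshold is kept. The palette is an assumption, and since the WCAG-style ratio used here cannot exceed 21:1, an illustrative threshold of 4.5:1 stands in for the 100:1 example above.

```python
# Illustrative first specific rule: keep every palette color whose contrast
# with the real object's display color exceeds the threshold. The WCAG-style
# ratio below cannot exceed 21:1, so 4.5:1 stands in for the 100:1 example.

def _luminance(rgb):
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    hi, lo = sorted((_luminance(c1), _luminance(c2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def first_background_color_set(display_color, palette, threshold=4.5):
    """First background color set renderable by the virtual tag."""
    return [c for c in palette if contrast_ratio(display_color, c) > threshold]

palette = [(0, 0, 0), (0, 0, 255), (255, 0, 0), (255, 255, 255)]
print(first_background_color_set((255, 255, 255), palette))
# [(0, 0, 0), (0, 0, 255)]: red (about 4.0:1) falls just below the threshold
```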
Step 905: determining a background color of the virtual label from the first set of background colors.
As for the implementation of determining the background color of the virtual tag from the first background color set, for example, the color forming the greatest contrast with the display color of the real object may be selected from the first background color set as the background color of the virtual tag; alternatively, one color may be selected arbitrarily from the set.
It can be seen that the background color of the virtual tag determined through the first specific rule forms, with the display color of the real object, a contrast meeting the preset contrast threshold requirement, so that the virtual tag obtained from this background color has a better display effect and is easier for the user to identify.
Step 906: determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
Step 907: in a case where the content of the virtual tag comprises text and the rendering parameters of the virtual tag comprise the background color of the virtual tag, determining the color of the text according to the background color of the virtual tag.
Step 908: rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
Fig. 10 is a flowchart of still another image processing method according to an embodiment of the present disclosure, and as shown in fig. 10, the flowchart may include:
Step 1001: acquiring a real scene image.
Step 1002: identifying the real scene image to obtain attribute information of at least one real object in the real scene image; wherein the attribute information of the real object includes identification information of the real object and display parameters of the real object.
Step 1003: determining the display result to be that the virtual tag of the real object is displayed, in a case where the identification information of the real object satisfies a specific condition.
Step 1004: in a case where the display result is that the virtual tag of the real object is displayed and the rendering parameters of the virtual tag include the background color of the virtual tag, determining the image color at the position of the virtual tag according to the display position of the virtual tag.
In an embodiment, the image at the display position of the virtual tag in the real scene image may be determined according to the display position of the virtual tag, and the corresponding image color may then be determined. For example, if the position information of the virtual tag of the building P indicates that the tag is located in the sky region of the real scene image, the image color at the position of that virtual tag may be determined to be the color of the sky.
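A minimal sketch of this determination, assuming the real scene image is an RGB NumPy array and taking the mean color of the region covered by the tag as the image color at its position (the box coordinates are illustrative):

```python
# Sketch of step 1004: take the mean color of the region at the tag's
# display position as "the image color at the position of the virtual tag".
# The RGB-array representation and the box coordinates are assumptions.

import numpy as np

def image_color_at(scene, box):
    """Mean RGB color inside box = (x0, y0, x1, y1) of an HxWx3 array."""
    x0, y0, x1, y1 = box
    region = scene[y0:y1, x0:x1]
    return tuple(int(v) for v in region.reshape(-1, 3).mean(axis=0))

# A synthetic sky-blue frame stands in for the captured real scene image.
scene = np.full((480, 640, 3), (135, 206, 235), dtype=np.uint8)
print(image_color_at(scene, (100, 20, 300, 80)))  # (135, 206, 235)
```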
Step 1005: determining, according to the image color at the position of the virtual tag and a second specific rule, a second background color set that the virtual tag can render.
Here, the second specific rule may be a correspondence between the image color at the position of the virtual tag and the background color of the virtual tag, where the contrast between the two needs to be greater than a preset second contrast threshold; for example, the preset second contrast threshold may be 150:1. The second contrast threshold may be the same as or different from the first contrast threshold, and the user may set it as required.
In one example, the image color at the position of the virtual tag may correspond to the background colors of a plurality of virtual tags, as long as each contrast exceeds the second contrast threshold. For example, if the image color at the position of the virtual tag is light blue, the background colors of the corresponding virtual tag may be determined to include white and light purple.
In an embodiment, the background colors of a plurality of virtual labels corresponding to the image color at the position of the virtual label may be determined according to the corresponding relationship between the image color at the position of the virtual label and the background color of the virtual label, and the corresponding background colors of the plurality of virtual labels may be determined as a second background color set that the virtual label can render. For example, for a case where the color of the image at the position of the virtual label is white, it may be determined that the second background color set includes: black, blue and red.
Step 1006: determining a background color of the virtual label from the second set of background colors.
As for the implementation of determining the background color of the virtual tag from the second background color set, for example, the color forming the greatest contrast with the image color at the position of the virtual tag may be selected from the second background color set as the background color of the virtual tag; alternatively, one color may be selected arbitrarily from the set.
It can be seen that the contrast formed by the image color at the position of the virtual tag and the background color of the virtual tag determined by the second specific rule can meet the requirement of the preset contrast threshold, so that the display effect of the virtual tag obtained according to the background color of the virtual tag is better, and the user can recognize the virtual tag more easily.
Step 1007: determining the display position of the virtual tag in the real scene image according to the attribute information of the real object.
Step 1008: in a case where the content of the virtual tag comprises text and the rendering parameters of the virtual tag comprise the background color of the virtual tag, determining the color of the text according to the background color of the virtual tag.
Step 1009: rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
Fig. 11 is a schematic diagram of the composition structure of an image processing apparatus provided in an embodiment of the present disclosure. As shown in Fig. 11, the apparatus may include a real scene recognition module 1101, a determination module 1102, an acquisition module 1103 and a display module 1104, wherein,
the real scene recognition module 1101 is configured to obtain a real scene image; identifying the real scene image to obtain attribute information of at least one real object in the real scene image;
the determining module 1102 is configured to determine, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object;
the obtaining module 1103 is configured to obtain virtual tag data corresponding to the real object when the display result is that the virtual tag of the real object is displayed;
the display module 1104 is configured to display, on a display device, an augmented reality effect of the real object in the real scene image and the virtual tag corresponding to the real object by using the virtual tag data.
In an embodiment, the display module 1104 is specifically configured to determine the display position of the virtual tag in the real scene image according to the attribute information of the real object, perform rendering at the display position of the virtual tag by using the virtual tag data, and display the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device.
In one embodiment, the attribute information of the real object includes at least one of:
position information of the real object in the real scene image;
identification information of the real object;
picture proportion information of the real object in the real scene image;
picture proportion information of the real object on the display device;
display parameters of the real object.
In one embodiment, the attribute information of the real object includes position information of the real object in the real scene image;
the determining module 1102 is specifically configured to determine the display result as displaying the virtual tag of the real object in a case where the position information of the real object in the real scene image is within a specific area range.
In one embodiment, in a case where the attribute information of the real object includes picture proportion information of the real object in the real scene image or picture proportion information of the real object on the display device, the determining module 1102 is specifically configured to determine the display result as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold.
In one embodiment, the attribute information of the real object includes identification information of the real object; the determining module 1102 is specifically configured to determine the display result as displaying the virtual tag of the real object when the identification information of the real object meets a specific condition.
In an embodiment, the obtaining module 1103 is specifically configured to obtain the virtual tag data corresponding to the real object in at least one of the following manners:
acquiring the set rendering parameters of the virtual label;
determining rendering parameters of the virtual tag according to the display parameters of the real object;
and determining rendering parameters of the virtual label according to the display position of the virtual label.
In an embodiment, in a case that the display parameter of the real object includes a display color of the real object, and the rendering parameter of the virtual tag includes a background color of the virtual tag, the obtaining module 1103 is specifically configured to determine, according to the display color of the real object and a first specific rule, a first background color set that the virtual tag can render; determining a background color of the virtual label from the first set of background colors.
In an embodiment, the rendering parameter of the virtual tag includes a background color of the virtual tag, and the obtaining module 1103 is specifically configured to determine an image color at a position where the virtual tag is located according to a display position of the virtual tag; determining a second background color set which can be rendered by the virtual label according to the image color at the position of the virtual label and a second specific rule; determining a background color of the virtual label from the second set of background colors.
In an embodiment, the content of the virtual tag includes text and the rendering parameters of the virtual tag include the background color of the virtual tag; the display module 1104 is specifically configured to determine the color of the text according to the background color of the virtual tag, perform rendering according to the background color of the virtual tag and the color of the text, and display the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
In one embodiment, the display device comprises a display screen which is movable on a preset slide rail and is provided with an image acquisition unit; the image acquisition unit is used for acquiring real scene images in real time in the moving process of the display screen.
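By way of illustration only, the sketch below wires the four modules of Fig. 11 into a simple pipeline; the RealObject fields, the stand-in detector output and the print-based display are assumptions made for the example, not the disclosed implementation.

```python
# A sketch of how the four modules of Fig. 11 might be wired together.
# RealObject fields, the stand-in detector and the print-based display
# are assumptions for illustration, not the disclosed implementation.

from dataclasses import dataclass

@dataclass
class RealObject:
    name: str
    type: str
    position: tuple       # (x0, y0, x1, y1) in the real scene image
    display_color: tuple  # display parameter: mean RGB of the object

class ImageProcessingPipeline:
    def recognize(self, scene):
        """Real scene recognition module 1101 (stand-in for a detector)."""
        return [RealObject("Building P", "building", (100, 50, 300, 400),
                           (128, 128, 128))]

    def decide(self, obj):
        """Determining module 1102: label buildings only, as in step 703."""
        return obj.type == "building"

    def acquire(self, obj):
        """Obtaining module 1103: rendering parameters from display parameters."""
        return {"bg": "white", "text_color": "black", "text": obj.name}

    def display(self, scene, obj, tag):
        """Display module 1104: render the tag above the object (sketch)."""
        x0, y0, _, _ = obj.position
        print(f"render {tag['text']!r} at ({x0}, {y0 - 20}) on bg {tag['bg']}")

    def run(self, scene):
        for obj in self.recognize(scene):
            if self.decide(obj):
                self.display(scene, obj, self.acquire(obj))

ImageProcessingPipeline().run(scene=None)  # one simulated render call
```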
In practical applications, the real scene recognition module 1101, the determination module 1102, the obtaining module 1103 and the display module 1104 may be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller and a microprocessor.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such an understanding, the technical solution of this embodiment, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disc.
Specifically, the computer program instructions corresponding to an image processing method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash disk, and when the computer program instructions corresponding to an image processing method in the storage medium are read and executed by an electronic device, any one of the image processing methods of the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiments, Fig. 12 shows an electronic device provided by an embodiment of the present disclosure, which may include a memory 1201 and a processor 1202; wherein,
the memory 1201 is used for storing computer programs and data;
the processor 1202 is configured to execute the computer program stored in the memory to implement any one of the image processing methods of the foregoing embodiments.
In practical applications, the memory 1201 may be a volatile memory (RAM); or a non-volatile memory (non-volatile memory) such as a ROM, a flash memory (flash memory), a Hard Disk (Hard Disk Drive, HDD) or a Solid-State Drive (SSD); or a combination of the above types of memories and provides instructions and data to the processor 1202.
The processor 1202 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller and a microprocessor. It can be understood that, for different augmented reality cloud platforms, the electronic device implementing the above processor functions may also be another device, which is not specifically limited in the embodiments of the present disclosure.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the description of those method embodiments, which is not repeated here for brevity.
The foregoing description of the various embodiments tends to emphasize the differences between them; for the same or similar parts, the embodiments may be referred to each other, and these are not repeated here for brevity.
The methods disclosed in the method embodiments provided by the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in the various product embodiments provided by the disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided by the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present disclosure.
While the embodiments of the present disclosure have been described in connection with the drawings, the present disclosure is not limited to the specific embodiments described above, which are intended to be illustrative rather than limiting; it will be apparent to those of ordinary skill in the art, in light of the present disclosure, that many modifications can be made without departing from the spirit of the disclosure and the scope of the appended claims.
Claims (14)
1. An image processing method, comprising:
acquiring a real scene image;
identifying the real scene image to obtain attribute information of at least one real object in the real scene image;
determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object;
acquiring virtual tag data corresponding to the real object under the condition that the display result shows the virtual tag of the real object;
and displaying the real object in the real scene image and the augmented reality effect of the virtual label corresponding to the real object on a display device by using the virtual label data.
2. The method of claim 1, wherein the using the virtual tag data to present an augmented reality effect of the real object and a virtual tag corresponding to the real object in the real scene image on a display device comprises:
determining the display position of the virtual tag in the real scene image according to the attribute information of the real object;
rendering by using the virtual tag data at the display position of the virtual tag, and displaying the augmented reality effect of the virtual tag corresponding to the real object while displaying the real object in the real scene image on the display device.
3. The method according to claim 1 or 2, wherein the attribute information of the real object comprises at least one of:
position information of the real object in the real scene image;
identification information of the real object;
picture proportion information of the real object in the real scene image;
picture proportion information of the real object on the display device;
display parameters of the real object.
4. The method according to claim 3, wherein the attribute information of the real object includes position information of the real object in the real scene image;
the determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object comprises:
determining the display result as displaying the virtual tag of the real object in a case where the position information of the real object in the real scene image is within a specific area range.
5. The method according to claim 3, wherein, in a case where the attribute information of the real object comprises picture proportion information of the real object in the real scene image, or picture proportion information of the real object on the display device,
the determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object comprises:
determining the display result as displaying the virtual tag of the real object when the picture proportion information is greater than or equal to a specific threshold.
6. The method according to claim 3, wherein the attribute information of the real object includes identification information of the real object;
the determining, according to the attribute information of each real object, a display result indicating whether to display a virtual tag of the real object comprises:
determining the display result as displaying the virtual tag of the real object when the identification information of the real object satisfies a specific condition.
7. The method according to any one of claims 1 to 6, wherein the obtaining of the virtual tag data corresponding to the real object comprises at least one of:
acquiring the set rendering parameters of the virtual label;
determining rendering parameters of the virtual tag according to the display parameters of the real object;
and determining rendering parameters of the virtual label according to the display position of the virtual label.
8. The method according to claim 7, wherein in the case where the display parameter of the real object includes a display color of the real object and the rendering parameter of the virtual tag includes a background color of the virtual tag,
determining rendering parameters of the virtual tag according to the display parameters of the real object, including:
determining a first background color set which can be rendered by the virtual label according to the display color of the real object and a first specific rule;
determining a background color of the virtual label from the first set of background colors.
9. The method of claim 7, wherein the rendering parameters of the virtual tag include a background color of the virtual tag,
determining rendering parameters of the virtual tag according to the display position of the virtual tag comprises:
determining the image color of the position of the virtual label according to the display position of the virtual label;
determining a second background color set which can be rendered by the virtual label according to the image color at the position of the virtual label and a second specific rule;
determining a background color of the virtual label from the second set of background colors.
10. The method of any of claims 1 to 9, wherein the content of the virtual tag comprises text, the rendering parameters of the virtual tag comprise a background color of the virtual tag,
the rendering is performed at the display position of the virtual tag by using the virtual tag data, and the augmented reality effect of the virtual tag corresponding to the real object is displayed while the real object in the real scene image is displayed on the display device, including:
determining the color of the text according to the background color of the virtual label;
rendering according to the background color of the virtual tag and the color of the text, and displaying the real object in the real scene image on the display device while displaying the augmented reality effect of the virtual tag corresponding to the real object.
11. The method according to any one of claims 1 to 10, wherein the display device comprises a display screen movable on a preset slide rail and provided with an image acquisition unit;
the image acquisition unit is used for acquiring real scene images in real time in the moving process of the display screen.
12. An image processing apparatus, characterized in that the apparatus comprises: a real scene recognition module, a determination module, an acquisition module and a display module, wherein,
the real scene identification module is used for acquiring a real scene image; identifying the real scene image to obtain attribute information of at least one real object in the real scene image;
the determining module is used for determining, according to the attribute information of each real object, a display result indicating whether to display the virtual tag of the real object;
the obtaining module is configured to obtain virtual tag data corresponding to the real object when the display result indicates that the virtual tag of the real object is displayed;
the display module is configured to display, on a display device, the augmented reality effect of the real object in the real scene image and the virtual tag corresponding to the real object by using the virtual tag data.
13. An electronic device comprising a processor and a memory for storing a computer program operable on the processor; wherein,
the processor is configured to execute the image processing method according to any one of claims 1 to 11 when the computer program is executed.
14. A computer storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image processing method of any one of claims 1 to 11.