CN113674293A - Picture processing method and device, electronic equipment and computer readable medium - Google Patents
- Publication number: CN113674293A (application number CN202110961726.1A)
- Authority: CN (China)
- Prior art keywords: picture, texture map, region, map, sub
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/20032 — Special algorithmic details: filtering details; median filtering
- G06T2207/20221 — Special algorithmic details: image combination; image fusion; image merging
- G06T2207/30204 — Subject of image: marker
Abstract
The invention discloses a picture processing method and device, an electronic device and a computer-readable medium, relating to the technical field of automatic program design. The method comprises the following steps: acquiring a draft picture and taking it as a first texture map; extracting a region of interest from the first texture map and taking the extracted region as a second texture map; filling pixel values of a preset segmentation picture according to the second texture map to obtain a filled segmentation picture; and fusing the filled segmentation picture with the UV map to obtain a model map, then rendering and displaying the three-dimensional model corresponding to the draft picture according to the model map. Through these steps, the model map required for rendering and displaying the three-dimensional model can be generated automatically and in real time from the draft picture, so that the three-dimensional model corresponding to the draft picture is generated and displayed in real time, improving the user's sense of immersion in intelligence-developing activities.
Description
Technical Field
The present invention relates to the field of automatic programming technologies, and in particular, to a method and an apparatus for processing a picture, an electronic device, and a computer-readable medium.
Background
With the popularity of television and digital devices, traditional activities such as painting seem to have difficulty winning people's favor. As a result, people spend more and more time in the digital world of their devices, far from real-world activities.
Against this background, how to draw people back to real-world activities by means of augmented reality technology is a technical problem to be solved urgently.
Disclosure of Invention
In view of the above, the invention provides a picture processing method, a picture processing device, an electronic device and a computer-readable medium, which can automatically generate, in real time, the model map required for rendering and displaying a three-dimensional model from a draft picture, then generate and display the three-dimensional model corresponding to the draft picture in real time, improving the user's sense of immersion in intelligence-developing activities.
To achieve the above object, according to a first aspect of the present invention, a picture processing method is provided.
The picture processing method of the present invention comprises the following steps: acquiring a draft picture and taking it as a first texture map; extracting a region of interest from the first texture map and taking the extracted region as a second texture map; filling pixel values of a preset segmentation picture according to the second texture map to obtain a filled segmentation picture; and fusing the filled segmentation picture with the UV map to obtain a model map, then rendering and displaying the three-dimensional model corresponding to the draft picture according to the model map; wherein the segmentation picture is designed according to the UV map.
Optionally, extracting the region of interest from the first texture map includes: acquiring a marker image matched with the first texture map from a plurality of preset marker images, and extracting the region of interest from the first texture map based on the matched marker image.
Optionally, the extracting the region of interest from the first texture map includes: extracting a region of interest from the first texture map based on a SURF algorithm.
Optionally, the extracting the region of interest from the first texture map includes: and extracting a region of interest from the first texture map based on a contour detection connected region algorithm.
Optionally, filling pixel values of a preset segmentation picture according to the second texture map to obtain the filled segmentation picture includes: segmenting a plurality of sub-regions from the second texture map according to a preset mask picture; and filling the pixel values of the sub-regions of the second texture map into the corresponding sub-regions of the preset segmentation picture according to the sub-region coordinate mapping relation, so as to obtain the filled segmentation picture.
Optionally, segmenting a plurality of sub-regions from the second texture map according to a preset mask picture includes: acquiring a mask picture matched with the first texture map from a plurality of preset mask pictures, wherein the mask picture comprises a plurality of sub-regions; and, for each sub-region of the mask picture, obtaining the coordinate values of the sub-region and segmenting the sub-region with the same coordinate values from the second texture map according to those coordinate values.
Optionally, the method further comprises: pre-processing the first texture map prior to the extracting of the region of interest from the first texture map, the pre-processing comprising at least one of: filtering the first texture map, performing edge enhancement processing on the first texture map, performing equalization histogram processing on the first texture map, and adjusting the brightness and contrast of the first texture map.
Optionally, the filtering the first texture map includes: the first texture map is filtered based on a median filtering algorithm.
Optionally, the method further comprises: and converting the picture format of the first texture map before preprocessing the first texture map.
Optionally, acquiring the draft picture includes: after receiving a picture processing request, querying a picture storage system according to the draft-picture identifier carried by the request to obtain one or more draft pictures corresponding to that identifier; or, after receiving the picture processing request, acquiring one or more picture files from a video stream uploaded by the user, the video stream being obtained by the user terminal shooting a colored-drawing manuscript.
To achieve the above object, according to a second aspect of the present invention, there is provided a picture processing apparatus.
The picture processing apparatus of the present invention includes: an acquisition module for acquiring a draft picture and taking it as a first texture map; an extraction module for extracting a region of interest from the first texture map and taking the extracted region as a second texture map; a filling module for filling pixel values of a preset segmentation picture according to the second texture map to obtain a filled segmentation picture; and a fusion module for fusing the filled segmentation picture with the UV map to obtain a model map, then rendering and displaying the three-dimensional model corresponding to the draft picture according to the model map; wherein the segmentation picture is designed according to the UV map.
Optionally, the extraction module extracting the region of interest from the first texture map includes: acquiring a marker image matched with the first texture map from a plurality of preset marker images, and extracting the region of interest from the first texture map based on the matched marker image.
Optionally, the filling module filling pixel values of a preset segmentation picture according to the second texture map to obtain the filled segmentation picture includes: the filling module segments a plurality of sub-regions from the second texture map according to a preset mask picture, and fills the pixel values of each sub-region of the second texture map into the corresponding sub-region of the preset segmentation picture according to the sub-region coordinate mapping relation, so as to obtain the filled segmentation picture.
To achieve the above object, according to a third aspect of the present invention, there is provided an electronic apparatus.
The electronic device of the present invention includes: one or more processors; and storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the picture processing method of the present invention.
To achieve the above object, according to a fourth aspect of the present invention, there is provided a computer-readable medium.
The computer-readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements the picture processing method of the present invention.
One embodiment of the above invention has the following advantage or benefit: a draft picture is obtained and taken as a first texture map; a region of interest is extracted from the first texture map and taken as a second texture map; pixel values of a preset segmentation picture are filled according to the second texture map to obtain a filled segmentation picture; and the filled segmentation picture is fused with the UV map to obtain a model map, according to which the three-dimensional model corresponding to the draft picture is rendered and displayed. In this way, the model map required for rendering and displaying the three-dimensional model is generated automatically and in real time from the draft picture, the corresponding three-dimensional model is generated and displayed in real time, and the user's sense of immersion in intelligence-developing activities is improved.
Further effects of the above optional embodiments will be described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 2 is a schematic main flowchart of a picture processing method according to a first embodiment of the present invention;
FIG. 3 is a schematic main flowchart of a picture processing method according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of main blocks of a picture processing apparatus according to a third embodiment of the present invention;
FIG. 5 is a schematic block diagram of a computer system suitable for implementing the electronic device of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and technical features of the embodiments of the present invention may be combined with each other without affecting the implementation of the present invention.
With the popularity of television and digital devices, traditional activities such as painting seem to have difficulty winning people's favor, and people spend more and more time in the digital world, far from real-world activities. For example, a children's colored-drawing album is an educational tool for teaching children to recognize things, learn to paint, fill in colors, and exercise their imagination. Children use a paintbrush to fill colors into the preset outline areas of the album; this attracts their attention, familiarizes them with the paintbrush and with the shapes of different objects while they play with colors, lets them feel the fun that knowledge brings, and develops their interest in learning. Such an album can also capture a child's imagination and give the child an opportunity to express inner creativity and ideas. However, owing to the popularity of television and digital devices, these traditional educational activities seem to have difficulty winning children's favor, with the result that children spend more and more time in the digital world of digital devices, far from intelligence-developing real-world activities.
In view of this, the invention provides a picture processing method, a picture processing device, an electronic device and a computer readable medium, which can automatically generate a model map required by rendering and displaying a three-dimensional model based on a draft picture in real time, further generate and display the three-dimensional model corresponding to the draft picture in real time, and improve the immersion of a user in an intelligence-developing activity.
Before describing embodiments of the present invention in detail, some technical terms related to the embodiments of the present invention will be described.
Augmented reality technology: also known as augmented virtual reality, it is a further extension of virtual reality technology. By means of the necessary equipment, it makes a computer-generated virtual environment and the objectively existing real environment coexist in the same augmented reality system, presenting the user with an environment in which virtual objects and the real scene are integrated in both sensation and experience. Augmented reality is characterized by virtual-real combination, real-time interaction, three-dimensional registration, object pose estimation and illumination consistency, and is a rapidly developing research direction. It has wide application in fields similar to those of virtual reality technology, such as data-model visualization, virtual training, entertainment and art.
Vuforia SDK: a software development kit for augmented reality applications on mobile devices. It uses computer vision techniques to recognize and track planar images or simple three-dimensional objects (e.g., boxes) in real time, and then lets the developer place a virtual object as seen through the camera viewfinder and adjust its position against the physical background in front of the lens. The Vuforia platform supports two recognition modes, local recognition and cloud recognition; in the local mode, the captured picture is matched against locally stored target data and the matching result is returned.
Unity3D: a professional game engine with a highly optimized graphics rendering pipeline and a built-in NVIDIA PhysX physics engine. Unity3D can simulate the movement and collision of objects in three-dimensional space realistically and give the user feedback through the GUI, particle systems, sound effects and other auxiliary means, providing many options for the design and development of game prop systems. A major advantage of Unity3D is its cost-effectiveness and the ability to publish to the web, so users can experience an application directly without downloading a client. It supports several scripting languages, including C#, UnityScript (a JavaScript-like language) and Boo (a Python-like language), and is compatible with a variety of operating systems.
Fig. 1 shows an exemplary system architecture 100 to which a picture processing method or a picture processing apparatus according to an embodiment of the present invention can be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as an intelligence activity interaction application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server providing support for applications of the smart active interaction class browsed by the user using the terminal devices 101, 102, 103. For example, the background management server may process a picture processing request or the like sent by the terminal device through the network, and feed back a processing result (such as a model map or a rendered three-dimensional model) to the terminal device.
It should be noted that the picture processing method provided by the embodiment of the present invention is generally executed by the server 105; accordingly, the picture processing apparatus is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
First embodiment
Fig. 2 is a schematic main flowchart of a picture processing method according to the first embodiment of the invention. As shown in fig. 2, the picture processing method of this embodiment includes:
step S201: and acquiring a draft picture, and taking the draft picture as a first texture map.
In an alternative example, step S201 includes: after receiving a picture processing request, querying a picture storage system according to the draft-picture identifier carried by the request to obtain one or more draft pictures corresponding to that identifier.
In this example, after the user operates a specified control on a page of an intelligence-activity interaction application (APP), for example after the user selects one or more draft pictures from the draft-picture list and clicks a button such as "dynamic rendering", the user terminal sends the server a picture processing request carrying the identifiers of the selected draft pictures. After receiving the request, the server parses out the draft-picture identifier it carries and then queries the picture storage system according to that identifier to obtain the corresponding draft picture or pictures.
In another alternative example, step S201 includes: after receiving the picture processing request, acquiring one or more picture files from a video stream uploaded by the user, the video stream being obtained by the user terminal shooting the colored-drawing manuscript.
In this example, the user terminal may call its camera to shoot the colored-drawing manuscript to obtain a video stream, and then carry the video stream in the picture processing request sent to the server. After receiving the request, the server parses out the video stream and acquires one or more picture files from it.
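By way of illustration only, the following sketch shows one way the server could sample candidate draft pictures from the uploaded video stream. Python with OpenCV is an assumed implementation choice, and the sampling interval is an assumed parameter; the invention does not prescribe either.

```python
# A minimal sketch, assuming the video stream has been saved to a local file
# and that sampling one frame per `every_n` frames suffices to capture the
# colored-drawing manuscript.
import cv2

def extract_frames(video_path: str, every_n: int = 30) -> list:
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            frames.append(frame)  # each sampled frame can serve as a first texture map
        index += 1
    capture.release()
    return frames
```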
Step S202: and extracting an interested region from the first texture map, and taking the extracted interested region as a second texture map.
In an alternative example, step S202 includes: acquiring a marker image matched with the first texture map from a plurality of preset marker images, and extracting a region of interest from the first texture map based on the marker image matched with the first texture map.
A marker image is a two-dimensional matrix code commonly used in image recognition: image symbols are recognized through a specific template-matching algorithm. Typically, the information of the marker image is stored in advance and the coordinates of the virtual object in the camera view are calculated from it; the marker is then searched for and identified in the current image by image recognition, so that the virtual-object picture is accurately matched to the real-scene picture and displayed overlaid on the marker.
In the embodiment of the invention, corresponding marker images are set in advance for different types of draft pictures, so as to meet the display requirements of different scenes and models. In step S202, the type of the draft picture may be determined, the marker image corresponding to that type acquired, and the region of interest then extracted from the draft picture according to the acquired marker image. In a specific implementation, the type information of the draft picture may be carried in the picture processing request, so that the server can parse it directly from the request; alternatively, the server may recognize the type of the draft picture using image recognition technology. For example, assuming the region of interest is a rectangular region in the draft picture, that rectangular region is extracted by means of the preset marker image and used as the second texture map. Extracting the region of interest from the first texture map based on the matched marker image greatly speeds up picture processing, which in turn improves the timeliness of three-dimensional model display and the user experience.
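By way of illustration only, the following hedged sketch shows marker-guided extraction of the rectangular region of interest. OpenCV's ArUco module (OpenCV 4.7 and later) merely stands in for the matrix-code marker detector described above, and the assumption that four markers delimit the corners of the region is illustrative, not part of the invention.

```python
import cv2
import numpy as np

def extract_roi_by_markers(first_texture, roi_size=(512, 512)):
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.ArucoDetector(dictionary).detectMarkers(first_texture)
    if ids is None or len(ids) < 4:
        raise ValueError("fewer than four markers found in the first texture map")
    # Assumed layout: four markers whose centres delimit the region of interest.
    centres = np.array([c.reshape(4, 2).mean(axis=0) for c in corners], dtype=np.float32)
    s, d = centres.sum(axis=1), np.diff(centres, axis=1).ravel()
    src = np.float32([centres[np.argmin(s)], centres[np.argmin(d)],
                      centres[np.argmax(s)], centres[np.argmax(d)]])  # TL, TR, BR, BL
    w, h = roi_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    warp = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(first_texture, warp, roi_size)  # second texture map
```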
In another alternative example, step S202 includes: extracting a region of interest from the first texture map based on a SURF algorithm.
The SURF (Speeded-Up Robust Features) algorithm is a robust local feature point detection and description algorithm. SURF improves on SIFT: while retaining the excellent properties of the SIFT operator, it overcomes SIFT's high computational complexity and long running time, improving execution efficiency and making the algorithm usable in real-time computer vision systems. The general procedure is to extract feature points from the images, obtain matching point pairs between the two images with nearest-neighbour matching, solve the transformation between the images with RANSAC and the least-squares method, and finally obtain the registered image. The overall flow of SURF is similar to SIFT feature matching; the main differences are that SURF approximates second-order Gaussian filtering with box filters when building the scale space, and uses Haar wavelets instead of histograms to compute the dominant orientation of feature points and generate feature vectors. SURF greatly speeds up feature extraction while the extracted feature vectors remain rotation- and scale-invariant, so SURF performs well in feature matching. In the embodiment of the invention, extracting the region of interest with SURF helps guarantee real-time picture processing.
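A minimal sketch of SURF-based extraction follows, assuming a reference template of the region of interest is available for matching. SURF is patented and therefore ships in opencv-contrib (cv2.xfeatures2d), which may require a build with the non-free modules enabled.

```python
import cv2
import numpy as np

def extract_roi_by_surf(first_texture, template):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(template, None)
    kp2, des2 = surf.detectAndCompute(first_texture, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
    if len(good) < 4:
        raise ValueError("not enough SURF matches to estimate a homography")
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects mismatches
    h, w = template.shape[:2]
    # Warp the scene back into template coordinates to obtain the second texture map.
    return cv2.warpPerspective(first_texture, np.linalg.inv(H), (w, h))
```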
In yet another alternative example, step S202 includes: and extracting a region of interest from the first texture map based on a contour detection connected region algorithm.
The basic idea of the contour-detection connected-region algorithm is to identify connected regions (such as rectangles) in the image with contour detection, and to constrain the angle, area and orientation of the identified regions so as to pick out the region of interest that meets the requirements in the scene.
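A sketch of this approach is given below; the Canny thresholds and the minimum-area constraint are illustrative assumptions standing in for the angle, area and orientation constraints described above.

```python
import cv2

def extract_roi_by_contour(first_texture, min_area=10000):
    gray = cv2.cvtColor(first_texture, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        if cv2.contourArea(contour) < min_area:
            break  # remaining contours are smaller still
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:  # quadrilateral: candidate rectangular region
            x, y, w, h = cv2.boundingRect(approx)
            return first_texture[y:y + h, x:x + w]
    return None
```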
Step S203: and filling a pixel value of a preset segmentation picture according to the second texture map to obtain the filled segmentation picture.
Wherein the segmentation picture is designed according to a UV map. In specific implementation, the segmentation picture can be designed according to the UV map of the corresponding three-dimensional model. As an alternative design principle, the segmentation picture and the UV map are made as similar as possible, which facilitates subsequent extraction of texture and creation of the map.
Exemplarily, step S203 includes: segmenting a plurality of sub-regions from the second texture map according to a preset mask picture; and filling the pixel values of the sub-regions in the second texture map into corresponding sub-regions in a preset segmentation picture according to the sub-region coordinate mapping relation so as to obtain the filled segmentation picture.
The mask picture is designed according to the preset pattern of the areas to be filled in the colored-drawing album and is used to segment the second texture map into regions. In step S203, the different sub-regions of the second texture map are segmented according to the sub-regions contained in the mask picture. After segmentation, the pixel values of the sub-regions of the second texture map are filled into the designated sub-regions of the segmentation picture according to the sub-region coordinate mapping relation, i.e. the coordinate mapping between the sub-regions of the mask picture and those of the segmentation picture. Optionally, when designing this mapping relation, similar sub-regions of the second texture map and the segmentation picture are chosen to map to each other.
Optionally, in step S203, segmenting a plurality of sub-regions from the second texture map according to a preset mask picture includes: acquiring a mask picture matched with the first texture map from a plurality of preset mask pictures, the mask picture comprising a plurality of sub-regions; and, for each sub-region of the mask picture, obtaining its coordinate values and segmenting the sub-region with the same coordinate values from the second texture map. In a specific implementation, corresponding mask pictures can be set in advance for different types of draft pictures so as to meet the segmentation requirements of different scenes and models: in step S203 the type of the draft picture is determined, the mask picture corresponding to that type is acquired, and the sub-regions are then segmented from the second texture map according to the acquired mask picture.
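The following sketch illustrates step S203 under two stated assumptions: the mask picture encodes each sub-region as a distinct label value, and region_mapping is a hypothetical lookup table embodying the sub-region coordinate mapping relation designed alongside the UV map.

```python
import cv2
import numpy as np

def fill_segmentation(second_texture, mask, segmentation, region_mapping):
    """mask: label image; region_mapping: {label: (x, y, w, h) in the segmentation picture}."""
    filled = segmentation.copy()
    for label, (dx, dy, dw, dh) in region_mapping.items():
        ys, xs = np.nonzero(mask == label)  # pixels belonging to this sub-region
        if xs.size == 0:
            continue
        x0, y0 = xs.min(), ys.min()
        sub = second_texture[y0:ys.max() + 1, x0:xs.max() + 1]
        # Fit the source sub-region into its designated target sub-region.
        filled[dy:dy + dh, dx:dx + dw] = cv2.resize(sub, (dw, dh))
    return filled
```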
In the embodiment of the invention, considering that the surface regions of the three-dimensional model are complex, the second texture map is divided into a plurality of sub-regions using the mask picture and the segmentation picture is filled sub-region by sub-region, which helps simplify the picture processing flow. Moreover, this processing solves the problem that mapping the picture as a whole is infeasible because the UV map and the second texture map do not agree in shape and position coordinates.
Step S204: and fusing the filled segmentation picture and the UV mapping to obtain a model mapping, and rendering and displaying the three-dimensional model corresponding to the draft picture according to the model mapping.
After the filled segmentation picture is created, it is fused with the UV map, which may be referred to as UV matching.
In a specific implementation, the colored-drawing manuscript is a two-dimensional picture and, compared with a three-dimensional model, lacks one dimension of information. The UV map, on the other hand, is obtained by unwrapping the actual three-dimensional model, so it carries more information than the segmentation picture provides. Therefore, when merging the segmentation picture and the UV map, adjacent unfilled areas can also be completed from the existing map information.
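A minimal sketch of the fusion ("UV matching") step follows. It assumes the segmentation picture was designed pixel-aligned with the UV map, that black pixels mean "not yet filled", and that a paintable-region mask is available; inpainting then completes adjacent unfilled areas from the existing map information, as described above.

```python
import cv2
import numpy as np

def fuse_with_uv(filled_segmentation, uv_map, paintable_mask):
    """paintable_mask: 255 where the UV map expects user colour (an assumption)."""
    model_map = uv_map.copy()
    painted = np.any(filled_segmentation > 0, axis=2)  # pixels the user coloured
    model_map[painted] = filled_segmentation[painted]
    # Areas that should carry colour but received none are completed from
    # neighbouring, already-filled map information.
    unfilled = ((paintable_mask > 0) & ~painted).astype(np.uint8) * 255
    return cv2.inpaint(model_map, unfilled, 3, cv2.INPAINT_TELEA)
```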
After the model map is created, marker identification and three-dimensional model information calibration are performed; once enough information is obtained to set the pose of the virtual model, the model is rendered and displayed. In a specific implementation, rendering and display of the three-dimensional model may be performed with Unity3D.
The virtual-real combination process in augmented reality involves three aspects: three-dimensional registration, object pose and illumination consistency. Three-dimensional registration places the virtual graphics into the real scene by tracking the camera pose in real time. The position information of the virtual object is determined in a "feature matching" stage that repeatedly compares the similarity between the real-object image features and the virtual-model features. Because the surroundings and illumination keep changing, the virtual object automatically collects texture information from the real-object image, which improves the sense of immersion in the augmented reality environment and truly achieves seamless virtual-real combination.
In the embodiment of the invention, through the above steps the model map required for rendering and displaying the three-dimensional model is generated automatically and in real time from the draft picture, so that the three-dimensional model corresponding to the draft picture is generated and displayed in real time, improving the user's sense of immersion in intelligence-developing activities.
The embodiment of the invention explores the role of augmented reality technology in educational guidance. By shooting the album a child has painted, the three-dimensional model rendered with the model map obtained from the child's draft can be generated on the screen of the mobile terminal, and the model map can be modified dynamically as the draft changes. This attracts the child's interest on the one hand, and improves the child's imagination and painting ability on the other, promoting healthy development.
Second embodiment
Fig. 3 is a schematic main flowchart of a picture processing method according to the second embodiment of the invention. As shown in fig. 3, the picture processing method of this embodiment includes:
step S301: and acquiring a draft picture, and taking the draft picture as a first texture map.
In an alternative example, step S301 includes: after receiving a picture processing request, querying a picture storage system according to the draft-picture identifier carried by the request to obtain one or more draft pictures corresponding to that identifier.
In this example, after the user operates a specified control on a page of an intelligence-activity interaction application (APP), for example after the user selects one or more draft pictures from the draft-picture list and clicks a button such as "dynamic rendering", the user terminal sends the server a picture processing request carrying the identifiers of the selected draft pictures. After receiving the request, the server parses out the draft-picture identifier it carries and then queries the picture storage system according to that identifier to obtain the corresponding draft picture or pictures.
In another alternative example, step S301 includes: after receiving the picture processing request, acquiring one or more picture files from a video stream uploaded by the user, the video stream being obtained by the user terminal shooting the colored-drawing manuscript.
In this example, the user terminal may call its camera to shoot the colored-drawing manuscript to obtain a video stream, and then carry the video stream in the picture processing request sent to the server. After receiving the request, the server parses out the video stream and acquires one or more picture files from it.
Optionally, after obtaining the draft picture, the method of the embodiment further includes the following step: converting the picture format of the draft picture. For example, when developing on the Vuforia platform, an image may be acquired through the platform's CameraDevice class, and the picture captured by the camera converted, pixel by pixel, from the camera's OpenGL ES rendering format into the tracking format required by subsequent picture detection and matching via an image converter.
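The conversion itself happens inside the Vuforia SDK; purely as a generic, hedged illustration, the snippet below converts a captured RGBA frame into the BGR and grayscale layouts that downstream detection and matching code commonly expects. The function name and format choices are assumptions, not Vuforia API.

```python
import cv2

def to_tracking_format(rgba_frame):
    bgr = cv2.cvtColor(rgba_frame, cv2.COLOR_RGBA2BGR)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # single-channel format for matching
    return bgr, gray
```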
Step S302: and preprocessing the first texture map.
The preprocessing of the first texture map comprises at least one of the following: filtering the first texture map, performing edge enhancement on the first texture map, performing histogram equalization on the first texture map, and adjusting the brightness and contrast of the first texture map.
In an alternative example, step S302 includes: filtering the first texture map, then denoising it, then performing edge enhancement, then histogram equalization, and finally adjusting the brightness and contrast of the first texture map.
Further, a median filtering algorithm may be used when filtering the first texture map. In the embodiment of the invention, segmenting the region of interest out of the first texture map involves edge detection. Edge detection generally requires denoising the image beforehand to avoid false detections; the common approach is Gaussian filtering, but this blurs the image, and when the edges to be detected are weak or the resolution is low, it weakens the very edges to be detected while reducing noise. For this reason, the embodiment uses median filtering on the first texture map, which preserves the relatively sharp edges of the paper.
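A sketch of the preprocessing chain of step S302 follows, with assumed parameter values throughout: median filtering (which keeps the sharp paper edges better than Gaussian blur, per the discussion above), unsharp-mask edge enhancement, histogram equalization on the luma channel only, and a linear brightness/contrast adjustment.

```python
import cv2

def preprocess(first_texture, alpha=1.2, beta=10):
    img = cv2.medianBlur(first_texture, 5)                   # denoise, keep edges
    blur = cv2.GaussianBlur(img, (0, 0), 3)
    img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)           # unsharp masking
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])        # equalize luma only
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)  # brightness/contrast
```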
Step S303: acquiring a marker image matched with the first texture map from a plurality of preset marker images, extracting an interested area from the first texture map based on the marker image matched with the first texture map, and taking the interested area as a second texture map.
A marker image is a two-dimensional matrix code commonly used in image recognition: image symbols are recognized through a specific template-matching algorithm. Typically, the information of the marker image is stored in advance and the coordinates of the virtual object in the camera view are calculated from it; the marker is then searched for and identified in the current image by image recognition, so that the virtual-object picture is accurately matched to the real-scene picture and displayed overlaid on the marker.
In the embodiment of the invention, corresponding marker images are set in advance for different types of draft pictures, so as to meet the display requirements of different scenes and models. In step S303, the type of the draft picture may be determined, the marker image corresponding to that type acquired, and the region of interest then extracted from the draft picture according to the acquired marker image. In a specific implementation, the type information of the draft picture may be carried in the picture processing request, so that the server can parse it directly from the request; alternatively, the server may recognize the type of the draft picture using image recognition technology. For example, assuming the region of interest is a rectangular region in the draft picture, that rectangular region is extracted by means of the preset marker image and used as the second texture map. Extracting the region of interest from the first texture map based on the matched marker image greatly speeds up picture processing, which in turn improves the timeliness of three-dimensional model display and the user experience.
Step S304: and segmenting a plurality of sub-regions from the second texture map according to a preset mask picture.
The mask picture is designed according to the preset pattern of the areas to be filled in the colored-drawing album and is used to segment the second texture map into regions. Optionally, in step S304, segmenting a plurality of sub-regions from the second texture map according to a preset mask picture includes: acquiring a mask picture matched with the first texture map from a plurality of preset mask pictures, the mask picture comprising a plurality of sub-regions; and, for each sub-region of the mask picture, obtaining its coordinate values and segmenting the sub-region with the same coordinate values from the second texture map. In a specific implementation, corresponding mask pictures can be set in advance for different types of draft pictures so as to meet the segmentation requirements of different scenes and models: in step S304 the type of the draft picture is determined, the mask picture corresponding to that type is acquired, and the sub-regions are then segmented from the second texture map according to the acquired mask picture.
Step S305: and filling the pixel values of the sub-regions in the second texture map into corresponding sub-regions in a preset segmentation picture according to the sub-region coordinate mapping relation so as to obtain the filled segmentation picture.
After the different sub-regions of the second texture map have been segmented according to the sub-regions contained in the mask picture, the pixel values of those sub-regions are filled into the designated sub-regions of the segmentation picture according to the sub-region coordinate mapping relation, i.e. the coordinate mapping between the sub-regions of the mask picture and those of the segmentation picture. Optionally, when designing this mapping relation, similar sub-regions of the second texture map and the segmentation picture are chosen to map to each other.
Step S306: and fusing the filled segmentation picture and the UV mapping to obtain a model mapping, and rendering and displaying the three-dimensional model corresponding to the draft picture according to the model mapping.
Wherein the segmentation picture is designed according to a UV map. In specific implementation, the segmentation picture can be designed according to the UV map of the corresponding three-dimensional model. As an alternative design principle, the segmentation picture and the UV map are made as similar as possible, which facilitates subsequent extraction of texture and creation of the map.
After the filled segmentation picture is created, it is fused with the UV map, which may be referred to as UV matching. In a specific implementation, the colored-drawing manuscript is a two-dimensional picture and, compared with a three-dimensional model, lacks one dimension of information; the UV map, obtained by unwrapping the actual three-dimensional model, carries more information than the segmentation picture provides. Therefore, when merging the segmentation picture and the UV map, adjacent unfilled areas can also be completed from the existing map information.
After the model map is created, marker identification and three-dimensional model information calibration are performed; once enough information is obtained to set the pose of the virtual model, the model is rendered and displayed. In a specific implementation, rendering and display of the three-dimensional model may be performed with Unity3D.
In the embodiment of the invention, through the above steps the model map required for rendering and displaying the three-dimensional model is generated automatically and in real time from the draft picture, so that the three-dimensional model corresponding to the draft picture is generated and displayed in real time, improving the user's sense of immersion in intelligence-developing activities.
The embodiment of the invention explores the role of augmented reality technology in educational guidance. By shooting the album a child has painted, the three-dimensional model rendered with the model map obtained from the child's draft can be generated on the screen of the mobile terminal, and the model map can be modified dynamically as the draft changes. This attracts the child's interest on the one hand, and improves the child's imagination and painting ability on the other, promoting healthy development.
Third embodiment
FIG. 4 is a schematic diagram of the main blocks of a picture processing apparatus according to the third embodiment of the present invention. As shown in fig. 4, the picture processing apparatus 400 of this embodiment includes: an acquisition module 401, an extraction module 402, a filling module 403, and a fusion module 404.
The obtaining module 401 is configured to obtain a draft picture, and use the draft picture as a first texture map.
In an optional example, the obtaining module 401 obtains the draft picture as follows: after receiving a picture processing request, the obtaining module 401 queries the picture storage system according to the draft-picture identifier carried by the request, so as to obtain one or more draft pictures corresponding to that identifier.
In this example, after the user operates a specified control on a page of an intelligence-activity interaction application (APP), for example after the user selects one or more draft pictures from the draft-picture list and clicks a button such as "dynamic rendering", the user terminal sends the server a picture processing request carrying the identifiers of the selected draft pictures. After receiving the request, the server parses out the draft-picture identifier it carries and then queries the picture storage system according to that identifier to obtain the corresponding draft picture or pictures.
In another optional example, the obtaining module 401 obtains the draft picture as follows: after receiving the picture processing request, the obtaining module 401 acquires one or more picture files from the video stream uploaded by the user, the video stream being obtained by the user terminal shooting the colored-drawing manuscript.
In this example, the user terminal may call its camera to shoot the colored-drawing manuscript to obtain a video stream, and then carry the video stream in the picture processing request sent to the server. After receiving the request, the server parses out the video stream and acquires one or more picture files from it.
An extraction module 402, configured to extract a region of interest from the first texture map and use the extracted region as the second texture map.
In an alternative example, the extraction module 402 extracting the region of interest from the first texture map includes: the extraction module 402 acquires a marker image matched with the first texture map from a plurality of preset marker images and extracts the region of interest from the first texture map based on the matched marker image.
A marker image is a two-dimensional matrix code commonly used in image recognition: image symbols are recognized through a specific template-matching algorithm. Typically, the information of the marker image is stored in advance and the coordinates of the virtual object in the camera view are calculated from it; the marker is then searched for and identified in the current image by image recognition, so that the virtual-object picture is accurately matched to the real-scene picture and displayed overlaid on the marker.
In the embodiment of the invention, corresponding marker images are set in advance for different types of draft pictures, so as to meet the display requirements of different scenes and models. The extraction module 402 may determine the type of the draft picture, acquire the marker image corresponding to that type, and then extract the region of interest from the draft picture according to the acquired marker image. In a specific implementation, the type information of the draft picture may be carried in the picture processing request, so that the server can parse it directly from the request; alternatively, the server may recognize the type of the draft picture using image recognition technology. For example, assuming the region of interest is a rectangular region in the draft picture, that rectangular region is extracted by means of the preset marker image and used as the second texture map. Extracting the region of interest from the first texture map based on the matched marker image greatly speeds up picture processing, which in turn improves the timeliness of three-dimensional model display and the user experience.
A filling module 403, configured to fill pixel values of a preset segmentation picture according to the second texture map to obtain the filled segmentation picture.
Wherein the segmentation picture is designed according to a UV map. In specific implementation, the segmentation picture can be designed according to the UV map of the corresponding three-dimensional model. As an alternative design principle, the segmentation picture and the UV map are made as similar as possible, which facilitates subsequent extraction of texture and creation of the map.
Illustratively, the filling module 403 filling pixel values of the preset segmentation picture according to the second texture map to obtain the filled segmentation picture includes: the filling module 403 segments a plurality of sub-regions from the second texture map according to a preset mask picture, and fills the pixel values of each sub-region of the second texture map into the corresponding sub-region of the preset segmentation picture according to the sub-region coordinate mapping relation, so as to obtain the filled segmentation picture.
The mask picture is designed according to the preset pattern of the areas to be filled in the colored-drawing album and is used to segment the second texture map into regions. The filling module 403 first segments the different sub-regions of the second texture map according to the sub-regions contained in the mask picture; after segmentation, it fills the pixel values of those sub-regions into the designated sub-regions of the segmentation picture according to the sub-region coordinate mapping relation, i.e. the coordinate mapping between the sub-regions of the mask picture and those of the segmentation picture. Optionally, when designing this mapping relation, similar sub-regions of the second texture map and the segmentation picture are chosen to map to each other.
Optionally, the filling module 403 segmenting a plurality of sub-regions from the second texture map according to a preset mask picture includes: the filling module 403 acquires, from a plurality of preset mask pictures, the mask picture matched with the first texture map, the mask picture comprising a plurality of sub-regions; for each sub-region in the mask picture, the filling module 403 obtains the coordinate values of that sub-region and segments the sub-region with the same coordinate values from the second texture map. In a specific implementation, corresponding mask pictures can be preset for different types of draft pictures to meet the segmentation requirements of different scenes and models. The filling module 403 may determine the type of the draft picture, obtain the mask picture corresponding to that type, and then segment the plurality of sub-regions from the second texture map according to the obtained mask picture.
In this embodiment of the invention, considering that the surface regions of the three-dimensional model are complex, the second texture map is divided into a plurality of sub-regions using the mask picture, and the segmentation picture is filled with the pixel values of those sub-regions, which helps simplify the picture processing flow. Moreover, this processing avoids the problem that mapping the texture as a whole is infeasible when the UV map and the second texture map are inconsistent in shape and position coordinates.
A fusion module 404 is configured to fuse the filled segmentation picture and the UV map to obtain a model map, so that the three-dimensional model corresponding to the draft picture can be rendered and displayed according to the model map.
After the filled segmentation picture is obtained, the fusion module 404 fuses it with the UV map; this step may be referred to as UV matching.
In a specific implementation, the colored drawing draft is a two-dimensional picture, so dimensional information is lost relative to the three-dimensional model. Because the UV map is obtained by unfolding the actual three-dimensional model, it carries more information than the segmentation picture provides. Therefore, when fusing the segmentation picture and the UV map, adjacent unfilled areas can also be filled by means of the existing map information.
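The following Python/OpenCV sketch illustrates one way such a fusion could work: paste the painted regions over the UV map, then inpaint a thin band of adjacent unfilled pixels from the surrounding map information. Treating near-black pixels as "unfilled" and the use of `cv2.inpaint` are illustrative assumptions, not the patent's prescribed method.

```python
import cv2
import numpy as np

# Fusion sketch (assumptions: both images share one size and BGR layout,
# and near-black pixels in the filled segmentation picture are "unfilled").
def fuse_with_uv_map(filled_segmentation: np.ndarray,
                     uv_map: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(filled_segmentation, cv2.COLOR_BGR2GRAY)
    painted = (gray >= 10).astype(np.uint8)  # assumed "has paint" threshold
    # Start from the UV map and paste the painted regions over it.
    model_map = uv_map.copy()
    model_map[painted == 1] = filled_segmentation[painted == 1]
    # Fill a thin fringe of adjacent unfilled pixels from existing content.
    fringe = cv2.dilate(painted, np.ones((5, 5), np.uint8)) - painted
    return cv2.inpaint(model_map, fringe * 255, 3, cv2.INPAINT_TELEA)
```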
After the model map is created, marker identification and three-dimensional model information calibration are performed, yielding enough information to position the virtual model, after which the model is rendered and displayed. In a specific implementation, rendering and display of the three-dimensional model may be performed based on Unity 3D.
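The rendering itself runs in Unity 3D, but the calibration step can be illustrated with a Python/OpenCV pose-estimation sketch. The corner ordering, camera intrinsics, and the use of `cv2.solvePnP` are our assumptions for illustration; the embodiment states only that marker identification and calibration yield the virtual model's position.

```python
import cv2
import numpy as np

# Sketch of the calibration step that precedes rendering: recover the marker
# pose from four detected corner points, giving the transform used to place
# the virtual model. Camera intrinsics here are placeholders.
def estimate_marker_pose(image_corners: np.ndarray,  # shape (4, 2), float32
                         marker_size: float,
                         camera_matrix: np.ndarray,
                         dist_coeffs: np.ndarray):
    s = marker_size / 2.0
    # Marker corners in the marker's own frame, on the z = 0 plane,
    # ordered top-left, top-right, bottom-right, bottom-left.
    object_points = np.array([[-s,  s, 0], [s,  s, 0],
                              [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    return rvec, tvec  # rotation (Rodrigues vector) and translation
```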
In this embodiment of the invention, the device can automatically generate, in real time and on the basis of the draft picture, the model map required for rendering and displaying the three-dimensional model, so that the three-dimensional model corresponding to the draft picture is generated and displayed in real time, which enhances the user's sense of immersion when participating in intelligence-development activities.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use in implementing an electronic device of an embodiment of the present invention. The computer system illustrated in FIG. 5 is only an example and should not impose any limitations on the scope of use or functionality of embodiments of the invention.
As shown in FIG. 5, the computer system 500 includes a Central Processing Unit (CPU) 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read therefrom is installed into the storage section 508 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 501.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or by hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising an acquisition module, an extraction module, a filling module, and a fusion module. The names of these modules do not, in some cases, constitute a limitation on the modules themselves; for example, the acquisition module may also be described as a "module for acquiring a draft picture".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: acquire a draft picture and take the draft picture as a first texture map; extract a region of interest from the first texture map and take the extracted region of interest as a second texture map; perform pixel value filling on a preset segmentation picture according to the second texture map to obtain a filled segmentation picture; and fuse the filled segmentation picture and the UV map to obtain a model map, so as to render and display the three-dimensional model corresponding to the draft picture according to the model map; wherein the segmentation picture is designed according to the UV map.
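Taken together, the four steps could be chained as in the following sketch, which reuses the helper functions sketched earlier in this section; every name is an illustrative stand-in for the modules described above, not an API of this invention.

```python
# Illustrative end-to-end chain of the four listed steps.
def process_draft_picture(draft_picture, marker_image, mask_picture,
                          segmentation_picture, uv_map, region_mapping):
    first_texture_map = draft_picture                          # acquisition
    second_texture_map = extract_roi_by_marker(first_texture_map,
                                               marker_image)   # extraction
    filled = fill_segmentation_picture(second_texture_map, mask_picture,
                                       segmentation_picture,
                                       region_mapping)         # filling
    model_map = fuse_with_uv_map(filled, uv_map)               # fusion
    return model_map  # handed to the renderer (e.g. Unity 3D) for display
```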
According to the technical scheme of the embodiments of the invention, the model map required for rendering and displaying the three-dimensional model can be automatically generated in real time on the basis of the draft picture, so that the three-dimensional model corresponding to the draft picture is generated and displayed in real time, which enhances the user's sense of immersion when participating in intelligence-development activities.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (15)
1. A picture processing method, characterized in that the method comprises:
acquiring a draft picture, and taking the draft picture as a first texture map;
extracting an interested region from the first texture map, and taking the extracted interested region as a second texture map;
performing pixel value filling on a preset segmentation picture according to the second texture map to obtain a filled segmentation picture;
fusing the filled segmentation picture and the UV map to obtain a model map, and rendering and displaying a three-dimensional model corresponding to the draft picture according to the model map; wherein the segmentation picture is designed according to the UV map.
2. The method of claim 1, wherein the extracting the region of interest from the first texture map comprises:
and acquiring a marker mark image matched with the first texture map from a plurality of preset marker mark images, and extracting a region of interest from the first texture map based on the marker mark image matched with the first texture map.
3. The method of claim 1, wherein the extracting the region of interest from the first texture map comprises:
extracting a region of interest from the first texture map based on a SURF algorithm.
4. The method of claim 1, wherein the extracting the region of interest from the first texture map comprises:
and extracting a region of interest from the first texture map based on a contour detection connected region algorithm.
5. The method according to claim 1, wherein the performing pixel value filling on the preset segmentation picture according to the second texture map to obtain the filled segmentation picture comprises:
segmenting a plurality of sub-regions from the second texture map according to a preset mask picture; and filling the pixel values of the sub-regions in the second texture map into corresponding sub-regions of the preset segmentation picture according to the sub-region coordinate mapping relationship, so as to obtain the filled segmentation picture.
6. The method of claim 5, wherein the segmenting a plurality of sub-regions from the second texture map according to a preset mask picture comprises:
acquiring a mask picture matched with the first texture map from a plurality of preset mask pictures, wherein the mask picture comprises a plurality of sub-regions; and for each sub-region in the mask picture, obtaining the coordinate values of the sub-region, and segmenting the sub-region with the same coordinate values from the second texture map according to the coordinate values of the sub-region.
7. The method of claim 1, further comprising:
pre-processing the first texture map prior to extracting the region of interest from the first texture map, the pre-processing comprising at least one of: filtering the first texture map, performing edge enhancement processing on the first texture map, performing histogram equalization processing on the first texture map, and adjusting the brightness and contrast of the first texture map.
8. The method of claim 7, wherein filtering the first texture map comprises:
the first texture map is filtered based on a median filtering algorithm.
9. The method of claim 7, further comprising:
and converting the picture format of the first texture map before preprocessing the first texture map.
10. The method of claim 1, wherein the obtaining the draft picture comprises:
after receiving a picture processing request, querying a picture storage system according to a draft picture identifier carried in the picture processing request, to obtain one or more draft pictures corresponding to the draft picture identifier; or,
after receiving a picture processing request, acquiring one or more draft pictures from a video stream uploaded by a user, wherein the video stream is obtained by the user terminal shooting the colored drawing draft.
11. A picture processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a draft picture and take the draft picture as a first texture map;
an extraction module, configured to extract a region of interest from the first texture map and take the extracted region of interest as a second texture map;
a filling module, configured to perform pixel value filling on a preset segmentation picture according to the second texture map, so as to obtain a filled segmentation picture;
a fusion module, configured to fuse the filled segmentation picture and the UV map to obtain a model map, and to render and display the three-dimensional model corresponding to the draft picture according to the model map; wherein the segmentation picture is designed according to the UV map.
12. The apparatus of claim 11, wherein the extraction module extracting the region of interest from the first texture map comprises:
and acquiring a marker mark image matched with the first texture map from a plurality of preset marker mark images, and extracting a region of interest from the first texture map based on the marker mark image matched with the first texture map.
13. The apparatus of claim 11, wherein the filling module performing pixel value filling on the preset segmentation picture according to the second texture map to obtain the filled segmentation picture comprises:
the filling module segments a plurality of sub-regions from the second texture map according to a preset mask picture; and the filling module fills the pixel values of the sub-regions in the second texture map into the corresponding sub-regions of the preset segmentation picture according to the sub-region coordinate mapping relationship, so as to obtain the filled segmentation picture.
14. An electronic device, comprising:
one or more processors;
a storage device, configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.
15. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-10.
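As a purely illustrative aside, the preprocessing chain of claims 7-9 (median filtering, edge enhancement, histogram equalization, and brightness/contrast adjustment) might look as follows in Python/OpenCV. All parameter values are our assumptions, and no part of this sketch belongs to the claims themselves.

```python
import cv2
import numpy as np

# Illustrative sketch of the claimed preprocessing steps.
def preprocess_first_texture_map(img: np.ndarray) -> np.ndarray:
    img = cv2.medianBlur(img, 5)                        # median filtering
    blur = cv2.GaussianBlur(img, (0, 0), 3)
    img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)      # unsharp-mask edge enhancement
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # histogram equalization (luma)
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.convertScaleAbs(img, alpha=1.1, beta=10) # brightness/contrast
```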
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110961726.1A CN113674293B (en) | 2021-08-20 | 2021-08-20 | Picture processing method, device, electronic equipment and computer readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113674293A true CN113674293A (en) | 2021-11-19 |
CN113674293B CN113674293B (en) | 2024-08-13 |
Family
ID=78544580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110961726.1A Active CN113674293B (en) | 2021-08-20 | 2021-08-20 | Picture processing method, device, electronic equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113674293B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103426163A (en) * | 2012-05-24 | 2013-12-04 | 索尼公司 | System and method for rendering affected pixels |
WO2018040511A1 (en) * | 2016-06-28 | 2018-03-08 | 上海交通大学 | Method for implementing conversion of two-dimensional image to three-dimensional scene based on ar |
CN108665530A (en) * | 2018-04-25 | 2018-10-16 | 厦门大学 | Three-dimensional modeling implementation method based on single picture |
CN110866531A (en) * | 2019-10-15 | 2020-03-06 | 深圳新视达视讯工程有限公司 | Building feature extraction method and system based on three-dimensional modeling and storage medium |
CN111612880A (en) * | 2020-05-28 | 2020-09-01 | 广州欧科信息技术股份有限公司 | Three-dimensional model construction method based on two-dimensional drawing, electronic device and storage medium |
CN111862342A (en) * | 2020-07-16 | 2020-10-30 | 北京字节跳动网络技术有限公司 | Texture processing method and device for augmented reality, electronic equipment and storage medium |
CN113012293A (en) * | 2021-03-22 | 2021-06-22 | 平安科技(深圳)有限公司 | Stone carving model construction method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113674293B (en) | 2024-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180276882A1 (en) | Systems and methods for augmented reality art creation | |
US20120050305A1 (en) | Apparatus and method for providing augmented reality (ar) using a marker | |
CN108111911B (en) | Video data real-time processing method and device based on self-adaptive tracking frame segmentation | |
CN112017257B (en) | Image processing method, apparatus and storage medium | |
CN108109161B (en) | Video data real-time processing method and device based on self-adaptive threshold segmentation | |
EP3533218B1 (en) | Simulating depth of field | |
CN113220251B (en) | Object display method, device, electronic equipment and storage medium | |
CN113411550B (en) | Video coloring method, device, equipment and storage medium | |
CN112598780A (en) | Instance object model construction method and device, readable medium and electronic equipment | |
CN114581611B (en) | Virtual scene construction method and device | |
US20160086365A1 (en) | Systems and methods for the conversion of images into personalized animations | |
CN108961375A (en) | A kind of method and device generating 3-D image according to two dimensional image | |
CN107743263B (en) | Video data real-time processing method and device and computing equipment | |
CN110969641A (en) | Image processing method and device | |
CN113269781A (en) | Data generation method and device and electronic equipment | |
CN107766803A (en) | Video personage based on scene cut dresss up method, apparatus and computing device | |
CN113673567B (en) | Panorama emotion recognition method and system based on multi-angle sub-region self-adaption | |
CN106503174B (en) | Scene visualization method and system based on network three-dimensional modeling | |
CN115967823A (en) | Video cover generation method and device, electronic equipment and readable medium | |
CN113674293B (en) | Picture processing method, device, electronic equipment and computer readable medium | |
Liu et al. | Fog effect for photography using stereo vision | |
CN112511815A (en) | Image or video generation method and device | |
CN111107264A (en) | Image processing method, image processing device, storage medium and terminal | |
CN115187497A (en) | Smoking detection method, system, device and medium | |
CN107945201B (en) | Video landscape processing method and device based on self-adaptive threshold segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||