
CN117671119A - Method, device, equipment and storage medium for dynamic processing of static image

Info

Publication number
CN117671119A
Authority
CN
China
Prior art keywords
image
static image
offset
dynamic
depth
Prior art date
Legal status
Pending
Application number
CN202311617323.0A
Other languages
Chinese (zh)
Inventor
刘伟
马广博
张应团
王波
Current Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
China Unicom Western Innovation Research Institute Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
China Unicom Western Innovation Research Institute Co Ltd
Priority date
Application filed by China United Network Communications Group Co Ltd, Unicom Digital Technology Co Ltd, China Unicom Western Innovation Research Institute Co Ltd
Priority to CN202311617323.0A
Publication of CN117671119A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, an apparatus, a device and a storage medium for the dynamic processing of a static image, which can be used in the technical field of image processing. In the scheme, a server of a dynamic processing platform first extracts textures from a static image and a depth map uploaded by a user using a texture loader provided by a third-party library, then processes the obtained textures into a shader material comprising a vertex shader and a fragment shader, and finally combines a three-dimensional scene, a plane geometry and the shader material to generate the final dynamic image. The drawing of the fragment shader is based on the static image texture and on offsets in both the horizontal and vertical directions. This static image dynamic processing method allows the user to adjust the image within a certain range, which increases interactivity.

Description

Method, device, equipment and storage medium for dynamic processing of static image
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for dynamic processing of a still image.
Background
With the rapid development of image processing technology, the demand for making still images dynamic is increasing.
In the existing technical scheme for animating a static image, a static image in the album of an electronic device is generally selected as a first image, preset processing is then performed to obtain a depth map, a background completion map, a contour map and visual data of the image, and finally dynamic images under different visual conditions are generated based on these images and data. The dynamic image serves as the user's album cover, and its motion track can be changed based on the shaking direction of the user's electronic device and the sliding direction of the user's fingers.
However, the above method for animating a still image depends on a large number of image types and cannot precisely control the motion track of the image, so the interactivity is poor.
Disclosure of Invention
The application provides a method, an apparatus, a device and a storage medium for the dynamic processing of a static image, which are used to solve the technical problem of poor interactivity in existing static image dynamic processing schemes.
In a first aspect, the present application provides a method for dynamically processing a static image, which is applied to a server side of a dynamic processing platform, where the method includes:
acquiring a static image uploaded by a user and a depth map of the static image;
respectively extracting textures of the static image and the depth map according to a texture loader provided by a third-party library, to obtain a static image texture and a depth image texture;
defining uniforms variables and creating a vertex shader according to a shader material creation method provided by the third-party library;
calculating an offset in the horizontal direction and an offset in the vertical direction respectively, according to a preset offset calculation formula, a depth value of the depth image texture, uv coordinate information of each vertex of the vertex shader, and values of the uniforms variables selected by the user;
generating new texture samples according to the static image texture, the offset in the horizontal direction and the offset in the vertical direction, to obtain a drawn fragment shader;
and generating a dynamic image according to a three-dimensional scene and a plane geometry created based on methods provided by the third-party library, the vertex shader and the fragment shader.
In one possible design of the first aspect, the method further comprises:
and displaying the dynamic image to the user through a client of a dynamic processing platform.
In one possible design of the first aspect, the method further comprises:
receiving values of the uniforms variables selected by the user and uploaded through a client of the dynamic processing platform, wherein the values of the uniforms variables comprise: a first variable value indicating whether the offset is performed automatically, a second variable value indicating whether the offset follows the position of the mouse, and an adjustable coefficient controlling the magnitude of the offset.
In one possible design of the first aspect, before the textures of the static image and the depth map are respectively extracted according to the texture loader provided by the third-party library to obtain the static image texture and the depth image texture, the method further includes:
creating the three-dimensional scene according to the third-party library, wherein the three-dimensional scene includes a scene, a perspective camera and a renderer.
In one possible design of the first aspect, the generating a dynamic image according to the three-dimensional scene and the plane geometry created based on the methods provided by the third-party library, the vertex shader and the fragment shader includes:
creating the plane geometry according to the method provided by the third-party library;
creating a mesh from the third-party library, the plane geometry, the vertex shader and the fragment shader;
and adding the mesh into the three-dimensional scene to obtain the dynamic image.
In one possible design of the first aspect, the acquiring the user uploaded static image and the depth map of the static image includes:
receiving the static image and the depth map of the static image uploaded by a client of the dynamic processing platform;
or,
receiving the static image uploaded by the client of the dynamic processing platform;
and performing depth extraction on the static image according to a preset depth map extraction method to obtain the depth map of the static image.
In one possible design of the first aspect, the calculating the offset in the horizontal direction and the offset in the vertical direction respectively according to the preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader and the values of the uniforms variables selected by the user includes:
calculating the offset in the horizontal direction according to a preset horizontal offset calculation formula, the depth value of the depth image texture, the horizontal coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
calculating the offset in the vertical direction according to a preset vertical offset calculation formula, the depth value of the depth image texture, the vertical coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
wherein the offset calculation formula includes the horizontal offset calculation formula and the vertical offset calculation formula.
In a second aspect, the present application provides a still image dynamic processing apparatus, including:
the acquisition module is used for acquiring the static image uploaded by the user and the depth map of the static image;
the extraction module is used for respectively extracting textures of the static image and the depth map according to a texture loader provided by a third-party library, to obtain a static image texture and a depth image texture;
the processing module is used for defining uniforms variables and creating a vertex shader according to the shader material creation method provided by the third-party library;
the processing module is further used for calculating the offset in the horizontal direction and the offset in the vertical direction according to a preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader and the values of the uniforms variables selected by the user;
the processing module is further used for generating new texture samples according to the static image texture, the offset in the horizontal direction and the offset in the vertical direction, to obtain a drawn fragment shader;
the processing module is further used for generating a dynamic image according to the three-dimensional scene and the plane geometry created based on the methods provided by the third-party library, the vertex shader and the fragment shader.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the still image dynamic processing method according to any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for implementing the static image dynamic processing method according to any one of the first aspects when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program for implementing the static image dynamic processing method according to any one of the first aspects when the computer program is executed by a processor.
The static image dynamic processing method, apparatus, device and storage medium provided by the application can be used in the technical field of image processing. Based on the static image and the depth map uploaded by the user, the server of the dynamic processing platform first extracts the corresponding textures by means of the texture loader provided by the third-party library, thereby obtaining the static image texture and the depth image texture and reducing the dependence on image types. Then, according to the shader material creation method provided by the third-party library, uniforms variables are defined and a vertex shader is created; the offsets in the horizontal and vertical directions are calculated respectively based on preset formulas, and the fragment shader is drawn by combining them with the static image texture. In this way the user can adjust the corresponding parameters according to actual needs, improving the user experience. Finally, the final dynamic image is generated based on the three-dimensional scene, the plane geometry and the shader material. Because this static image dynamic processing method is based on the use of a third-party library, it reduces the technical difficulty while increasing the sense of interaction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of a static image dynamic processing method provided in the present application;
Fig. 2 is a flowchart of a first embodiment of the static image dynamic processing method provided in the present application;
Fig. 3 is a flowchart of a second embodiment of the static image dynamic processing method provided in the present application;
Fig. 4 is a flowchart of a third embodiment of the static image dynamic processing method provided in the present application;
Fig. 5 is a flowchart of a fourth embodiment of the static image dynamic processing method provided in the present application;
Fig. 6 is a flowchart of a fifth embodiment of the static image dynamic processing method provided in the present application;
Fig. 7 is a flowchart of a sixth embodiment of the static image dynamic processing method provided in the present application;
Fig. 8 is a flowchart of a seventh embodiment of the static image dynamic processing method provided in the present application;
Fig. 9 is a flowchart of an eighth embodiment of the static image dynamic processing method provided in the present application;
Fig. 10 is a flowchart of shader material creation provided in the present application;
Fig. 11 is a flowchart of the dynamic realization of a still image provided in the present application;
Fig. 12 is a schematic structural diagram of the static image dynamic processing apparatus provided in the present application;
Fig. 13 is a schematic structural diagram of the static image dynamic processing electronic device provided in the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards, and provide corresponding operation entries for the user to select authorization or rejection.
With the gradual maturing of technologies such as computer graphics and image processing, the dynamic processing of static images has attracted wide attention.
In existing static image dynamic processing schemes, an electronic device serves as the carrier: an acquired static image is taken as a first image, various preprocessing operations are performed on it to generate a corresponding depth map, background completion map, contour map, visual data and the like, and a final dynamic image is generated based on these images and data. The obtained dynamic image is generally used as the user's album cover, and its motion track can be changed based on the shaking direction of the user's electronic device and the sliding direction of the user's fingers, providing the user with a certain dynamic image effect.
However, the above static image dynamic processing method has the following problems. In the process of animating the static image, too many image types are depended on, such as the depth map, the background completion map and the contour map; this requires technicians to master the generation schemes for the different image types, which to a certain extent brings a rather high technical implementation difficulty. Meanwhile, this processing method has certain limitations in saving the relevant settings of the image motion track and in precise numerical control of the motion track, and the display form is limited to a mobile phone album picture that cannot be fused with other scenes, so the interactivity is not strong.
In the course of researching static image dynamic processing methods to solve the above problems, the inventors found that schemes tied to the form of the electronic device depend on too many image types, are technically difficult to implement, require technicians to master the obscure native WebGL language, and cannot realize the dynamic processing of static images efficiently. Meanwhile, in the process of dynamically processing a static image, the relevant settings of the image motion track cannot be saved in real time, customized adjustment cannot be performed in terms of precise numerical control of the motion track, and the final dynamic image display form is limited to a mobile phone album picture that cannot be fused with other scenes, which significantly reduces the interaction experience. The inventors therefore considered whether the dependence on the form of the electronic device could be eliminated in the dynamic processing of static images, reducing the technical implementation difficulty while increasing interactivity. On this basis, the inventors propose a method of creating dynamic images based on three.js. three.js is a WebGL engine with which various three-dimensional scenes can be created, so that a technician can quickly build three-dimensional scenes in the browser without mastering the obscure native WebGL language. Specifically, in a browser runtime environment supporting the WebGL technology, the user logs in to the client of the dynamic processing platform, uploads a static image and a depth map to the corresponding positions, and sets related parameters of the image according to the platform instructions, such as the horizontal offset amplitude, the vertical offset amplitude, automatic offset and mouse offset; the server of the dynamic processing platform then returns the final dynamic image for presentation to the user. To ensure that the image is not excessively torn, the server of the dynamic processing platform can set maximum values for the horizontal offset amplitude and the vertical offset amplitude, so that the user can adjust the image within those maxima. Meanwhile, in the process of animating the static image, the server of the dynamic processing platform mainly relies on the use of the third-party library, which reduces the technical implementation difficulty. After the user obtains a dynamic image meeting actual needs, the dynamic image can be fused with other scenes, for example through click linkage, which enhances interaction to a certain extent and improves the user experience.
Fig. 1 is a schematic application scenario diagram of a static image dynamic processing method provided in the present application. As shown in fig. 1, an application scenario of the static image dynamic processing method provided in the present application includes a server side 100 of a dynamic processing platform and a client side 101 of the dynamic processing platform. The server 100 of the dynamic processing platform is connected with the client 101 of the dynamic processing platform through a communication network.
The server 100 of the dynamic processing platform is mainly based on three-party libraries, and provides a function of dynamic image generation for users. The server 100 of the dynamic processing platform supports adjustment of image related parameters, so as to achieve simplicity of the dynamic image generating process. Although only one server 100 of the dynamic processing platform is shown in fig. 1, it should be understood that there may be two or more servers 100 of the dynamic processing platform.
The client 101 of the dynamic processing platform mainly provides a user with a portal for uploading still images and depth maps, and a function for setting image-related parameters. Although only one client 101 of the dynamic processing platform is shown in fig. 1, it should be understood that there may be two or more clients 101 of the dynamic processing platform.
When a user has a requirement for dynamic processing of a static image, the client 101 of the dynamic processing platform can be logged in based on the running environment supporting the browser of WebGL technology. The static image and the corresponding depth map are uploaded according to the relevant instructions of the client 101 of the dynamic processing platform and some parameters are set with respect to the image. When the server 100 of the dynamic processing platform receives the dynamic processing requirement of the static image uploaded by the client 101 of the dynamic processing platform, a series of processing is started on the static image and the depth map uploaded by the user based on the three-party library, so as to obtain the material of the shader, the three-dimensional scene and the geometric plane, generate the dynamic image, and return the dynamic image to the client 101 of the dynamic processing platform for presentation to the user.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a first embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 2, the flow of the static image dynamic processing method may include:
S201: acquiring the static image uploaded by the user and the depth map of the static image.
In this step, based on the user's need to animate a static image, the user can upload the corresponding static image and depth map at the file upload entry provided by the client of the dynamic processing platform. After the upload is completed, the server of the dynamic processing platform acquires the static image uploaded by the user and the depth map of the static image.
Here, a static image refers to a single, static picture frame. For example, files in JPG, JPEG and BMP formats are all static images.
The depth map of a static image, also called a range image, represents the distance from each point in the image to the camera and directly reflects the geometry of the visible surfaces in the scene. A depth image can be converted into point cloud data through coordinate transformation, and regular point cloud data with the necessary information can conversely be converted back into depth image data. Methods for obtaining the depth map of a static image include lidar depth imaging, computer stereo vision imaging, coordinate measuring machine methods, moiré fringe methods, structured light methods and the like.
The server of the dynamic processing platform is a device that provides the static image animation function; it may be, for example, a processing device running the static image dynamic processing platform, or the platform's server.
S202: respectively extracting textures of the static image and the depth map according to the texture loader provided by the third-party library, to obtain the static image texture and the depth image texture.
In this step, after the server of the dynamic processing platform obtains the static image and the depth map of the static image uploaded by the user in step S201, it extracts the textures of the static image and the depth map respectively according to the texture loader provided by the third-party library, to obtain the static image texture and the depth image texture.
Here, three.js is a WebGL third-party library written in JavaScript that provides many 3D display functions. Specifically, three.js is a 3D engine running in the browser that can be used to create various three-dimensional scenes, including objects such as cameras, shadows and textures.
The texture loader is provided by the three.js third-party library. A texture is an outer skin wrapped around a geometry, much like a decal pasted on a wall. There are generally two ways to create a texture. The first is to use the Texture constructor directly, i.e. new THREE.Texture(image, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy, encoding), where the image is a picture object typically created with the ImageUtils or ImageLoader classes; the image object may be an image (such as PNG, JPG, GIF or DDS), a video (such as MP4 or OGG/OGV), or a cube map. The second is to load the texture with a texture loader, which can be written as var texture = new THREE.TextureLoader().load(url). TextureLoader is a texture loader class; calling its load method on a picture returns a Texture object, which can be used as the value of the map attribute of a model material's color map.
For example, if the name of the static image is a.png and the name of its corresponding depth map is b.png, the server of the dynamic processing platform extracts the textures of a.png and b.png respectively by means of the texture loader provided by the third-party library, where the two extractions are written as var texture1 = new THREE.TextureLoader().load("texture/a.png") and var texture2 = new THREE.TextureLoader().load("texture/b.png").
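As a minimal, runnable sketch of this extraction step, assuming the two files are served at the texture/ paths used in the example above and an ES-module build of three.js:

import * as THREE from 'three';

// Extract the textures of the user-uploaded static image (a.png)
// and its depth map (b.png) with the texture loader.
const loader = new THREE.TextureLoader();
const imageTexture = loader.load('texture/a.png'); // static image texture
const depthTexture = loader.load('texture/b.png'); // depth image texture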
S203: defining uniforms variables and creating a vertex shader according to the shader material creation method provided by the third-party library.
In this step, after the static image texture and the depth image texture have been obtained through the texture loader in step S202, the server of the dynamic processing platform defines uniforms variables and creates a vertex shader according to the shader material creation method provided by the third-party library.
Here, a shader material is in essence a material; its purpose is to create a mesh, the most common visible object in 3D computer graphics, which is built from a geometry and a material.
A uniforms variable, also known as a uniform variable, is applicable to both the vertex shader and the fragment shader, and serves to pass data from the application to the vertex shader or the fragment shader. Inside a shader program, a uniforms variable behaves like a constant in a programming language, i.e. its value cannot be modified by the shader program; it is typically used to represent transformation matrices, lighting parameters, texture samplers and the like.
A vertex shader is a shader that processes vertices, executed once per vertex, and provides a general programmable way to operate on vertices. The inputs of the vertex shader include attributes, uniform data and the shader program itself. Attributes are per-vertex data provided through vertex arrays; uniform data are the constant data used by the vertex shader; the shader program is the vertex shader source code or executable file describing the operations to be performed on the vertices. When processing vertices, the vertex shader needs to obtain the vertex data of the model, such as vertex positions, normal information and vertex colors.
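Continuing the sketch, and assuming uniform names (uImage, uDepth, uMouse, uTime) chosen here purely for illustration, this step might look as follows:

// Define the uniforms variables that the application passes to the shaders.
const uniforms = {
  uImage: { value: imageTexture },            // static image texture
  uDepth: { value: depthTexture },            // depth image texture
  uMouse: { value: new THREE.Vector2(0, 0) }, // mouse position, for offset by mouse
  uTime:  { value: 0.0 }                      // elapsed time, for automatic offset
};

// Create the vertex shader; it forwards each vertex's uv coordinates
// to the fragment shader through the varying vUv.
const vertexShader = `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;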
S204: calculating the offset in the horizontal direction and the offset in the vertical direction respectively, according to a preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user.
In this step, based on the depth image texture obtained through the texture loader in step S202, the server of the dynamic processing platform calculates the offset in the horizontal direction and the offset in the vertical direction according to the preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user, for the subsequent generation of new texture samples.
The offset calculation formula is used to adjust the position of the image so that the image can slide, translate and so on within a certain range. It is divided into a horizontal offset formula and a vertical offset formula, and the parameters involved include the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user.
The depth values of the depth image texture are obtained by sampling the depth image texture in the fragment shader; they generally range from 0 to 1 and typically show a nonlinear distribution.
The uv coordinate system is a two-dimensional coordinate system used in computer graphics to locate a point on a texture image, where u represents the position in the horizontal direction and v the position in the vertical direction. It typically uses values between 0 and 1 to represent pixel positions on a texture. When the corresponding object is rendered based on the third-party library, the uv coordinates are mapped to the object's vertex coordinates, and the whole texture is attached to the surface of the three-dimensional model during rendering, achieving a finer texture representation. When the offsets are calculated, the user can set the corresponding uniforms variable values based on actual needs, personalizing the process of animating the static image.
S205: generating new texture samples according to the static image texture, the offset in the horizontal direction and the offset in the vertical direction, to obtain the drawn fragment shader.
In this step, based on the horizontal and vertical offsets calculated in step S204 and the static image texture obtained in step S202, the server of the dynamic processing platform generates new texture samples, thereby obtaining the drawn fragment shader.
The fragment shader, also called a pixel shader, mainly processes the pixel results finally displayed on the screen. It receives the data passed from the vertex shader and calculates the color of each pixel.
The server of the dynamic processing platform samples the static image texture in the fragment shader to obtain the pixel value of each fragment, and then generates new texture samples by combining the offsets in the horizontal and vertical directions, thereby completing the drawing of the fragment shader.
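Continuing the sketch, the drawn fragment shader and the resulting shader material might look as follows; the offset lines use the formulas and coefficients (0.02 and 0.03) given in the embodiment-eight description below:

// Draw the fragment shader: sample the depth texture, compute the horizontal
// and vertical offsets, and generate the new texture sample from the static image.
const fragmentShader = `
  uniform sampler2D uImage;
  uniform sampler2D uDepth;
  uniform vec2 uMouse;
  uniform float uTime;
  varying vec2 vUv;
  void main() {
    float depthValue = texture2D(uDepth, vUv).r;                   // depth value
    float x = vUv.x + (uMouse.x + sin(uTime)) * 0.02 * depthValue; // horizontal offset
    float y = vUv.y + (uMouse.y + sin(uTime)) * 0.03 * depthValue; // vertical offset
    gl_FragColor = texture2D(uImage, vec2(x, y));                  // new texture sample
  }
`;

// Combine everything into a shader material.
const material = new THREE.ShaderMaterial({ uniforms, vertexShader, fragmentShader });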
S206: generating a dynamic image according to the three-dimensional scene and the plane geometry created based on the methods provided by the third-party library, the vertex shader and the fragment shader.
In this step, after the vertex shader has been created in step S203 and the fragment shader drawn in step S205, the server of the dynamic processing platform still needs to create a three-dimensional scene and a plane geometry based on the methods provided by the third-party library, thereby completing the generation of the dynamic image.
The three-dimensional scene includes a scene, a perspective camera, a renderer and the like, and is mainly used for displaying the generated dynamic image.
The plane geometry is created based on the method provided by the third-party library. The third-party library offers many common types of geometry, including points, lines, line segments, polygons, cubes, spheres and so on. For example, a sphere is a closed surface made up of a set of discrete points all equidistant from a center point; in the third-party library, a sphere geometry can be created by specifying the radius of the sphere and the numbers of latitude and longitude segments. In one possible implementation, since the static image uploaded by the user is planar, a plane geometry is created through the third-party library.
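For illustration, sketches of both cases, with placeholder sizes:

// A sphere geometry is specified by its radius and its numbers of
// width (longitude) and height (latitude) segments.
const sphereGeometry = new THREE.SphereGeometry(5, 32, 16);

// The static image is planar, so a plane geometry is used instead;
// its width and height would typically match the image's aspect ratio.
const planeGeometry = new THREE.PlaneGeometry(4, 3);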
The static image dynamic processing method provided by this embodiment mainly concerns how the server of the dynamic processing platform generates a dynamic image from the static image and the depth map uploaded by the user. Throughout the process, the server mainly creates the vertex shader and the fragment shader by means of the methods provided by the third-party library, thereby completing the drawing of the shader material, where the fragment shader is obtained from the offsets in the horizontal and vertical directions and the static image texture. This static image dynamic processing method allows the user to adjust the image within a certain range, enhances the interaction between the user and the image, and at the same time reduces the technical difficulty of animating the static image.
Fig. 3 is a flowchart of a second embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 3, on the basis of the above embodiment, the flow of the static image dynamic processing method further includes:
S301: displaying the dynamic image to the user through the client of the dynamic processing platform.
In this step, after the server side of the dynamic processing platform completes the dynamic processing of the static image, the generated dynamic image is displayed to the user through the client side of the dynamic processing platform.
The client of the dynamic processing platform is connected to the server of the dynamic processing platform through a network connection. After the server generates the corresponding dynamic image according to the user's needs, it transmits the generated dynamic image to the client over the network connection for display to the user. If the user views the dynamic image and it meets the user's needs, the dynamic image can be embedded into the corresponding front-end engineering project and displayed together with other page elements, or embedded elsewhere, thereby fulfilling the user's requirement. If the dynamic image does not meet the user's needs, the user can continue to edit and modify it and feed the changes back to the server of the dynamic processing platform, until the generated dynamic image meets the user's needs.
The static image dynamic processing method provided by this embodiment mainly concerns how the generated dynamic image is handled. After the server of the dynamic processing platform completes the dynamic processing of the static image, the generated dynamic image is displayed to the user through the client of the dynamic processing platform. This process can be iterated until the generated dynamic image meets the user's needs. The method provides the user with an editing function, so that the user can repeatedly modify the generated dynamic image; at the same time, the relevant settings made by the user are saved, which reduces repetitive work and improves the user experience.
Fig. 4 is a flowchart of a third embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 4, on the basis of any one of the above embodiments, the flow of the static image dynamic processing method may include:
S401: receiving values of the uniforms variables selected by the user and uploaded through the client of the dynamic processing platform, wherein the values of the uniforms variables comprise: a first variable value indicating whether the offset is performed automatically, a second variable value indicating whether the offset follows the position of the mouse, and an adjustable coefficient controlling the magnitude of the offset.
In this step, the user uploads the static image and the depth map at the client of the dynamic processing platform, and selects the values of the corresponding uniforms variables according to actual needs.
The values of the uniforms variables comprise three parts: the first part is a first variable value indicating whether the offset is performed automatically, the second part is a second variable value indicating whether the offset follows the position of the mouse, and the third part is an adjustable coefficient controlling the magnitude of the offset.
The first variable value, indicating whether the offset is automatic, mainly sets whether the image offsets automatically at a given interval; for example, if the image is set to offset automatically every 0.3 seconds, the finally generated dynamic image will perform a certain automatic offset every 0.3 seconds.
The second variable value, indicating whether the offset follows the position of the mouse, mainly sets whether the image moves with the mouse; for example, if the user sets the image to offset according to the position of the mouse, the finally generated dynamic image will shift along the direction of mouse movement.
The adjustable coefficient controlling the magnitude of the offset is an adjustable value that determines how large the offset is. For example, if the adjustable coefficient is set to 0.3, that value is substituted into the preset offset calculation formulas when the horizontal and vertical offsets are subsequently calculated. If the adjustable coefficient is not set, i.e. it is 0, this adjustment value does not take part in the subsequent calculation of the horizontal and vertical offsets.
After the user completes the selection of the corresponding uniforms variable values at the client of the dynamic processing platform, the server of the dynamic processing platform receives the values of the uniforms variables selected by the user and uploaded through the client, providing data support for the subsequent offset calculation.
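For illustration, a sketch of how the three user-selected values might be carried from the client to the server; the field names are hypothetical and not fixed by this description:

// Hypothetical payload uploaded by the client of the dynamic processing platform.
const userSelection = {
  autoOffset: true,   // first variable value: offset automatically over time
  followMouse: true,  // second variable value: offset according to the mouse position
  coefficient: 0.3    // adjustable coefficient controlling the offset magnitude
};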
The static image dynamic processing method provided by this embodiment mainly concerns what the values of the uniforms variables comprise and how they are used. The values of the uniforms variables comprise three parts, namely whether to offset automatically, whether to follow the mouse, and the adjustable coefficient. All three can be selected and set by the user according to actual needs, so that the user can make customized adjustments to the dynamic image, which improves the interaction between the user and the image.
Fig. 5 is a flowchart of a fourth embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 5, before the textures of the static image and the depth map are respectively extracted with the texture loader provided by the third-party library to obtain the static image texture and the depth image texture, the flow of the static image dynamic processing method further includes:
S501: creating a three-dimensional scene according to the third-party library, wherein the three-dimensional scene includes a scene, a perspective camera and a renderer.
In this step, before extracting the textures of the static image and the depth map with the texture loader provided by the third-party library three.js to obtain the static image texture and the depth image texture, the server of the dynamic processing platform needs to create a three-dimensional scene according to the three.js third-party library.
The three-dimensional scene includes a scene, a perspective camera and a renderer. The scene is the carrier of everything visible in the browser; it is the container of all content in three.js. For example, a scene can be created with the code const scene = new THREE.Scene().
The perspective camera is a way of creating, in the browser, objects that match what people see in the real world; what is observed is affected by attributes of the perspective camera such as its angle and position. The perspective camera simulates the human eye and can effectively reproduce the real world as the eye sees it. In three.js, a perspective camera can be created with the code new THREE.PerspectiveCamera(fov, aspect, near, far).
The renderer provides the function of quickly rendering the created objects onto the page; it calculates how the scene objects will be drawn in the browser based on the camera's angle.
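A minimal sketch of creating the three-dimensional scene; the camera parameters are placeholder values:

// Create the scene, the perspective camera and the renderer.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,                                      // fov: vertical field of view, in degrees
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near clipping plane
  1000                                     // far clipping plane
);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement); // attach the rendered canvas to the page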
The static image dynamic processing method provided by this embodiment mainly concerns the process of creating a three-dimensional scene based on the third-party library. The three-dimensional scene includes three parts: the scene, the perspective camera and the renderer. The scene is the container of objects, into which a technician can put whatever is needed; the camera faces the scene and captures an appropriate view of it; the renderer takes the picture captured by the camera and puts it into the browser for display. Based on the creation of the three-dimensional scene, the display of the dynamic image can be realized, meeting the actual needs of users.
Fig. 6 is a flowchart of a fifth embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 6, on the basis of any one of the above embodiments, the generating of a dynamic image according to the three-dimensional scene and the plane geometry created based on the methods provided by the third-party library, the vertex shader and the fragment shader may include:
S601: creating the plane geometry according to the method provided by the third-party library three.js.
In this step, after the server of the dynamic processing platform has obtained the shader material, a plane geometry is created based on the method provided by the third-party library.
The plane geometry is an essential component for generating the dynamic image. three.js provides many ways to create geometry objects, such as BoxGeometry and SphereGeometry, and it also allows custom plane geometry objects. Because the static image is planar, the geometry created through the three.js third-party library is a plane geometry. For example, the size of a plane can be defined by the code var planeGeometry = new THREE.PlaneGeometry(width, height), and the appearance of the plane geometry is set by a material object.
S602: creating the mesh from the third-party library three.js, the plane geometry, the vertex shader and the fragment shader.
In this step, after the plane geometry has been created according to the method provided by the third-party library in step S601, the server of the dynamic processing platform combines the vertex shader and the fragment shader to create the mesh.
Here, the vertex shader and the fragment shader together form a shader material, and this material is used to create the corresponding plane object. Meshes are the visible objects in computer graphics used to display various 3D objects. The essential elements for creating a mesh are a plane geometry, which defines the shape of the mesh, and a shader material, which defines the appearance of the mesh surface so as to simulate the effect that the object surface finally exhibits under light, shadow and so on.
S603: adding the mesh into the three-dimensional scene to obtain the dynamic image.
In this step, after the server of the dynamic processing platform has created the mesh in step S602, the mesh is added into the three-dimensional scene to obtain the final dynamic image.
The mesh is created based on the plane geometry and the shader material; after the mesh is added into the scene, the server of the dynamic processing platform can present the corresponding dynamic image. For example, a cylinder with a height of 3 and a radius of 12 can be created with the method provided by the third-party library, i.e. var geometry = new THREE.CylinderGeometry(12, 12, 3), and displayed once wrapped in a mesh and added to the scene.
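Continuing the sketch from the previous steps, S601 to S603 amount to the following:

// Create the plane geometry, combine it with the shader material into a mesh,
// add the mesh to the three-dimensional scene and render it.
const geometry = new THREE.PlaneGeometry(4, 3); // placeholder size
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
renderer.render(scene, camera);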
The static image dynamic processing method provided by this embodiment mainly describes the elements required for generating the dynamic image. The server of the dynamic processing platform first creates a mesh based on the plane geometry created by the method provided by the third-party library and the shader material formed by the vertex shader and the fragment shader, and then adds the mesh into the three-dimensional scene to generate the corresponding dynamic image. The process of animating the static image depends on few elements and has a low technical implementation difficulty, and the generated dynamic image can be fed back to the user in a short time, improving the user experience.
Fig. 7 is a flowchart of a sixth embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 7, on the basis of any one of the above embodiments, the process of acquiring the static image uploaded by the user and the depth map of the static image may include:
S701: receiving the static image and the depth map of the static image uploaded by the client of the dynamic processing platform.
In this step, before implementing the dynamic processing of the static image, the server side of the dynamic processing platform needs to acquire the static image and the depth map corresponding to the static image. In one possible implementation, a user uploads a static image and a depth map corresponding to the static image through a client of a dynamic processing platform.
The storage location of the static image may be a local device of the user, such as a mobile phone, a tablet computer, etc. The storage format may be BMP, GIF, JPEG, wherein the GIF format is relatively special and is classified into a dynamic GIF file format and a static GIF file format.
The depth map corresponding to the static image, also called a range image, records the distance from the shooting device to each point in the scene and reflects the geometry of objects in the scene. There are various ways to obtain it: for example, depth information can be acquired with sensors such as lidar or structured light and then converted into a depth image, or depth can be calculated from the parallax information of a binocular or multi-view camera and then converted into a depth image. The user can select a suitable acquisition method according to the actual situation to obtain the depth map corresponding to the static image.
The static image dynamic processing method provided by this embodiment mainly describes one way in which the server of the dynamic processing platform acquires the static image and the corresponding depth map. In this implementation, the static image and its corresponding depth map are both provided by the user; after the user uploads the corresponding files through the client of the dynamic processing platform, the server receives them and performs the subsequent dynamic processing of the static image. This approach provides the user with an image upload entry for all of the user's image types and realizes the whole process of animating the static image, thereby reducing the technical implementation difficulty and enhancing the interaction between the user and the image.
Fig. 8 is a flowchart of a seventh embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 8, on the basis of any one of the above embodiments, the process of acquiring the static image uploaded by the user and the depth map of the static image may include:
S801: receiving the static image uploaded by the client of the dynamic processing platform.
In this step, before implementing the dynamic processing of the static image, the server side of the dynamic processing platform needs to acquire the static image and the depth map corresponding to the static image. In one possible implementation, the user may upload the static image through the client of the dynamic processing platform, and the subsequent processing operation is performed by the server of the dynamic processing platform.
For the storage location and storage format of the static image, refer to step S701; details are not repeated here.
S802: performing depth extraction on the static image according to a preset depth map extraction method to obtain the depth map of the static image.
In this step, in the dynamic generation process of the static image, the required image types include the static image and the depth map corresponding to the static image, so after the server side of the dynamic processing platform obtains the static image from the client side of the dynamic processing platform in step S801, the server side of the dynamic processing platform also needs to perform depth extraction on the static image according to a preset depth map extraction method, so as to obtain the depth map of the static image.
The preset depth map extraction method is integrated at the server side of the dynamic processing platform, and after the server side of the dynamic processing platform receives the image transmitted by the client side of the dynamic processing platform, whether the generation of the depth map corresponding to the static image is performed is judged based on the image type. In one possible implementation manner, the user only uploads the static image through the client of the dynamic processing platform, and at this time, a depth map extraction method preset by the server of the dynamic processing platform is triggered to extract a corresponding depth map of the static image.
The principle of the depth map implementation is that the depth value of each pixel point is calculated according to the information such as the color, the texture and the like of the pixels in the static image. The preset depth map extraction method can be realized in an algorithm mode, such as a depth estimation method based on a single image, a depth estimation method based on a plurality of images, a deep learning method and the like, and a depth map corresponding to the static image can be obtained.
The static image dynamic processing method provided by this embodiment mainly describes another way in which the server of the dynamic processing platform acquires the static image and the corresponding depth map. In this implementation, the static image is provided by the user, who uploads the corresponding file through the client of the dynamic processing platform, while the depth map corresponding to the static image is obtained by the server through depth extraction with a preset depth map extraction method. In this way, the depth map can be derived from the static image uploaded by the user; the user does not need to generate a depth map, which improves the efficiency of animating the static image.
Fig. 9 is a flowchart of an eighth embodiment of the static image dynamic processing method provided in the present application. As shown in fig. 9, on the basis of any one of the above embodiments, the calculating of the offset in the horizontal direction and the offset in the vertical direction respectively, according to the preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader and the values of the uniforms variables selected by the user, may include:
S901: calculating the offset in the horizontal direction according to a preset horizontal offset calculation formula, the depth value of the depth image texture, the horizontal coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user.
In this step, when the server of the dynamic processing platform draws the fragment shader, the corresponding offsets in the horizontal and vertical directions need to be set. When the offset in the horizontal direction is calculated according to the preset horizontal offset calculation formula, the parameters involved are the depth value of the depth image texture, the horizontal coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user.
The preset calculation formula of the horizontal offset amount may be:
float x=vUv.x+(uMouse.x+sin(uTime))*0.02*depthValue
in this formula, vUv is uv coordinate information of each vertex, and vuv.x is a coordinate value in the horizontal direction in the uv coordinate information, which is transmitted from the vertex shader. The uMouse is a custom unitorm variable, which refers to the position of the mouse, i.e. whether the parameters can be adjusted or not, and whether the parameters are offset according to the mouse. uTime is also a custom unitorms variable, corresponding to the meaning of whether the image automatically shifts within a certain time interval. 0.02 as a coefficient, can be understood as an adjustable value for controlling the magnitude of the offset, and can be set by a user according to actual requirements. depthValue is a pixel value obtained by texture sampling the depth image.
In one possible implementation, if the user selects neither the uMouse variable nor the uTime variable, those two terms are simply absent from the horizontal offset calculation formula. The specific selection depends on the actual needs of the user.
S902: Calculate the offset in the vertical direction according to a preset vertical offset calculation formula, the depth value of the depth image texture, the vertical coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user; the offset calculation formula comprises the horizontal offset calculation formula and the vertical offset calculation formula.
In this step, after calculating the offset in the horizontal direction according to the preset horizontal offset calculation formula, the server side of the dynamic processing platform calculates the offset in the vertical direction according to the preset vertical offset calculation formula in step S902. The offset calculation formula comprises the horizontal offset calculation formula and the vertical offset calculation formula, and the two differ only in the coordinate value involved: the vertical formula uses the vertical coordinate value in the uv coordinate information of each vertex of the vertex shader, while the horizontal formula uses the horizontal coordinate value.
The preset vertical offset calculation formula may be:
float y = vUv.y + (uMouse.y + sin(uTime)) * 0.03 * depthValue;
In this formula, vUv is the uv coordinate information of each vertex, passed in from the vertex shader, and vUv.y is the vertical coordinate value in that uv coordinate information. uMouse and uTime are both custom uniforms variables for the user to select; their specific meanings are given in step S901 and are not repeated here. The coefficient 0.03 controls the magnitude of the offset and can be set by the user according to actual requirements. Similarly, depthValue is the pixel value obtained by texture-sampling the depth image.
In one possible implementation, neither the uMouse variable nor the uTime variable is set, in which case the preset vertical offset calculation formula may be:
float y = vUv.y;
How these variables are set is selected according to the actual requirements of the user. The offset is therefore not fixed but is controlled by the adjusted parameters; in the initial stage, the server side of the dynamic processing platform presets it to a default value.
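To make the two formulas concrete, a minimal fragment shader sketch is given below in GLSL. It assumes the uniform names uImage and uDepth for the static image texture and the depth image texture; the application itself only names uMouse and uTime, so those two texture names are illustrative assumptions.

// Minimal fragment shader sketch (assumption: uImage/uDepth are
// illustrative uniform names for the static image texture and the
// depth image texture; only uMouse and uTime are named above).
uniform sampler2D uImage;  // static image texture
uniform sampler2D uDepth;  // depth image texture
uniform vec2 uMouse;       // mouse position, for mouse-driven offset
uniform float uTime;       // elapsed time, for automatic offset
varying vec2 vUv;          // uv coordinates passed from the vertex shader

void main() {
  // Texture-sample the depth map to get the per-pixel depth value.
  float depthValue = texture2D(uDepth, vUv).r;

  // Offsets in the horizontal and vertical directions (the preset formulas).
  float x = vUv.x + (uMouse.x + sin(uTime)) * 0.02 * depthValue;
  float y = vUv.y + (uMouse.y + sin(uTime)) * 0.03 * depthValue;

  // Resample the static image at the shifted coordinates: the new pixel value.
  gl_FragColor = texture2D(uImage, vec2(x, y));
}

When such a snippet is used with a three.js ShaderMaterial, three.js injects the default precision qualifiers automatically, so a sketch like this can be passed directly as the fragment shader source string.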
The static image dynamic processing method provided by this embodiment mainly concerns how to calculate the offsets. The offset calculation formula comprises a horizontal offset calculation formula and a vertical offset calculation formula. Both formulas contain values of uniforms variables that the user can select and adjust according to actual requirements, so the user can adjust the image within a certain range, which enhances interactivity.
Fig. 10 is a flowchart of shader material creation provided in the present application. As shown in fig. 10, the flow of shader material creation is as follows:
S1001: The uniforms variables are defined and a vertex shader is created.
S1002: Texture sampling is performed on the depth image.
S1003: A three-dimensional scene is created.
S1004: The offsets in the horizontal and vertical directions are calculated through the preset formulas.
S1005: New pixel values are calculated, and the fragment shader is drawn.
Using the shader material creation method of the three.js library, the uniforms variables such as mouse offset, time, image texture, and depth image texture are first defined and a vertex shader is created. The depth image is then texture-sampled in the fragment shader to obtain the pixel value at each uv coordinate. Combining these depth pixel values, the offsets in the horizontal and vertical directions are calculated through the preset formulas; a new texture sample is then generated from the static image texture and the offsets to obtain the new pixel values, completing the drawing of the fragment shader.
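As an illustration, the shader material creation flow above might be sketched with three.js as follows. This is a minimal sketch under stated assumptions, not the exact code of the application: the uniform names uImage and uDepth are illustrative, and fragmentShaderSource stands for a fragment shader such as the one sketched in the previous embodiment.

import * as THREE from 'three';

// Minimal sketch of shader material creation (assumptions: uImage/uDepth
// are illustrative uniform names; fragmentShaderSource is a fragment
// shader like the one sketched above).
function createShaderMaterial(imageTexture, depthTexture, fragmentShaderSource) {
  // S1001: define the uniforms variables and create the vertex shader.
  const uniforms = {
    uImage: { value: imageTexture },            // static image texture
    uDepth: { value: depthTexture },            // depth image texture
    uMouse: { value: new THREE.Vector2(0, 0) }, // mouse-driven offset
    uTime:  { value: 0.0 },                     // time, for automatic offset
  };

  const vertexShader = `
    varying vec2 vUv;
    void main() {
      vUv = uv; // pass each vertex's uv coordinates to the fragment shader
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `;

  // S1002 to S1005 (depth sampling, offset calculation, new pixel values)
  // run inside the fragment shader at render time.
  return new THREE.ShaderMaterial({
    uniforms,
    vertexShader,
    fragmentShader: fragmentShaderSource,
  });
}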
Fig. 11 is a flowchart for implementing the dynamic processing of a static image provided in the present application. As shown in fig. 11, the static image dynamic process includes:
S1101: Start.
S1102: The user uploads the static image and the depth map.
S1103: A three-dimensional scene is created.
S1104: The image texture and the depth image texture are acquired.
S1105: The shader material is created.
S1106: A plane geometry is created.
S1107: The geometry and the material are combined to create an object, which is added to the scene.
S1108: End.
The user uploads a local static picture and the corresponding depth map on the corresponding website through a browser supporting WebGL, and a three-dimensional scene is created through the three.js third-party library, mainly comprising a scene, a perspective camera, and a renderer. Then the image texture and the depth image texture are acquired from the uploaded images through the texture loader provided by the third-party library, and the shader material is created through the three.js method. Because the image is planar, a plane geometry is created with three.js. Finally, a visible object in computer graphics, namely a mesh, is created through three.js based on the preceding geometry and shader material and added to the scene, after which the corresponding website presents the corresponding dynamic image.
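The overall flow of fig. 11 might be sketched end to end with three.js as follows, again as an illustrative sketch rather than the application's exact code: it assumes the uploaded files are available as object URLs (imageUrl and depthUrl, for example obtained via URL.createObjectURL), reuses the createShaderMaterial helper and fragment shader sketched above, and assumes container is a DOM element on the page.

import * as THREE from 'three';

// End-to-end sketch of fig. 11 (assumptions: imageUrl/depthUrl come from
// the user's uploads; createShaderMaterial and fragmentShaderSource are
// the sketches above; container is a DOM element).
async function animateStaticImage(imageUrl, depthUrl, fragmentShaderSource, container) {
  // S1103: create the three-dimensional scene (scene, perspective camera, renderer).
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(
    45, container.clientWidth / container.clientHeight, 0.1, 100);
  camera.position.z = 2;
  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(container.clientWidth, container.clientHeight);
  container.appendChild(renderer.domElement);

  // S1104: acquire the image texture and the depth image texture
  // through the texture loader provided by three.js.
  const loader = new THREE.TextureLoader();
  const [imageTexture, depthTexture] = await Promise.all([
    loader.loadAsync(imageUrl),
    loader.loadAsync(depthUrl),
  ]);

  // S1105: create the shader material.
  const material = createShaderMaterial(imageTexture, depthTexture, fragmentShaderSource);

  // S1106 and S1107: create a plane geometry, combine geometry and material
  // into a mesh, and add the mesh to the scene.
  const geometry = new THREE.PlaneGeometry(1.6, 0.9);
  scene.add(new THREE.Mesh(geometry, material));

  // Drive uMouse from pointer movement so the offset can follow the mouse.
  container.addEventListener('pointermove', (e) => {
    material.uniforms.uMouse.value.set(
      (e.offsetX / container.clientWidth) * 2 - 1,
      1 - (e.offsetY / container.clientHeight) * 2);
  });

  // Render loop: advance uTime so the image also shifts automatically.
  const clock = new THREE.Clock();
  renderer.setAnimationLoop(() => {
    material.uniforms.uTime.value = clock.getElapsedTime();
    renderer.render(scene, camera);
  });
}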
Fig. 12 is a schematic structural diagram of a static image dynamic processing device according to an embodiment of the present application. As shown in fig. 12, the static image dynamic processing device 1200 includes:
an obtaining module 1201, configured to obtain a static image uploaded by a user and a depth map of the static image;
The extraction module 1202 is configured to extract textures of the static image and the depth map respectively, according to the texture loader provided by a third-party library, so as to obtain a static image texture and a depth image texture;
the processing module 1203 is configured to define the uniforms variables and create a vertex shader according to the shader material creation method provided by the third-party library;
the processing module 1203 is further configured to calculate a horizontal offset and a vertical offset according to a preset offset calculation formula, a depth value of the depth image texture, uv coordinate information of each vertex of the vertex shader, and a value of a uniforms variable selected by a user;
the processing module 1203 is further configured to generate a new texture sample according to the static image texture, the offset in the horizontal direction, and the offset in the vertical direction, to obtain a drawn fragment shader;
the processing module 1203 is further configured to generate a dynamic image according to the three-dimensional scene and the plane geometry created by the method provided by the third-party library, the vertex shader, and the fragment shader.
Optionally, the processing module 1203 is further configured to:
and displaying the dynamic image to a user through a client of the dynamic processing platform.
Optionally, the obtaining module 1201 is further configured to:
receiving the values of the user-selected uniforms variables uploaded by the user through a client of the dynamic processing platform, wherein the values of the uniforms variables comprise: a first variable value indicating whether the offset is performed automatically, a second variable value indicating whether the offset follows the position of the mouse, and an adjustable coefficient controlling the magnitude of the offset.
Optionally, the processing module 1203 is further configured to:
and creating a three-dimensional scene according to the three.js third-party library, wherein the three-dimensional scene comprises a scene, a perspective camera and a renderer.
Optionally, the processing module 1203 is further configured to:
creating a plane geometry according to a method provided by the three.js third-party library;
creating a mesh through the third-party library based on the plane geometry, the vertex shader, and the fragment shader;
and adding the mesh into the three-dimensional scene to obtain a dynamic image.
Optionally, the obtaining module 1201 is further configured to:
receiving a static image and a depth map of the static image uploaded by a client of a dynamic processing platform;
or,
receiving a static image uploaded by a client of a dynamic processing platform;
and carrying out depth extraction on the static image according to a preset depth map extraction method to obtain the depth map of the static image.
Optionally, the processing module 1203 is further configured to:
calculating the offset in the horizontal direction according to a preset horizontal offset calculation formula, the depth value of the depth image texture, the horizontal coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
calculating the offset in the vertical direction according to a preset vertical offset calculation formula, the depth value of the depth image texture, the vertical coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
the offset calculation formula comprises a horizontal offset calculation formula and a vertical offset calculation formula.
The static image dynamic processing device provided by this embodiment can be used to execute the static image dynamic processing method in any of the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
Fig. 13 is a schematic structural diagram of an electronic device for static image dynamic processing provided in the present application. As shown in fig. 13, the electronic device may specifically include a receiver 1300, a transmitter 1301, a processor 1302, and a memory 1303. The receiver 1300 and the transmitter 1301 are used for realizing data transmission between the electronic device and the front-end page, and the memory 1303 stores computer-executable instructions; the processor 1302 executes the computer-executable instructions stored in the memory 1303 to implement the static image dynamic processing method in the above embodiments.
The present embodiment provides a computer-readable storage medium in which computer-executable instructions are stored, which when executed by a processor, are configured to implement the static image dynamic processing method in the above embodiment.
The embodiment of the application also provides a computer program product, which comprises a computer program, and the computer program realizes the static image dynamic processing method provided by any one of the embodiments when being executed by a processor.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments, and that the acts and modules referred to are not necessarily required in the present application.
It should be further noted that, although the steps in the flowchart are sequentially shown as indicated by arrows, the steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least a portion of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order in which the sub-steps or stages are performed is not necessarily sequential, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
It should be understood that the above-described device embodiments are merely illustrative, and that the device of the present application may be implemented in other ways. For example, the division of the units/modules in the above embodiments is merely a logic function division, and there may be another division manner in actual implementation. For example, multiple units, modules, or components may be combined, or may be integrated into another system, or some features may be omitted or not performed.
In addition, each functional unit/module in each embodiment of the present application may be integrated into one unit/module, or each unit/module may exist alone physically, or two or more units/modules may be integrated together, unless otherwise specified. The integrated units/modules described above may be implemented either in hardware or in software program modules.
The integrated units/modules, if implemented in hardware, may be digital circuits, analog circuits, etc. Physical implementations of hardware structures include, but are not limited to, transistors, memristors, and the like. Unless otherwise specified, the processor may be any suitable hardware processor, such as a CPU, GPU, FPGA, DSP, or ASIC. Unless otherwise indicated, the storage elements may be any suitable magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), hybrid memory cube (HMC), etc.
The integrated units/modules may be stored in a computer-readable memory if implemented in the form of software program modules and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. The technical features of the foregoing embodiments may be combined arbitrarily; for brevity, not all possible combinations of these technical features are described, but all such combinations should be considered within the scope of this disclosure.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for dynamically processing a static image, which is applied to a server side of a dynamic processing platform, the method comprising:
acquiring a static image uploaded by a user and a depth map of the static image;
respectively extracting textures of the static image and the depth map according to a texture loader provided by a third-party library to obtain a static image texture and a depth image texture;
defining a uniforms variable and creating a vertex shader according to the shader material creation method provided by the third-party library;
calculating the offset in the horizontal direction and the offset in the vertical direction respectively according to a preset offset calculation formula, the depth value of the depth image texture, uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
generating new texture samples according to the static image textures, the offset in the horizontal direction and the offset in the vertical direction to obtain a drawn fragment shader;
and generating a dynamic image according to the three-dimensional scene and the plane geometry created based on the method provided by the third-party library, the vertex shader, and the fragment shader.
2. The method according to claim 1, wherein the method further comprises:
and displaying the dynamic image to the user through a client of a dynamic processing platform.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving the values of the user-selected uniforms variables uploaded by the user through a client of a dynamic processing platform, wherein the values of the uniforms variables comprise: a first variable value indicating whether the offset is performed automatically, a second variable value indicating whether the offset follows the position of the mouse, and an adjustable coefficient controlling the magnitude of the offset.
4. The method according to claim 1 or 2, wherein before the extracting of the textures of the static image and the depth map respectively according to the texture loader provided by the third-party library, the method further comprises:
and creating the three-dimensional scene according to the third-party library, wherein the three-dimensional scene comprises a scene, a perspective camera and a renderer.
5. The method according to claim 1 or 2, wherein the generating of the dynamic image according to the three-dimensional scene and the plane geometry created based on the method provided by the third-party library, the vertex shader, and the fragment shader comprises:
creating the plane geometry according to the method provided by the third-party library;
creating a mesh through the third-party library based on the plane geometry, the vertex shader, and the fragment shader;
and adding the mesh into the three-dimensional scene to obtain the dynamic image.
6. The method of claim 2, wherein the obtaining the user uploaded static image and the depth map of the static image comprises:
receiving the static image and the depth map of the static image uploaded by a client of the dynamic processing platform;
or,
receiving the static image uploaded by the client of the dynamic processing platform;
and carrying out depth extraction on the static image according to a preset depth map extraction method to obtain the depth map of the static image.
7. The method according to claim 1 or 2, wherein the calculating of the offset in the horizontal direction and the offset in the vertical direction respectively according to the preset offset calculation formula, the depth value of the depth image texture, the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user comprises:
calculating the offset in the horizontal direction according to a preset horizontal offset calculation formula, the depth value of the depth image texture, the horizontal coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
calculating the offset in the vertical direction according to a preset vertical offset calculation formula, the depth value of the depth image texture, the vertical coordinate value in the uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
The offset calculation formula includes the horizontal offset calculation formula and the vertical offset calculation formula.
8. A still image dynamic processing apparatus, comprising:
the acquisition module is used for acquiring the static image uploaded by the user and the depth map of the static image;
the extraction module is used for respectively extracting textures of the static image and the depth map according to a texture loader provided by a third-party library to obtain a static image texture and a depth image texture;
the processing module is used for defining a uniforms variable and creating a vertex shader according to the shader material creation method provided by the third-party library;
the processing module is further used for calculating the offset in the horizontal direction and the offset in the vertical direction according to a preset offset calculation formula, the depth value of the depth image texture, uv coordinate information of each vertex of the vertex shader, and the values of the uniforms variables selected by the user;
the processing module is further used for generating new texture samples according to the static image textures, the offset in the horizontal direction and the offset in the vertical direction to obtain a drawn fragment shader;
the processing module is further used for generating a dynamic image according to the three-dimensional scene and the plane geometry created based on the method provided by the third-party library, the vertex shader, and the fragment shader.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the still image dynamic processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the static image dynamic processing method according to any one of claims 1 to 7.
CN202311617323.0A 2023-11-29 2023-11-29 Method, device, equipment and storage medium for dynamic processing of static image Pending CN117671119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311617323.0A 2023-11-29 2023-11-29 Method, device, equipment and storage medium for dynamic processing of static image

Publications (1)

Publication Number Publication Date
CN117671119A 2024-03-08

Family ID: 90081977

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN118334227A * 2024-06-12 2024-07-12 山东捷瑞数字科技股份有限公司 Visual texture mapping method and system based on three-dimensional engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination