
WO2024212842A1 - Hair processing method and apparatus, and electronic device and storage medium - Google Patents

Hair processing method and apparatus, and electronic device and storage medium

Info

Publication number
WO2024212842A1
WO2024212842A1 (PCT/CN2024/085495)
Authority
WO
WIPO (PCT)
Prior art keywords
hair
guide
particle
posture
initial
Prior art date
Application number
PCT/CN2024/085495
Other languages
French (fr)
Chinese (zh)
Inventor
席维杰
谢选孟
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024212842A1 publication Critical patent/WO2024212842A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • the embodiments of the present disclosure relate to the field of image processing technology, and in particular to a hair processing method, device, electronic device and storage medium.
  • the hair of humans or animals is dynamically simulated on a hair-by-hair basis to produce realistic visual effects.
  • the solution in the related art is usually to simulate only the guide hair in the hair data, and then apply the simulated pose of the guide hair to other ordinary hair, so as to achieve simulation of all hairs.
  • Embodiments of the present disclosure provide a hair processing method, device, electronic device, and storage medium.
  • an embodiment of the present disclosure provides a hair processing method, comprising:
  • hair data is acquired, and the hair data is divided into guide hair and ordinary hair; the guide hair is processed based on its initial pose to obtain a simulated pose of the guide hair; motion information of the guide hair is obtained according to the simulated pose and the initial pose of the guide hair, wherein the motion information represents a posture change feature of the simulated pose of the guide hair relative to the initial pose of the guide hair; and a hair rendering image is obtained according to the motion information and the initial pose of the ordinary hair.
  • an embodiment of the present disclosure provides a hair processing device, comprising:
  • An acquisition module used for acquiring hair data and dividing the hair data into guide hair and ordinary hair
  • a simulation module, used to process the guide hair based on its initial posture to obtain a simulated posture of the guide hair;
  • a processing module configured to obtain motion information of the guide hair according to the simulated pose and the initial pose of the guide hair, wherein the motion information represents a posture change feature of the simulated pose of the guide hair relative to the initial pose of the guide hair;
  • a rendering module, used to obtain a hair rendering image according to the motion information and the initial posture of the ordinary hair.
  • an electronic device including:
  • a processor and a memory communicatively connected to the processor
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory to implement the hair processing method as described in the first aspect and various possible designs of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions; when the computer-executable instructions are executed by a processor, the hair processing method as described in the first aspect and various possible designs of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the hair processing method as described in the first aspect and various possible designs of the first aspect.
  • the hair processing method, device, electronic device and storage medium provided in the present embodiment obtain hair data, the hair data is used to characterize the initial posture of the guide hair and the initial posture of the common hair corresponding to the guide hair; based on the hair data, the guide hair is simulated to obtain the simulated posture of the guide hair; according to the simulated posture and the initial posture of the guide hair, the motion information of the guide hair is obtained, the motion information characterizes the posture change characteristics of the simulated posture of the guide hair relative to the initial posture of the guide hair; according to the motion information and the initial posture of the common hair, a hair rendering image is obtained.
  • the motion information characterizing the posture change characteristics of the guide hair is obtained through the simulated posture of the guide hair, and the motion information is used as the posture change amount to redirect the common hair, so that the common hair obtains the corresponding simulation result based on the original initial posture combined with the motion information, thereby obtaining the hair rendering image.
  • FIG1 is a diagram of an application scenario of a hair processing method provided by an embodiment of the present disclosure;
  • FIG2 is a first flow chart of a hair processing method provided by an embodiment of the present disclosure;
  • FIG3 is a schematic diagram of the posture of a hair provided by an embodiment of the present disclosure;
  • FIG4 is a schematic diagram of the postures of a guide hair and ordinary hairs provided by an embodiment of the present disclosure;
  • FIG5 is a flow chart of a specific implementation of step S102 in the embodiment shown in FIG2;
  • FIG6 is a schematic diagram of a particle motion state of a hair particle provided by an embodiment of the present disclosure;
  • FIG7 is a flow chart of a specific implementation of step S103 in the embodiment shown in FIG2;
  • FIG8 is a schematic diagram of a process of generating a simulated posture of an ordinary hair provided by an embodiment of the present disclosure;
  • FIG9 is a second flow chart of a hair processing method provided by an embodiment of the present disclosure;
  • FIG10 is a flow chart of a specific implementation of step S203 in the embodiment shown in FIG9;
  • FIG11 is a flow chart of another specific implementation of step S203 in the embodiment shown in FIG9;
  • FIG12 is a schematic diagram of a virtual hair particle provided by an embodiment of the present disclosure;
  • FIG13 is a schematic diagram of generating weighted motion information provided by an embodiment of the present disclosure;
  • FIG14 is a flow chart of a specific implementation of step S208 in the embodiment shown in FIG9;
  • FIG15 is a structural block diagram of a hair processing device provided by an embodiment of the present disclosure;
  • FIG16 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure;
  • FIG17 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • user information including but not limited to user device information, user personal information, etc.
  • data including but not limited to data used for analysis, stored data, displayed data, etc.
  • FIG1 is a diagram of an application scenario of the hair processing method provided by the embodiment of the present disclosure.
  • the hair processing method provided by the embodiment of the present disclosure can be applied to application scenarios such as video special effects generation and real-time rendering of games. More specifically, it can be applied to the application scenario of real-time rendering of video special effects.
  • the method provided by the embodiment of the present disclosure can be applied in a terminal device, such as a smart phone, a personal computer, etc.
  • FIG1 takes a smart phone as an example for introduction.
  • the terminal device runs a target application that needs to perform hair special effects rendering, such as a game application, a short video application, etc.
  • the terminal device dynamically renders each hair.
  • taking the hair of the portrait shown in FIG1 as an example, based on the hair processing method provided in the embodiment of the present disclosure, each hair that makes up the hair is dynamically simulated and rendered, thereby achieving the purpose of dynamically displaying the hair of a person or an animal in the image display interface of the target application.
  • the solution in the related art is usually to simulate a small number of guide hairs in the hair data, and then apply the simulated posture of each guide hair to the corresponding large number of ordinary hairs, so as to achieve rapid simulation of all hairs.
  • the embodiment of the present disclosure provides a hair processing method to solve the above-mentioned problem.
  • FIG. 2 is a flow chart of a hair processing method according to an embodiment of the present disclosure.
  • the method of this embodiment can be applied in a terminal device, and the hair processing method includes:
  • Step S101 Acquire hair data, and divide the hair data into guide hair and ordinary hair.
  • hair data is data used to characterize the hair constituting the hair of a person or an animal.
  • the hair data is used to characterize the initial posture of the guide hair and the initial posture of the common hair corresponding to the guide hair.
  • the pose of the corresponding hair can be determined from the hair data, where the pose refers to both the position and the posture (shape) of the hair.
  • FIG3 is a schematic diagram of the posture of a hair provided by an embodiment of the present disclosure. Taking the rendering of a character's hair as an example, as shown in FIG3, among the many hairs constituting the hair, there are hair A and hair B, wherein the posture corresponding to hair A is posture pos_1, and the posture corresponding to hair B is posture pos_2.
  • firstly, hair A and hair B are located at different positions; secondly, hair A and hair B have different postures, more specifically, hair A corresponds to "straight hair" and hair B corresponds to "curly hair".
  • through the poses corresponding to the hairs, the positions and postures of different hairs can be distinguished, so that hairs at different positions show different hair postures.
  • the hair data includes two types of hair data, namely, guide hair data and common hair data.
  • the guide hair is used to represent the overall style, hair distribution, and/or hair direction of a person or animal's hair, and plays a role similar to "skeleton".
  • during simulation, the position change of the guide hair is calculated, and the guide hair is simulated in combination with the constraint relationships between the guide hair and other elements in the virtual simulation environment, thereby realizing the movement of the hair and its interaction with the external environment.
  • ordinary hairs are used to further fill the gaps between the guide hairs, so as to make the overall hair fuller.
  • ordinary hairs do not need to be simulated based on the motion state, so compared with the simulation calculation of the guide hairs, the amount of calculation is smaller.
  • one guide hair corresponds to multiple ordinary hairs.
  • the ratio of guide hair to ordinary hair is 1:100.
  • the hair data includes the pose data of the guide hair and the ordinary hair in the initial state, namely, their initial poses.
  • the initial posture of the guide hair and the initial posture of a group of ordinary hairs corresponding to the guide hair can be determined.
  • FIG4 is a schematic diagram of the posture of a guide hair and ordinary hair provided by an embodiment of the present disclosure.
  • hair A (shown as A in the figure) is the guide hair, and hair A_1, hair A_2, hair A_3 and hair A_4 (shown as A_1, A_2, A_3, A_4) located near hair A are ordinary hairs, wherein hair A_1 has an initial posture pos_1, hair A_2 has an initial posture pos_2, hair A_3 has an initial posture pos_3, and hair A_4 has an initial posture pos_4.
  • different ordinary hairs corresponding to the same guide hair are located at different positions and have different postures, that is, different initial postures. In the subsequent processing steps, each ordinary hair will be simulated based on the different initial postures of each ordinary hair, so that it retains part of the initial posture.
  • the position (initial position) of the guide hair in the initial state and the position (initial position) of the common hair in the initial state may be preset according to the specific needs of the target application or may be randomly generated, and no specific limitation is imposed here. In a group of common hairs corresponding to the same guide hair, the initial positions corresponding to the common hairs are not completely the same.
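  • As an illustration of how such hair data might be organized in memory (the structure and field names below are assumptions made for the sketch, not a data format specified by the disclosure), a guide hair and its group of ordinary hairs can each be stored as an ordered list of 3D particle coordinates together with optional mapping weights:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # 3D particle coordinate

@dataclass
class HairStrand:
    # Ordered particle coordinates from root to tip; the strand's
    # initial pose is the full sequence of these coordinates.
    particles: List[Vec3]

@dataclass
class GuideHairGroup:
    guide: HairStrand                                          # the guide hair ("skeleton")
    ordinary: List[HairStrand] = field(default_factory=list)   # e.g. many ordinary hairs per guide
    weights: List[float] = field(default_factory=list)         # optional mapping weights in (0, 1]

# Example: one guide hair with two ordinary hairs at nearby, slightly different positions.
group = GuideHairGroup(
    guide=HairStrand([(0.0, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.2, 0.0)]),
    ordinary=[
        HairStrand([(0.01, 0.0, 0.0), (0.01, 0.1, 0.02), (0.01, 0.2, 0.03)]),
        HairStrand([(-0.01, 0.0, 0.0), (-0.01, 0.1, -0.02), (-0.01, 0.2, -0.01)]),
    ],
    weights=[1.0, 0.7],
)
```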
  • Step S102 processing the guide hair based on its initial posture to obtain a simulated posture of the guide hair.
  • the guide hair is simulated, that is, the position and posture of the guide hair in the next frame are predicted, thereby obtaining the simulated posture.
  • in one possible implementation, the guide hair performs a random movement: based on the initial posture of the guide hair, the initial posture is randomly changed within a certain range to obtain a posture different from the initial posture, that is, the simulated posture. More specifically, for example, the guide hair and the ordinary hair in the hair data are both composed of multiple hair particles, and the distance between adjacent hair particles is fixed, so there is a distance constraint between adjacent hair particles; a random position offset is applied to each hair particle of the guide hair, subject to this constraint, to obtain the simulated posture of the guide hair.
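  • A minimal sketch of this idea, assuming each strand is a chain of particles with a fixed rest spacing (the offset amplitude and the simple re-projection step are illustrative choices, not taken from the disclosure):

```python
import math
import random

def constrain_spacing(particles, rest_len):
    """Project each particle so that adjacent particles keep a fixed distance."""
    out = [particles[0]]            # the root particle is kept fixed
    for p in particles[1:]:
        prev = out[-1]
        d = [p[i] - prev[i] for i in range(3)]
        norm = math.sqrt(sum(c * c for c in d)) or 1e-9
        out.append(tuple(prev[i] + d[i] * rest_len / norm for i in range(3)))
    return out

def jitter_strand(particles, rest_len, amplitude=0.01):
    """Apply a small random offset to every particle, then re-enforce the spacing constraint."""
    moved = [tuple(c + random.uniform(-amplitude, amplitude) for c in p) for p in particles]
    moved[0] = particles[0]         # keep the root on the scalp
    return constrain_spacing(moved, rest_len)

guide = [(0.0, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.2, 0.0)]
simulated_pose = jitter_strand(guide, rest_len=0.1)
```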
  • in one embodiment, the hair data includes the initial coordinates of the hair particles constituting the guide hair; as shown in FIG5, a specific implementation of step S102 includes:
  • Step S1021 acquiring a particle motion state of a hair particle of the guide hair, wherein the particle motion state represents a motion speed and/or motion acceleration of the hair particle.
  • Step S1022 obtaining simulation coordinates according to the initial coordinates of the hair particle and the corresponding particle motion state.
  • Step S1023 obtaining a simulated posture of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.
  • here, the hair particles refer to the hair particles of the guide hair; the hair particles of the ordinary hair will be introduced in the subsequent steps.
  • the particle motion state of a hair particle refers to the motion speed and/or motion acceleration of the hair particle in the motion state.
  • the motion speed and motion acceleration of a hair particle are generated by the traction force exerted on it by the other hair particles adjacent to it on the guide hair.
  • the motion speed and motion acceleration of a hair particle can also be generated by the constraint force exerted on it by external objects in the image environment (such as other hair, obstacles in the environment, etc.).
  • FIG6 is a schematic diagram of a particle motion state of a hair particle provided by an embodiment of the present disclosure.
  • the guide hair L1 includes a hair particle P1, a hair particle P2 and a hair particle P3 (shown as P1, P2 and P3 in the figure).
  • the hair particle P1 is a hair root particle, and its coordinates lie on the surface of the head model (corresponding to the hair root located on the scalp).
  • when the head model moves, the hair particle P1 is driven to move accordingly; then, the hair particle P2 adjacent to the hair particle P1 is subjected to the traction force N1 of the hair particle P1 and moves, so that the hair particle P2 produces a corresponding speed v2 and acceleration a2 (in the same direction as N1); and the hair particle P3 adjacent to the hair particle P2 is likewise subjected to the traction force N2 of the hair particle P2, and produces a corresponding speed v3 and acceleration a3 (in the same direction as N2).
  • the speed and acceleration of the hair particles shown in the figure are only for illustration.
  • the particle motion state of each hair particle is affected by the specific model parameters of the guide hair, and can be obtained by simulation based on these model parameters through existing dynamics simulation tools; the specific implementation will not be repeated here.
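  • The disclosure defers this step to existing dynamics simulation tools; purely as an illustrative stand-in, a simple spring-like traction model between adjacent particles could produce per-particle accelerations along these lines (stiffness and mass values are assumed for the sketch):

```python
import math

def traction_accelerations(particles, rest_len, stiffness=50.0, mass=1.0):
    """Toy spring model: each particle is pulled toward its neighbours so that the
    spacing returns to rest_len; returns one 3D acceleration per particle."""
    n = len(particles)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n - 1):
        a, b = particles[i], particles[i + 1]
        d = [b[k] - a[k] for k in range(3)]
        dist = math.sqrt(sum(c * c for c in d)) or 1e-9
        # Traction force magnitude proportional to how far the spacing deviates from rest.
        f = stiffness * (dist - rest_len)
        for k in range(3):
            acc[i][k] += f * d[k] / dist / mass       # particle i pulled toward i+1
            acc[i + 1][k] -= f * d[k] / dist / mass   # particle i+1 pulled toward i
    acc[0] = [0.0, 0.0, 0.0]  # the root particle is driven by the head model, not by traction
    return acc
```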
  • the initial coordinates of the corresponding hair particle are updated according to the particle motion state of each hair particle of the guide hair in the hair data, and the simulation coordinates corresponding to each hair particle are obtained.
  • in one possible implementation, the simulation coordinate is obtained through the update formula x_(t+1) = x_t + v_t · dt + (1/2) · a_t · dt^2, wherein:
  • x_(t+1) is the position coordinate of the hair particle at time t+1;
  • x_t is the position coordinate of the hair particle at time t;
  • v_t is the velocity of the hair particle at time t;
  • a_t is the acceleration of the hair particle at time t;
  • dt is the time interval between the two moments (e.g., time t and time t+1).
  • in the formula, x_(t+1) is the simulation coordinate, x_t is the initial coordinate, and v_t and a_t constitute the particle motion state.
  • the specific process is similar and will not be repeated here.
  • the simulation posture of the guide hair can be obtained.
  • the simulation coordinates of the hair particles corresponding to the guide hair are three-dimensional coordinates, which can be represented by an array; the simulation coordinates of each hair particle are combined to obtain the corresponding simulation posture, which can be an array, a matrix or a structure, which will not be described in detail.
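  • A minimal sketch of this per-particle update and of collecting the simulation coordinates into a simulated pose (variable names follow the formula above; the fixed time step is an assumed value):

```python
def step_particle(x_t, v_t, a_t, dt=1.0 / 60.0):
    """x_{t+1} = x_t + v_t*dt + 0.5*a_t*dt^2, applied per coordinate."""
    return tuple(x_t[k] + v_t[k] * dt + 0.5 * a_t[k] * dt * dt for k in range(3))

def simulate_guide_pose(initial_coords, velocities, accelerations, dt=1.0 / 60.0):
    """Update every hair particle of the guide hair and collect the simulated pose."""
    return [step_particle(x, v, a, dt)
            for x, v, a in zip(initial_coords, velocities, accelerations)]

# Example: two particles, the second one being pulled along +y by its neighbour.
init = [(0.0, 0.0, 0.0), (0.0, 0.1, 0.0)]
vel = [(0.0, 0.0, 0.0), (0.0, 0.3, 0.0)]
acc = [(0.0, 0.0, 0.0), (0.0, 0.6, 0.0)]
simulated_pose = simulate_guide_pose(init, vel, acc)
```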
  • Step S103 obtaining motion information of the guide hair according to the simulation posture and the initial posture, wherein the motion information represents a posture change feature of the simulation posture of the guide hair relative to the initial posture.
  • a posture change feature between the simulated posture and the initial posture, i.e., the motion information, is then obtained.
  • the motion information can be represented by a mapping vector between the coordinates of the hair particles of the guide hair at the initial posture and the coordinates of the hair particles at the simulated posture.
  • the initial posture includes the initial coordinates of at least two hair particles corresponding to the guide hair;
  • the simulated posture includes the simulated coordinates of at least two hair particles corresponding to the guide hair.
  • the specific implementation steps of step S103 include:
  • Step S1031 Obtaining a coordinate transformation vector of the hair particle according to the initial coordinates of the hair particle and the corresponding simulation coordinates.
  • Step S1032 obtaining motion information of the guiding hair according to the coordinate change vectors of at least two hair particles.
  • before and after the simulation, the hair particle has two corresponding different coordinates, namely, the initial coordinate and the simulation coordinate.
  • for example, the initial coordinate is x_0 and the simulation coordinate is x_1, which satisfy x_1 = T(x_0), where T is the coordinate transformation between the initial coordinate and the simulation coordinate, that is, the coordinate transformation vector of the hair particle. Then, based on the combination of the coordinate transformation vectors of at least two hair particles, the motion information of the guide hair is obtained.
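  • Treating the transformation T as a per-particle translation is one plausible reading of the coordinate transformation vector (the disclosure does not fix a particular representation); under that assumption, the motion information is simply the list of per-particle offsets:

```python
def motion_information(initial_pose, simulated_pose):
    """Per-particle coordinate transformation vectors: delta_i = x1_i - x0_i."""
    return [tuple(s[k] - i[k] for k in range(3))
            for i, s in zip(initial_pose, simulated_pose)]

init_pose = [(0.0, 0.0, 0.0), (0.0, 0.1, 0.0)]
sim_pose = [(0.0, 0.0, 0.0), (0.02, 0.11, 0.0)]
info = motion_information(init_pose, sim_pose)   # roughly [(0, 0, 0), (0.02, 0.01, 0.0)]
```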
  • Step S104 obtaining a hair rendering image according to the motion information and the initial pose of the ordinary hair.
  • the motion information of the guide hair is transmitted to the corresponding ordinary hair, so that the ordinary hair produces the same posture change as the guide hair on the basis of its initial pose, thereby obtaining the second simulated hair corresponding to the ordinary hair; image rendering is then performed based on the second simulated hair to obtain the hair rendering image.
  • step S104 includes:
  • Step S1041 Based on the motion information, the initial posture of the ordinary hair is simulated to obtain the simulated posture of the ordinary hair.
  • Step S1042 performing image rendering based on the simulated posture of ordinary hair to obtain a hair rendering image.
  • FIG8 is a schematic diagram of a process for generating a simulated posture of ordinary hair provided by an embodiment of the present disclosure.
  • motion information is used to characterize the posture change characteristics of the guided hair.
  • for example, the posture change characteristic represented by the motion information is that the guide hair rotates 30 degrees clockwise.
  • based on the motion information, the ordinary hair rotates with the same posture change characteristic on the basis of its initial posture to generate a simulated posture.
  • a renderer performs image rendering to obtain a hair rendering image composed of ordinary hair.
  • the hair rendering image can be displayed in the corresponding display interface according to the needs of the target application.
  • in this way, the ordinary hair is integrated with the motion information of the guide hair while retaining its original hair characteristics (initial pose), thereby indirectly realizing the driving of the ordinary hair by the external environment. Since the initial poses of the ordinary hairs in a group of ordinary hairs corresponding to the same guide hair are not completely the same, the second simulated hairs of these ordinary hairs are also not completely the same, so that more differences are produced between different ordinary hairs and the generated hair rendering image is more realistic and natural.
  • hair data is obtained, and the hair data is used to characterize the initial posture of the guide hair and the initial posture of the common hair corresponding to the guide hair; based on the hair data, the guide hair is simulated to obtain the simulated posture of the guide hair; according to the simulated posture and the initial posture, the motion information of the guide hair is obtained, and the motion information characterizes the posture change characteristics of the simulated posture of the guide hair relative to the initial posture; according to the motion information and the initial posture of the common hair, a hair rendering image is obtained.
  • the motion information characterizing the posture change characteristics of the guide hair is obtained through the simulated posture of the guide hair, and the motion information is used as the posture change amount to redirect the common hair, so that the common hair obtains the corresponding simulation result based on the original initial posture in combination with the motion information, and then obtains the hair rendering image. Since the initial postures of the common hairs are different, after redirection in combination with the corresponding motion information, the simulation result is affected by the initial postures of the common hairs, so that the simulation results of the common hairs are also different, thereby achieving the retention of hair details and improving the authenticity and visual effect of the hair rendering image.
  • FIG9 is a second flow chart of a hair processing method provided by an embodiment of the present disclosure. This embodiment is described in detail based on the embodiment shown in FIG2, and adds a step of correcting the motion information.
  • the hair processing method includes:
  • Step S201 Acquire hair data, where the hair data includes initial coordinates of hair particles constituting guide hair and initial coordinates of hair particles constituting ordinary hair.
  • Step S202 Acquire the particle motion state of the hair particles of the guide hair.
  • the hair data includes different initial positions of the guide hair, and the initial positions of the guide hair are represented by the initial coordinates of the hair particles of the guide hair.
  • the hair data also includes the initial positions of the ordinary hair corresponding to each guide hair, and the initial positions of the ordinary hair are represented by the initial coordinates of the hair particles of the ordinary hair.
  • the hair particles of the guide hair have corresponding particle motion states, such as the motion speed and motion acceleration of the hair particle.
  • the particle motion state can be obtained through a dynamic simulation tool, which has been introduced in the embodiment shown in FIG2 and will not be repeated here.
  • Step S203 obtaining simulation coordinates according to the initial coordinates of the hair particles of the guide hair and the corresponding particle motion states.
  • the initial coordinates of the hair particle are transformed according to the particle motion state to obtain the corresponding simulation coordinates.
  • the particle motion state includes the first motion velocity of the hair particle and the first motion acceleration of the hair particle.
  • the simulation coordinates can be obtained by integrating the first motion velocity and the first motion acceleration once and twice, respectively, and then adding them to the initial coordinates.
  • the simulation coordinates can also be obtained only by integrating the first motion velocity once, or only by integrating the first motion acceleration twice, which will not be repeated here.
  • step S203 includes:
  • Step S2031 obtaining a first attenuation speed corresponding to the hair particle according to the first movement speed of the hair particle.
  • Step S2032 Correcting the particle motion state of the hair particle according to the first attenuation speed to obtain a first corrected motion state.
  • Step S2033 obtaining simulation coordinates according to the initial coordinates of the hair particle and the corresponding first corrected motion state.
  • a matching attenuation speed is applied to the movement speed of the hair particles of the guide hair, wherein the first movement speed of a hair particle is positively correlated with the corresponding first attenuation speed, that is, the greater the first movement speed, the greater the first attenuation speed. The first attenuation speed is then subtracted from the first movement speed to obtain the particle motion state of the hair particle of the guide hair after friction damping is applied, that is, the first corrected motion state, such as a corrected motion speed and a corrected motion acceleration of the hair particle.
  • the simulation coordinates are obtained, thereby further improving the accuracy of the simulation coordinates and improving the authenticity of the hair rendering effect.
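  • A minimal sketch of this friction-like damping, assuming the first attenuation speed is a fixed fraction of the current speed (the damping coefficient is an illustrative value, not one given by the disclosure):

```python
def apply_first_attenuation(velocity, damping=0.1):
    """Subtract an attenuation speed proportional to the current speed:
    the faster the particle moves, the larger the attenuation."""
    return tuple(v - damping * v for v in velocity)   # equivalently (1 - damping) * v

v_corrected = apply_first_attenuation((0.0, 0.3, 0.0))   # -> approximately (0.0, 0.27, 0.0)
```

  • in the sketch, the corrected velocity would then take the place of v_t in the position update described earlier.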
  • the method of obtaining the corresponding simulation coordinates based on the initial coordinates of the hair particles and the corresponding particle motion state (first corrected motion state) has been introduced in the previous embodiment and will not be repeated here.
  • step S203 includes:
  • Step S2034 setting a virtual hair particle of the guide hair, wherein the virtual hair particle is located along the extension direction beyond the hair tip particle of the guide hair.
  • Step S2035 According to the position constraint relationship between the virtual hair particle and the hair tip particle, the second attenuation speed corresponding to the hair tip particle is obtained, wherein the direction of the second attenuation speed is opposite to the direction of the line from the hair tip particle to its adjacent hair particle.
  • Step S2036 Correcting the particle motion state of the hair particle according to the second attenuation speed to obtain a second corrected motion state.
  • Step S2037 obtaining simulation coordinates according to the initial coordinates of the hair particle and the corresponding second corrected motion state.
  • the movement of each hair particle on the guiding hair is mainly generated based on the traction force of other hair particles adjacent to it.
  • the hair particles at the root of the guide hair, i.e., the hair root particles, are directly fixed to the surface of the head model and are mainly affected by the constraint force of the head model; therefore, their positions are fixed and move with the movement of the head model, which is also equivalent to the driving force source of the guide hair movement, so the positions of the hair root particles will not deviate.
  • the hair particles in the middle part of the guide hair are subject to the constraint forces of the hair particles on both sides (bidirectional constraint); therefore, the movement positions of the hair particles in the middle part are usually stable.
  • the last hair particle located at the outer end of the guide hair (i.e., the hair tip particle) is only constrained by the hair particle adjacent to it on one side; therefore, when simulating it, an unstable motion state will occur, which visually manifests as "abnormal shaking of the hair tip".
  • a virtual hair particle is generated, wherein the coordinates of the virtual hair particle can be determined based on the coordinates of the hair tip particle and at least one hair particle before the hair tip particle. More specifically, for example, according to the coordinates of the hair tip particle and at least one hair particle before the hair tip particle, the corresponding hair curvature and the distance between the hair particles are calculated, and then the simulation coordinates of the virtual hair particle are obtained based on the calculated hair curvature and the distance between the hair particles.
  • FIG12 is a schematic diagram of a virtual hair particle provided by the embodiment of the present disclosure.
  • the coordinates of the hair tip particle P1 of the guiding hair and the two hair particles P2 and P3 (shown as P1, P2 and P3) before the hair tip particle are obtained, and then according to the distance between the coordinates of P1, P2 and P3, and the corresponding hair curvature, the position of the virtual hair particle P0 (shown as P0) is obtained, and the construction of the virtual hair particle is completed.
  • the position constraint relationship between the virtual hair particle and the hair tip particle is used to calculate the second decay velocity corresponding to the hair tip particle.
  • the second decay velocity is a velocity component of the hair tip particle caused by the traction force of the virtual hair particle on the hair tip particle.
  • the direction of the second decay velocity is the opposite direction of the line connecting the hair tip particle to the adjacent hair particle.
  • the particle motion state of the hair tip particle is then corrected in combination with the second attenuation speed of the hair tip particle to obtain a second corrected motion state, for example, a corrected motion speed and a corrected motion acceleration.
  • the simulation coordinates are obtained.
  • the specific implementation method is similar to the implementation method of obtaining the simulation coordinates based on the first corrected motion state and the initial coordinates of the hair particles. It has been described in detail in the previous embodiment and will not be repeated here.
  • virtual hair particles are constructed to constrain hair tip particles, thereby improving the motion stability of the hair tip particles and improving the visual effect of the hair rendering image.
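  • A sketch of constructing the virtual particle by extrapolating the last hair segment and of deriving a second attenuation velocity opposing the tip's swing (the extrapolation rule, the gain, and the subtraction used for the correction are illustrative assumptions rather than the disclosure's exact formulas):

```python
import math

def _sub(a, b): return tuple(a[k] - b[k] for k in range(3))
def _add(a, b): return tuple(a[k] + b[k] for k in range(3))
def _scale(a, s): return tuple(a[k] * s for k in range(3))
def _norm(a): return math.sqrt(sum(c * c for c in a)) or 1e-9

def virtual_tip_particle(p_tip, p_prev):
    """Place a virtual particle beyond the tip by linearly extrapolating the last
    segment, so the virtual particle keeps the same spacing as p_prev -> p_tip."""
    return _add(p_tip, _sub(p_tip, p_prev))

def second_attenuation_velocity(p_tip, p_prev, tip_velocity, gain=0.2):
    """Velocity component attributed to the virtual particle's position constraint.
    Its direction is opposite to the line from the tip particle to its adjacent
    particle (i.e. along p_prev -> p_tip); the gain is an assumed coefficient."""
    direction = _sub(p_tip, p_prev)
    unit = _scale(direction, 1.0 / _norm(direction))
    speed = gain * _norm(tip_velocity)
    return _scale(unit, speed)

def correct_tip_velocity(p_tip, p_prev, tip_velocity):
    """Second corrected motion state (assumed here to subtract the attenuation velocity)."""
    att = second_attenuation_velocity(p_tip, p_prev, tip_velocity)
    return _sub(tip_velocity, att)
```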
  • the two methods of obtaining simulation coordinates in the embodiments corresponding to Figures 10 and 11 above can also be used in combination, that is, in one possible implementation, the first corrected motion state and the second corrected motion state corresponding to the hair particle are respectively obtained, and then the initial coordinates are transformed based on the first corrected motion state and the second corrected motion state in turn to obtain simulation coordinates, thereby obtaining more accurate simulation coordinates and improving the visual effect of the hair rendering image.
  • Step S204 obtaining a simulated posture of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.
  • after step S204, the method further includes:
  • Step S205 Obtain constraint information of the guide hair, and correct the simulation pose according to the constraint information to obtain a corrected simulation pose, wherein the constraint information is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs.
  • the guide hair may also be configured with constraint information, and the constraint information is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs. More specifically, for example, the constraint information is used to limit the hair length of the guide hair to be within a preset length interval; the constraint information is used to limit the hair shape of the guide hair, such as the curvature, to be within a preset curvature interval, thereby ensuring the actual authenticity of the guide hair; the constraint information is used to limit the distance between the guide hairs to be greater than a length threshold, thereby ensuring the rationality of the distribution of the guide hairs and improving the overall visual effect of the hair rendering image.
  • the constraint information of the guide hair can be preset as needed, or can be automatically generated based on the appearance model of a person or an animal, which will not be described in detail here.
  • the simulation pose of the guide hair is corrected by the above constraint information to obtain a corrected simulation pose, so that the simulation result of the guide hair is more realistic and the rendering effect of the subsequent hair rendering image is improved.
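  • A sketch of checking such constraint information against a simulated pose (the length interval and distance threshold are placeholder values; an actual correction step would project the pose back into the allowed ranges rather than merely reject it):

```python
import math

def strand_length(particles):
    """Total length of a strand, summed over adjacent particle segments."""
    return sum(math.dist(particles[i], particles[i + 1]) for i in range(len(particles) - 1))

def satisfies_constraints(strand, other_guides,
                          length_range=(0.25, 0.35), min_guide_gap=0.02):
    """True if the strand length stays in the preset interval and the root of this
    guide hair keeps at least min_guide_gap from the roots of other guide hairs."""
    if not (length_range[0] <= strand_length(strand) <= length_range[1]):
        return False
    return all(math.dist(strand[0], g[0]) > min_guide_gap for g in other_guides)
```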
  • Step S206 obtaining motion information of the guide hair according to the simulated posture or the corrected simulated posture and the initial posture, wherein the motion information includes the coordinate transformation vectors corresponding to the hair particles constituting the guide hair.
  • after step S206, the method further includes:
  • Step S207 obtaining guide data corresponding to the hair data, and obtaining weighted motion information corresponding to each common hair according to the guide data, wherein the guide data is used to characterize the mapping relationship and mapping weight between the guide hair and the corresponding common hair.
  • the guide data is data that characterizes the mapping relationship and mapping weight between the guide hair strands and the corresponding ordinary hair strands, wherein the guide data may be data included in the hair strand data, or may be independent data corresponding to the hair strand data.
  • the guide data includes an index identifier corresponding to each guide hair strand, and a hair strand identifier of the ordinary hair strand corresponding to each guide hair strand.
  • the guide data also includes a mapping weight corresponding to each ordinary hair strand, wherein the mapping weight is, for example, a normalized value that characterizes the degree to which the ordinary hair strand is affected by the motion of the guide hair strand, with a value range of, for example, (0,1]. After the motion information corresponding to each ordinary hair strand is weighted by the mapping weight, the weighted motion information corresponding to each ordinary hair strand is obtained.
  • FIG13 is a schematic diagram of generating weighted motion information provided by an embodiment of the present disclosure.
  • the corresponding motion information generated based on the simulated posture of the guide hair A is motion information Info_A, and the motion information is used to rotate the guide hair counterclockwise by 30 degrees.
  • the common hair corresponding to the guide hair A includes common hair A_1, common hair A_2 and common hair A_3 (shown as A_1, A_2 and A_3 in the figure), and the mapping weight coef_w corresponding to the common hair A_1 is 1, the mapping weight coef_w corresponding to the common hair A_2 is 0.7, and the mapping weight coef_w corresponding to the common hair A_3 is 0.3.
  • the weighted motion information of the common hair A_1 is generated as Info_A1, which is used to rotate the common hair A_1 counterclockwise by 30 degrees (shown as 30 in the figure, the same below);
  • the weighted motion information of the common hair A_2 is generated as Info_A2, which is used to rotate the common hair A_2 counterclockwise by 21 degrees (the product of 30 degrees and the weight coefficient 0.7, the same below);
  • the weighted motion information of the common hair A_3 is generated as Info_A3, which is used to rotate the common hair A_3 counterclockwise by 9 degrees.
  • mapping weights of the common hair corresponding to the guide hair are further determined through the guide data, so that the expression styles of the common hair are more diversified, the hair details are increased, and the rendering effect of the hair rendering image is improved.
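  • A minimal sketch of weighting the guide hair's motion information by each ordinary hair's mapping weight, mirroring the FIG13 numbers (30 degrees scaled by 1, 0.7 and 0.3):

```python
def weighted_motion_information(motion_info, mapping_weight):
    """Scale every per-particle coordinate change vector by the mapping weight in (0, 1]."""
    return [tuple(c * mapping_weight for c in vec) for vec in motion_info]

# With an angle-style description of the motion, the same weighting gives the FIG13 numbers.
rotation_deg = 30.0
for coef_w in (1.0, 0.7, 0.3):
    print(rotation_deg * coef_w)   # 30.0, 21.0, 9.0 degrees for A_1, A_2, A_3
```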
  • Step S208 Based on the motion information or the weighted motion information, the initial posture of the ordinary hair is simulated to obtain a simulated posture of the ordinary hair.
  • step S208 includes:
  • Step S2081 based on the correspondence between the guide hair and the ordinary hair, obtaining the hair particles of the ordinary hair corresponding to the hair particles of the guide hair.
  • Step S2082 Obtain the target coordinate change vector corresponding to the hair particle of each ordinary hair according to the motion information or the weighted motion information.
  • Step S2083 based on the target coordinate change vector of the hair particle of the ordinary hair and the corresponding initial coordinate, the simulation coordinate corresponding to the hair particle of the ordinary hair is obtained.
  • Step S2084 obtaining a simulated pose of the ordinary hair according to the simulated coordinates of at least two hair particles of the ordinary hair.
  • the hair particles of each ordinary hair corresponding to the hair particles of the guide hair are obtained, wherein the number of hair particles of each ordinary hair is the same as the number of hair particles of the corresponding guide hair; therefore, based on the particle index of a hair particle of the guide hair and the particle indexes of the hair particles of the ordinary hair, the hair particle of the ordinary hair corresponding to that hair particle can be obtained.
  • the motion information or the weighted motion information contains the coordinate change vector corresponding to each hair particle of the guide hair, and based on the mapping relationship determined in the previous step, the target coordinate change vector corresponding to each hair particle of the ordinary hair is obtained.
  • the specific implementation method of the coordinate change vector has been introduced under the relevant steps in the embodiment shown in FIG. 2, and will not be repeated here. Furthermore, the initial coordinates corresponding to the hair particles of ordinary hair are transformed by the target coordinate change vector, so that the motion information of the guide hair is transferred to the corresponding ordinary hair, so that the ordinary hair performs a similar motion to the guide hair, and the simulation coordinates corresponding to the hair particles of the ordinary hair are obtained, and then the simulation posture of the ordinary hair is obtained according to the set of simulation coordinates of at least two hair particles of each ordinary hair.
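  • Putting steps S2081 to S2084 together as a sketch, assuming an identity particle-index mapping (each ordinary hair has the same particle count as its guide hair) and motion information given as per-particle offsets as in the earlier sketch:

```python
def retarget_ordinary_hair(ordinary_initial, guide_motion_info, mapping_weight=1.0):
    """Apply the guide hair's (optionally weighted) coordinate change vectors to the
    ordinary hair's initial coordinates, particle by particle, giving its simulated pose."""
    assert len(ordinary_initial) == len(guide_motion_info)  # same particle count per strand
    return [tuple(p[k] + mapping_weight * d[k] for k in range(3))
            for p, d in zip(ordinary_initial, guide_motion_info)]

ordinary_init = [(0.01, 0.0, 0.0), (0.01, 0.1, 0.02)]
guide_info = [(0.0, 0.0, 0.0), (0.02, 0.01, 0.0)]
ordinary_sim_pose = retarget_ordinary_hair(ordinary_init, guide_info, mapping_weight=0.7)
```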
  • Step S209 performing image rendering based on the simulated posture of ordinary hair to obtain a hair rendering image.
  • steps S204 and S209 have been introduced as steps S102 and S104 in the embodiment shown in FIG2, and will not be described in detail here.
  • FIG15 is a structural block diagram of a hair processing device provided by an embodiment of the present disclosure.
  • the hair processing device 3 includes:
  • an acquisition module 31, configured to acquire hair data and divide the hair data into guide hair and ordinary hair;
  • a simulation module 32, configured to simulate the guide hair based on the hair data to obtain a simulated posture of the guide hair;
  • a processing module 33, configured to obtain motion information of the guide hair according to the simulated posture and the initial posture, wherein the motion information represents a posture change feature of the simulated posture of the guide hair relative to the initial posture;
  • a rendering module 34, configured to obtain a hair rendering image according to the motion information and the initial posture of the ordinary hair.
  • the hair data includes initial coordinates of hair particles constituting the guiding hair; the simulation module 32 is specifically used to: obtain the particle motion state of the hair particles of the guiding hair, wherein the particle motion state represents the motion speed and/or motion acceleration of the hair particles; obtain simulation coordinates according to the initial coordinates of the hair particles and the corresponding particle motion state; obtain the simulation posture of the guiding hair according to the simulation coordinates of at least two hair particles of the guiding hair.
  • the particle motion state includes a first motion speed of the hair particle.
  • when the simulation module 32 obtains the simulation coordinates according to the initial coordinates of the hair particle and the corresponding particle motion state, it is specifically used to: obtain a first attenuation speed corresponding to the hair particle according to the first motion speed of the hair particle; correct the particle motion state of the hair particle according to the first attenuation speed to obtain a first corrected motion state; and obtain the simulation coordinates according to the initial coordinates of the hair particle and the corresponding first corrected motion state.
  • the hair particles of the guiding hair include hair tip particles.
  • when the simulation module 32 obtains the simulation coordinates according to the initial coordinates of the hair particles and the corresponding particle motion states, it is specifically used to: generate a virtual hair particle beyond the hair tip particle of the guide hair; obtain the second attenuation speed corresponding to the hair tip particle according to the position constraint relationship between the virtual hair particle and the hair tip particle, wherein the direction of the second attenuation speed is opposite to the direction of the line from the hair tip particle to its adjacent hair particle; correct the particle motion state of the hair particle according to the second attenuation speed to obtain the second corrected motion state; and obtain the simulation coordinates according to the initial coordinates of the hair particle and the corresponding second corrected motion state.
  • the rendering module 34 is specifically used to: simulate the initial posture of ordinary hair based on motion information to obtain the simulated posture of ordinary hair; perform image rendering based on the simulated posture of ordinary hair to obtain a hair rendering image.
  • the motion information includes coordinate transformation vectors corresponding to the hair particles constituting the guide hair
  • the initial posture includes the initial coordinates of at least two hair particles of the ordinary hair
  • when the rendering module 34 simulates the initial posture of the ordinary hair based on the motion information to obtain the simulated posture of the ordinary hair, it is specifically used to: obtain, in the ordinary hair corresponding to the guide hair, the hair particles of the ordinary hair corresponding to the hair particles of the guide hair; obtain the coordinate change vectors corresponding to the hair particles of each ordinary hair according to the motion information; obtain the simulated coordinates corresponding to the hair particles of the ordinary hair based on the coordinate change vectors of the hair particles of the ordinary hair and the corresponding initial coordinates; and obtain the simulated posture of the ordinary hair according to the simulated coordinates of at least two hair particles of the ordinary hair.
  • the initial posture includes the initial coordinates of at least two hair particles corresponding to the guiding hair;
  • the simulation posture includes the simulation coordinates of at least two hair particles corresponding to the guiding hair;
  • the processing module 33 is specifically used to: obtain the coordinate transformation vector of the hair particle according to the initial coordinates and the corresponding simulation coordinates of the hair particle; obtain the motion information of the guiding hair according to the coordinate change vectors of at least two hair particles.
  • the acquisition module 31 is further used to: acquire guide data corresponding to the hair data, the guide data being used to characterize the mapping relationship and mapping weight between the guide hair and the corresponding ordinary hair; the rendering module 34 is specifically used to: obtain weighted motion information corresponding to each ordinary hair according to the guide data; and obtain a hair rendering image according to the weighted motion information of the ordinary hair and the initial posture of the ordinary hair.
  • the simulation module 32 is further used to: obtain constraint information of the guide hair, the constraint information is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs; correct the simulated posture according to the constraint information to obtain a corrected simulated posture; the processing module 33 is specifically used to: obtain the motion information of the guide hair according to the corrected simulated posture and the initial posture of the guide hair.
  • the acquisition module 31, simulation module 32, processing module 33 and rendering module 34 are connected in sequence.
  • the hair processing device 3 provided in this embodiment can implement the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, which will not be repeated in this embodiment.
  • FIG16 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure. As shown in FIG16, the electronic device 4 includes a processor 41 and a memory 42 communicatively connected to the processor 41, wherein:
  • the memory 42 stores computer executable instructions
  • the processor 41 executes the computer-executable instructions stored in the memory 42 to implement the hair processing method in the embodiments shown in FIGS. 2 to 14 .
  • processor 41 and the memory 42 are connected via a bus 43 .
  • An embodiment of the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored.
  • the computer-executable instructions are executed by a processor, they are used to implement the hair processing method provided in any one of the embodiments corresponding to Figures 2 to 14 of the present disclosure.
  • the present disclosure provides a computer program product, including a computer program.
  • the computer program is executed by a processor, the hair processing method in the embodiments shown in FIGS. 2 to 14 is implemented.
  • referring to FIG. 17, it shows a schematic diagram of the structure of an electronic device 900 suitable for implementing embodiments of the present disclosure.
  • the electronic device 900 may be a terminal device or a server.
  • the terminal device may include but is not limited to mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), vehicle terminals (such as vehicle navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 17 is only an example and should not bring any limitation to the functions and scope of use of the embodiment of the present disclosure.
  • the electronic device 900 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 to a random access memory (RAM) 903.
  • Various programs and data required for the operation of the electronic device 900 are also stored in the RAM 903.
  • the processing device 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904.
  • An input/output (I/O) interface 905 is also connected to the bus 904.
  • the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 907 such as a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 908 such as a magnetic tape, a hard disk, etc.; and a communication device 909.
  • the communication device 909 can allow the electronic device 900 to communicate with other devices wirelessly or by wire to exchange data.
  • although FIG. 17 shows an electronic device 900 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may be implemented or provided instead.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication device 909, or installed from a storage device 908, or installed from a ROM 902.
  • when the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium disclosed above may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
  • This propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • the computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
  • the computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device executes the methods shown in the above embodiments.
  • Computer programs for performing operations of the present disclosure may be written in one or more programming languages, or a combination of such languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as an independent software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., using an Internet service provider to connect through the Internet).
  • each block in the flow charts or block diagrams may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the blocks may also occur in a sequence different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the opposite order, depending on the functions involved.
  • each block in the block diagrams and/or flow charts, and combinations of blocks in the block diagrams and/or flow charts, may be implemented with a dedicated hardware-based system that performs the specified functions or operations, or may be implemented with a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • the name of a unit does not constitute a limitation on the unit itself in some cases; for example, the first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided in the embodiments of the present disclosure are a hair processing method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring hair data, and dividing the hair data into guide hair and ordinary hair; processing the guide hair on the basis of an initial pose thereof, so as to obtain a simulation pose of the guide hair; according to the simulation pose and the initial pose of the guide hair, obtaining motion information of the guide hair, wherein the motion information represents a posture change feature of the simulation pose of the guide hair relative to the initial pose of the guide hair; and obtaining a hair rendering image according to the motion information and an initial pose of the ordinary hair.

Description

Hair processing method, device, electronic device and storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on and claims priority to the Chinese patent application with application number 202310397317.2 filed on April 12, 2023, the disclosure of which is incorporated into this application in its entirety.

Technical Field

The embodiments of the present disclosure relate to the field of image processing technology, and in particular to a hair processing method, apparatus, electronic device and storage medium.
Background Art

Currently, in fields such as short video generation and game production, in order to pursue more realistic visual effects, the hair of humans or animals is dynamically simulated on a hair-by-hair basis. In this process, the motion state of the hair needs to be simulated based on the hair data to obtain the simulated pose of the hair in the next image frame, and rendering is then performed based on the simulated pose to realize the movement of the hair.

However, because the number of hairs contained in the hair data is huge, it is difficult to simulate each hair independently while meeting real-time rendering requirements. Therefore, the solution in the related art is usually to simulate only the guide hairs in the hair data, and then apply the simulated poses of the guide hairs to the other ordinary hairs, so as to achieve simulation of all hairs.
Summary of the Invention

Embodiments of the present disclosure provide a hair processing method, apparatus, electronic device and storage medium.

In a first aspect, an embodiment of the present disclosure provides a hair processing method, comprising:

acquiring hair data, and dividing the hair data into guide hairs and ordinary hairs; processing a guide hair based on its initial pose to obtain a simulated pose of the guide hair; obtaining motion information of the guide hair according to the simulated pose and the initial pose of the guide hair, wherein the motion information represents a pose change feature of the simulated pose of the guide hair relative to the initial pose of the guide hair; and obtaining a hair rendering image according to the motion information and the initial poses of the ordinary hairs.
In a second aspect, an embodiment of the present disclosure provides a hair processing apparatus, comprising:

an acquisition module, configured to acquire hair data and divide the hair data into guide hairs and ordinary hairs;

a simulation module, configured to process a guide hair based on its initial pose to obtain a simulated pose of the guide hair;

a processing module, configured to obtain motion information of the guide hair according to the simulated pose and the initial pose of the guide hair, wherein the motion information represents a pose change feature of the simulated pose of the guide hair relative to the initial pose of the guide hair;

a rendering module, configured to obtain a hair rendering image according to the motion information and the initial poses of the ordinary hairs.
In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising:

a processor, and a memory communicatively connected to the processor;

the memory stores computer-executable instructions;

the processor executes the computer-executable instructions stored in the memory to implement the hair processing method described in the first aspect and the various possible designs of the first aspect.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the hair processing method described in the first aspect and the various possible designs of the first aspect.

In a fifth aspect, an embodiment of the present disclosure provides a computer program product, comprising a computer program which, when executed by a processor, implements the hair processing method described in the first aspect and the various possible designs of the first aspect.
According to the hair processing method, apparatus, electronic device and storage medium provided by this embodiment, hair data is acquired, the hair data being used to represent the initial pose of a guide hair and the initial poses of the ordinary hairs corresponding to the guide hair; the guide hair is simulated based on the hair data to obtain a simulated pose of the guide hair; motion information of the guide hair is obtained according to the simulated pose and the initial pose of the guide hair, the motion information representing a pose change feature of the simulated pose of the guide hair relative to its initial pose; and a hair rendering image is obtained according to the motion information and the initial poses of the ordinary hairs. The simulated pose of the guide hair yields motion information that characterizes how the pose of the guide hair changes, and this motion information is used as a pose change amount to redirect the ordinary hairs, so that each ordinary hair obtains a corresponding simulation result by combining its own initial pose with the motion information, from which the hair rendering image is then obtained.
BRIEF DESCRIPTION OF THE DRAWINGS

In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present disclosure, and for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative effort.
FIG. 1 is a diagram of an application scenario of the hair processing method provided by an embodiment of the present disclosure;
FIG. 2 is a first flow chart of the hair processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the pose of a hair provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the poses of a guide hair and ordinary hairs provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of a specific implementation of step S102 in the embodiment shown in FIG. 2;
FIG. 6 is a schematic diagram of the particle motion state of a hair particle provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart of a specific implementation of step S103 in the embodiment shown in FIG. 2;
FIG. 8 is a schematic diagram of a process of generating the simulated pose of an ordinary hair provided by an embodiment of the present disclosure;
FIG. 9 is a second flow chart of the hair processing method provided by an embodiment of the present disclosure;
FIG. 10 is a flow chart of one specific implementation of step S203 in the embodiment shown in FIG. 9;
FIG. 11 is a flow chart of another specific implementation of step S203 in the embodiment shown in FIG. 9;
FIG. 12 is a schematic diagram of a virtual hair particle provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of generating weighted motion information provided by an embodiment of the present disclosure;
FIG. 14 is a flow chart of a specific implementation of step S208 in the embodiment shown in FIG. 9;
FIG. 15 is a structural block diagram of the hair processing apparatus provided by an embodiment of the present disclosure;
FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;
FIG. 17 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION

In order to make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present disclosure.

It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in the present disclosure are all information and data authorized by the user or fully authorized by all parties, that the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and that corresponding operation entrances are provided for users to choose to authorize or refuse.
The application scenarios of the embodiments of the present disclosure are explained below:

FIG. 1 is a diagram of an application scenario of the hair processing method provided by an embodiment of the present disclosure. The hair processing method provided by the embodiment of the present disclosure can be applied to application scenarios such as video special effects generation and real-time game rendering, and more specifically to the real-time rendering of video special effects. As shown in FIG. 1, the method provided by the embodiment of the present disclosure can be applied to a terminal device, such as a smart phone or a personal computer; FIG. 1 takes a smart phone as an example. The terminal device runs a target application that needs to render hair special effects, such as a game application or a short video application. When the target application running in the terminal device needs to dynamically display the hair of a person or an animal (that is, the pose of the hair changes as the pose of the person or animal changes), the terminal device dynamically renders each hair. For example, for the hair of the portrait shown in FIG. 1, each hair making up the hairstyle is dynamically simulated and rendered based on the hair processing method provided by the embodiment of the present disclosure, thereby dynamically displaying the hair of the person or animal in the image display interface of the target application.

In the related art, for the dynamic rendering of human or animal hair, the motion state of the hair usually first needs to be simulated based on the hair data to obtain the simulated pose of the hair in the next image frame, and rendering is then performed based on the simulated pose so that the hair exhibits motion. However, because the number of hairs contained in the hair data is huge, it is difficult to simulate each hair independently while meeting real-time rendering requirements. Therefore, the solution in the related art is usually to simulate a small number of guide hairs in the hair data and then apply the simulated pose of each guide hair to the large number of ordinary hairs corresponding to it, so as to achieve fast simulation of all hairs. However, after the simulated pose of a guide hair is applied to the corresponding ordinary hairs, the pose attributes of the ordinary hairs themselves are overwritten, so that the large number of ordinary hairs corresponding to the same guide hair have a high pose similarity, which leads to a loss of hair detail; rendering an image with such highly similar hairs therefore results in a hair rendering image with low realism and a poor visual effect. The embodiments of the present disclosure provide a hair processing method to solve the above problems.
Referring to FIG. 2, FIG. 2 is a first flow chart of the hair processing method provided by an embodiment of the present disclosure. The method of this embodiment can be applied in a terminal device, and the hair processing method includes:

Step S101: acquiring hair data, and dividing the hair data into guide hairs and ordinary hairs.
Exemplarily, referring to the introduction of the above application scenario, the hair data is data used to represent the hairs constituting the hair of a person or an animal. Exemplarily, the hair data is used to represent the initial pose of a guide hair and the initial poses of the ordinary hairs corresponding to the guide hair. The pose of the corresponding hair, that is, its position and posture, can be determined from the hair data. FIG. 3 is a schematic diagram of the pose of a hair provided by an embodiment of the present disclosure. Taking the rendering of a character's hair as an example, as shown in FIG. 3, among the many hairs constituting the hairstyle there are hair A and hair B, where hair A corresponds to pose pos_1 and hair B corresponds to pose pos_2. As shown in the figure, hair A and hair B are located at different positions and have different postures; more specifically, hair A corresponds to "straight hair" and hair B corresponds to "curly hair". The poses corresponding to the hairs make it possible to distinguish the positions and postures of different hairs, so that hairs at different positions exhibit different hair postures.

Further, the hair data includes two types of hair data: guide hair data and ordinary hair data. A guide hair is used to represent the overall style, and/or hair distribution, and/or hair direction of the hair of a person or animal, and plays a role similar to a "skeleton". In subsequent steps, the guide hair is simulated by calculating the pose change of the guide hair and its constraint relationships with other elements in the virtual environment, thereby realizing the movement of the hair and its interaction with the external environment. Ordinary hairs, on the basis of the guide hairs, are used to further fill the gaps between the guide hairs, making the overall head of hair fuller. Ordinary hairs do not need to be simulated based on the motion state, and therefore require far less computation than the simulation of the guide hairs. Usually one guide hair corresponds to multiple ordinary hairs, for example at a ratio of guide hairs to ordinary hairs of 1:100. By setting a small number of guide hairs and the corresponding ordinary hairs, the amount of computation of the hair simulation can be reduced.

Further, the hair data contains the pose data of the guide hairs and the ordinary hairs in their initial states, that is, their initial poses. From the hair data, the initial pose of a guide hair and the initial poses of the group of ordinary hairs corresponding to that guide hair can be determined. FIG. 4 is a schematic diagram of the poses of a guide hair and ordinary hairs provided by an embodiment of the present disclosure. As shown in FIG. 4, according to the hair data, hair A (shown as A in the figure) is a guide hair, and hair A_1, hair A_2, hair A_3 and hair A_4 (shown as A_1, A_2, A_3 and A_4 in the figure) located near hair A are ordinary hairs, where hair A_1 has an initial pose pos_1, hair A_2 has an initial pose pos_2, hair A_3 has an initial pose pos_3, and hair A_4 has an initial pose pos_4. As shown in the figure, the different ordinary hairs corresponding to the same guide hair are located at different positions and have different postures, that is, different initial poses. In subsequent processing steps, each ordinary hair is simulated on the basis of its own initial pose, so that it retains part of its initial pose.

The pose of a guide hair in its initial state (its initial pose) and the pose of an ordinary hair in its initial state (its initial pose) may be preset according to the specific needs of the target application, or may be randomly generated, which is not specifically limited here. In a group of ordinary hairs corresponding to the same guide hair, the initial poses of the ordinary hairs are not exactly the same.
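For readability, the following is a minimal sketch (not part of the original disclosure) of how the hair data described above might be organized in code; the class and field names (HairStrand, HairData, guide_index) are illustrative assumptions rather than terms defined by this embodiment.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class HairStrand:
    # Each strand is an ordered chain of hair particles; its initial pose is
    # simply the list of 3D particle coordinates in the initial state.
    initial_particles: np.ndarray          # shape (num_particles, 3), root first

@dataclass
class HairData:
    guide_hairs: List[HairStrand]          # sparse set, e.g. 1 guide per ~100 ordinary hairs
    ordinary_hairs: List[HairStrand]       # dense set used to fill the gaps
    guide_index: List[int]                 # ordinary hair i follows guide hair guide_index[i]

# Example: one guide hair with three particles and two ordinary hairs that follow it,
# each with its own (slightly different) initial pose.
guide = HairStrand(initial_particles=np.array([[0.0, 0.0, 0.0],
                                               [0.0, 0.0, 1.0],
                                               [0.0, 0.0, 2.0]]))
ordinary_a = HairStrand(initial_particles=guide.initial_particles + np.array([0.1, 0.0, 0.0]))
ordinary_b = HairStrand(initial_particles=guide.initial_particles + np.array([-0.1, 0.05, 0.0]))
hair_data = HairData(guide_hairs=[guide],
                     ordinary_hairs=[ordinary_a, ordinary_b],
                     guide_index=[0, 0])
```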
Step S102: processing the guide hair based on its initial pose to obtain the simulated pose of the guide hair.

Exemplarily, the guide hair in the hair data is simulated, that is, the pose of the guide hair in the next frame, i.e., the simulated pose, is predicted and modeled. Among the ways in which the pose of the guide hair may change in the next frame, one possible implementation is that the guide hair moves randomly: on the basis of the initial pose of the guide hair, the initial pose is randomly changed within a certain range to obtain a pose different from the initial pose, i.e., the simulated pose. More specifically, for example, both the guide hairs and the ordinary hairs in the hair data are composed of multiple hair particles, and the distance between hair particles is fixed, so there is a distance constraint between adjacent hair particles; the simulated pose of the guide hair can then be obtained by applying a random position offset to each hair particle of the guide hair.
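As a rough illustration of the random-movement implementation just described, the following sketch (an assumption of this rewrite, not code from the disclosure) applies a bounded random offset to each particle of a guide hair and then re-enforces the fixed inter-particle spacing so that the distance constraint between adjacent particles is preserved; the offset bound and function name are illustrative.

```python
import numpy as np

def randomly_perturb_guide_hair(initial_particles, max_offset=0.02, rng=None):
    """Return a simulated pose obtained by randomly perturbing the initial pose.

    initial_particles: (N, 3) particle coordinates of one guide hair, root first.
    """
    rng = rng or np.random.default_rng()
    simulated = np.asarray(initial_particles, dtype=float).copy()
    rest_lengths = np.linalg.norm(np.diff(simulated, axis=0), axis=1)

    # Apply a bounded random position offset to every particle except the root,
    # which stays attached to the head model.
    simulated[1:] += rng.uniform(-max_offset, max_offset, size=simulated[1:].shape)

    # Re-project each particle so that the distance to its predecessor stays fixed
    # (the distance constraint between adjacent hair particles).
    for i in range(1, len(simulated)):
        seg = simulated[i] - simulated[i - 1]
        seg_len = np.linalg.norm(seg)
        if seg_len > 1e-8:
            simulated[i] = simulated[i - 1] + seg / seg_len * rest_lengths[i - 1]
    return simulated
```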
In another possible implementation, in order to simulate the real behavior of hair in its environment, the simulated pose of the guide hair is obtained based on the influence of the environment on the guide hair, for example the movement of the person or animal in the image driving the movement of the hair. Specifically, the hair data includes the initial coordinates of the hair particles constituting the guide hair, and as shown in FIG. 5, the specific implementation of step S102 includes:
步骤S1022:根据发丝质点的初始坐标和对应的质点运动状态,得到仿真坐标。Step S1022: obtaining simulation coordinates according to the initial coordinates of the hair particle and the corresponding particle motion state.
步骤S1023:根据引导发丝的至少两个发丝质点的仿真坐标,得到引导发丝的仿真位姿。Step S1023: obtaining a simulated posture of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.
示例性地,本实施例中,引导发丝的发丝质点为发丝质点,对应的,普通发丝的发丝质点为普通发丝的发丝质点,在后续步骤中进行介绍。发丝质点的质点运动状态,即发丝质点在运动状态下的运动速度,和/或运动加速度。该发丝质点的运动速度和运动加速度,由引导发丝上与发丝质点相邻的其他发丝质点对发丝质点的牵引力而产生,可选地,该发丝质点的运动速度和运动加速度还可以由图像环境中的外部对象(例如其他发丝、环境中的障碍物等)对发丝质点的约束力而产生。Exemplarily, in this embodiment, the hair particles of the guide hair are hair particles, and correspondingly, the hair particles of the ordinary hair are hair particles of the ordinary hair, which will be introduced in the subsequent steps. The particle motion state of the hair particle, that is, the motion speed and/or motion acceleration of the hair particle in the motion state. The motion speed and motion acceleration of the hair particle are generated by the traction force of other hair particles adjacent to the hair particle on the guide hair on the hair particle. Optionally, the motion speed and motion acceleration of the hair particle can also be generated by the constraint force of external objects in the image environment (such as other hair, obstacles in the environment, etc.) on the hair particle.
图6为本公开实施例提供的一种发丝质点的质点运动状态的示意图,如图所示,引导发丝L1上,包含有发丝质点P1、发丝质点P2和发丝质点P3(图中示为P1、P2和P3)。其中,发丝质点P1为发根质点,坐标为即发丝质点P1位于头部模型的表面(对应位于头皮的发根),当头部模型运动时,带动发丝质点P1运动,从而使发丝质点P1产生对应的运动;之后,与发丝质点P1相邻的发丝质点P2,受到发丝质点P1的牵引力N1而发生运动,从而使发丝质点P2产生对应的速度v2和加速度a3(与N1同向);而与发丝质点P2相邻的发丝质点P3,也同样受到发丝质点P2的牵引力N2的作用,产生对应的速度v3和加速度a3(与N2同向)。其中,图中所示的发丝质点的速度和加速度仅为示意,各发丝质点的对应的质点运动状态,受到引导发丝的具体模型参数影响,可以通过现有的动力学模拟工具基于引导发丝的模型参数进行模拟而得到,具体实现方式不再赘述。FIG6 is a schematic diagram of a particle motion state of a hair particle provided by an embodiment of the present disclosure. As shown in the figure, the guide hair L1 includes a hair particle P1, a hair particle P2 and a hair particle P3 (shown as P1, P2 and P3 in the figure). Among them, the hair particle P1 is a hair root particle, and the coordinates are, that is, the hair particle P1 is located on the surface of the head model (corresponding to the hair root located on the scalp). When the head model moves, the hair particle P1 is driven to move, so that the hair particle P1 produces a corresponding movement; then, the hair particle P2 adjacent to the hair particle P1 is subjected to the traction force N1 of the hair particle P1 and moves, so that the hair particle P2 produces a corresponding speed v2 and acceleration a3 (in the same direction as N1); and the hair particle P3 adjacent to the hair particle P2 is also subjected to the traction force N2 of the hair particle P2, and produces a corresponding speed v3 and acceleration a3 (in the same direction as N2). Among them, the speed and acceleration of the hair particles shown in the figure are only for illustration. The corresponding particle motion state of each hair particle is affected by the specific model parameters of the guiding hair and can be obtained by simulating based on the model parameters of the guiding hair through existing dynamic simulation tools. The specific implementation method will not be repeated here.
Further, exemplarily, after the particle motion states of the hair particles on the guide hair are obtained, the initial coordinates of the corresponding hair particles are updated according to the particle motion states of the hair particles of the guide hair in the hair data, to obtain the simulated coordinate corresponding to each hair particle. Specifically, the simulated coordinate can be calculated from the initial coordinate of a hair particle and the corresponding particle motion state by formula (1):

x_{t+1} = x_t + v_t × dt + a_t × dt²    (1)

where x_{t+1} is the position coordinate of the hair particle at time t+1, x_t is the position coordinate of the hair particle at time t, v_t is the velocity of the hair particle at time t, a_t is the acceleration of the hair particle at time t, and dt is the time interval between the two moments (for example, between time t and time t+1). When t = 0, x_{t+1} is the simulated coordinate, x_t is the initial coordinate, and v_t and a_t constitute the particle motion state. Exemplarily, after the simulated coordinate is obtained, in order to realize dynamic rendering of the hair, the simulated coordinate at time t = 2 can further be calculated based on the simulated coordinate at time t = 1, that is, the initial coordinate in the above step is updated to the simulated coordinate at time t = 1, and the simulated coordinate at time t = 2 is then calculated based on the updated initial coordinate, thereby realizing real-time dynamic simulation of the guide hair. The specific process is similar and is not repeated here.
Afterwards, the simulated pose of the guide hair can be obtained based on the simulated coordinates of at least two hair particles of the guide hair. Exemplarily, the simulated coordinates of the hair particles corresponding to the guide hair are three-dimensional coordinates and can be represented by arrays; the simulated coordinates of the hair particles are combined to obtain the corresponding simulated pose, which may be an array, a matrix or a structure, and is not described in detail here.
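A minimal sketch of formula (1) and of assembling the simulated pose from per-particle coordinates, assuming each particle's position, velocity and acceleration are available as arrays; the function names are illustrative, not taken from the disclosure.

```python
import numpy as np

def integrate_particle(x_t, v_t, a_t, dt):
    """Formula (1): x_{t+1} = x_t + v_t * dt + a_t * dt**2."""
    return x_t + v_t * dt + a_t * dt ** 2

def simulate_guide_hair(initial_coords, velocities, accelerations, dt=1.0 / 30.0):
    """Apply formula (1) to every particle and stack the results into a simulated pose.

    initial_coords, velocities, accelerations: arrays of shape (N, 3).
    Returns the simulated pose as an (N, 3) array (one simulated coordinate per particle).
    """
    return integrate_particle(np.asarray(initial_coords, dtype=float),
                              np.asarray(velocities, dtype=float),
                              np.asarray(accelerations, dtype=float),
                              dt)

# For the next frame, the simulated coordinates become the new "initial" coordinates,
# so calling simulate_guide_hair again advances the guide hair one more time step.
```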
Step S103: obtaining the motion information of the guide hair according to the simulated pose and the initial pose, where the motion information represents the pose change feature of the simulated pose of the guide hair relative to the initial pose.

Exemplarily, after the simulated pose of the guide hair is obtained, the pose change feature between the simulated pose and the initial pose, that is, the motion information, can be obtained by comparing the simulated pose with the initial pose in the hair data. Exemplarily, the motion information can be represented based on the mapping vectors between the coordinates of the hair particles of the guide hair in the initial pose and their coordinates in the simulated pose. Specifically, the initial pose includes the initial coordinates of at least two hair particles corresponding to the guide hair, and the simulated pose includes the simulated coordinates of at least two hair particles corresponding to the guide hair. As shown in FIG. 7, the specific implementation steps of step S103 include:
Step S1031: obtaining the coordinate transformation vector of a hair particle according to the initial coordinate of the hair particle and the corresponding simulated coordinate.

Step S1032: obtaining the motion information of the guide hair according to the coordinate transformation vectors of at least two hair particles.
Exemplarily, for the same hair particle of the guide hair, when the guide hair is in the initial pose and in the simulated pose, if the guide hair has moved, the hair particle has correspondingly different coordinates, namely the initial coordinate and the simulated coordinate. For example, the initial coordinate is x_0 and the simulated coordinate is x_1. There is a mapping relationship between the initial coordinate and the simulated coordinate:

x_1 = T(x_0)

where T is the coordinate transformation between the initial coordinate and the simulated coordinate, that is, the coordinate transformation vector of the hair particle. Afterwards, the motion information of the guide hair is obtained based on the combination of the coordinate transformation vectors of at least two hair particles. For example, if guide hair A includes three hair particles P1, P2 and P3, the motion information corresponding to guide hair A is Info = [T1, T2, T3], where T1 is the coordinate transformation vector of hair particle P1, T2 is the coordinate transformation vector of hair particle P2, and T3 is the coordinate transformation vector of hair particle P3.
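Treating the coordinate transformation of each particle as a simple per-particle translation vector is one possible reading of T (the disclosure leaves the exact form of the transformation open); under that assumption, the motion information can be assembled as follows.

```python
import numpy as np

def compute_motion_info(initial_coords, simulated_coords):
    """Per-particle coordinate transformation vectors T_i such that x1_i = x0_i + T_i.

    Both inputs have shape (N, 3); the returned motion information Info = [T1, ..., TN]
    is an (N, 3) array of per-particle displacement vectors.
    """
    return np.asarray(simulated_coords, dtype=float) - np.asarray(initial_coords, dtype=float)

# Example: a guide hair A with three particles P1, P2, P3.
x0 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])
x1 = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 1.0], [0.3, 0.0, 1.9]])
info_a = compute_motion_info(x0, x1)   # Info = [T1, T2, T3]
```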
Step S104: obtaining the hair rendering image according to the motion information and the initial poses of the ordinary hairs.

Exemplarily, after the motion information of the guide hair is obtained, the motion information of the guide hair is transferred to the corresponding ordinary hairs, so that each ordinary hair produces, on the basis of its initial pose, the same pose change as the guide hair, thereby obtaining a second simulated hair corresponding to the ordinary hair; image rendering is then performed based on the second simulated hairs to obtain the hair rendering image.

Exemplarily, the specific implementation of step S104 includes:

Step S1041: simulating the initial pose of the ordinary hair based on the motion information to obtain the simulated pose of the ordinary hair.

Step S1042: performing image rendering based on the simulated poses of the ordinary hairs to obtain the hair rendering image.

FIG. 8 is a schematic diagram of a process of generating the simulated pose of an ordinary hair provided by an embodiment of the present disclosure. As shown in FIG. 8, the motion information is used to represent the pose change feature of the guide hair; in the figure, for example, the pose change feature represented by the motion information is a 30-degree clockwise rotation of the guide hair. After the motion information is transferred to the ordinary hair, the ordinary hair rotates with the same pose change feature on the basis of its own initial pose according to the motion information, generating its simulated pose. Afterwards, image rendering is performed based on the simulation results of the ordinary hairs, that is, their simulated poses, for example by a renderer, and a hair rendering image composed of the ordinary hairs is obtained. The hair rendering image can then be displayed in the corresponding display interface according to the needs of the target application. Through the above steps, each ordinary hair retains its original hair characteristics (its initial pose) while incorporating the motion information of the guide hair, so that the driving of the ordinary hairs by the external environment is realized indirectly. Since the initial poses of the ordinary hairs in a group corresponding to the same guide hair are not exactly the same, the second simulated hairs of the ordinary hairs in that group are also not exactly the same, which produces more variation between different ordinary hairs and makes the generated hair rendering image more realistic and natural.
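Under the same translation-vector reading of the motion information used above, redirecting an ordinary hair amounts to adding the guide hair's per-particle transformation on top of the ordinary hair's own initial pose. The sketch below illustrates this; the renderer object and its draw_strand method are placeholders for whatever renderer the target application uses, not an API defined by the disclosure.

```python
import numpy as np

def redirect_ordinary_hair(ordinary_initial_coords, guide_motion_info):
    """Step S1041: apply the guide hair's motion information on top of the
    ordinary hair's own initial pose to obtain its simulated pose."""
    return np.asarray(ordinary_initial_coords, dtype=float) + np.asarray(guide_motion_info)

def render_frame(ordinary_hairs_initial, guide_motion_info, renderer):
    """Step S1042: simulate every ordinary hair and hand the poses to a renderer.

    `renderer` is assumed to expose a draw_strand(coords) method (placeholder)."""
    for initial_coords in ordinary_hairs_initial:
        simulated_pose = redirect_ordinary_hair(initial_coords, guide_motion_info)
        renderer.draw_strand(simulated_pose)
```

Because each ordinary hair keeps its own initial pose, hairs that follow the same guide still end up with different simulated poses, which is the source of the detail preservation described above.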
In this embodiment, hair data is acquired, the hair data being used to represent the initial pose of a guide hair and the initial poses of the ordinary hairs corresponding to the guide hair; the guide hair is simulated based on the hair data to obtain a simulated pose of the guide hair; motion information of the guide hair is obtained according to the simulated pose and the initial pose, the motion information representing a pose change feature of the simulated pose of the guide hair relative to the initial pose; and a hair rendering image is obtained according to the motion information and the initial poses of the ordinary hairs. The simulated pose of the guide hair yields motion information characterizing the pose change of the guide hair, and this motion information is used as a pose change amount to redirect the ordinary hairs, so that each ordinary hair obtains a corresponding simulation result by combining its own initial pose with the motion information, from which the hair rendering image is obtained. Since the initial poses of the ordinary hairs differ from one another, after redirection with the corresponding motion information the simulation results are affected by the initial poses of the individual ordinary hairs and therefore also differ from one another, so that hair details are preserved and the realism and visual effect of the hair rendering image are improved.

Referring to FIG. 9, FIG. 9 is a second flow chart of the hair processing method provided by an embodiment of the present disclosure. This embodiment adds, on the basis of the embodiment shown in FIG. 2, steps for correcting the motion information, and is described in detail below. The hair processing method includes:
Step S201: acquiring hair data, where the hair data includes the initial coordinates of the hair particles constituting the guide hair and the initial coordinates of the hair particles constituting the ordinary hairs.

Step S202: acquiring the particle motion state of a hair particle of the guide hair.

Exemplarily, the hair data contains the initial poses of the different guide hairs, the initial pose of a guide hair being represented by the initial coordinates of the hair particles of the guide hair. Correspondingly, the hair data also contains the initial poses of the ordinary hairs corresponding to each guide hair, the initial pose of an ordinary hair being represented by the initial coordinates of the hair particles of the ordinary hair. When a guide hair moves or tends to move, each hair particle of the guide hair has a corresponding particle motion state, including for example the motion velocity and motion acceleration of the hair particle. The particle motion state can be obtained by a dynamics simulation tool, as already introduced in the embodiment shown in FIG. 2, and is not repeated here.

Step S203: obtaining a simulated coordinate according to the initial coordinate of the hair particle of the guide hair and the corresponding particle motion state.

Exemplarily, after the particle motion state of a hair particle is obtained, the initial coordinate of the hair particle is transformed according to the particle motion state to obtain the corresponding simulated coordinate. Exemplarily, the particle motion state includes a first motion velocity of the hair particle and a first motion acceleration of the hair particle. The simulated coordinate is obtained by integrating the first motion velocity once and the first motion acceleration twice and adding the results to the initial coordinate; for the specific implementation, refer to the description of formula (1) in the embodiment shown in FIG. 2. Of course, it can be understood that in other possible implementations, the simulated coordinate may also be obtained only by a single integration of the first motion velocity, or only by a double integration of the first motion acceleration, which is not repeated here.
In one possible implementation, as shown in FIG. 10, the specific implementation of step S203 includes:

Step S2031: obtaining a first attenuation speed corresponding to the hair particle according to the first motion velocity of the hair particle.

Step S2032: correcting the particle motion state of the hair particle according to the first attenuation speed to obtain a first corrected motion state.

Step S2033: obtaining the simulated coordinate according to the initial coordinate of the hair particle and the corresponding first corrected motion state.

In a real environment, hairs are subjected during movement to external forces such as friction, which form friction damping. In order to improve the visual realism of the hair during rendering, in this embodiment a matching attenuation speed is applied to each hair particle of the guide hair according to its motion velocity, where the first motion velocity of the hair particle is positively correlated with the corresponding first attenuation speed, that is, the greater the first motion velocity, the greater the first attenuation speed. The first attenuation speed is then subtracted from the first motion velocity to obtain the particle motion state of the hair particle on the guide hair after the friction damping is applied, that is, the first corrected motion state, for example a corrected motion velocity and a corrected motion acceleration of the hair particle. The simulated coordinate is then obtained based on the initial coordinate of the hair particle and the corresponding first corrected motion state, which further improves the accuracy of the simulated coordinate and the realism of the hair rendering effect. The method of obtaining the corresponding simulated coordinate based on the initial coordinate of a hair particle and the corresponding particle motion state (here the first corrected motion state) has been introduced in the previous embodiment and is not repeated here.
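A minimal sketch of the first correction, assuming the attenuation speed is simply proportional to the particle's first motion velocity (the disclosure only requires that the two be positively correlated); the damping coefficient is an illustrative parameter.

```python
import numpy as np

def apply_friction_damping(velocity, damping_coefficient=0.1):
    """Return the first corrected velocity: an attenuation speed that grows with the
    motion speed is subtracted from the motion velocity (simple proportional damping)."""
    velocity = np.asarray(velocity, dtype=float)
    attenuation = damping_coefficient * velocity      # positively correlated with speed
    return velocity - attenuation

def corrected_simulation_coordinate(x_t, v_t, a_t, dt):
    """Step S2033: integrate with the first corrected motion state instead of the raw one."""
    v_corrected = apply_friction_damping(v_t)
    return np.asarray(x_t, dtype=float) + v_corrected * dt + np.asarray(a_t, dtype=float) * dt ** 2
```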
In another possible implementation, as shown in FIG. 11, the specific implementation of step S203 includes:

Step S2034: setting a virtual hair particle of the guide hair, the virtual hair particle being located in the extension direction on the side of the hair tip particle of the guide hair.

Step S2035: obtaining a second attenuation speed corresponding to the hair tip particle according to the position constraint relationship between the virtual hair particle and the hair tip particle, where the direction of the second attenuation speed is opposite to the direction of the line from the hair tip particle to the hair particle adjacent to the hair tip particle.

Step S2036: correcting the particle motion state of the hair particle according to the second attenuation speed to obtain a second corrected motion state.

Step S2037: obtaining the simulated coordinate according to the initial coordinate of the hair particle and the corresponding second corrected motion state.
Exemplarily, the movement of each hair particle on the guide hair is mainly produced by the traction forces of the hair particles adjacent to it. On the one hand, the hair particle at the root of the guide hair (i.e., the hair root particle) is directly fixed to the surface of the head model and is mainly affected by the constraint force of the head model, so its position moves rigidly with the movement of the head model; it is also, in effect, the source of the driving force for the movement of the guide hair, so the position of the hair root particle does not drift. On the other hand, each hair particle in the middle part of the guide hair is constrained by the hair particles adjacent to it on both sides (a bidirectional constraint), so the motion positions of the hair particles in the middle part are usually also stable. However, the last hair particle located at the outer end of the guide hair (i.e., the hair tip particle) is constrained only by the adjacent hair particle on one side; therefore, when it is simulated, its motion state may be unstable, which visually manifests itself as abnormal jitter of the hair tip.

To solve the above problem, in the embodiment of the present disclosure a virtual hair particle is generated beyond the hair tip particle of the guide hair, where the coordinate of the virtual hair particle can be determined based on the hair tip particle and the coordinates of at least one hair particle preceding the hair tip particle. More specifically, for example, the corresponding hair curvature and the spacing between hair particles are calculated from the coordinates of the hair tip particle and of at least one adjacent hair particle preceding it, and the coordinate of the virtual hair particle is then obtained based on the calculated hair curvature and particle spacing. FIG. 12 is a schematic diagram of a virtual hair particle provided by an embodiment of the present disclosure. As shown in FIG. 12, the coordinates of the hair tip particle P1 of the guide hair and of the two hair particles P2 and P3 preceding it (shown as P1, P2 and P3 in the figure) are obtained from the hair data; the position of the virtual hair particle P0 (shown as P0 in the figure) is then obtained according to the spacing between the coordinates of P1, P2 and P3 and the corresponding hair curvature, completing the construction of the virtual hair particle.

Afterwards, further, based on a dynamics simulation tool, the second attenuation speed corresponding to the hair tip particle is calculated using the position constraint relationship between the virtual hair particle and the hair tip particle. The second attenuation speed is a velocity component of the hair tip particle caused by the traction force exerted on the hair tip particle by the virtual hair particle, and its direction is opposite to the direction of the line from the hair tip particle to the hair particle adjacent to the hair tip particle. The particle motion state of the hair particle is then corrected in combination with the second attenuation speed of the hair tip particle to obtain the second corrected motion state, for example a corrected motion velocity and a corrected motion acceleration. The simulated coordinate is then obtained based on the second corrected motion state and the initial coordinate of the hair particle; the specific implementation is similar to obtaining the simulated coordinate based on the first corrected motion state and the initial coordinate of the hair particle, which has been described in detail in the previous embodiment and is not repeated here.
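The following sketch is an assumption of this rewrite rather than code from the disclosure: it extrapolates a virtual particle beyond the tip by continuing the last segment of the strand, and derives a second attenuation speed for the tip whose direction is the reverse of the tip-to-neighbour line and whose magnitude grows with the violation of the tip/virtual-particle position constraint. The exact construction from curvature, the stiffness parameter, and the assumption that this component is subtracted from the tip's velocity (like the first attenuation speed) are all illustrative choices.

```python
import numpy as np

def build_virtual_tip_particle(rest_particles):
    """Place a virtual particle beyond the tip by extending the last rest-pose segment.

    rest_particles: (N, 3) array ordered from root to tip; the last row is the tip particle."""
    tip, neighbour = rest_particles[-1], rest_particles[-2]
    return tip + (tip - neighbour)            # same spacing, same direction as the last segment

def second_attenuation_velocity(current_particles, virtual_particle, rest_spacing, stiffness=0.5):
    """Velocity component damping the tip: its direction is the reverse of the
    tip-to-neighbour line, and it grows with the deviation of the tip-to-virtual-particle
    distance from the rest spacing (the position constraint between the two)."""
    tip, neighbour = current_particles[-1], current_particles[-2]
    out_dir = tip - neighbour                 # reverse of the line from tip to its neighbour
    norm = np.linalg.norm(out_dir)
    if norm < 1e-8:
        return np.zeros(3)
    violation = abs(np.linalg.norm(tip - virtual_particle) - rest_spacing)
    # Assumed usage: subtract this from the tip particle's velocity when forming
    # the second corrected motion state, which damps outward tip motion.
    return stiffness * violation * out_dir / norm
```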
In the steps of this embodiment, the hair tip particle is constrained by constructing a virtual hair particle, which improves the motion stability of the hair tip particle and the visual effect of the hair rendering image.

Of course, it can be understood that the two methods of obtaining the simulated coordinate in the embodiments corresponding to FIG. 10 and FIG. 11 above can also be used in combination. That is, in one possible implementation, the first corrected motion state and the second corrected motion state corresponding to the hair particle are obtained respectively, and the initial coordinate is then transformed based on the first corrected motion state and the second corrected motion state in turn to obtain the simulated coordinate, thereby obtaining a more accurate simulated coordinate and improving the visual effect of the hair rendering image.
Step S204: obtaining the simulated pose of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.

Optionally, after step S204, the method further includes:

Step S205: acquiring constraint information of the guide hair, and correcting the simulated pose according to the constraint information to obtain a corrected simulated pose, where the constraint information is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs.

Exemplarily, the guide hair may also be configured with constraint information, which is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs. More specifically, for example, the constraint information limits the hair length of the guide hair to a preset length interval; the constraint information limits the hair shape of the guide hair, for example its curvature, to a preset curvature interval, thereby ensuring the physical plausibility of the guide hair; and the constraint information limits the distance between guide hairs to be greater than a length threshold, thereby ensuring a reasonable distribution of the guide hairs and improving the overall visual effect of the hair rendering image.

Further, the constraint information of the guide hair may be preset as required, or may be automatically generated based on the appearance model of the person or animal, which is not described in detail here. After the simulated pose is obtained, the simulated pose of the guide hair is corrected with the above constraint information to obtain the corrected simulated pose, so that the simulation result of the guide hair is more realistic and the rendering effect of the subsequent hair rendering image is improved.
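As one concrete example of such a correction, the sketch below re-enforces a per-segment length constraint on the simulated pose by clamping each segment of the guide hair back into a preset length interval, walking from the root toward the tip. The interval bounds are illustrative parameters, and shape (curvature) and inter-hair distance constraints could be enforced by similar projection steps; this is an assumed illustration, not the exact correction used by the disclosure.

```python
import numpy as np

def enforce_segment_length(simulated_particles, min_len, max_len):
    """Project each segment of the simulated pose back into [min_len, max_len],
    keeping the root fixed on the head model and propagating toward the tip."""
    corrected = np.asarray(simulated_particles, dtype=float).copy()
    for i in range(1, len(corrected)):
        seg = corrected[i] - corrected[i - 1]
        length = np.linalg.norm(seg)
        if length < 1e-8:
            continue
        clamped = np.clip(length, min_len, max_len)
        corrected[i] = corrected[i - 1] + seg / length * clamped
    return corrected
```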
Step S206: obtaining the motion information of the guide hair according to the simulated posture or the corrected simulated posture and the initial posture, where the motion information includes the coordinate change vectors corresponding to the hair particles constituting the guide hair.
The specific implementation of determining the motion information based on the simulated posture and the initial posture of the guide hair has already been described in the embodiment shown in FIG. 2; it is similar to determining the motion information based on the corrected simulated posture and the initial posture in this step, so it is not repeated here, and reference may be made to the description under the relevant steps of the embodiment shown in FIG. 2.
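A minimal sketch of forming such per-particle coordinate change vectors is shown below; modelling the motion information simply as the per-particle displacement between the initial and (corrected) simulated coordinates is an assumption of this sketch.

```python
import numpy as np

def motion_information(init_coords, sim_coords):
    # One coordinate change vector per hair particle of the guide hair.
    return np.asarray(sim_coords) - np.asarray(init_coords)
```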
Optionally, after step S206, the method further includes:
Step S207: obtaining guide data corresponding to the hair data, and obtaining the weighted motion information corresponding to each common hair according to the guide data, where the guide data is used to characterize the mapping relationship and mapping weights between the guide hair and the corresponding common hairs.
Exemplarily, the guide data is data that characterizes the mapping relationship and mapping weights between a guide hair and its corresponding common hairs; the guide data may be contained in the hair data, or may be independent data corresponding to the hair data. In one possible implementation, the guide data contains an index identifier corresponding to each guide hair, and the hair identifiers of the common hairs corresponding to each guide hair. Further, the guide data also contains a mapping weight corresponding to each common hair, where the mapping weight is, for example, a normalized value characterizing the degree to which the common hair is affected by the motion of the guide hair, with a value range of, for example, (0, 1]. After the motion information corresponding to each common hair is weighted by the mapping weight, the weighted motion information corresponding to each common hair is obtained.
FIG. 13 is a schematic diagram of generating weighted motion information according to an embodiment of the present disclosure. As shown in FIG. 13, the motion information generated based on the simulated posture of guide hair A is motion information Info_A, which rotates the guide hair counterclockwise by 30 degrees. Based on the guide data, the common hairs corresponding to guide hair A include common hair A_1, common hair A_2 and common hair A_3 (shown as A_1, A_2 and A_3 in the figure), with a mapping weight coef_w = 1 for common hair A_1, coef_w = 0.7 for common hair A_2 and coef_w = 0.3 for common hair A_3. According to the guide data, the weighted motion information generated for common hair A_1 is Info_A1, which rotates common hair A_1 counterclockwise by 30 degrees (shown as 30 in the figure, the same below); the weighted motion information generated for common hair A_2 is Info_A2, which rotates common hair A_2 counterclockwise by 21 degrees (the product of 30 degrees and the weight coefficient 0.7, the same below); and the weighted motion information generated for common hair A_3 is Info_A3, which rotates common hair A_3 counterclockwise by 9 degrees.
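The arithmetic of this example can be captured in a short sketch; representing the motion information by a single rotation angle and the guide data by a plain dictionary are assumptions made here for brevity.

```python
def weighted_motion_information(guide_angle_deg, mapping_weights):
    # Scale the guide hair's rotation by each common hair's mapping weight.
    return {hair_id: guide_angle_deg * coef_w
            for hair_id, coef_w in mapping_weights.items()}

# Reproduces the FIG. 13 example: 30 degrees * (1, 0.7, 0.3) -> 30, 21 and 9 degrees.
print(weighted_motion_information(30.0, {"A_1": 1.0, "A_2": 0.7, "A_3": 0.3}))
```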
In this embodiment, the mapping weights of the common hairs corresponding to each guide hair are further determined through the guide data, so that the appearance of the common hairs is more diverse, the hair detail is richer, and the rendering effect of the hair rendering image is improved.
Step S208: simulating the initial posture of the common hair based on the motion information or the weighted motion information to obtain the simulated posture of the common hair.
In one possible implementation, as shown in FIG. 14, a specific implementation of step S208 includes:
Step S2081: obtaining, based on the correspondence between the guide hair and the common hairs, the hair particles of the common hair corresponding to the hair particles of the guide hair.
Step S2082: obtaining, according to the motion information or the weighted motion information, the target coordinate change vector corresponding to each hair particle of the common hair.
Step S2083: obtaining the simulated coordinates corresponding to the hair particles of the common hair based on the target coordinate change vectors of the hair particles of the common hair and the corresponding initial coordinates.
Step S2084: obtaining the simulated posture of the common hair according to the simulated coordinates of at least two hair particles of the common hair.
Exemplarily, based on the guide data, the hair particles of each common hair corresponding to the hair particles of the guide hair are obtained. The number of hair particles of each common hair is the same as the number of hair particles of the corresponding guide hair, so the hair particle of a common hair corresponding to a given guide-hair particle can be obtained from the particle index of the guide-hair particle and the particle indexes of the common hair's particles. The motion information or the weighted motion information contains the coordinate change vector corresponding to each hair particle of the guide hair, and based on the mapping relationship determined in the previous step, the target coordinate change vector corresponding to each hair particle of the common hair is obtained. The specific form of the coordinate change vector has been described under the relevant steps of the embodiment shown in FIG. 2 and is not repeated here. Further, the initial coordinates corresponding to the hair particles of the common hair are transformed by the target coordinate change vectors, so that the motion information of the guide hair is transferred to the corresponding common hairs and the common hairs perform a motion similar to that of the guide hair, yielding the simulated coordinates corresponding to the hair particles of the common hair; the simulated posture of each common hair is then obtained from the set of simulated coordinates of at least two of its hair particles.
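As a concrete illustration of this transfer, the following Python sketch applies a guide hair's coordinate change vectors, scaled by each common hair's mapping weight, to the initial coordinates of the common hairs; the dictionary-based data layout and the default weight of 1.0 are assumptions of this sketch.

```python
import numpy as np

def simulate_common_hairs(common_init, guide_deltas, mapping_weights):
    simulated = {}
    for hair_id, init_coords in common_init.items():
        # Target coordinate change vectors: the guide hair's change vectors,
        # scaled by this common hair's mapping weight (weighted motion information).
        target_deltas = mapping_weights.get(hair_id, 1.0) * guide_deltas
        # Particle i of the common hair follows particle i of the guide hair,
        # matched by particle index, since both strands have the same particle count.
        simulated[hair_id] = init_coords + target_deltas
    return simulated
```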
Step S209: performing image rendering based on the simulated posture of the common hair to obtain the hair rendering image.
In this embodiment, the implementations of step S204 and step S209 have already been described under step S102 and step S104 of the embodiment shown in FIG. 2 and are not repeated here.
Corresponding to the hair processing method of the above embodiments, FIG. 15 is a structural block diagram of a hair processing apparatus according to an embodiment of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown. Referring to FIG. 15, the hair processing apparatus 3 includes:
an acquisition module 31, configured to acquire hair data, where the hair data is used to characterize the initial posture of the guide hair and the initial posture of the common hair corresponding to the guide hair;
a simulation module 32, configured to simulate the guide hair based on the hair data to obtain the simulated posture of the guide hair;
a processing module 33, configured to obtain the motion information of the guide hair according to the simulated posture and the initial posture, where the motion information characterizes the posture change of the simulated posture of the guide hair relative to the initial posture;
a rendering module 34, configured to obtain a hair rendering image according to the motion information and the initial posture of the common hair.
In an embodiment of the present disclosure, the hair data includes the initial coordinates of the hair particles constituting the guide hair, and the simulation module 32 is specifically configured to: acquire the particle motion state of the hair particles of the guide hair, where the particle motion state characterizes the motion speed and/or motion acceleration of the hair particles; obtain simulated coordinates according to the initial coordinates of the hair particles and the corresponding particle motion states; and obtain the simulated posture of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.
In an embodiment of the present disclosure, the particle motion state includes a first motion speed of the hair particle, and when obtaining the simulated coordinates according to the initial coordinates of the hair particles and the corresponding particle motion states, the simulation module 32 is specifically configured to: obtain a first decay speed corresponding to the hair particle according to the first motion speed of the hair particle; correct the particle motion state of the hair particle according to the first decay speed to obtain a first corrected motion state; and obtain the simulated coordinates according to the initial coordinates of the hair particle and the corresponding first corrected motion state.
In an embodiment of the present disclosure, the hair particles of the guide hair include a hair-tip particle, and when obtaining the simulated coordinates according to the initial coordinates of the hair particles and the corresponding particle motion states, the simulation module 32 is specifically configured to: generate a virtual hair particle beyond the hair-tip particle of the guide hair; obtain a second decay speed corresponding to the hair-tip particle according to the position constraint relationship between the virtual hair particle and the hair-tip particle, where the direction of the second decay speed is opposite to the direction of the line from the hair-tip particle to the hair particle adjacent to the hair-tip particle; correct the particle motion state of the hair particle according to the second decay speed to obtain a second corrected motion state; and obtain the simulated coordinates according to the initial coordinates of the hair particle and the corresponding second corrected motion state.
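For illustration, the following Python sketch computes a second decay speed for the hair-tip particle whose direction follows the description above (the reverse of the line from the tip particle to its adjacent particle); taking the magnitude to be proportional to the current tip speed, and subtracting the decay speed from the tip's motion state, are assumptions of this sketch, since the exact form of the position constraint is not restated here.

```python
import numpy as np

def second_decay_velocity(coords, tip_velocity, k_tip=0.5):
    tip, neighbour = coords[-1], coords[-2]
    # Reverse of the direction of the line from the hair-tip particle to its
    # adjacent particle, i.e. pointing outward along the strand extension
    # toward the virtual hair particle.
    direction = tip - neighbour
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    # Assumed magnitude: proportional to the tip particle's current speed.
    return k_tip * float(np.linalg.norm(tip_velocity)) * direction

def second_corrected_tip_state(tip_velocity, decay_velocity):
    # Assumed combination: subtract the decay speed from the tip's motion state.
    return tip_velocity - decay_velocity
```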
In an embodiment of the present disclosure, the rendering module 34 is specifically configured to: simulate the initial posture of the common hair based on the motion information to obtain the simulated posture of the common hair; and perform image rendering based on the simulated posture of the common hair to obtain the hair rendering image.
In an embodiment of the present disclosure, the motion information includes the coordinate change vectors corresponding to the hair particles constituting the guide hair, and the initial posture includes the initial coordinates of at least two hair particles of the common hair; when simulating the initial posture of the common hair based on the motion information to obtain the simulated posture of the common hair, the rendering module 34 is specifically configured to: obtain, among the common hairs corresponding to the guide hair, the hair particles of the common hair corresponding to the hair particles of the guide hair; obtain, according to the motion information, the coordinate change vector corresponding to each hair particle of the common hair; obtain the simulated coordinates corresponding to the hair particles of the common hair based on the coordinate change vectors of the hair particles of the common hair and the corresponding initial coordinates; and obtain the simulated posture of the common hair according to the simulated coordinates of at least two hair particles of the common hair.
In an embodiment of the present disclosure, the initial posture includes the initial coordinates of at least two hair particles corresponding to the guide hair, and the simulated posture includes the simulated coordinates of the at least two hair particles corresponding to the guide hair; the processing module 33 is specifically configured to: obtain the coordinate change vector of each hair particle according to the initial coordinates and the corresponding simulated coordinates of the hair particle; and obtain the motion information of the guide hair according to the coordinate change vectors of the at least two hair particles.
In an embodiment of the present disclosure, the acquisition module 31 is further configured to acquire guide data corresponding to the hair data, where the guide data is used to characterize the mapping relationship and mapping weights between the guide hair and the corresponding common hairs; the rendering module 34 is specifically configured to: obtain, according to the guide data, the weighted motion information corresponding to each common hair; and obtain the hair rendering image according to the weighted motion information of the common hair and the initial posture of the common hair.
In an embodiment of the present disclosure, after simulating the guide hair based on the hair data to obtain the simulated posture of the guide hair, the simulation module 32 is further configured to: acquire constraint information of the guide hair, where the constraint information is used to limit the hair length and/or hair shape of the guide hair, and/or the distance between at least two guide hairs; and correct the simulated posture according to the constraint information to obtain a corrected simulated posture; the processing module 33 is specifically configured to obtain the motion information of the guide hair according to the corrected simulated posture and the initial posture of the guide hair.
The acquisition module 31, the simulation module 32, the processing module 33 and the rendering module 34 are connected in sequence. The hair processing apparatus 3 provided in this embodiment can execute the technical solutions of the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
FIG. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 16, the electronic device 4 includes:
a processor 41, and a memory 42 communicatively connected to the processor 41;
the memory 42 stores computer-executable instructions;
the processor 41 executes the computer-executable instructions stored in the memory 42 to implement the hair processing method in the embodiments shown in FIG. 2 to FIG. 14.
Optionally, the processor 41 and the memory 42 are connected via a bus 43.
For related descriptions, reference may be made to the descriptions and effects corresponding to the steps of the embodiments corresponding to FIG. 2 to FIG. 14, which are not repeated here.
An embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the hair processing method provided by any one of the embodiments corresponding to FIG. 2 to FIG. 14 of the present disclosure.
An embodiment of the present disclosure provides a computer program product including a computer program which, when executed by a processor, implements the hair processing method in the embodiments shown in FIG. 2 to FIG. 14.
Referring to FIG. 17, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and vehicle-mounted terminals (e.g., vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 17 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 17, the electronic device 900 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing apparatus 901, the ROM 902 and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Generally, the following apparatuses may be connected to the I/O interface 905: an input apparatus 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; an output apparatus 907 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage apparatus 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 17 shows the electronic device 900 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 909, installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the methods shown in the above embodiments.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The above description is merely an account of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (14)

1. A hair processing method, comprising:
    acquiring hair data, and dividing the hair data into guide hair and common hair;
    processing the guide hair based on an initial posture of the guide hair to obtain a simulated posture of the guide hair;
    obtaining motion information of the guide hair according to the simulated posture and the initial posture of the guide hair, wherein the motion information characterizes a posture change of the simulated posture of the guide hair relative to the initial posture of the guide hair; and
    obtaining a hair rendering image according to the motion information and an initial posture of the common hair.
2. The method according to claim 1, wherein the processing the guide hair based on the initial posture of the guide hair to obtain the simulated posture of the guide hair comprises:
    acquiring a particle motion state of a hair particle of the guide hair, wherein the particle motion state characterizes a motion speed and/or a motion acceleration of the hair particle;
    obtaining simulated coordinates according to initial coordinates of the hair particle and the corresponding particle motion state; and
    obtaining the simulated posture of the guide hair according to the simulated coordinates of at least two hair particles of the guide hair.
3. The method according to claim 2, wherein the particle motion state comprises a first motion speed of the hair particle, and the obtaining simulated coordinates according to the initial coordinates of the hair particle and the corresponding particle motion state comprises:
    obtaining a first decay speed corresponding to the hair particle according to the first motion speed of the hair particle;
    correcting the particle motion state of the hair particle according to the first decay speed to obtain a first corrected motion state; and
    obtaining the simulated coordinates according to the initial coordinates of the hair particle and the corresponding first corrected motion state.
4. The method according to claim 2, wherein the at least two hair particles of the guide hair comprise a hair-tip particle, and the obtaining simulated coordinates according to the initial coordinates of the hair particle and the corresponding particle motion state comprises:
    setting a virtual hair particle of the guide hair, wherein the virtual hair particle is located in an extension direction on a hair-tip-particle side of the guide hair;
    obtaining a second decay speed corresponding to the hair-tip particle according to a position constraint relationship between the virtual hair particle and the hair-tip particle, wherein a direction of the second decay speed is opposite to a direction of a line from the hair-tip particle to a hair particle adjacent to the hair-tip particle;
    correcting the particle motion state of the hair particle according to the second decay speed to obtain a second corrected motion state; and
    obtaining the simulated coordinates according to the initial coordinates of the hair particle and the corresponding second corrected motion state.
5. The method according to any one of claims 1 to 4, wherein the obtaining a hair rendering image according to the motion information and the initial posture of the common hair comprises:
    processing the initial posture of the common hair based on the motion information to obtain a simulated posture of the common hair; and
    performing image rendering based on the simulated posture of the common hair to obtain the hair rendering image.
6. The method according to claim 5, wherein the processing the initial posture of the common hair based on the motion information to obtain the simulated posture of the common hair comprises:
    obtaining, based on a correspondence between the guide hair and the common hair, a hair particle of the common hair corresponding to a hair particle of the guide hair;
    obtaining, according to the motion information, a coordinate change amount corresponding to the hair particle of the common hair;
    obtaining simulated coordinates corresponding to the hair particle of the common hair based on the coordinate change amount of the hair particle of the common hair and initial coordinates of the hair particle of the common hair; and
    obtaining the simulated posture of the common hair according to the simulated coordinates of at least two hair particles of the common hair.
7. The method according to any one of claims 1 to 6, wherein the initial posture of the guide hair comprises initial coordinates of at least two hair particles corresponding to the guide hair, and the simulated posture comprises simulated coordinates of the at least two hair particles corresponding to the guide hair; and
    the obtaining motion information of the guide hair according to the simulated posture and the initial posture of the guide hair comprises:
    obtaining a coordinate change amount of a hair particle according to the initial coordinates and the corresponding simulated coordinates of the hair particle; and
    obtaining the motion information of the guide hair according to the coordinate change amounts of the at least two hair particles.
8. The method according to any one of claims 1 to 7, further comprising:
    acquiring guide data corresponding to the hair data, wherein the guide data is used to characterize a mapping relationship and mapping weights between the guide hair and the corresponding common hair;
    wherein the obtaining a hair rendering image according to the motion information and the initial posture of the common hair comprises:
    obtaining, according to the guide data, weighted motion information corresponding to each common hair; and
    obtaining the hair rendering image according to the weighted motion information of the common hair and the initial posture of the common hair.
9. The method according to any one of claims 1 to 8, wherein after the processing the guide hair based on the hair data to obtain the simulated posture of the guide hair, the method further comprises:
    acquiring constraint information of the guide hair, wherein the constraint information is used to limit a hair length and/or a hair shape of the guide hair, and/or a distance between at least two guide hairs; and
    correcting the simulated posture according to the constraint information to obtain a corrected simulated posture;
    wherein the obtaining motion information of the guide hair according to the simulated posture and the initial posture of the guide hair comprises:
    obtaining the motion information of the guide hair according to the corrected simulated posture of the guide hair and the initial posture of the guide hair.
10. A hair processing apparatus, comprising:
    an acquisition module, configured to acquire hair data and divide the hair data into guide hair and common hair;
    a simulation module, configured to process the guide hair based on an initial posture of the guide hair to obtain a simulated posture of the guide hair;
    a processing module, configured to obtain motion information of the guide hair according to the simulated posture and the initial posture of the guide hair, wherein the motion information characterizes a posture change of the simulated posture of the guide hair relative to the initial posture of the guide hair; and
    a rendering module, configured to obtain a hair rendering image according to the motion information and an initial posture of the common hair.
11. An electronic device, comprising: a processor, and a memory communicatively connected to the processor;
    wherein the memory stores computer-executable instructions; and
    the processor executes the computer-executable instructions stored in the memory to implement the hair processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the hair processing method according to any one of claims 1 to 9.
13. A computer program comprising computer-executable instructions which, when executed by a processor, implement the hair processing method according to any one of claims 1 to 9.
14. A computer program product storing computer-executable instructions which, when executed by a processor, implement the hair processing method according to any one of claims 1 to 9.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310397317.2A CN116416360A (en) 2023-04-12 2023-04-12 Hairline processing method and device, electronic equipment and storage medium
CN202310397317.2 2023-04-12

Publications (1)

Publication Number Publication Date
WO2024212842A1 true WO2024212842A1 (en) 2024-10-17

Family

ID=87054397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/085495 WO2024212842A1 (en) 2023-04-12 2024-04-02 Hair processing method and apparatus, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN116416360A (en)
WO (1) WO2024212842A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116416360A (en) * 2023-04-12 2023-07-11 北京字跳网络技术有限公司 Hairline processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942090A (en) * 2014-04-11 2014-07-23 浙江大学 Data-driven real-time hair motion simulation method
US10685499B1 (en) * 2019-01-08 2020-06-16 Ephere Inc. Dynamic detail adaptive hair modeling and editing
CN112862807A (en) * 2021-03-08 2021-05-28 网易(杭州)网络有限公司 Data processing method and device based on hair image
CN115294256A (en) * 2022-08-16 2022-11-04 北京畅游创想软件技术有限公司 Hair processing method, device, electronic equipment and computer readable storage medium
CN116416360A (en) * 2023-04-12 2023-07-11 北京字跳网络技术有限公司 Hairline processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116416360A (en) 2023-07-11

Similar Documents

Publication Publication Date Title
WO2021008627A1 (en) Game character rendering method and apparatus, electronic device, and computer-readable medium
WO2024212842A1 (en) Hair processing method and apparatus, and electronic device and storage medium
CN110069191B (en) Terminal-based image dragging deformation implementation method and device
WO2020186934A1 (en) Method, apparatus, and electronic device for generating animation containing dynamic background
CN115641375A (en) Method, device, equipment, medium and program product for processing hair of virtual object
CN109889893A (en) Method for processing video frequency, device and equipment
CN110322571B (en) Page processing method, device and medium
CN109698914A (en) A kind of lightning special efficacy rendering method, device, equipment and storage medium
CN110035271B (en) Fidelity image generation method and device and electronic equipment
CN112132936A (en) Picture rendering method and device, computer equipment and storage medium
CN112581635B (en) Universal quick face changing method and device, electronic equipment and storage medium
CN107221024B (en) Virtual object hair processing method and device, storage medium and electronic equipment
CN111080755B (en) Motion calculation method and device, storage medium and electronic equipment
CN110288532B (en) Method, apparatus, device and computer readable storage medium for generating whole body image
US20240331303A1 (en) Strand Simulation in Multiple Levels
WO2024094158A1 (en) Special effect processing method and apparatus, device, and storage medium
WO2024222171A1 (en) Service processing method and apparatus, and computer device and storage medium
WO2024131532A1 (en) Hair processing method and apparatus, and device and storage medium
WO2024169893A1 (en) Model construction method and apparatus, virtual image generation method and apparatus, device, and medium
CN110288523B (en) Image generation method and device
CN112734391A (en) Deduction method and device for transformer substation construction, computer equipment and storage medium
WO2024088144A1 (en) Augmented reality picture processing method and apparatus, and electronic device and storage medium
Prendinger et al. MPML3D: Scripting agents for the 3D internet
WO2023116562A1 (en) Image display method and apparatus, electronic device, and storage medium
KR102593043B1 (en) Augmented reality-based display methods, devices, storage media, and program products