Disclosure of Invention
The application provides a cartoon image generating system and method based on character features. On the basis of generating an initial cartoon image from character feature data, the initial cartoon image is modified according to the writing input of a writing pen to obtain a basic cartoon image, so that cartoon design can be carried out rapidly, user participation is facilitated, and user experience is improved.
In view of this, an aspect of the present application proposes a cartoon image generating system based on character features, comprising: an acquisition module, a generating module, a prompting module, a display device and a processing module; wherein:
The acquisition module is used for acquiring the identity information of the first object;
the generating module is used for generating a first behavior guide of the first object according to the identity information;
the prompting module is used for outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action;
the acquisition module is further configured to acquire character feature data of the first object during the process of executing the first behavior action by the first object;
the processing module is used for generating an initial cartoon image of the first object according to the character characteristic data;
the display device is used for displaying the initial cartoon image and receiving writing input of a writing pen;
the processing module is further used for modifying the initial cartoon image according to the writing input, and generating the basic cartoon image of the first object.
Optionally, the acquiring module is further configured to acquire first voice data;
the processing module is further used for identifying the first voice data;
the processing module is further used for modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object.
Optionally, in the step of modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object, the processing module is specifically configured to:
extracting a first keyword and a second keyword from the identification result;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
Optionally, the character feature data at least includes one of a face image, stature data and gender data; in the step of generating the initial cartoon image of the first object according to the character feature data, the processing module is specifically configured to:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
Optionally, the acquiring module is further configured to acquire a second behavior guide of the first object;
the processing module is also used for controlling the movement of the basic cartoon image by using the second behavior guide and recording movement data;
the prompting module is further configured to output the second behavior guide to prompt the first object to execute a corresponding second behavior action;
the acquisition module is further configured to acquire video data of the first object during the process of executing the second behavior action by the first object;
the processing module is further used for generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
Another aspect of the application provides a cartoon image generating method based on character features, comprising the following steps:
acquiring identity information of a first object;
generating a first behavior guide of the first object according to the identity information;
outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action;
collecting character characteristic data of the first object in the process that the first object executes the first behavior action;
generating an initial cartoon image of the first object according to the character characteristic data;
displaying the initial cartoon image on a display device;
the display device receives writing input of a writing pen;
and modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
Optionally, while the display device receives writing input of the writing pen, the method further comprises:
collecting first voice data and identifying the first voice data;
and modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object.
Optionally, the step of modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object includes:
extracting a first keyword and a second keyword from the identification result;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
Optionally, the character feature data at least includes one of a face image, stature data and gender data; the step of generating the initial cartoon image of the first object according to the character feature data includes:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
Optionally, after the step of modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object, the method further includes:
acquiring a second behavior guide of the first object;
controlling the basic cartoon image to move by utilizing the second behavior guide, and recording movement data;
outputting the second behavior guide to prompt the first object to execute a corresponding second behavior action;
collecting video data of the first object in the process of executing the second behavior action by the first object;
and generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
By adopting the technical scheme of the application, the cartoon image generating system based on character features is provided with an acquisition module, a generating module, a prompting module, a display device and a processing module. The acquisition module is used for acquiring the identity information of the first object; the generating module is used for generating a first behavior guide of the first object according to the identity information; the prompting module is used for outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action; the acquisition module is further configured to acquire character feature data of the first object during the process of executing the first behavior action by the first object; the processing module is used for generating an initial cartoon image of the first object according to the character feature data; the display device is used for displaying the initial cartoon image and receiving writing input of a writing pen; and the processing module is further used for modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object. According to the scheme provided by the embodiments of the application, on the basis of generating the initial cartoon image according to the character feature data, the initial cartoon image is modified according to the writing input of the writing pen to obtain the basic cartoon image, so that cartoon design can be carried out rapidly, user participation is facilitated, and user experience is improved.
Detailed Description
In order that the above-recited objects, features and advantages of the present application may be more clearly understood, a more particular description of the application is given below with reference to the accompanying drawings and the following detailed description. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
The terms "first", "second" and the like in the description, in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
A cartoon image generating system and method based on character features according to some embodiments of the present application will be described below with reference to FIG. 1 and FIG. 2.
As shown in FIG. 1, one embodiment of the present application provides a cartoon image generating system based on character features, comprising: an acquisition module, a generating module, a prompting module, a display device and a processing module; wherein:
The acquisition module is used for acquiring the identity information of the first object;
the generating module is used for generating a first behavior guide of the first object according to the identity information;
the prompting module is used for outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action;
the acquisition module is further configured to acquire character feature data of the first object during the process of executing the first behavior action by the first object;
the processing module is used for generating an initial cartoon image of the first object according to the character characteristic data;
the display device is used for displaying the initial cartoon image and receiving writing input of a writing pen;
the processing module is further used for modifying the initial cartoon image according to the writing input, and generating the basic cartoon image of the first object.
It will be appreciated that the writing pen may be used to provide writing input on a touch screen of the display device, and voice data may also be collected by a recording device provided on the pen. In the embodiment of the application, the identity information of a first object (namely, the person for whom the cartoon image is to be generated) is first acquired, wherein the identity information may be face data, fingerprint data, voiceprint data and the like. A first behavior guide of the first object is then generated according to the identity information; specifically, corresponding person data is obtained according to the identity information, and the behavior guide corresponding to that person is obtained from a behavior guide library according to the person data and taken as the first behavior guide of the first object. The first behavior guide is then output by voice broadcasting or video playing to prompt the first object to execute a corresponding first behavior action, and character feature data of the first object is collected by acquisition sensors while the first object executes the first behavior action, wherein the character feature data at least includes one of a face image, stature data, gender data, health data, exercise capacity evaluation data and the like. An initial cartoon image of the first object is then generated according to the character feature data; for example, a standard cartoon image matching the character feature data may be selected from a standard cartoon image library as the corresponding initial cartoon image (or the matched standard cartoon image may be corrected according to a preset method to obtain the corresponding initial cartoon image).
The initial cartoon image is displayed on the display device. When an operation performed with the writing pen is detected in the area where the initial cartoon image is displayed, the display device receives the writing input of the writing pen, and the initial cartoon image is modified according to the writing input to generate the basic cartoon image of the first object.
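The overall flow described above (identity acquisition, behavior guide lookup, feature capture, initial image generation, pen modification) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the behavior guide library, the part-parameter representation of a cartoon image, and all function names are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class CartoonImage:
    # part name -> parameter value (e.g. "hair" -> "short")
    parts: dict = field(default_factory=dict)

# Hypothetical behavior guide library keyed by person category.
GUIDE_LIBRARY = {
    "child": "wave both hands",
    "adult": "turn around slowly",
}

def first_behavior_guide(identity: dict) -> str:
    """Look up the behavior guide matching the identified person's category."""
    return GUIDE_LIBRARY[identity.get("category", "adult")]

def initial_cartoon(features: dict) -> CartoonImage:
    """Generate an initial cartoon image from collected character feature data."""
    return CartoonImage(parts=dict(features))

def apply_pen_input(image: CartoonImage, strokes: dict) -> CartoonImage:
    """Modify the initial image according to writing input (per-part edits)."""
    image.parts.update(strokes)
    return image

identity = {"name": "user1", "category": "child"}
guide = first_behavior_guide(identity)            # "wave both hands"
img = initial_cartoon({"face": "round", "hair": "short"})
basic = apply_pen_input(img, {"hair": "curly"})   # pen edit overrides the hair part
```

The pen edit simply overrides the affected part of the initial image, which matches the described sequence: the initial image is generated first and only then refined by the user.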
By adopting the technical scheme of this embodiment, the cartoon image generating system comprises an acquisition module, a generating module, a prompting module, a display device and a processing module. The acquisition module is used for acquiring the identity information of the first object; the generating module is used for generating a first behavior guide of the first object according to the identity information; the prompting module is used for outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action; the acquisition module is further configured to acquire character feature data of the first object during the process of executing the first behavior action by the first object; the processing module is used for generating an initial cartoon image of the first object according to the character feature data; the display device is used for displaying the initial cartoon image and receiving writing input of a writing pen; and the processing module is further used for modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object. According to the scheme provided by this embodiment, on the basis of generating the initial cartoon image according to the character feature data, the initial cartoon image is modified according to the writing input of the writing pen to obtain the basic cartoon image, so that cartoon design can be carried out rapidly, user participation is facilitated, and user experience is improved.
It should be understood that the block diagram of the cartoon image generating system based on character features shown in FIG. 1 is only illustrative, and the number of modules illustrated does not limit the scope of the present application.
In some possible embodiments of the present application, the acquiring module is further configured to acquire first voice data;
the processing module is further used for identifying the first voice data;
the processing module is further used for modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object.
It will be appreciated that, in practice, a person may speak while writing or drawing with the pen, for example saying "make the eyes a bit bigger, the legs a bit longer, the face a bit thinner…". In order to modify the initial cartoon image more accurately, in an embodiment of the present application, first voice data is collected while the display device receives the writing input of the writing pen, and the first voice data is identified; the initial cartoon image is then modified according to the identification result to generate the basic cartoon image of the first object.
In some possible embodiments of the present application, in the step of modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object, the processing module is specifically configured to:
extracting a first keyword and a second keyword from the identification result;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
It may be appreciated that, in the embodiment of the present application, the first keyword may be the name of a human body part, such as the face, eyes, nose or legs, and the second keyword may be an adjective or verb applied to a body part, such as thinner, bigger, smaller, longer, or enlarge, lengthen. A first keyword and a second keyword are extracted from the identification result using a keyword extraction technique, the part to be modified of the initial cartoon image is determined according to the first keyword, and the part to be modified is modified according to the second keyword to generate the basic cartoon image of the first object. Through the scheme of this embodiment, the basic cartoon image can be obtained more accurately, conveniently and efficiently.
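The two-keyword scheme above can be sketched as a simple vocabulary match over the identified utterance. This is a hedged illustration, not the patent's actual keyword extraction technique: the part and edit vocabularies, the scale factors, and the dictionary representation of the image are all assumptions introduced for the example.

```python
import re

# Assumed vocabularies: first keywords name body parts, second keywords
# name edits mapped to illustrative scale factors.
PARTS = {"eyes", "nose", "face", "legs"}
EDITS = {"bigger": 1.2, "smaller": 0.8, "longer": 1.3, "thinner": 0.9}

def extract_keywords(text: str):
    """Return (part, scale) pairs found in the voice identification result."""
    words = re.findall(r"[a-z]+", text.lower())
    pairs, part = [], None
    for w in words:
        if w in PARTS:
            part = w                        # first keyword: part to be modified
        elif w in EDITS and part is not None:
            pairs.append((part, EDITS[w]))  # second keyword: how to modify it
            part = None
    return pairs

def apply_edits(image: dict, pairs) -> dict:
    """Scale each part to be modified by its edit factor."""
    for part, scale in pairs:
        image[part] = round(image.get(part, 1.0) * scale, 2)
    return image

edits = extract_keywords("make the eyes bigger and the legs longer")
basic = apply_edits({"eyes": 1.0, "legs": 1.0}, edits)
```

Pairing each part keyword with the next edit keyword keeps multi-part utterances unambiguous, which is the practical point of splitting the result into two keyword classes.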
In some possible embodiments of the present application, the character feature data includes at least one of a face image, stature data and gender data; in the step of generating the initial cartoon image of the first object according to the character feature data, the processing module is specifically configured to:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
It can be appreciated that, in order to generate a better-matched initial cartoon image, in an embodiment of the present application a standard cartoon image library may be preset. The standard cartoon image library includes standard cartoon character components, such as head components, face components, facial-feature components, limb components, skin components, hair components, clothing components and the like, as well as standard cartoon images assembled from these components. The collected character feature data at least includes one of a face image, stature data and gender data. According to the face image in the character feature data, a standard cartoon image corresponding to the first object (for example, one whose face similarity to the face image reaches a preset value) may be selected from the standard cartoon image library, and the standard cartoon image is then corrected according to the stature data and the gender data (for example, by modifying the figure proportion, replacing the clothing according to gender, and the like) to obtain the initial cartoon image.
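The library lookup and correction described above might look roughly like the following sketch. The face vectors, the toy similarity measure, the preset threshold and the correction rules are all placeholders standing in for a real face-matching model and component library; none of these values come from the application itself.

```python
# Hypothetical standard cartoon image library; "face_vec" stands in for the
# output of a real face-matching model.
STANDARD_LIBRARY = [
    {"id": "std1", "face_vec": (0.1, 0.9), "height": 1.0, "outfit": "neutral"},
    {"id": "std2", "face_vec": (0.8, 0.2), "height": 1.0, "outfit": "neutral"},
]

def face_similarity(a, b) -> float:
    """Toy similarity: 1 / (1 + squared distance) between face vectors."""
    d = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1.0 / (1.0 + d)

def select_standard(face_vec, threshold=0.5):
    """Select the library entry whose face similarity reaches the preset value."""
    best = max(STANDARD_LIBRARY,
               key=lambda s: face_similarity(face_vec, s["face_vec"]))
    return best if face_similarity(face_vec, best["face_vec"]) >= threshold else None

def correct(standard: dict, stature: dict, gender: str) -> dict:
    """Correct figure proportion and clothing to obtain the initial image."""
    image = dict(standard)
    image["height"] = stature["height_scale"]
    image["outfit"] = "dress" if gender == "female" else "suit"
    return image

std = select_standard((0.15, 0.85))
initial = correct(std, {"height_scale": 1.1}, "female")
```

Returning `None` below the threshold mirrors the "reaches a preset value" condition: when no library entry is similar enough, a fallback (not shown) would be needed.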
In some possible embodiments of the present application, the obtaining module is further configured to obtain a second behavior guide of the first object;
the processing module is further used for controlling the movement of the basic cartoon image by using the second behavior guide and recording the movement data;
the prompting module is further configured to output the second behavior guide to prompt the first object to execute a corresponding second behavior action;
the acquisition module is further configured to acquire video data of the first object during the process of executing the second behavior action by the first object;
the processing module is further used for generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
It can be appreciated that, in order to make the cartoon image more fun to use, in the embodiment of the present application a second behavior guide of the first object is further acquired; a control instruction for the basic cartoon image is then generated from the second behavior guide to control the basic cartoon image to move, and the movement data is recorded. In addition, the second behavior guide is output by voice broadcasting or video playing to prompt the first object to execute a corresponding second behavior action, and video data of the first object is collected while the first object executes the second behavior action. The motion cartoon image of the first object is then generated from the video data, the movement data of the basic cartoon image and a three-dimensional image processing technique.
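The pairing of recorded movement data with captured video can be sketched as below. The per-frame pose strings and the frame-by-frame alignment are assumptions for illustration; the three-dimensional image processing mentioned above is not reproduced here.

```python
def guide_to_movements(guide: str):
    """Translate a behavior guide into per-frame control poses (assumed format)."""
    steps = {"raise arm": ["arm_up_25", "arm_up_50", "arm_up_100"]}
    return steps.get(guide, [])

def record_motion(guide: str):
    """Control the basic cartoon image via the guide and record movement data."""
    return [{"frame": i, "pose": p}
            for i, p in enumerate(guide_to_movements(guide))]

def motion_cartoon(video_frames, movement_data):
    """Pair each captured video frame with the recorded cartoon pose."""
    return [{"video": v, "cartoon_pose": m["pose"]}
            for v, m in zip(video_frames, movement_data)]

moves = record_motion("raise arm")
animated = motion_cartoon(["f0", "f1", "f2"], moves)
```

Because the same guide drives both the cartoon image and the user's prompted action, a simple per-frame `zip` is enough to align the two recordings in this sketch.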
Referring to fig. 2, another embodiment of the present application provides a cartoon image generating method based on character features, including:
acquiring identity information of a first object;
generating a first behavior guide of the first object according to the identity information;
outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action;
collecting character characteristic data of the first object in the process that the first object executes the first behavior action;
generating an initial cartoon image of the first object according to the character characteristic data;
displaying the initial cartoon image on a display device;
the display device receives writing input of a writing pen;
and modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
It will be appreciated that the writing pen may be used to provide writing input on a touch screen of the display device, and voice data may also be collected by a recording device provided on the pen. In the embodiment of the application, the identity information of a first object (namely, the person for whom the cartoon image is to be generated) is first acquired, wherein the identity information may be face data, fingerprint data, voiceprint data and the like. A first behavior guide of the first object is then generated according to the identity information; specifically, corresponding person data is obtained according to the identity information, and the behavior guide corresponding to that person is obtained from a behavior guide library according to the person data and taken as the first behavior guide of the first object. The first behavior guide is then output by voice broadcasting or video playing to prompt the first object to execute a corresponding first behavior action, and character feature data of the first object is collected by acquisition sensors while the first object executes the first behavior action, wherein the character feature data at least includes one of a face image, stature data, gender data, health data, exercise capacity evaluation data and the like. An initial cartoon image of the first object is then generated according to the character feature data; for example, a standard cartoon image matching the character feature data may be selected from a standard cartoon image library as the corresponding initial cartoon image (or the matched standard cartoon image may be corrected according to a preset method to obtain the corresponding initial cartoon image).
The initial cartoon image is displayed on the display device. When an operation performed with the writing pen is detected in the area where the initial cartoon image is displayed, the display device receives the writing input of the writing pen, and the initial cartoon image is modified according to the writing input to generate the basic cartoon image of the first object.
By adopting the technical scheme of the embodiment, the identity information of the first object is acquired; generating a first behavior guide of the first object according to the identity information; outputting the first behavior guide to prompt the first object to execute a corresponding first behavior action; collecting character characteristic data of the first object in the process that the first object executes the first behavior action; generating an initial cartoon image of the first object according to the character characteristic data; displaying the initial animation image on a display device; the display device receives writing input of a writing pen; and modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object. According to the scheme provided by the embodiment of the application, on the basis of generating the initial cartoon image according to the character characteristic data, the initial cartoon image is automatically modified by using the writing pen to obtain the basic cartoon image, so that the cartoon design can be rapidly carried out, the participation of users is facilitated, and the user experience is improved.
In some possible embodiments of the present application, while the display device receives writing input of a writing pen, the method further includes:
collecting first voice data and identifying the first voice data;
and modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object.
It may be appreciated that, in practice, a person may speak while writing or drawing with the pen, for example saying "make the eyes a bit bigger, the legs a bit longer, the face a bit thinner…". In order to modify the initial cartoon image more accurately, in an embodiment of the present application, first voice data is collected while the display device receives the writing input of the writing pen, and the first voice data is identified by a voice recognition algorithm to obtain an identification result; the initial cartoon image is then modified according to the identification result to generate the basic cartoon image of the first object.
In some possible embodiments of the present application, the step of modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object includes:
extracting a first keyword and a second keyword from the identification result;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
It may be appreciated that, in the embodiment of the present application, the first keyword may be the name of a human body part, such as the face, eyes, nose or legs, and the second keyword may be an adjective or verb applied to a body part, such as thinner, bigger, smaller, longer, or enlarge, lengthen. A first keyword and a second keyword are extracted from the identification result using a keyword extraction technique, the part to be modified of the initial cartoon image is determined according to the first keyword, and the part to be modified is modified according to the second keyword to generate the basic cartoon image of the first object. Through the scheme of this embodiment, the basic cartoon image can be obtained more accurately, conveniently and efficiently.
In some possible embodiments of the present application, the character feature data includes at least one of a face image, stature data and gender data; the step of generating the initial cartoon image of the first object according to the character feature data includes:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
It can be appreciated that, in order to generate a better-matched initial cartoon image, in an embodiment of the present application a standard cartoon image library may be preset. The standard cartoon image library includes standard cartoon character components, such as head components, face components, facial-feature components, limb components, skin components, hair components, clothing components and the like, as well as standard cartoon images assembled from these components. The collected character feature data at least includes one of a face image, stature data and gender data. According to the face image in the character feature data, a standard cartoon image corresponding to the first object (for example, one whose face similarity to the face image reaches a preset value) may be selected from the standard cartoon image library, and the standard cartoon image is then corrected according to the stature data and the gender data (for example, by modifying the figure proportion, replacing the clothing according to gender, and the like) to obtain the initial cartoon image.
In some possible embodiments of the present application, after the step of modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object, the method further includes:
acquiring a second behavior guide of the first object;
controlling the basic cartoon image to move by utilizing the second behavior guide, and recording movement data;
outputting the second behavior guide to prompt the first object to execute a corresponding second behavior action;
collecting video data of the first object in the process of executing the second behavior action by the first object;
and generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
It can be appreciated that, in order to make the cartoon image more fun to use, in the embodiment of the present application a second behavior guide of the first object is further acquired; a control instruction for the basic cartoon image is then generated from the second behavior guide to control the basic cartoon image to move, and the movement data is recorded. In addition, the second behavior guide is output by voice broadcasting or video playing to prompt the first object to execute a corresponding second behavior action, and video data of the first object is collected while the first object executes the second behavior action. The motion cartoon image of the first object is then generated from the video data, the movement data of the basic cartoon image and a three-dimensional image processing technique.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units described above is merely a logical function division, and in actual implementation there may be other ways of dividing; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be electrical or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing is a detailed description of embodiments of the application, in which the principles and implementations of the application are explained using specific examples; the above examples are provided solely to help understand the method and core concepts of the application. Meanwhile, those skilled in the art may, in accordance with the ideas of the present application, make changes to the specific embodiments and the scope of application; in view of the above, the content of this description should not be construed as limiting the present application.
Although the present application is disclosed above, it is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps as well as software and hardware implementations, will be readily apparent to those skilled in the art without departing from the spirit and scope of the application.