
CN115512017B - Cartoon image generation system and method based on character features - Google Patents

Cartoon image generation system and method based on character features Download PDF

Info

Publication number
CN115512017B
CN115512017B
Authority
CN
China
Prior art keywords
cartoon image
data
behavior
initial
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211280752.9A
Other languages
Chinese (zh)
Other versions
CN115512017A (en)
Inventor
郑德权
吕念
雷俊文
吴毅峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhou Yi
Original Assignee
Kuang Wenwu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kuang Wenwu
Priority to CN202211280752.9A
Publication of CN115512017A
Application granted
Publication of CN115512017B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 Pens or stylus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/04162 Control or interface arrangements specially adapted for digitisers for exchanging data with external devices, e.g. smart pens, via the digitiser sensing hardware
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a cartoon image generation system and method based on character features. The method acquires identity information of a first object; generates a first behavior guideline for the first object according to the identity information; outputs the first behavior guideline to prompt the first object to perform a corresponding first behavior action; collects character feature data of the first object while the first object performs the first behavior action; generates an initial cartoon image of the first object from the character feature data; displays the initial cartoon image on a display device; receives, via the display device, writing input from a stylus; and modifies the initial cartoon image according to the writing input to generate a basic cartoon image of the first object. Because the initial cartoon image generated from the character feature data can then be modified with the stylus to obtain the basic cartoon image, the scheme is quick, makes it easy for users to participate, and improves the user experience.

Description

Cartoon image generation system and method based on character features
Technical Field
The application relates to the technical field of cartoon image generation, in particular to a cartoon image generation system and method based on character features.
Background
Computers have become an integral part of daily life, and many computer applications call for the use of an avatar. In recent years, with the continuous development of computer graphics and internet technology, avatars have become ubiquitous in entertainment (for example in animation, games, live streaming and short video) as a way to make products more fun to use, and schemes for designing and generating avatars have developed accordingly. However, existing schemes are relatively rigid: ordinary users either cannot participate in the design at all, or can participate but, lacking professional design knowledge, cannot obtain a satisfactory result.
Disclosure of Invention
The application provides a cartoon image generation system and method based on character features. On the basis of an initial cartoon image generated from character feature data, the user modifies the initial cartoon image with a stylus to obtain a basic cartoon image, so that a cartoon image can be designed quickly, user participation is easy, and the user experience is improved.
In view of this, one aspect of the present application proposes a cartoon image generation system based on character features, comprising an acquisition module, a generation module, a prompting module, a display device and a processing module, wherein:
the acquisition module is used for acquiring identity information of a first object;
the generation module is used for generating a first behavior guideline for the first object according to the identity information;
the prompting module is used for outputting the first behavior guideline to prompt the first object to perform a corresponding first behavior action;
the acquisition module is further used for collecting character feature data of the first object while the first object performs the first behavior action;
the processing module is used for generating an initial cartoon image of the first object according to the character feature data;
the display device is used for displaying the initial cartoon image and receiving writing input from a stylus;
the processing module is further used for modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
Optionally, the acquisition module is further used for collecting first voice data;
the processing module is further used for recognizing the first voice data;
the processing module is further used for modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object.
Optionally, when modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object, the processing module is specifically configured to:
extract a first keyword and a second keyword from the recognition result;
determine a part of the initial cartoon image to be modified according to the first keyword;
and modify that part according to the second keyword to generate the basic cartoon image of the first object.
Optionally, the character feature data includes at least one of a face image, stature data, gender data and personality data; when generating the initial cartoon image of the first object according to the character feature data, the processing module is specifically configured to:
select a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correct the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
Optionally, the acquisition module is further used for acquiring a second behavior guideline for the first object;
the processing module is further used for controlling the basic cartoon image to move according to the second behavior guideline, and recording the movement data;
the prompting module is further used for outputting the second behavior guideline to prompt the first object to perform a corresponding second behavior action;
the acquisition module is further used for collecting video data of the first object while the first object performs the second behavior action;
the processing module is further used for generating a moving cartoon image of the first object according to the video data and the movement data of the basic cartoon image.
In another aspect of the application, a cartoon image generation method based on character features comprises the following steps:
acquiring identity information of a first object;
generating a first behavior guideline for the first object according to the identity information;
outputting the first behavior guideline to prompt the first object to perform a corresponding first behavior action;
collecting character feature data of the first object while the first object performs the first behavior action;
generating an initial cartoon image of the first object according to the character feature data;
displaying the initial cartoon image on a display device;
receiving, by the display device, writing input from a stylus;
and modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
Optionally, while the display device receives the writing input from the stylus, the method further comprises:
collecting first voice data and recognizing the first voice data;
and modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object.
Optionally, the step of modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object includes:
extracting a first keyword and a second keyword from the recognition result;
determining a part of the initial cartoon image to be modified according to the first keyword;
and modifying that part according to the second keyword to generate the basic cartoon image of the first object.
Optionally, the character feature data includes at least one of a face image, stature data, gender data and personality data; the step of generating the initial cartoon image of the first object according to the character feature data includes:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
Optionally, after the step of modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object, the method further includes:
acquiring a second behavior guideline for the first object;
controlling the basic cartoon image to move according to the second behavior guideline, and recording the movement data;
outputting the second behavior guideline to prompt the first object to perform a corresponding second behavior action;
collecting video data of the first object while the first object performs the second behavior action;
and generating a moving cartoon image of the first object according to the video data and the movement data of the basic cartoon image.
With the technical scheme of the application, the cartoon image generation system based on character features comprises an acquisition module, a generation module, a prompting module, a display device and a processing module. The acquisition module acquires identity information of a first object; the generation module generates a first behavior guideline for the first object according to the identity information; the prompting module outputs the first behavior guideline to prompt the first object to perform a corresponding first behavior action; the acquisition module further collects character feature data of the first object while the first object performs the first behavior action; the processing module generates an initial cartoon image of the first object according to the character feature data; the display device displays the initial cartoon image and receives writing input from a stylus; and the processing module modifies the initial cartoon image according to the writing input to generate the basic cartoon image of the first object. In the scheme provided by this embodiment of the application, on the basis of an initial cartoon image generated from character feature data, the user modifies the initial cartoon image with the stylus to obtain the basic cartoon image, so that a cartoon image can be designed quickly, user participation is easy, and the user experience is improved.
Drawings
FIG. 1 is a schematic block diagram of a character feature-based animation generation system, provided in accordance with an embodiment of the application;
fig. 2 is a flowchart of a cartoon image generating method based on character features according to another embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application may be more clearly understood, the application is described in more detail below with reference to the accompanying drawings and the detailed description. It should be noted that, where there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
A cartoon figure generating system and method based on character features according to some embodiments of the present application will be described with reference to fig. 1 to 2.
As shown in FIG. 1, one embodiment of the present application provides a cartoon image generation system based on character features, comprising an acquisition module, a generation module, a prompting module, a display device and a processing module, wherein:
the acquisition module is used for acquiring identity information of a first object;
the generation module is used for generating a first behavior guideline for the first object according to the identity information;
the prompting module is used for outputting the first behavior guideline to prompt the first object to perform a corresponding first behavior action;
the acquisition module is further used for collecting character feature data of the first object while the first object performs the first behavior action;
the processing module is used for generating an initial cartoon image of the first object according to the character feature data;
the display device is used for displaying the initial cartoon image and receiving writing input from a stylus;
the processing module is further used for modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
It will be appreciated that the stylus can be used to provide writing input on a touch screen of the display device, and voice data can also be collected by a recording device provided on the stylus. In this embodiment of the application, the identity information of a first object (i.e., the person for whom the cartoon image is to be generated) is first acquired; the identity information may be face data, fingerprint data, voiceprint data and the like. A first behavior guideline for the first object is then generated according to the identity information: specifically, corresponding person data is obtained according to the identity information, and the behavior guideline matching that person is retrieved from a behavior guideline library as the first behavior guideline for the first object. Next, the first behavior guideline is output by voice broadcast or video playback to prompt the first object to perform a corresponding first behavior action, and character feature data of the first object is collected by acquisition sensors while the first object performs the first behavior action; the character feature data includes at least one of a face image, stature data, gender data, health data, exercise-capacity evaluation data and the like. An initial cartoon image of the first object is then generated from the character feature data: for example, a standard cartoon image matching the character feature data may be selected from a standard cartoon image library as the initial cartoon image (or the matching standard cartoon image may be corrected according to a preset method to obtain the initial cartoon image).
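The pipeline above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the guideline library, the person-type rule, and the face-descriptor matching are all assumptions introduced for the example.

```python
# Hypothetical sketch of the guideline-lookup and initial-image pipeline;
# every name and rule here is illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class PersonFeatures:
    face_embedding: tuple   # placeholder for a face descriptor
    stature_cm: float
    gender: str

# Behavior-guideline library keyed by an assumed person category.
GUIDELINE_LIBRARY = {
    "adult": "Please face the camera and turn your head slowly.",
    "child": "Please wave at the camera and smile.",
}

def first_behavior_guideline(identity: dict) -> str:
    """Map the person data tied to the identity to a guideline from the library."""
    person_type = "child" if identity.get("age", 30) < 12 else "adult"
    return GUIDELINE_LIBRARY[person_type]

def generate_initial_image(features: PersonFeatures, library: list) -> dict:
    """Pick the library entry whose face descriptor is closest to the user's."""
    def similarity(entry):
        # Negative squared distance: larger means more similar.
        return -sum((a - b) ** 2 for a, b in zip(entry["face"], features.face_embedding))
    return max(library, key=similarity)
```

In this sketch the "behavior guideline library" is just a dictionary and the face match is a nearest-neighbor search over toy tuples; a real system would use a trained face-embedding model.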
The initial cartoon image is displayed on the display device. When an operation performed with the stylus is detected in the area where the initial cartoon image is displayed, the display device receives the writing input of the stylus, and the initial cartoon image is modified according to the writing input to generate the basic cartoon image of the first object.
With the technical scheme of this embodiment, the cartoon image generation system comprises an acquisition module, a generation module, a prompting module, a display device and a processing module. The acquisition module acquires identity information of a first object; the generation module generates a first behavior guideline for the first object according to the identity information; the prompting module outputs the first behavior guideline to prompt the first object to perform a corresponding first behavior action; the acquisition module further collects character feature data of the first object while the first object performs the first behavior action; the processing module generates an initial cartoon image of the first object according to the character feature data; the display device displays the initial cartoon image and receives writing input from a stylus; and the processing module modifies the initial cartoon image according to the writing input to generate the basic cartoon image of the first object. In the scheme provided by this embodiment of the application, on the basis of an initial cartoon image generated from character feature data, the user modifies the initial cartoon image with the stylus to obtain the basic cartoon image, so that a cartoon image can be designed quickly, user participation is easy, and the user experience is improved.
It should be understood that the block diagram of the cartoon image generation system shown in fig. 1 is only illustrative, and the number of modules shown does not limit the scope of the present application.
In some possible embodiments of the present application, the acquisition module is further used for collecting first voice data;
the processing module is further used for recognizing the first voice data;
the processing module is further used for modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object.
It will be appreciated that, in practice, a person may speak while writing or drawing with the stylus, for example saying "make the eyes a bit bigger, the legs a bit longer, the face a bit thinner", and so on. In order to modify the initial cartoon image more accurately, in this embodiment of the application, first voice data is collected while the display device receives the writing input from the stylus, and the first voice data is recognized; the initial cartoon image is then modified according to the recognition result to generate the basic cartoon image of the first object.
In some possible embodiments of the present application, when modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object, the processing module is specifically configured to:
extract a first keyword and a second keyword from the recognition result;
determine a part of the initial cartoon image to be modified according to the first keyword;
and modify that part according to the second keyword to generate the basic cartoon image of the first object.
It may be appreciated that, in this embodiment of the application, the first keyword may be the name of a body part, such as face, eyes, nose or legs, and the second keyword may be an adjective or verb applied to a body part, such as thin, big, small, long, or enlarge, lengthen, and so on. A first keyword and a second keyword are extracted from the recognition result using a keyword extraction technique, the part of the initial cartoon image to be modified is determined according to the first keyword, and that part is modified according to the second keyword to generate the basic cartoon image of the first object. With this scheme, the basic cartoon image can be obtained more accurately, conveniently and efficiently.
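The two-keyword rule described above can be sketched as a simple pairing pass over the recognized text. The part and action vocabularies below are assumptions for illustration; the patent does not specify the extraction technique.

```python
# Minimal sketch of the first-keyword / second-keyword modification rule.
# Vocabularies and the returned (part, op, factor) tuples are illustrative.
PART_KEYWORDS = {"eyes", "nose", "face", "legs"}      # first keywords: body parts
ACTION_KEYWORDS = {                                   # second keywords: adjectives/verbs
    "bigger": ("scale", 1.2),
    "smaller": ("scale", 0.8),
    "longer": ("stretch", 1.2),
    "thinner": ("scale_width", 0.9),
}

def extract_edits(recognized_text: str):
    """Pair each body-part keyword with the action keyword that follows it."""
    words = recognized_text.lower().replace(",", " ").split()
    edits, current_part = [], None
    for w in words:
        if w in PART_KEYWORDS:
            current_part = w
        elif w in ACTION_KEYWORDS and current_part:
            edits.append((current_part, *ACTION_KEYWORDS[w]))
            current_part = None
    return edits
```

For example, `extract_edits("make the eyes bigger and the legs longer")` yields one edit per part, which a renderer could then apply to the corresponding component of the initial cartoon image.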
In some possible embodiments of the present application, the character feature data includes at least one of a face image, stature data, gender data and personality data; when generating the initial cartoon image of the first object according to the character feature data, the processing module is specifically configured to:
select a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correct the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
It can be appreciated that, in order to generate a better-matched initial cartoon image, in this embodiment of the application a standard cartoon image library may be preset. The library contains standard cartoon character components, such as head components, face components, facial-feature components, limb components, skin components, hair components, clothing components and the like, as well as standard cartoon images assembled from these components; the collected character feature data includes at least one of a face image, stature data and gender data. According to the face data extracted from the character feature data, a standard cartoon image corresponding to the face data of the first object (for example, one whose face similarity reaches a preset value) can be selected from the library, and the standard cartoon image is then corrected according to the stature data and the gender data (for example, by modifying the figure proportions or replacing the clothing to match the gender) to obtain the initial cartoon image.
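The selection-then-correction step can be sketched as follows. The similarity measure, the 0.8 threshold, the reference height, and the outfit rule are all assumptions invented for the example; the patent only states that similarity must reach a preset value and that proportions and clothing are corrected.

```python
# Illustrative sketch: select a standard cartoon image whose face similarity
# reaches a preset value, then correct it by stature and gender.
SIMILARITY_THRESHOLD = 0.8   # assumed preset value

def cosine(a, b):
    """Cosine similarity between two face descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def select_standard_image(face_vec, library):
    """Return the first library entry whose similarity reaches the threshold."""
    for entry in library:
        if cosine(face_vec, entry["face"]) >= SIMILARITY_THRESHOLD:
            return entry
    return None

def correct_image(entry, stature_cm, gender):
    """Adjust body proportions and swap in gender-matched clothing (toy rule)."""
    corrected = dict(entry)
    corrected["body_scale"] = stature_cm / 170.0   # assumed reference height
    corrected["outfit"] = "dress" if gender == "female" else "suit"
    return corrected
```

A production system would replace the cosine comparison of toy vectors with embeddings from a face-recognition model and drive real component substitutions in the library.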
In some possible embodiments of the present application, the acquisition module is further used for acquiring a second behavior guideline for the first object;
the processing module is further used for controlling the basic cartoon image to move according to the second behavior guideline, and recording the movement data;
the prompting module is further used for outputting the second behavior guideline to prompt the first object to perform a corresponding second behavior action;
the acquisition module is further used for collecting video data of the first object while the first object performs the second behavior action;
the processing module is further used for generating a moving cartoon image of the first object according to the video data and the movement data of the basic cartoon image.
It can be appreciated that, in order to make the cartoon image more fun to use, in this embodiment of the application a second behavior guideline for the first object is further acquired; a control instruction for the basic cartoon image is generated from the second behavior guideline to control the basic cartoon image to move, and the movement data is recorded. In addition, the second behavior guideline is output by voice broadcast or video playback to prompt the first object to perform a corresponding second behavior action, and video data of the first object is collected while the first object performs the second behavior action. A moving cartoon image of the first object is then generated using the video data, the movement data of the basic cartoon image and three-dimensional image processing techniques.
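One plausible way to combine the recorded movement data with the user video, sketched below, is to blend per-joint pose values frame by frame. The patent does not describe how the two sources are fused; the per-joint dictionaries, the 50/50 blend weight, and the fallback rule are all assumptions for illustration.

```python
# Hypothetical sketch: drive the basic cartoon image by blending its recorded
# movement data with poses estimated from the user's video. All structures
# here (per-frame {joint: angle} dicts, the blend weight) are illustrative.
def blend_motion(recorded_motion, video_poses, weight=0.5):
    """Blend avatar motion keyframes with poses estimated from the user video.

    recorded_motion / video_poses: lists of {joint: angle} dicts, one per frame.
    Joints missing from the video estimate fall back to the recorded value.
    """
    frames = []
    for rec, vid in zip(recorded_motion, video_poses):
        frame = {joint: weight * rec[joint] + (1 - weight) * vid.get(joint, rec[joint])
                 for joint in rec}
        frames.append(frame)
    return frames
```

In practice the video side would come from a pose-estimation model, and the blended frames would feed a 3D animation pipeline rather than remain plain dictionaries.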
Referring to fig. 2, another embodiment of the present application provides a cartoon image generation method based on character features, comprising:
acquiring identity information of a first object;
generating a first behavior guideline for the first object according to the identity information;
outputting the first behavior guideline to prompt the first object to perform a corresponding first behavior action;
collecting character feature data of the first object while the first object performs the first behavior action;
generating an initial cartoon image of the first object according to the character feature data;
displaying the initial cartoon image on a display device;
receiving, by the display device, writing input from a stylus;
and modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object.
It will be appreciated that the stylus can be used to provide writing input on a touch screen of the display device, and voice data can also be collected by a recording device provided on the stylus. In this embodiment of the application, the identity information of a first object (i.e., the person for whom the cartoon image is to be generated) is first acquired; the identity information may be face data, fingerprint data, voiceprint data and the like. A first behavior guideline for the first object is then generated according to the identity information: specifically, corresponding person data is obtained according to the identity information, and the behavior guideline matching that person is retrieved from a behavior guideline library as the first behavior guideline for the first object. Next, the first behavior guideline is output by voice broadcast or video playback to prompt the first object to perform a corresponding first behavior action, and character feature data of the first object is collected by acquisition sensors while the first object performs the first behavior action; the character feature data includes at least one of a face image, stature data, gender data, health data, exercise-capacity evaluation data and the like. An initial cartoon image of the first object is then generated from the character feature data: for example, a standard cartoon image matching the character feature data may be selected from a standard cartoon image library as the initial cartoon image (or the matching standard cartoon image may be corrected according to a preset method to obtain the initial cartoon image).
The initial cartoon image is displayed on the display device. When an operation performed with the stylus is detected in the area where the initial cartoon image is displayed, the display device receives the writing input of the stylus, and the initial cartoon image is modified according to the writing input to generate the basic cartoon image of the first object.
With the technical scheme of this embodiment, identity information of a first object is acquired; a first behavior guideline for the first object is generated according to the identity information; the first behavior guideline is output to prompt the first object to perform a corresponding first behavior action; character feature data of the first object is collected while the first object performs the first behavior action; an initial cartoon image of the first object is generated according to the character feature data; the initial cartoon image is displayed on a display device; the display device receives writing input from a stylus; and the initial cartoon image is modified according to the writing input to generate a basic cartoon image of the first object. In the scheme provided by this embodiment of the application, on the basis of an initial cartoon image generated from character feature data, the user modifies the initial cartoon image with the stylus to obtain the basic cartoon image, so that a cartoon image can be designed quickly, user participation is easy, and the user experience is improved.
In some possible embodiments of the present application, while the display device receives writing input of a writing pen, the method further includes:
collecting first voice data and identifying the first voice data;
and modifying the initial cartoon image according to the identification result to generate the basic cartoon image of the first object.
It may be appreciated that, in practice, a person may speak while writing or drawing with the pen, for example saying "make the eyes a bit bigger, the legs a bit longer, the face a bit thinner", and so on. In order to modify the initial cartoon image more accurately, in an embodiment of the application the display device collects first voice data while receiving the writing input of the writing pen, recognizes the first voice data using a speech recognition algorithm to obtain a recognition result, and modifies the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object.
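One way to realize "collect voice data while receiving pen input" is a background recording thread, as in this minimal sketch. The `record_chunk` (audio device) and `recognize` (speech recognition) callables are hypothetical stand-ins for real device and ASR APIs.

```python
import queue
import threading

def collect_while_writing(pen_events, record_chunk, recognize):
    """Consume pen strokes in the foreground while audio chunks are recorded
    on a background thread; then run speech recognition over the joined audio.
    record_chunk() and recognize() stand in for real device/ASR calls."""
    audio = queue.Queue()
    stop = threading.Event()

    def recorder():
        # Record until the writing session ends.
        while not stop.is_set():
            audio.put(record_chunk())

    worker = threading.Thread(target=recorder, daemon=True)
    worker.start()
    strokes = list(pen_events)   # the writing input being received
    stop.set()
    worker.join(timeout=1.0)

    chunks = []
    while not audio.empty():
        chunks.append(audio.get())
    return strokes, recognize(b"".join(chunks))
```

A thread-safe `queue.Queue` is used so the recorder never blocks pen handling; a production system would additionally timestamp strokes and audio to align them.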
In some possible embodiments of the present application, the step of modifying the initial cartoon image according to the recognition result to generate the basic cartoon image of the first object includes:
extracting a first keyword and a second keyword from the identification result;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
It may be appreciated that, in the embodiment of the application, the first keyword may be the name of a human body part, such as face, eyes, nose, or legs, and the second keyword may be an adjective or verb applied to that part, such as thin, big, small, long, or enlarge, lengthen, and the like. The first keyword and the second keyword are extracted from the recognition result using a keyword extraction technique; the part of the initial cartoon image to be modified is determined according to the first keyword; and the part to be modified is modified according to the second keyword to generate the basic cartoon image of the first object. With this scheme, the basic cartoon image can be obtained more accurately, conveniently, and efficiently.
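A minimal sketch of this keyword-driven modification follows, assuming English keywords and a toy image represented as a part-to-scale mapping. The keyword tables and scale factors are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical keyword tables; a real system would run a proper
# keyword-extraction model over the speech recognition result.
PART_KEYWORDS = {"face", "eyes", "nose", "legs"}
ACTION_FACTORS = {"bigger": 1.2, "smaller": 0.8, "longer": 1.3, "thinner": 0.9}

def extract_keywords(recognition_result: str):
    """Return (first keyword: body-part name, second keyword: adjective/verb)."""
    words = recognition_result.lower().replace(",", " ").split()
    part = next((w for w in words if w in PART_KEYWORDS), None)
    action = next((w for w in words if w in ACTION_FACTORS), None)
    return part, action

def modify_image(image: dict, recognition_result: str) -> dict:
    """Scale the part named by the first keyword using the second keyword."""
    part, action = extract_keywords(recognition_result)
    if part is None or action is None:
        return image              # nothing recognizable: leave the image unchanged
    modified = dict(image)
    modified[part] = round(modified[part] * ACTION_FACTORS[action], 3)
    return modified
```

For example, `modify_image({"eyes": 1.0}, "make the eyes bigger")` scales only the `eyes` entry, leaving all other parts untouched.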
In some possible embodiments of the present application, the character feature data includes at least one of a face image, stature data, and gender data; the step of generating the initial cartoon image of the first object according to the character feature data includes:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
It can be appreciated that, in order to generate a better-matched initial cartoon image, in an embodiment of the application a standard cartoon image library may be preset. The library contains standard cartoon character components, such as head components, face components, facial-feature components, limb components, skin components, hair components, and clothing components, as well as standard cartoon images assembled from these components. The collected character feature data includes at least one of a face image, stature data, and gender data. According to the face data extracted from the character feature data, a standard cartoon image corresponding to the face data of the first object (for example, one whose face similarity reaches a preset value) can be selected from the standard cartoon image library; the standard cartoon image is then corrected according to the stature data and the gender data (for example, by modifying figure proportions or replacing gender-specific clothing) to obtain the initial cartoon image.
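The selection-plus-correction step might look like the following sketch, using cosine similarity over hypothetical face embeddings. The 0.8 threshold, the 170 cm reference height, and the clothing swap are illustrative assumptions, not values stated in the disclosure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_standard_image(face_vec, library, threshold=0.8):
    """Pick the library entry with the most similar face embedding, provided
    the similarity reaches the preset threshold; otherwise return None."""
    best = max(library, key=lambda entry: cosine_similarity(face_vec, entry["face_vec"]))
    if cosine_similarity(face_vec, best["face_vec"]) >= threshold:
        return best
    return None

def correct_image(standard, stature, gender):
    """Adjust figure proportion and swap gender-specific clothing (illustrative)."""
    corrected = dict(standard)
    corrected["height_ratio"] = stature["height_cm"] / 170.0  # assumed reference height
    corrected["clothing"] = "dress" if gender == "female" else "shirt"
    return corrected
```

Returning `None` below the threshold mirrors the "similarity reaches a preset value" condition: a caller could then fall back to a default standard image.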
In some possible embodiments of the present application, after the step of modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object, the method further includes:
acquiring a second behavior guideline of the first object;
controlling the basic cartoon image to move by utilizing the second behavior guide, and recording movement data;
outputting the second behavior guide to prompt the first object to execute a corresponding second behavior action;
collecting video data of the first object in the process of executing the second behavior action by the first object;
and generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
It can be appreciated that, in order to make using the cartoon image more engaging, in the embodiment of the application a second behavior guideline of the first object is further acquired; a control instruction for the basic cartoon image is generated from the second behavior guideline to control the basic cartoon image to move, and the movement data is recorded. In addition, the second behavior guideline is output by voice broadcast or video playback to prompt the first object to perform a corresponding second behavior action; video data of the first object is collected while the first object performs the second behavior action; and the motion cartoon image of the first object is generated from the video data and the movement data of the basic cartoon image using a three-dimensional image processing technique.
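A toy sketch of combining the recorded movement data with poses extracted from the collected video follows. The per-frame joint-angle pose format and the fixed blending weight are assumptions for illustration only; the patent states merely that a three-dimensional image processing technique is used.

```python
def generate_motion_image(movement_data, video_poses, blend=0.5):
    """Blend recorded cartoon keyframes with per-frame poses extracted from
    the user's video. Each pose is a hypothetical {joint: angle} mapping;
    joints missing from the video pose fall back to the cartoon keyframe."""
    frames = []
    for cartoon_pose, user_pose in zip(movement_data, video_poses):
        frame = {
            joint: blend * angle + (1.0 - blend) * user_pose.get(joint, angle)
            for joint, angle in cartoon_pose.items()
        }
        frames.append(frame)
    return frames
```

With `blend=1.0` the output reproduces the recorded movement data exactly; with `blend=0.0` it follows the user's video, so the weight trades fidelity to the guideline against fidelity to the user.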
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a division of logical functions, and other manners of division are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical or take other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing has described the embodiments of the present application in detail, explaining the principles and implementations of the application using specific examples; the above examples are provided solely to facilitate understanding of the method and core concepts of the application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.
Although the present application is disclosed above, it is not limited thereto. Those skilled in the art may readily make variations and modifications, including combinations of different functions and implementation steps, as well as software and hardware implementations, without departing from the spirit and scope of the application.

Claims (6)

1. A character feature-based cartoon image generation system, comprising: an acquisition module, a generating module, a prompting module, a display device, and a processing module; wherein
The acquisition module is used for acquiring the identity information of the first object;
the generating module is configured to generate a first behavior guideline of the first object according to the identity information, specifically: acquiring corresponding personage data according to the identity information, and acquiring behavior guidelines corresponding to the personage data from a behavior guideline library according to the personage data as the first behavior guidelines of the first object;
the prompting module is used for outputting the first behavior guide in a voice broadcasting or video broadcasting mode to prompt the first object to execute corresponding first behavior action;
the acquisition module is further configured to acquire character feature data of the first object during the process of executing the first behavior action by the first object;
the processing module is used for generating an initial cartoon image of the first object according to the character characteristic data;
the display device is used for displaying the initial cartoon image and receiving writing input of a writing pen;
the processing module is further used for modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object;
the acquisition module comprises a recording device arranged on the writing pen;
the acquisition module is further used for acquiring first voice data while the display equipment receives the writing input of the writing pen through the recording device;
the processing module is further used for identifying the first voice data;
the processing module is further configured to modify the initial animation according to the identification result, and generate a basic animation of the first object, where the processing module is specifically configured to:
extracting a first keyword and a second keyword from the identification result, wherein the first keyword is a human body part name, and the second keyword is an adjective or a verb aiming at the human body part;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
2. The character feature-based cartoon image generation system of claim 1, wherein the character feature data comprises at least a face image, stature data, and gender data; in the step of generating the initial cartoon image of the first object according to the character feature data, the processing module is specifically configured to:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
3. The character feature-based animation generation system of claim 2 wherein the acquisition module is further configured to acquire a second behavioral guideline of the first object;
the processing module is also used for controlling the movement of the basic cartoon image by using the second behavior guide and recording movement data;
the prompting module is further configured to output the second behavior guideline to prompt the first object to execute a corresponding second behavior action;
the acquisition module is further configured to acquire video data of the first object during the process of executing the second behavior action by the first object;
the processing module is further used for generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
4. A cartoon image generating method based on character features is characterized by comprising the following steps:
acquiring identity information of a first object;
generating a first behavior guideline of the first object according to the identity information, specifically: acquiring corresponding personage data according to the identity information, and acquiring behavior guidelines corresponding to the personage data from a behavior guideline library according to the personage data as the first behavior guidelines of the first object;
outputting the first behavior guide in a voice broadcasting or video playing mode to prompt the first object to execute a corresponding first behavior action;
collecting character characteristic data of the first object in the process that the first object executes the first behavior action;
generating an initial cartoon image of the first object according to the character characteristic data;
displaying the initial cartoon image on a display device;
the display device receives writing input of a writing pen;
modifying the initial cartoon image according to the writing input to generate a basic cartoon image of the first object;
while the display device receives writing input of the writing pen, further comprising:
collecting first voice data and identifying the first voice data;
modifying the initial cartoon image according to the identification result to generate a basic cartoon image of the first object, wherein the method specifically comprises the following steps:
extracting a first keyword and a second keyword from the identification result, wherein the first keyword is a human body part name, and the second keyword is an adjective or a verb aiming at the human body part;
determining a part to be modified of the initial cartoon image according to the first keyword;
and modifying the part to be modified according to the second keyword to generate the basic cartoon image of the first object.
5. The character feature-based cartoon image generation method of claim 4, wherein the character feature data comprises at least a face image, stature data, and gender data; the step of generating the initial cartoon image of the first object according to the character feature data comprises:
selecting a standard cartoon image corresponding to the first object from a standard cartoon image library according to the face image;
and correcting the standard cartoon image according to the stature data and the gender data to obtain the initial cartoon image.
6. The character feature-based cartoon image generation method of claim 5, further comprising, after the step of modifying the initial cartoon image according to the writing input to generate the basic cartoon image of the first object:
acquiring a second behavior guideline of the first object;
controlling the basic cartoon image to move by utilizing the second behavior guide, and recording movement data;
outputting the second behavior guide to prompt the first object to execute a corresponding second behavior action;
collecting video data of the first object in the process of executing the second behavior action by the first object;
and generating the motion cartoon image of the first object according to the video data and the motion data of the basic cartoon image.
CN202211280752.9A 2022-10-19 2022-10-19 Cartoon image generation system and method based on character features Active CN115512017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211280752.9A CN115512017B (en) 2022-10-19 2022-10-19 Cartoon image generation system and method based on character features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211280752.9A CN115512017B (en) 2022-10-19 2022-10-19 Cartoon image generation system and method based on character features

Publications (2)

Publication Number Publication Date
CN115512017A CN115512017A (en) 2022-12-23
CN115512017B true CN115512017B (en) 2023-11-28

Family

ID=84510046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211280752.9A Active CN115512017B (en) 2022-10-19 2022-10-19 Cartoon image generation system and method based on character features

Country Status (1)

Country Link
CN (1) CN115512017B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08263683A (en) * 1995-03-24 1996-10-11 Nec Corp User interface device using display of human body image
WO2002084597A1 (en) * 2001-04-13 2002-10-24 La Cantoche Production Method and system for animating a figure in three dimensions
JP2008071360A (en) * 2000-11-15 2008-03-27 Sega Corp Display object generation method in information processor, program for executing the same, and recording medium for storing program
CN101983396A (en) * 2008-03-31 2011-03-02 皇家飞利浦电子股份有限公司 Method for modifying a representation based upon a user instruction
CN107992348A (en) * 2017-10-31 2018-05-04 厦门宜弘电子科技有限公司 Dynamic caricature plug-in unit processing method and system based on intelligent terminal
CN108009200A (en) * 2017-10-30 2018-05-08 努比亚技术有限公司 The method to set up and mobile terminal of contact image, computer-readable recording medium
CN109859295A (en) * 2019-02-01 2019-06-07 厦门大学 A kind of specific animation human face generating method, terminal device and storage medium
CN111028317A (en) * 2019-11-14 2020-04-17 腾讯科技(深圳)有限公司 Animation generation method, device and equipment for virtual object and storage medium
JP2021005181A (en) * 2019-06-26 2021-01-14 グリー株式会社 Information processing system, information processing method and computer program
JP2021015443A (en) * 2019-07-11 2021-02-12 富士通株式会社 Complement program and complement method and complementary device
CN113793256A (en) * 2021-09-10 2021-12-14 未鲲(上海)科技服务有限公司 Animation character generation method, device, equipment and medium based on user label
CN114187394A (en) * 2021-12-13 2022-03-15 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114219704A (en) * 2021-12-16 2022-03-22 上海幻电信息科技有限公司 Animation image generation method and device
CN114332318A (en) * 2021-12-31 2022-04-12 科大讯飞股份有限公司 Virtual image generation method and related equipment thereof
WO2022143128A1 (en) * 2020-12-29 2022-07-07 华为技术有限公司 Video call method and apparatus based on avatar, and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060274068A1 (en) * 2005-06-06 2006-12-07 Electronic Arts Inc. Adaptive contact based skeleton for animation of characters in video games
US10216300B2 (en) * 2014-09-02 2019-02-26 Spring Power Holdings Limited Human-computer interface device and system

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08263683A (en) * 1995-03-24 1996-10-11 Nec Corp User interface device using display of human body image
JP2008071360A (en) * 2000-11-15 2008-03-27 Sega Corp Display object generation method in information processor, program for executing the same, and recording medium for storing program
WO2002084597A1 (en) * 2001-04-13 2002-10-24 La Cantoche Production Method and system for animating a figure in three dimensions
CN101983396A (en) * 2008-03-31 2011-03-02 皇家飞利浦电子股份有限公司 Method for modifying a representation based upon a user instruction
CN108009200A (en) * 2017-10-30 2018-05-08 努比亚技术有限公司 The method to set up and mobile terminal of contact image, computer-readable recording medium
CN107992348A (en) * 2017-10-31 2018-05-04 厦门宜弘电子科技有限公司 Dynamic caricature plug-in unit processing method and system based on intelligent terminal
CN109859295A (en) * 2019-02-01 2019-06-07 厦门大学 A kind of specific animation human face generating method, terminal device and storage medium
JP2021005181A (en) * 2019-06-26 2021-01-14 グリー株式会社 Information processing system, information processing method and computer program
JP2021015443A (en) * 2019-07-11 2021-02-12 富士通株式会社 Complement program and complement method and complementary device
CN111028317A (en) * 2019-11-14 2020-04-17 腾讯科技(深圳)有限公司 Animation generation method, device and equipment for virtual object and storage medium
WO2022143128A1 (en) * 2020-12-29 2022-07-07 华为技术有限公司 Video call method and apparatus based on avatar, and terminal
CN113793256A (en) * 2021-09-10 2021-12-14 未鲲(上海)科技服务有限公司 Animation character generation method, device, equipment and medium based on user label
CN114187394A (en) * 2021-12-13 2022-03-15 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114219704A (en) * 2021-12-16 2022-03-22 上海幻电信息科技有限公司 Animation image generation method and device
CN114332318A (en) * 2021-12-31 2022-04-12 科大讯飞股份有限公司 Virtual image generation method and related equipment thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An Empirical Study on Digital Media Technology in Film and Television Animation Design; Renfeng Jiang et al.; Advanced Pattern and Structure Discovery from Complex Multimedia Data Environments 2021; 1-10 *
PicToon: a personalized image-based cartoon system; Hong Chen et al.; Proceedings of the tenth ACM international conference on Multimedia; 171-178 *
Three-dimensional design of animation characters based on virtual reality technology; Liu Sijing; Information Recording Materials (09); 130-132 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231107

Address after: Room 306, 2nd Floor, No. 8 Shangcheng West Street, Tianhe District, Guangzhou City, Guangdong Province, 510645

Applicant after: Kuang Wenwu

Address before: 518000 711, Building A, Wanlihua Yifangtiandi Internet Industrial Park, Xiaweiyuan, Gushu Community, Xixiang Street, Bao'an District, Shenzhen, Guangdong

Applicant before: Shenzhen zhugegua Technology Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231212

Address after: 518029 5th Floor, 612 Building, No. 12 Bagua Second Road, Futian District, Shenzhen City, Guangdong Province

Patentee after: Zhou Yi

Address before: Room 306, 2nd Floor, No. 8 Shangcheng West Street, Tianhe District, Guangzhou City, Guangdong Province, 510645

Patentee before: Kuang Wenwu