CN102262788A - Method and device for processing interactive makeup information data of personal three-dimensional (3D) image - Google Patents
- Publication number
- CN102262788A CN102262788A CN2010101837316A CN201010183731A CN102262788A CN 102262788 A CN102262788 A CN 102262788A CN 2010101837316 A CN2010101837316 A CN 2010101837316A CN 201010183731 A CN201010183731 A CN 201010183731A CN 102262788 A CN102262788 A CN 102262788A
- Authority
- CN
- China
- Prior art keywords
- dimensional
- model
- point
- information
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a method and a device for processing the interactive makeup information data of a personal three-dimensional (3D) image. The device comprises a client, a server and a communication network. The client comprises an internet terminal, a mobile terminal, a retail terminal and a display unit; the server comprises a face feature positioning unit, a 3D image reconstruction unit, a makeup processing unit, a 3D face feature database and a makeup article database; the client is connected with the server through the communication network. The workflow of the device comprises seven steps, beginning with the client acquiring the user's photo information and transmitting it to the server. Compared with the prior art, the invention can provide cross-media application services simultaneously on the touch screens of internet terminals, mobile terminals and retail terminals, promotes more intelligent development of the industry, covers a wider range of application services, offers a more realistic personal experience, and is free from restrictions of time, place and terminal conditions.
Description
Technical field
The present invention relates to a method and device for processing makeup try-on information data, and in particular to a method and device for processing the interactive makeup information data of a personal three-dimensional (3D) image.
Background technology
With the development and popularization of the Internet, e-commerce websites have become an increasingly important channel for merchandise sales, and online shopping is increasingly accepted by a large number of Internet users. However, goods closely tied to personal image, such as cosmetics and clothing, which users normally need to try on before making a purchase decision, still require technical support for an online try-on experience. Driven by this demand, various try-on-oriented e-commerce websites have emerged in recent years, and the key to the success of such technology is the fidelity with which the user's try-on experience is simulated.
Whether a virtual try-on experience feels real depends first of all on whether the image is the user's own. Trying products on a model's image cannot satisfy the user's needs; the user wants to know the effect of the trial on himself or herself.
In the prior art, there are makeup simulation technologies that capture dynamic face images (see, for example, Japanese Patent Application Publication JP 2003-44837 and Chinese Patent Application Publication CN 101371272A). However, these methods track facial features in every frame of the dynamic image, so the computational load is large; they are provided as terminal machines or terminal software, cannot be applied to network terminals in a browser/server (B/S) software form, and are therefore of limited applicability. Moreover, in real-time terminal applications the process fails as soon as the user leaves the camera's field of view or does not pose properly in front of it.
Summary of the invention
The purpose of the present invention is to overcome the defects of the above prior art and to provide a method and device for processing the interactive makeup information data of a personal three-dimensional image that promote industry development, can be applied to network terminals, have a wider range of application, provide a more realistic experience, and are not restricted by time, place, or terminal conditions.
The purpose of the present invention can be achieved through the following technical solutions:
A method for processing the interactive makeup information data of a personal three-dimensional image, characterized in that it comprises the following steps:
1) the client obtains user photo information and transmits it to the server;
2) the face feature positioning unit in the server detects the face position in the user photo information and judges whether detection succeeded; if yes, step 3) is executed, otherwise the method returns to step 1);
3) the face feature positioning unit in the server uses an active shape model (ASM) detection algorithm to locate and extract facial feature point information;
4) the 3D image reconstruction unit in the server computes three-dimensional feature points based on the three-dimensional face feature database, and builds a three-dimensional face model using a standard-model-based surface deformation algorithm;
5) the server maps the photo onto the three-dimensional face model as a texture;
6) the client renders the 3D model with Flash and drives the 3D model to perform life-like actions using a model deformation method, outputting realistic simulated image information through the display unit of the client;
7) the user selects a makeup article in the client; the makeup processing unit in the server, according to the selected makeup article information and the position information of the facial feature points, accurately superimposes the makeup article onto the model texture image, updates the 3D image rendering, and displays the processed 3D image information through the display unit.
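For orientation, the seven steps above can be read as a simple server-side pipeline. The sketch below is only a structural illustration under assumed names: FaceModel, locate_features, reconstruct_3d_face, apply_makeup and handle_photo_upload are hypothetical placeholders, not components defined by the patent.

```python
# Hypothetical structural sketch of the server side of steps 2)-5) and 7);
# every name here is illustrative and not taken from the patent.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class FaceModel:
    vertices: np.ndarray      # (V, 3) deformed 3D head vertices
    texture: np.ndarray       # (H, W, 3) texture image taken from the photo
    landmarks: np.ndarray     # (N, 2) facial feature points located in the photo

def locate_features(photo: np.ndarray) -> Optional[np.ndarray]:
    """Steps 2)-3): face detection plus ASM landmark extraction (placeholder)."""
    raise NotImplementedError

def reconstruct_3d_face(landmarks: np.ndarray, photo: np.ndarray) -> FaceModel:
    """Steps 4)-5): deform a generic head model and texture it with the photo
    (placeholder)."""
    raise NotImplementedError

def apply_makeup(model: FaceModel, article_layer: np.ndarray) -> FaceModel:
    """Step 7): composite the selected makeup article onto the texture
    (placeholder)."""
    raise NotImplementedError

def handle_photo_upload(photo: np.ndarray) -> Optional[FaceModel]:
    """Return a textured 3D face model, or None so the client re-captures
    the photo (the 'return to step 1)' branch)."""
    landmarks = locate_features(photo)
    if landmarks is None:
        return None
    return reconstruct_3d_face(landmarks, photo)
```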
The active shape model (ASM) detection algorithm in step 3) is as follows:
31) Initial model alignment:
For the face image, a face detection algorithm determines whether the photo contains a face and, if so, its position; according to the initial position obtained, the mean shape model is placed into the photo by a rotation of angle θ, a scaling by scale s, and a translation t, giving the initial shape X.
32) Search along the feature point normals:
Using the local gray-level model obtained in training, each feature point is searched within a limited range along its normal direction; for every point in this range, the normalized gray-level derivative vector is computed together with its Mahalanobis distance to the trained mean normalized gray-level derivative vector of that feature point, and the candidate point with the minimum distance is selected as the best match; performing this best-match search for every feature point of the model yields a new shape vector X'.
33) Approximate representation of the shape:
Because the shape obtained after searching each feature point cannot, in general, be represented exactly in the new orthogonal basis, only the least-error approximation is obtained; aligning X to X' yields the four affine transformation parameters (1+ds, dθ, dtx, dty), from which the change db of the shape parameters is further obtained.
34) Reasonable shape constraint:
The shape parameters bi may vary only within a reasonable range for the shape to remain plausible; b+db is therefore checked, and any bi outside this range is transformed back into it; the finally obtained shape parameters b+db are used to compute the reconstructed shape.
35) Steps 32)–34) are repeated in a loop; when the Euclidean distance between the shape vectors of two adjacent iterations is smaller than or equal to a preset threshold, the process is considered to have converged and the iteration ends.
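The two numerical kernels of this ASM loop, the Mahalanobis best-match search of step 32) and the shape-parameter constraint of step 34), can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the ±3σ clamping bound and all variable names are assumptions (the patent only requires a "reasonable range").

```python
# Illustrative sketch of the ASM best-match search (step 32) and the
# shape-parameter constraint (step 34); details are assumed, not from the patent.
import numpy as np

def best_match_along_normal(candidate_profiles: np.ndarray,
                            mean_profile: np.ndarray,
                            cov_inv: np.ndarray) -> int:
    """Pick the candidate (sampled along the feature point's normal) whose
    normalized gray-level derivative profile has the smallest Mahalanobis
    distance to the trained mean profile of this feature point."""
    diffs = candidate_profiles - mean_profile               # (K, L)
    d2 = np.einsum('kl,lm,km->k', diffs, cov_inv, diffs)    # squared distances
    return int(np.argmin(d2))

def constrain_shape_params(b: np.ndarray, eigvals: np.ndarray,
                           n_sigma: float = 3.0) -> np.ndarray:
    """Clamp each shape parameter b_i into a plausible range, assumed here to
    be +/- n_sigma * sqrt(lambda_i) of the corresponding PCA eigenvalue."""
    limit = n_sigma * np.sqrt(eigvals)
    return np.clip(b, -limit, limit)

# Tiny example: 7 candidate positions along the normal, profiles of length 5
rng = np.random.default_rng(0)
profiles = rng.normal(size=(7, 5))
print(best_match_along_normal(profiles, np.zeros(5), np.eye(5)))
print(constrain_shape_params(np.array([0.5, -4.0]), np.array([1.0, 1.0])))
```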
The facial feature point information in step 3) comprises face contour point information, eye contour point information, nose contour point information, mouth contour point information, and eyebrow contour point information.
The standard-model-based surface deformation algorithm in step 4) is as follows:
After the processing of step 3), the three-dimensional feature points S_mf of the generic face model and the three-dimensional feature points S_obj of the face in the photo are obtained; according to their correspondence, the generic face model S_model is elastically deformed into the specific face model. The thin-plate spline (TPS) interpolation algorithm, a radial basis interpolation function, is selected. The basis function of TPS is U(r) = r²·log r², where r = ‖P_i − (x, y, z)‖, (x, y, z) are the three-dimensional coordinates of the interpolated point, and P_i is a feature point. According to the correspondence between S_mf and S_obj, the coefficients of the radial basis interpolation function are computed to obtain the TPS interpolation function f(x, y, z), by which the generic face model S_model is transformed into the specific face.
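A minimal numerical sketch of this TPS warp is given below: given corresponding control points S_mf (generic model) and S_obj (photo), the standard TPS linear system is solved for the radial-basis and affine coefficients of U(r) = r²·log r², and the fitted function is then evaluated on arbitrary model vertices. The linear-system construction follows the textbook TPS formulation and is an assumption; the patent only names the algorithm.

```python
# Sketch of thin-plate-spline warping of generic-model feature points onto the
# photo's 3D feature points; standard TPS construction, details assumed.
import numpy as np

def tps_kernel(r: np.ndarray) -> np.ndarray:
    """TPS basis U(r) = r^2 * log(r^2), with U(0) taken as 0."""
    r2 = r ** 2
    out = np.zeros_like(r2)
    nz = r2 > 0
    out[nz] = r2[nz] * np.log(r2[nz])
    return out

def fit_tps(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit a 3D TPS that maps the (N, 3) control points src onto dst."""
    n = src.shape[0]
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])            # affine part, (N, 4)
    A = np.zeros((n + 4, n + 4))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 4, 3))
    rhs[:n] = dst
    return np.linalg.solve(A, rhs)                   # (N+4, 3) coefficients

def apply_tps(points: np.ndarray, src: np.ndarray,
              coeffs: np.ndarray) -> np.ndarray:
    """Evaluate the fitted TPS at arbitrary (M, 3) points, e.g. all vertices
    of the generic face model."""
    n = src.shape[0]
    U = tps_kernel(np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((points.shape[0], 1)), points])
    return U @ coeffs[:n] + P @ coeffs[n:]

# Example: the warp reproduces the photo landmarks S_obj at the model
# landmarks S_mf exactly (interpolation), and can then deform every vertex.
rng = np.random.default_rng(1)
S_mf = rng.normal(size=(10, 3))                      # generic-model landmarks
S_obj = S_mf + 0.1 * rng.normal(size=(10, 3))        # photo-derived landmarks
coeffs = fit_tps(S_mf, S_obj)
print(np.allclose(apply_tps(S_mf, S_mf, coeffs), S_obj))   # True
```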
The model deformation method in step 6) is as follows:
44 basic action units are established based on the facial action coding system (FACS); each basic action unit controls the three-dimensional displacement of one or several facial feature points; different basic action units are combined to produce various expressions, and TPS is used to interpolate and deform the three-dimensional feature points, thereby realizing expression changes.
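The expression-driving step above can be sketched as a simple blend of per-action-unit displacements applied to the 3D feature points, which a TPS warp (as above) would then propagate to the rest of the mesh. The 44-unit count and the FACS basis come from the description; the dictionary layout, the linear weighting and the AU numbering in the example are assumptions.

```python
# Illustrative blending of FACS-style action units into feature-point
# displacements; the data layout and weights are assumptions.
import numpy as np

def blend_action_units(landmarks: np.ndarray,
                       action_units: dict,
                       activations: dict) -> np.ndarray:
    """Displace the neutral 3D feature points by a weighted sum of the
    offsets stored in the active action units.

    landmarks:    (N, 3) neutral 3D feature points
    action_units: {au_id: {point_index: (dx, dy, dz)}}
    activations:  {au_id: weight in [0, 1]} describing the target expression
    """
    displaced = landmarks.astype(float)
    for au_id, weight in activations.items():
        for idx, offset in action_units.get(au_id, {}).items():
            displaced[idx] += weight * np.asarray(offset, dtype=float)
    return displaced

# Tiny example: an assumed eyelid-closing unit pulling two eyelid points down
neutral = np.zeros((5, 3))
aus = {43: {1: (0.0, -0.4, 0.0), 2: (0.0, -0.4, 0.0)}}
expr = blend_action_units(neutral, aus, {43: 0.5})
print(expr[1])   # [ 0.  -0.2  0. ]
# The displaced feature points would then drive a TPS warp of the whole mesh.
```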
The makeup articles in step 7) include glasses, hairstyles, jewelry, and cosmetics. Pictures of the glasses, hairstyles, and jewelry are loaded and directly superimposed onto the model texture image according to the positions of the facial feature points, while the cosmetics are first composited into a color layer according to their area of influence and color and then superimposed onto the model texture image according to the positions of the facial feature points.
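The cosmetics branch of this step amounts to building a semi-transparent color layer over the article's area of influence and alpha-blending it onto the model texture, while picture-type articles (glasses, hairstyles, jewelry) are pasted directly at the landmark positions. The sketch below illustrates the cosmetics case only; the polygon mask built with matplotlib.path and the blending weight are assumptions, not the patent's compositing rule.

```python
# Sketch of compositing a cosmetic color layer onto the model texture inside a
# region defined by facial feature points; mask method and alpha are assumed.
import numpy as np
from matplotlib.path import Path

def composite_color_layer(texture: np.ndarray, region_pts: np.ndarray,
                          color: tuple, alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend `color` onto `texture` inside the polygon spanned by
    `region_pts` (e.g. the mouth contour feature points for lipstick)."""
    h, w = texture.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xx.ravel(), yy.ravel()])       # (x, y) pixel coords
    mask = Path(region_pts).contains_points(pts).reshape(h, w)
    out = texture.astype(float)
    out[mask] = (1 - alpha) * out[mask] + alpha * np.asarray(color, dtype=float)
    return out.astype(texture.dtype)

# Example: tint a small "lip" polygon red on a plain 64x64 gray texture
tex = np.full((64, 64, 3), 180, dtype=np.uint8)
lips = np.array([[20, 40], [32, 36], [44, 40], [32, 46]])  # (x, y) landmarks
tinted = composite_color_layer(tex, lips, color=(200, 30, 60))
print(tinted[40, 32], tex[40, 32])   # blended pixel vs. original gray
```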
A device for processing the interactive makeup information data of a personal three-dimensional image, characterized in that it comprises a client, a server, and a communication network; the client comprises a display unit; the server comprises a face feature positioning unit, a 3D image reconstruction unit, a makeup processing unit, a three-dimensional face feature database, and a makeup article database; the client is connected to the server through the communication network; the face feature positioning unit, the 3D image reconstruction unit, and the makeup processing unit are connected in sequence; the 3D image reconstruction unit is connected to the three-dimensional face feature database, and the makeup processing unit is connected to the makeup article database.
The communication network is the Internet, a communication bus, or a wireless network.
Compared with the prior art, the present invention has the following advantages:
1. Promoting industry development: face 3D reconstruction technology and client-side 3D display technology effectively solve the problem of experiencing cosmetics when shopping online, allowing users to see the actual effect of a cosmetic sample before buying, which greatly promotes the e-commerce of cosmetic products.
2. Wide range of application: the invention can be applied to the Internet, desktop software, terminal all-in-one machines, and mobile terminals.
3. More realistic experience: the model deformation method drives the 3D model to perform life-like actions, and the realistic simulated image is output through the display unit of the client, making the user experience more realistic.
4. Freedom from restrictions of time, place, and terminal conditions: the invention satisfies modern users' desire to use the service anytime, anywhere, and in their own personalized way.
Description of drawings
Fig. 1 is a flowchart of building the three-dimensional face model according to the present invention;
Fig. 2 is a flowchart of the makeup try-on process of the present invention;
Fig. 3 is a schematic diagram of the hardware structure of the present invention.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
Embodiment
As shown in Fig. 1, Fig. 2, and Fig. 3, a method for processing the interactive makeup information data of a personal three-dimensional image comprises the following steps:
1) the client 1 obtains a user photo and transmits it to the server 2;
2) the face feature positioning unit 21 in the server 2 detects the face position in the user photo and judges whether detection succeeded; if yes, step 3) is executed, otherwise the flow returns to step 1);
3) the face feature positioning unit 21 in the server 2 uses the active shape model (ASM) detection algorithm to locate and extract facial feature points;
4) the 3D image reconstruction unit 22 in the server 2 computes three-dimensional feature points based on the three-dimensional face feature database 24, and builds a three-dimensional face model using the standard-model-based surface deformation algorithm;
5) the server 2 maps the photo onto the three-dimensional face model as a texture;
6) the client 1 renders the 3D model with Flash and drives the 3D model to perform life-like actions using the model deformation method, outputting the realistic simulated image through the display unit 11 of the client 1;
7) the user selects a makeup article in the client 1; the makeup processing unit 23 in the server 2, according to the selected makeup article and the positions of the facial feature points, accurately superimposes the makeup article onto the model texture image, updates the 3D image rendering, and displays the processed 3D image through the display unit 11.
The active shape model (ASM) detection algorithm in step 3) is as follows:
31) Initial model alignment:
For the face image, a face detection algorithm determines whether the photo contains a face and, if so, its position; according to the initial position obtained, the mean shape model is placed into the photo by a rotation of angle θ, a scaling by scale s, and a translation t, giving the initial shape X.
32) Search along the feature point normals:
Using the local gray-level model obtained in training, each feature point is searched within a limited range along its normal direction; for every point in this range, the normalized gray-level derivative vector is computed together with its Mahalanobis distance to the trained mean normalized gray-level derivative vector of that feature point, and the candidate point with the minimum distance is selected as the best match; performing this best-match search for every feature point of the model yields a new shape vector X'.
33) Approximate representation of the shape:
Because the shape obtained after searching each feature point cannot, in general, be represented exactly in the new orthogonal basis, only the least-error approximation is obtained; aligning X to X' yields the four affine transformation parameters (1+ds, dθ, dtx, dty), from which the change db of the shape parameters is further obtained.
34) Reasonable shape constraint:
The shape parameters bi may vary only within a reasonable range for the shape to remain plausible; b+db is therefore checked, and any bi outside this range is transformed back into it; the finally obtained shape parameters b+db are used to compute the reconstructed shape.
35) Steps 32)–34) are repeated in a loop; when the Euclidean distance between the shape vectors of two adjacent iterations is smaller than or equal to a preset threshold, the process is considered to have converged and the iteration ends.
The facial feature point information in step 3) comprises face contour point information, eye contour point information, nose contour point information, mouth contour point information, and eyebrow contour point information.
The standard-model-based surface deformation algorithm in step 4) is as follows:
After the processing of step 3), the three-dimensional feature points S_mf of the generic face model and the three-dimensional feature points S_obj of the face in the photo are obtained; according to their correspondence, the generic face model S_model is elastically deformed into the specific face model. The thin-plate spline (TPS) interpolation algorithm, a radial basis interpolation function, is selected. The basis function of TPS is U(r) = r²·log r², where r = ‖P_i − (x, y, z)‖, (x, y, z) are the three-dimensional coordinates of the interpolated point, and P_i is a feature point. According to the correspondence between S_mf and S_obj, the coefficients of the radial basis interpolation function are computed to obtain the TPS interpolation function f(x, y, z), by which the generic face model S_model is transformed into the specific face.
The model deformation method in step 6) is as follows:
44 basic action units are established based on the facial action coding system (FACS); each basic action unit controls the three-dimensional displacement of one or several facial feature points; different basic action units are combined to produce various expressions, and TPS is used to interpolate and deform the three-dimensional feature points, thereby realizing expression changes.
The makeup articles in step 7) include glasses, hairstyles, jewelry, and cosmetics. Pictures of the glasses, hairstyles, and jewelry are loaded and directly superimposed onto the model texture image according to the positions of the facial feature points, while the cosmetics are first composited into a color layer according to their area of influence and color and then superimposed onto the model texture image according to the positions of the facial feature points.
The hardware of the present invention comprises a client 1, a server 2, and a communication network 3. The client 1 comprises a display unit 11; the server 2 comprises a face feature positioning unit 21, a 3D image reconstruction unit 22, a makeup processing unit 23, a three-dimensional face feature database 24, and a makeup article database 25. The client 1 is connected to the server 2 through the communication network 3; the face feature positioning unit 21, the 3D image reconstruction unit 22, and the makeup processing unit 23 are connected in sequence; the 3D image reconstruction unit 22 is connected to the three-dimensional face feature database 24, and the makeup processing unit 23 is connected to the makeup article database 25. The communication network 3 is the Internet, a communication bus, or a wireless network.
If the communication network 3 is the Internet, the invention can be used on the Internet; if the communication network 3 is a communication bus, it can be applied to a terminal all-in-one machine; if the communication network 3 is a wireless network, it can be applied to a mobile terminal. This product will be exhibited at the 2010 Shanghai World Expo.
The present invention allows a user to generate, from his or her own photo, a 3D simulated image with life-like actions and to try on the effect of various goods such as wigs, eye shadow, lipstick, and glasses. On this basis it can be applied to online e-commerce, for example virtual 3D wig try-on in wig e-commerce websites, virtual cosmetic image try-on in cosmetics e-commerce websites, and virtual try-on in e-commerce websites for glasses, jewelry, and the like.
Feature point positions are detected from the user photo, including the contour and skin color information of the face, eyes, nose, mouth, and eyebrows; based on this information, the color changes corresponding to different types of cosmetics are applied to realize the try-on effect.
From a single user photo, a three-dimensional model textured with that photo is reconstructed by detecting feature point positions and applying a three-dimensional reconstruction method. Pre-stored model deformation information and algorithms then realize three-dimensional motion during 3D rendering of the model, such as facial actions (blinking, smiling, speaking) and life-like head swinging.
3D rendering of the personal image is realized in the network client, and the rendered 3D content is changed during rendering by changing the model texture photo, making the experience more realistic.
Claims (8)
1. A method for processing the interactive makeup information data of a personal three-dimensional image, characterized in that it comprises the following steps:
1) the client obtains user photo information and transmits it to the server;
2) the face feature positioning unit in the server detects the face position in the user photo information and judges whether detection succeeded; if yes, step 3) is executed, otherwise the method returns to step 1);
3) the face feature positioning unit in the server uses an active shape model (ASM) detection algorithm to locate and extract facial feature point information;
4) the 3D image reconstruction unit in the server computes three-dimensional feature points based on the three-dimensional face feature database, and builds a three-dimensional face model using a standard-model-based surface deformation algorithm;
5) the server maps the photo onto the three-dimensional face model as a texture;
6) the client renders the 3D model with Flash and drives the 3D model to perform life-like actions using a model deformation method, outputting realistic simulated image information through the display unit of the client;
7) the user selects a makeup article in the client; the makeup processing unit in the server, according to the selected makeup article information and the position information of the facial feature points, accurately superimposes the makeup article onto the model texture image, updates the 3D image rendering, and displays the processed 3D image information through the display unit.
2. The method for processing the interactive makeup information data of a personal three-dimensional image according to claim 1, characterized in that the active shape model (ASM) detection algorithm in step 3) is as follows:
31) Initial model alignment:
For the face image, a face detection algorithm determines whether the photo contains a face and, if so, its position; according to the initial position obtained, the mean shape model is placed into the photo by a rotation of angle θ, a scaling by scale s, and a translation t, giving the initial shape X.
32) Search along the feature point normals:
Using the local gray-level model obtained in training, each feature point is searched within a limited range along its normal direction; for every point in this range, the normalized gray-level derivative vector is computed together with its Mahalanobis distance to the trained mean normalized gray-level derivative vector of that feature point, and the candidate point with the minimum distance is selected as the best match; performing this best-match search for every feature point of the model yields a new shape vector X'.
33) Approximate representation of the shape:
Because the shape obtained after searching each feature point cannot, in general, be represented exactly in the new orthogonal basis, only the least-error approximation is obtained; aligning X to X' yields the four affine transformation parameters (1+ds, dθ, dtx, dty), from which the change db of the shape parameters is further obtained.
34) Reasonable shape constraint:
The shape parameters bi may vary only within a reasonable range for the shape to remain plausible; b+db is therefore checked, and any bi outside this range is transformed back into it; the finally obtained shape parameters b+db are used to compute the reconstructed shape.
35) Steps 32)–34) are repeated in a loop; when the Euclidean distance between the shape vectors of two adjacent iterations is smaller than or equal to a preset threshold, the process is considered to have converged and the iteration ends.
3. The method for processing the interactive makeup information data of a personal three-dimensional image according to claim 1, characterized in that the facial feature point information in step 3) comprises face contour point information, eye contour point information, nose contour point information, mouth contour point information, and eyebrow contour point information.
4. The method for processing the interactive makeup information data of a personal three-dimensional image according to claim 1, characterized in that the standard-model-based surface deformation algorithm in step 4) is as follows:
After the processing of step 3), the three-dimensional feature points S_mf of the generic face model and the three-dimensional feature points S_obj of the face in the photo are obtained; according to their correspondence, the generic face model S_model is elastically deformed into the specific face model; the thin-plate spline (TPS) interpolation algorithm, which is a radial basis interpolation function, is selected; the basis function of TPS is U(r) = r²·log r², where r = ‖P_i − (x, y, z)‖, (x, y, z) are the three-dimensional coordinates of the interpolated point, and P_i is a feature point; according to the correspondence between S_mf and S_obj, the coefficients of the radial basis interpolation function are computed to obtain the TPS interpolation function f(x, y, z), by which the generic face model S_model is transformed into the specific face.
5. The method for processing the interactive makeup information data of a personal three-dimensional image according to claim 1, characterized in that the model deformation method in step 6) is as follows:
44 basic action units are established based on the facial action coding system (FACS); each basic action unit controls the three-dimensional displacement of one or several facial feature points; different basic action units are combined to produce various expressions, and TPS is used to interpolate and deform the three-dimensional feature points, thereby realizing expression changes.
6. The method for processing the interactive makeup information data of a personal three-dimensional image according to claim 1, characterized in that the makeup articles in step 7) include glasses, hairstyles, jewelry, and cosmetics; pictures of the glasses, hairstyles, and jewelry are loaded and directly superimposed onto the model texture image according to the positions of the facial feature points, while the cosmetics are first composited into a color layer according to their area of influence and color and then superimposed onto the model texture image according to the positions of the facial feature points.
7. A device for processing the interactive makeup information data of a personal three-dimensional image, characterized in that it comprises a client, a server, and a communication network; the client comprises an internet terminal, a mobile terminal, and a retail terminal, each equipped with a display unit; the server comprises a face feature positioning unit, a 3D image reconstruction unit, a makeup processing unit, a three-dimensional face feature database, and a makeup article database; the client is connected to the server through the communication network; the face feature positioning unit, the 3D image reconstruction unit, and the makeup processing unit are connected in sequence; the 3D image reconstruction unit is connected to the three-dimensional face feature database, and the makeup processing unit is connected to the makeup article database.
8. The device for processing the interactive makeup information data of a personal three-dimensional image according to claim 7, characterized in that the communication network is the Internet, a communication bus, or a wireless network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101837316A CN102262788A (en) | 2010-05-24 | 2010-05-24 | Method and device for processing interactive makeup information data of personal three-dimensional (3D) image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101837316A CN102262788A (en) | 2010-05-24 | 2010-05-24 | Method and device for processing interactive makeup information data of personal three-dimensional (3D) image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102262788A true CN102262788A (en) | 2011-11-30 |
Family
ID=45009403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010101837316A Pending CN102262788A (en) | 2010-05-24 | 2010-05-24 | Method and device for processing interactive makeup information data of personal three-dimensional (3D) image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102262788A (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102647606A (en) * | 2012-02-17 | 2012-08-22 | 钰创科技股份有限公司 | Stereoscopic image processor, stereoscopic image interaction system and stereoscopic image display method |
CN102800129A (en) * | 2012-06-20 | 2012-11-28 | 浙江大学 | Hair modeling and portrait editing method based on single image |
CN103065360A (en) * | 2013-01-16 | 2013-04-24 | 重庆绿色智能技术研究院 | Generation method and generation system of hair style effect pictures |
CN103093357A (en) * | 2012-12-07 | 2013-05-08 | 江苏乐买到网络科技有限公司 | Cosmetic makeup trying system of online shopping |
CN103236066A (en) * | 2013-05-10 | 2013-08-07 | 苏州华漫信息服务有限公司 | Virtual trial make-up method based on human face feature analysis |
CN103473804A (en) * | 2013-08-29 | 2013-12-25 | 小米科技有限责任公司 | Image processing method, device and terminal equipment |
CN103679143A (en) * | 2013-12-03 | 2014-03-26 | 北京航空航天大学 | Method for capturing facial expressions in real time without supervising |
CN103903292A (en) * | 2012-12-27 | 2014-07-02 | 北京新媒传信科技有限公司 | Method and system for realizing head portrait editing interface |
CN104049726A (en) * | 2013-03-17 | 2014-09-17 | 北京银万特科技有限公司 | Method and device for shooting images based on intelligent information terminal |
CN104680053A (en) * | 2013-12-03 | 2015-06-03 | 湖北海洋文化传播有限公司 | Method and device for authenticating identity of current authentication terminal holder |
CN104794275A (en) * | 2015-04-16 | 2015-07-22 | 北京联合大学 | Face and hair style matching model for mobile terminal |
CN105427238A (en) * | 2015-11-30 | 2016-03-23 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN106203300A (en) * | 2016-06-30 | 2016-12-07 | 北京小米移动软件有限公司 | Content item display packing and device |
CN106372333A (en) * | 2016-08-31 | 2017-02-01 | 北京维盛视通科技有限公司 | Method and device for displaying clothes based on face model |
CN106791775A (en) * | 2016-11-15 | 2017-05-31 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN106780768A (en) * | 2016-11-29 | 2017-05-31 | 深圳市凯木金科技有限公司 | A kind of long-range simulation cosmetic system and method for 3D in real time |
CN106909538A (en) * | 2015-12-21 | 2017-06-30 | 腾讯科技(北京)有限公司 | Using effect methods of exhibiting and device |
CN107220960A (en) * | 2017-05-27 | 2017-09-29 | 无限极(中国)有限公司 | One kind examination cosmetic method, system and equipment |
CN107705240A (en) * | 2016-08-08 | 2018-02-16 | 阿里巴巴集团控股有限公司 | Virtual examination cosmetic method, device and electronic equipment |
CN107924577A (en) * | 2015-10-26 | 2018-04-17 | 松下知识产权经营株式会社 | Position generating means of making up and makeup position generation method |
CN107948499A (en) * | 2017-10-31 | 2018-04-20 | 维沃移动通信有限公司 | A kind of image capturing method and mobile terminal |
CN108510437A (en) * | 2018-04-04 | 2018-09-07 | 科大讯飞股份有限公司 | A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing |
CN108564529A (en) * | 2018-04-23 | 2018-09-21 | 广东奥园奥买家电子商务有限公司 | A kind of implementation method of the real-time makeup of lip based on android system |
CN109118314A (en) * | 2017-06-23 | 2019-01-01 | 杭州美帮网络科技有限公司 | Method and system for build platform |
CN109191569A (en) * | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | A kind of simulation cosmetic device, simulation cosmetic method and equipment |
WO2019033923A1 (en) * | 2017-08-14 | 2019-02-21 | 迈吉客科技(北京)有限公司 | Image rendering method and system |
CN109671142A (en) * | 2018-11-23 | 2019-04-23 | 南京图玩智能科技有限公司 | A kind of intelligence makeups method and intelligent makeups mirror |
CN110992455A (en) * | 2019-12-08 | 2020-04-10 | 北京中科深智科技有限公司 | Real-time expression capturing method and system |
WO2020082626A1 (en) * | 2018-10-23 | 2020-04-30 | 杭州趣维科技有限公司 | Real-time facial three-dimensional reconstruction system and method for mobile device |
US10755477B2 (en) | 2018-10-23 | 2020-08-25 | Hangzhou Qu Wei Technology Co., Ltd. | Real-time face 3D reconstruction system and method on mobile device |
WO2022135518A1 (en) * | 2020-12-25 | 2022-06-30 | 百果园技术(新加坡)有限公司 | Eyeball registration method and apparatus based on three-dimensional cartoon model, and server and medium |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007128117A1 (en) * | 2006-05-05 | 2007-11-15 | Parham Aarabi | Method, system and computer program product for automatic and semi-automatic modification of digital images of faces |
CN101079876A (en) * | 2006-05-25 | 2007-11-28 | 齐南 | A method for promoting clothe advertisement via network virtual fitting |
Non-Patent Citations (3)
Title |
---|
涂意 等: "基于单张人脸图片和一般模型的三维重建方法", 《计算机应用研究》 * |
王巍: "人脸面部特征定位与人脸识别方法的研究", 《北京工业大学硕士学位论文》 * |
署光 等: "基于一般模型的单幅人脸照片三维重建", 《上海交通大学学报》 * |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102647606B (en) * | 2012-02-17 | 2015-01-07 | 钰创科技股份有限公司 | Stereoscopic image processor, stereoscopic image interaction system and stereoscopic image display method |
CN102647606A (en) * | 2012-02-17 | 2012-08-22 | 钰创科技股份有限公司 | Stereoscopic image processor, stereoscopic image interaction system and stereoscopic image display method |
CN102800129A (en) * | 2012-06-20 | 2012-11-28 | 浙江大学 | Hair modeling and portrait editing method based on single image |
US9367940B2 (en) | 2012-06-20 | 2016-06-14 | Zhejiang University | Method for single-view hair modeling and portrait editing |
CN102800129B (en) * | 2012-06-20 | 2015-09-30 | 浙江大学 | A kind of scalp electroacupuncture based on single image and portrait edit methods |
CN103093357A (en) * | 2012-12-07 | 2013-05-08 | 江苏乐买到网络科技有限公司 | Cosmetic makeup trying system of online shopping |
CN103903292A (en) * | 2012-12-27 | 2014-07-02 | 北京新媒传信科技有限公司 | Method and system for realizing head portrait editing interface |
CN103903292B (en) * | 2012-12-27 | 2017-04-19 | 北京新媒传信科技有限公司 | Method and system for realizing head portrait editing interface |
CN103065360A (en) * | 2013-01-16 | 2013-04-24 | 重庆绿色智能技术研究院 | Generation method and generation system of hair style effect pictures |
CN103065360B (en) * | 2013-01-16 | 2016-08-24 | 中国科学院重庆绿色智能技术研究院 | A kind of hair shape effect map generalization method and system |
CN104049726A (en) * | 2013-03-17 | 2014-09-17 | 北京银万特科技有限公司 | Method and device for shooting images based on intelligent information terminal |
CN103236066A (en) * | 2013-05-10 | 2013-08-07 | 苏州华漫信息服务有限公司 | Virtual trial make-up method based on human face feature analysis |
CN103473804A (en) * | 2013-08-29 | 2013-12-25 | 小米科技有限责任公司 | Image processing method, device and terminal equipment |
CN103679143A (en) * | 2013-12-03 | 2014-03-26 | 北京航空航天大学 | Method for capturing facial expressions in real time without supervising |
CN103679143B (en) * | 2013-12-03 | 2017-02-15 | 北京航空航天大学 | Method for capturing facial expressions in real time without supervising |
CN104680053A (en) * | 2013-12-03 | 2015-06-03 | 湖北海洋文化传播有限公司 | Method and device for authenticating identity of current authentication terminal holder |
CN104680053B (en) * | 2013-12-03 | 2018-05-11 | 湖北海洋文化传播有限公司 | To current authentication terminal holder's identity authentication method and device |
CN104794275A (en) * | 2015-04-16 | 2015-07-22 | 北京联合大学 | Face and hair style matching model for mobile terminal |
CN107924577A (en) * | 2015-10-26 | 2018-04-17 | 松下知识产权经营株式会社 | Position generating means of making up and makeup position generation method |
CN107924577B (en) * | 2015-10-26 | 2021-08-24 | 松下知识产权经营株式会社 | Cosmetic part creation device and cosmetic part creation method |
CN105427238B (en) * | 2015-11-30 | 2018-09-04 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN105427238A (en) * | 2015-11-30 | 2016-03-23 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN106909538A (en) * | 2015-12-21 | 2017-06-30 | 腾讯科技(北京)有限公司 | Using effect methods of exhibiting and device |
CN106203300A (en) * | 2016-06-30 | 2016-12-07 | 北京小米移动软件有限公司 | Content item display packing and device |
CN107705240B (en) * | 2016-08-08 | 2021-05-04 | 阿里巴巴集团控股有限公司 | Virtual makeup trial method and device and electronic equipment |
CN107705240A (en) * | 2016-08-08 | 2018-02-16 | 阿里巴巴集团控股有限公司 | Virtual examination cosmetic method, device and electronic equipment |
CN106372333A (en) * | 2016-08-31 | 2017-02-01 | 北京维盛视通科技有限公司 | Method and device for displaying clothes based on face model |
CN106791775A (en) * | 2016-11-15 | 2017-05-31 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN106780768A (en) * | 2016-11-29 | 2017-05-31 | 深圳市凯木金科技有限公司 | A kind of long-range simulation cosmetic system and method for 3D in real time |
CN107220960A (en) * | 2017-05-27 | 2017-09-29 | 无限极(中国)有限公司 | One kind examination cosmetic method, system and equipment |
CN109118314A (en) * | 2017-06-23 | 2019-01-01 | 杭州美帮网络科技有限公司 | Method and system for build platform |
WO2019033923A1 (en) * | 2017-08-14 | 2019-02-21 | 迈吉客科技(北京)有限公司 | Image rendering method and system |
CN107948499A (en) * | 2017-10-31 | 2018-04-20 | 维沃移动通信有限公司 | A kind of image capturing method and mobile terminal |
CN108510437A (en) * | 2018-04-04 | 2018-09-07 | 科大讯飞股份有限公司 | A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing |
CN108564529A (en) * | 2018-04-23 | 2018-09-21 | 广东奥园奥买家电子商务有限公司 | A kind of implementation method of the real-time makeup of lip based on android system |
CN109191569A (en) * | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | A kind of simulation cosmetic device, simulation cosmetic method and equipment |
US10755477B2 (en) | 2018-10-23 | 2020-08-25 | Hangzhou Qu Wei Technology Co., Ltd. | Real-time face 3D reconstruction system and method on mobile device |
WO2020082626A1 (en) * | 2018-10-23 | 2020-04-30 | 杭州趣维科技有限公司 | Real-time facial three-dimensional reconstruction system and method for mobile device |
CN109671142A (en) * | 2018-11-23 | 2019-04-23 | 南京图玩智能科技有限公司 | A kind of intelligence makeups method and intelligent makeups mirror |
CN109671142B (en) * | 2018-11-23 | 2023-08-04 | 南京图玩智能科技有限公司 | Intelligent cosmetic method and intelligent cosmetic mirror |
CN110992455A (en) * | 2019-12-08 | 2020-04-10 | 北京中科深智科技有限公司 | Real-time expression capturing method and system |
WO2022135518A1 (en) * | 2020-12-25 | 2022-06-30 | 百果园技术(新加坡)有限公司 | Eyeball registration method and apparatus based on three-dimensional cartoon model, and server and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102262788A (en) | Method and device for processing interactive makeup information data of personal three-dimensional (3D) image | |
US11688120B2 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
Dibra et al. | Hs-nets: Estimating human body shape from silhouettes with convolutional neural networks | |
Nguyen et al. | Lipstick ain't enough: beyond color matching for in-the-wild makeup transfer | |
Zhao et al. | M3d-vton: A monocular-to-3d virtual try-on network | |
Ichim et al. | Dynamic 3D avatar creation from hand-held video input | |
Suo et al. | A multi-resolution dynamic model for face aging simulation | |
Park et al. | Capturing and animating skin deformation in human motion | |
Urtasun et al. | Style‐based motion synthesis | |
US20160134840A1 (en) | Avatar-Mediated Telepresence Systems with Enhanced Filtering | |
Liao et al. | Enhancing the symmetry and proportion of 3D face geometry | |
CN105118082A (en) | Personalized video generation method and system | |
US9811937B2 (en) | Coordinated gesture and locomotion for virtual pedestrians | |
CN101751689A (en) | Three-dimensional facial reconstruction method | |
JP2004094917A (en) | Virtual makeup device and method therefor | |
CN111833236B (en) | Method and device for generating three-dimensional face model for simulating user | |
US20220044311A1 (en) | Method for enhancing a user's image while e-commerce shopping for the purpose of enhancing the item that is for sale | |
US12079947B2 (en) | Virtual reality presentation of clothing fitted on avatars | |
CN111950430A (en) | Color texture based multi-scale makeup style difference measurement and migration method and system | |
CN104854623A (en) | Avatar-based virtual dressing room | |
TW202040421A (en) | Method of generating 3d facial model for an avatar, related system | |
CN113327190A (en) | Image and data processing method and device | |
Bastanfard et al. | Toward anthropometrics simulation of face rejuvenation and skin cosmetic | |
Li et al. | Computer-aided 3D human modeling for portrait-based product development using point-and curve-based deformation | |
KR102624995B1 (en) | Method and system for clothing virtual try-on service based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20111130 |