WO2022146615A1 - Digital makeup palette - Google Patents
- Publication number
- WO2022146615A1 (PCT/US2021/061654)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- makeup
- user
- face
- augmented reality
- objective
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D44/005—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0282—Rating or review of business operators or products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/235—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D2044/007—Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment
Definitions
- the present disclosure is directed to a digital make-up palette and a method for a personalized augmented reality experience using the digital make-up palette.
- Smartphones with front facing cameras offer the capability of taking pictures and videos of the person holding the camera, in a manner that lets the person view the image to be captured.
- Various mobile applications, also referred to as Apps, have been developed that make use of front facing cameras.
- a common App is one that allows taking a self-portrait photo, referred to as a selfie, and inserting the selfie into some social media context or forwarding the selfie to be shared with others by e-mail or text.
- Some cosmetic product companies have begun to develop Apps that provide assistance in selecting cosmetic products.
- the Apps may provide tools for searching for particular types of make-up, or searching for a product that may be a user’s favorite or just purchasing a previously used product.
- Some Apps offer tutorials on how to apply certain types of make-up.
- Some Apps provide assistance in choosing colors of lipstick or eyeshadow by displaying color palettes.
- Some Apps provide color matching features to assist in searching for a color that matches clothing, an accessory, or a color from a picture.
- Some cosmetic product companies have begun to make use of the cameras in smartphones, tablets, and laptops by offering product try-on applications.
- Some of these applications are implemented as Web applications or as an App. These try-on applications work by taking a self-portrait photo with the smartphone camera, uploading the photo to the Web application, and then applying virtual makeup products to the uploaded image.
- These try-on applications may offer a variety of options, such as smoothing skin, lifting cheekbones, and adjusting eye color.
- These try-on applications may provide the user with the ability to add any type and color of makeup product, as well as change the color intensity.
- try-on applications offered thus far tend to create a look by way of photo editing tools.
- Some of the prior try-on applications start with an uploaded photograph and provide one-step functions to apply makeup types and colors, then allow editing of the made-up photo. Such tools do not capture a personal makeup experience.
- prior try-on application tools do not provide for creation of custom looks. For example, a user may want a Friday date night look. The prior try-on applications may offer a Friday date night look, but that look may not be something that the user had in mind. Provided tools may be used to perform further editing in an attempt to obtain a look that is what the user believes is a Friday date night look. However, such an approach is limited by the features of the editing tools. A user may want a Friday date night look that is based on the user’s mood, or a mood that the user may want to portray, which may require extensive editing.
- a user may have problem areas, such as blemishes, a scar, age spots, hyperpigmentation, etc. that they wish to treat with makeup.
- a user may also wish to emphasize certain facial features, such as cheekbones, eyes, or lips. There is a need to provide a custom try-on experience that can help to address particular problem areas or best facial features of the particular user.
- e-commerce personalization try-on services are not scalable to end consumers with their smartphones because the variation in skin problem areas and facial features across consumers is too high.
- An augmented reality system for makeup includes a makeup objective unit including computation circuitry operably coupled to a graphical user interface configured to generate one or more instances of user selectable makeup objectives and to receive user-selected makeup objective information; a makeup palette unit operably coupled to the makeup objective unit, the makeup palette unit including computation circuitry configured to generate at least one digital makeup palette for a digital makeup product in accordance with the user-selected makeup objective information; and a makeup objective visualization unit including computation circuitry configured to generate one or more instances of a virtual try-on in accordance with the user-selected makeup objective information.
- An augmented reality system for makeup includes a makeup objective unit including computation circuitry operably coupled to a graphical user interface configured to generate one or more instances of user selectable makeup objectives and to receive user-selected makeup objective information; a makeup palette unit operably coupled to the makeup objective unit, the makeup palette unit including computation circuitry configured to generate at least one digital makeup palette for a digital makeup product; and a makeup objective visualization unit including computation circuitry configured to analyze a user’s face to determine one or more of face shape, facial landmarks, skin tone, hair color, eye color, lip shape, eyelid shape, hair style and lighting, and automatically create one or more instances of a custom virtual try-on for a user in accordance with the user-selected makeup objective information and the at least one digital makeup palette generated based on the analysis of the user’s face.
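The face analysis performed by the makeup objective visualization unit can be illustrated with a deliberately simple sketch. This is not the patent's method: the landmark names, the width-to-height threshold, and the cheek-sampling rule below are hypothetical stand-ins for what a landmark-detection network and classifier would provide.

```python
def analyze_face(pixels, landmarks):
    """Toy face analysis: estimate face shape and skin tone.

    `pixels` is a row-major grid of (R, G, B) tuples and `landmarks` a
    dict of named (x, y) points; a real system would obtain landmarks
    from a detection network rather than take them as input.
    """
    # Face shape from the jaw-width to face-height ratio (crude heuristic).
    width = landmarks["jaw_right"][0] - landmarks["jaw_left"][0]
    height = landmarks["chin"][1] - landmarks["forehead"][1]
    face_shape = "round" if width / height > 0.9 else "oval"
    # Skin tone sampled at a cheek landmark.
    x, y = landmarks["cheek"]
    return {"face_shape": face_shape, "skin_tone": pixels[y][x]}
```

In practice the hair color, eye color, lip shape, eyelid shape, hair style, and lighting attributes named above would each need their own extraction step; the sketch only shows the general shape of the analysis output.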
- FIG. 1 is a diagram of a system in accordance with an exemplary aspect of the disclosure.
- FIG. 2 is a block diagram of a computer system for a mobile device
- FIGs. 3A and 3B are flowcharts of a method of creating a custom look, where FIG. 3A is a method in which a user creates their own custom look, and FIG. 3B is a method in which the custom look is created by a mobile application in accordance with an exemplary aspect of the disclosure;
- FIG. 4 is an exemplary user interface for choosing between user creation or App creation of a look in accordance with an exemplary aspect of the disclosure
- FIG. 5 is a flowchart of a method of obtaining a digital makeup palette in accordance with an exemplary aspect of the disclosure
- FIG. 6 illustrates an exemplary digital makeup palette in accordance with an exemplary aspect of the disclosure
- FIG. 7 illustrates an exemplary digital makeup in accordance with an exemplary aspect of the disclosure
- FIG. 8 is a flowchart of the face analysis step in more detail in accordance with an exemplary aspect of the disclosure
- FIG. 9 is a block diagram of a CNN for classifying face shape
- FIG. 10 is a diagram of a deep learning neural network for face landmark detection
- FIG. 11 is an exemplary user interface for selecting a virtual product to apply
- FIG. 12 is an exemplary user interface for choosing between user applying makeup and recommending how to apply makeup
- FIG. 13 is an exemplary mobile application in accordance with an exemplary aspect of the disclosure
- FIG. 14 is a diagram for a recommender system
- FIG. 15 illustrates an exemplary look-makeup matrix for the recommender system in FIG. 14
- FIG. 16 illustrates a blending process that may be used to create a face image based on a desired feature and an original feature
- FIG. 17 is a flowchart for a step of applying virtual makeup in accordance with an exemplary aspect of the disclosure.
- FIG. 18 is a flowchart of a step of recording areas and swipes while applying makeup
- FIG. 19 is a flowchart of a step of analyzing a user’s steps in applying makeup to estimate problem areas or best features;
- FIG. 20 is an exemplary user interface for storing a makeup look in accordance with an exemplary aspect of the disclosure
- FIG. 21 is a flowchart of a method of custom application of a digital palette in accordance with an exemplary aspect of the disclosure
- FIG. 22 is an exemplary user interface showing status of custom makeup application
- FIG. 23 is a flowchart for a method of selecting makeup filters in accordance with an exemplary aspect of the disclosure.
- FIG. 24 is an exemplary user interface for saving makeup looks.
- FIG. 25 is a block diagram of a reinforcement learning architecture
- FIG. 26 is a flow diagram of a machine learning model in accordance with an exemplary aspect of the disclosure.
- the digital makeup palette is an assortment of colors for a digital makeup, either for a single part of a face, or for a full face.
- the augmented reality arrangement can capture steps as the user applies makeup and can create a custom makeup filter for the applied makeup.
- the augmented reality arrangement can analyze the steps to identify what the user considers as problem areas and best features. The results of the analysis may be used to improve custom recommendations.
- the augmented reality arrangement may perform the analysis with a machine learning model.
- the machine learning model may include an artificial neural network that estimates problem areas and best features.
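A minimal sketch of that analysis, substituting a counting heuristic for the neural network: regions receiving many coverage-product swipes are treated as likely problem areas, and regions receiving many emphasis-product swipes as likely best features. The product groupings and the log format are assumptions for illustration only.

```python
from collections import Counter

def estimate_areas(swipe_log,
                   coverage_products=("concealer", "foundation"),
                   emphasis_products=("highlighter", "eyeshadow", "lipstick")):
    """Heuristic stand-in for the ML analysis of recorded makeup steps.

    `swipe_log` is assumed to be a list of (face_region, product, n_swipes)
    tuples recorded while the user applies virtual makeup.
    """
    problem, best = Counter(), Counter()
    for region, product, n_swipes in swipe_log:
        if product in coverage_products:
            problem[region] += n_swipes   # heavy coverage -> problem area
        elif product in emphasis_products:
            best[region] += n_swipes      # heavy emphasis -> best feature
    top_two = lambda c: [region for region, _ in c.most_common(2)]
    return {"problem_areas": top_two(problem), "best_features": top_two(best)}
```

A trained model would replace the fixed product groupings with learned weights, but the input (recorded areas and swipes) and output (estimated problem areas and best features) match the flow described above.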
- FIG. 1 is a diagram of a system in accordance with an exemplary aspect of the disclosure.
- Embodiments include a software application, or mobile application (App).
- for purposes of this disclosure, herein below, the term App will be used interchangeably with software application or mobile application, and makeup application will be used in reference to the process of applying digital makeup, either virtually or physically.
- a software application may be executed on a desktop computer or laptop computer 103.
- a mobile application may be executed on a tablet computer or other mobile device 101.
- the software application and mobile application are described in terms of the mobile application 111. In each case, the mobile application 111 may be downloaded and installed on a respective device 101, 103.
- the desktop computer or laptop computer 103 may be configured with a microphone 103a as an audio input device.
- the microphone 103a may be a device that connects to a desktop computer or laptop computer 103 via a USB port or audio input port, or wireless via a Bluetooth wireless protocol.
- the mobile device 101 may be a cell phone or smartphone that is equipped with a built-in microphone.
- the software application or mobile application 111 may include a communication function to operate in conjunction with a cloud service 105.
- the cloud service 105 may include a database management service 107 and a machine learning service 109.
- the database management service 107 may be any of the types of database management systems provided in the cloud service 105.
- the database management service 107 may include a database that is accessed using a structured query language (SQL), and an unstructured database that is accessed by keys, commonly referred to as NoSQL.
- the machine learning service 109 may perform machine learning in order to allow for scaling up and high performance computing that may be necessary for the machine learning.
- the software application or mobile application 111 may be downloaded from a cloud service 105.
- although FIG. 1 shows a single cloud service 105, laptop computer 103, and mobile device 101, it should be understood that a number of mobile devices, laptop computers, desktop computers, and tablet computers may be connected to one or more cloud services.
- the software application or mobile application 111 may be implemented as an augmented reality system that includes a makeup objective unit operably coupled to a graphical user interface, a makeup palette unit coupled to the makeup objective unit, and a makeup objective visualization unit.
- the makeup objective unit may be configured to generate one or more instances of user selectable makeup objectives and to receive user-selected makeup objective information.
- the makeup palette unit may be configured to generate at least one digital makeup palette for a digital makeup product in accordance with the user-selected makeup objective information.
- the makeup objective visualization unit may be configured to generate one or more instances of a virtual try-on in accordance with the user-selected makeup objective information.
- Each of the makeup objective unit, the makeup palette unit, and the makeup objective visualization unit may include computation circuitry of a computer system, ranging from a mobile computer device 101, 103 to a desktop computer device. A minimum requirement is that the computer device includes an interactive display device.
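As a concrete sketch, the three units described above might be modeled as plain Python classes. The class and field names here are illustrative assumptions, not terms from the claims, and the objective-to-palette mapping is a hypothetical stand-in for the palette-generation logic.

```python
from dataclasses import dataclass, field

@dataclass
class MakeupObjective:
    name: str            # e.g. "Friday date night"
    target_parts: list   # face parts the objective covers

@dataclass
class DigitalMakeupPalette:
    product: str         # digital makeup product, e.g. "lipstick"
    colors: list = field(default_factory=list)  # assortment of hex colors

class MakeupObjectiveUnit:
    """Presents user-selectable objectives and records the selection."""
    def __init__(self, objectives):
        self.objectives = objectives
        self.selected = None

    def select(self, name):
        self.selected = next(o for o in self.objectives if o.name == name)
        return self.selected

class MakeupPaletteUnit:
    """Generates a digital makeup palette per the selected objective."""
    STARTER_PALETTES = {  # hypothetical objective-to-colors mapping
        "Friday date night": ["#8b0000", "#d4a373"],
    }

    def generate(self, objective):
        colors = self.STARTER_PALETTES.get(objective.name, ["#cc6677"])
        return DigitalMakeupPalette(product="lipstick", colors=colors)
```

The visualization unit would then render the palette onto the camera image; that rendering step depends on the face analysis and is omitted here.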
- FIG. 2 is a block diagram of a mobile computer device.
- the functions and processes of the mobile device 101 may be implemented by one or more respective processing/computation circuits 226.
- the same or similar processing/computation circuits 226 may be included in a tablet computer or a laptop computer.
- a desktop computer may be similarly configured, but in some cases, may not include a built-in touch screen 221, microphone 241 or camera 231.
- a processing circuit includes a programmed processor, as a processor includes computation circuitry.
- a processing circuit may also include devices such as an application specific integrated circuit (ASIC) and conventional circuit components arranged to perform the recited functions.
- circuitry refers to a circuit or system of circuits.
- the computation circuitry may be in one computer system or may be distributed throughout a network of computer systems.
- the processing/computation circuit 226 includes a Mobile Processing Unit (MPU) 200 which performs the processes described herein.
- the process data and instructions may be stored in memory 202. These processes and instructions may also be stored on a portable storage medium or may be stored remotely.
- the processing/computation circuit 226 may have a replaceable Subscriber Identity Module (SIM) 201 that contains information that is unique to the network service of the mobile device 101.
- the advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored.
- the instructions may be stored in FLASH memory, Synchronous Dynamic Random Access Memory (SDRAM), Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), a solid-state hard disk, or any other information processing device with which the processing/computation circuit 226 communicates, such as a server or computer.
- advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with MPU 200 and a mobile operating system such as Android, Microsoft® Windows® 10 Mobile, Apple iOS® and other systems known to those skilled in the art.
- MPU 200 may be a Qualcomm mobile processor, an Nvidia mobile processor, an Atom® processor from Intel Corporation of America, a Samsung mobile processor, or an Apple A7 mobile processor, or may be other processor types that would be recognized by one of ordinary skill in the art.
- the MPU 200 may be implemented on a Field-Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), or Programmable Logic Device (PLD), or using discrete logic circuits, as one of ordinary skill in the art would recognize.
- MPU 200 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
- the processing/computation circuit 226 in FIG. 2 also includes a network controller 206, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 224.
- the network 224 can be a public network, such as the Internet, or a private network such as a LAN or WAN, or any combination thereof, and can also include PSTN or ISDN sub-networks.
- the network 224 can also be wired, such as an Ethernet network.
- the processing circuit may include various types of communications processors for wireless communications including 3G, 4G and 5G wireless modems, WiFi®, Bluetooth®, GPS, or any other wireless form of communication that is known.
- the processing/computation circuit 226 includes a Universal Serial Bus (USB) controller 225 which may be managed by the MPU 200.
- the processing/computation circuit 226 further includes a display controller 208, such as a NVIDIA® GeForce® GTX or Quadro® graphics adaptor from NVIDIA Corporation of America for interfacing with display 210.
- An I/O interface 212 interfaces with buttons 214, such as for volume control.
- the processing/computation circuit 226 may further include a microphone 241 and one or more cameras 231.
- the microphone 241 may have associated circuitry 240 for processing the sound into digital signals.
- the camera 231 may include a camera controller 230 for controlling image capture operation of the camera 231.
- the camera 231 may include a Charge Coupled Device (CCD).
- the processing/computation circuit 226 may include an audio circuit 242 for generating sound output signals, and may include an optional sound output port.
- the power management and touch screen controller 220 manages power used by the processing/computation circuit 226 and touch control.
- the communication bus 222, which may be an Industry Standard Architecture (ISA), Extended Industry Standard Architecture (EISA), Video Electronics Standards Association (VESA), Peripheral Component Interface (PCI), or similar bus, interconnects all of the components of the processing/computation circuit 226.
- a description of the general features and functionality of the display 210, buttons 214, as well as the display controller 208, power management controller 220, network controller 206, and I/O interface 212 is omitted herein for brevity as these features are known.
- FIGs. 3A and 3B are a flowchart for a method of creating a custom look, as well as special treatment of facial problem areas and best facial features.
- FIG. 3A is a flowchart for a method of creating a custom look by way of a user applying a virtual makeup product having a digital palette in accordance with an exemplary aspect of the disclosure.
- a disclosed embodiment includes a digital makeup palette.
- the digital makeup palette is a virtual palette for a digital makeup.
- the terms virtual makeup and digital makeup may be used interchangeably.
- a digital makeup may have an assortment of colors to choose from.
- a particular digital makeup may have an associated makeup application gesture and one or more face parts where it is typically applied, and the digital makeup palette includes characteristics such as coverage, shade, and finish.
- digital makeup is not limited to colors derived from chemical compositions, and may include a wider range of colors.
- digital makeup may utilize coverage, shade, finish that are generated using characteristics of a display device, such as applying various filters for color temperature, exposure, contrast, saturation, and controlling RGB and HCL values.
- Coverage is the degree of skin coverage provided by the makeup, typically based on a percentage of pigment that it contains. Coverage generally pertains to foundation makeup, but may also refer to corrective makeup or primer.
- a light cover makeup may contain lower than about 18% pigment.
- a medium cover product may contain about 18 to 23% pigment.
- a full cover makeup may contain up to about 35% pigment. Some makeup products may contain a higher amount of pigment.
- the coverage for the digital makeup is implemented as an opacity filter representing a single brush stroke of the virtual makeup.
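- As a non-limiting sketch, the coverage-as-opacity idea above can be modeled as a per-channel alpha blend of one brush stroke over the underlying skin pixel; the function name and the 0-to-1 opacity scale are illustrative assumptions, not part of the disclosure.

```python
def apply_opacity_stroke(base_rgb, makeup_rgb, opacity):
    """Blend one virtual brush stroke over a base pixel.

    opacity in [0, 1] plays the role of coverage: a light-cover
    makeup would use a low opacity, a full-cover makeup a high one.
    """
    if not 0.0 <= opacity <= 1.0:
        raise ValueError("opacity must be in [0, 1]")
    return tuple(
        round(opacity * m + (1.0 - opacity) * b)
        for m, b in zip(makeup_rgb, base_rgb)
    )
```

A light-cover makeup would correspond to a low opacity value, and a full-cover makeup to a value near 1, mirroring the pigment percentages described above.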
- Shade of a digital makeup can range from fair to dark, or in some cases, very fair to deep, or even very deep.
- a shade may be for a single color, such as a skin color.
- the shade for digital makeup is implemented as a range of a display color, for example, shades of red displayed according to RGB values.
- Finish of a digital makeup may include common finishes such as matte (dull), cream (glossy or shiny), frost (reflective), and glitter (glitter particles). Finishes may be defined in terms of the amount of light reflected. Matte reflects little or no light. Cream retains a pearl-like sheen. Frost and glitter reflect the most light.
- the finish for digital makeup is implemented as color luminance (brightness). Matte may be a low luminance value and will hide imperfections. Frost may emanate greater luminance.
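- The finish-as-luminance idea may be sketched using the Python standard library's colorsys conversions; scaling HLS lightness, and the scale factors themselves, are assumptions for illustration only.

```python
import colorsys

def apply_finish(rgb, luminance_scale):
    """Scale the lightness of a color to mimic a finish.

    luminance_scale < 1 approximates a matte finish (low luminance),
    luminance_scale > 1 approximates a frost finish (greater luminance).
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = max(0.0, min(1.0, l * luminance_scale))
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))
```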
- Digital makeup may also include various filters, including blur, color temperature, and saturation. Blur may be applied to a region having an imperfection so that the imperfection becomes less noticeable.
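- A blur filter of the kind described here can be approximated with a simple box blur; applying it only to the region containing an imperfection would make that imperfection less noticeable. The radius and the integer arithmetic are illustrative choices, not part of the disclosure.

```python
def box_blur(gray, radius=1):
    """Blur a 2D grayscale image (list of lists) with a box filter.

    Each output pixel is the mean of the pixels within `radius` of it,
    clipped at the image border.
    """
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += gray[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out
```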
- the user may bring up an App 111 on a mobile device, tablet, laptop, or desktop computer.
- the App 111 via the makeup objective unit, may ask the user what type of look they wish to create.
- the App 111 may generate a list of predefined makeup looks, and the user may select a predefined makeup look.
- predefined makeup looks may include seasonal looks (spring, summer, fall), event looks (Friday date night, girls night out, special date, going out with mother-in-law, holiday, party, New Year's Eve, bridal, prom), looks based on time to complete (quick makeup, average makeup, take-your-time makeup), mood looks (cheery, happy, notice-me), styles (natural, evening, glam, gothic, office), and aesthetic looks.
- the App 111 via the makeup objective unit, may ask the user to define their level of experience with using makeup.
- a user’s level of experience may include beginner/novice level, experienced level, expert level, and professional.
- the beginner/novice level may be a user that has little or no experience in applying makeup.
- the experienced level may be a user that has previously applied makeup, and thus has some experience.
- the expert level may be a user that has been applying makeup for a while, such as a year or more, as well as has taken steps to learn how to properly apply makeup.
- the professional level may be a user that applies makeup to others professionally.
- the App 111 may provide an interface that the user may use to create a user profile, which among other things, may include entering the user’s level of experience.
- the App 111 may utilize the selected look and user’s level of experience as a starting point. For example, a user that is new to applying makeup may wish to experiment and possibly learn about applying makeup. An experienced user that has some experience in applying makeup before, but would like to expand their knowledge and creativity, may wish to try a new makeup product or makeup look. Expert users may have extensive experience in applying makeup, but would like to expand their creativity and obtain a look of a quality that would be produced by a professional makeup artist. Subsequently, the App 111 may use the selected look and user’s level of experience in providing recommendations at later stages.
- the App 111 may provide the user with a choice of having the App 111 provide a custom look or for the user to apply virtual makeup to an image of their face.
- the makeup palette unit may generate at least one digital makeup palette.
- the user may obtain a digital makeup palette for a particular virtual makeup, for example by downloading a digital makeup from an App 111 store, or downloading from a website that offers digital makeup.
- a user may modify a digital makeup palette to one for a variation of a makeup look.
- a user may modify a digital makeup palette for a makeup look, such as a VSCO girl look, to be more or less dramatic.
- a less dramatic look may involve obtaining a different digital makeup palette for the makeup look, or may involve obtaining a different digital makeup for a face part, e.g. lips, eye lids, nose.
- FIG. 4 is an exemplary graphical user interface for an App 111 that includes a function for choosing a method of applying makeup.
- the user may obtain a digital makeup palette (S305) before deciding (S303) on whether to have the App 111 perform a custom look or for the user to apply digital makeup.
- the user interface 400 may display products 401 that have been obtained by the user such as foundation 401a, eyeshadow 401b, and concealer 401c.
- the user interface 400 may provide the user with a choice of functions (see S303 in FIG. 3A), such as to create a custom look 403 or to create a look 405 by manually applying one or more virtual makeup products.
- the App 111 may provide a user with a list of predefined looks, and the user may select a predefined look as a starting point. Upon selection of a predefined look, the App 111 may provide the user with a set of recommended digital makeup and/or digital makeup palette(s) for the selected look. The user may obtain digital makeup and digital makeup palette(s) from database 107 or from a makeup provider, for example from a Website for a makeup provider, based on the set of recommendations.
- FIG. 5 is a flowchart of a method of obtaining a digital makeup palette.
- the user inputs, via the makeup objective unit, a desired look and, in S503, a level of makeup experience.
- the user obtains, via the makeup palette unit, a digital makeup palette.
- the desired look, also referred to herein as a virtual try-on, may be selected from a list of predefined looks, or may be input as a look name that reflects a predefined look. In some cases, a user may input a new look that does not have a predefined counterpart, or one that is a modification of a predefined look.
- a digital makeup palette may be a palette for creating a particular type of makeup look.
- the digital makeup palette may be purchased from a makeup company similar to physical makeup products, or may be obtained from a Website that specializes in digital makeup products.
- FIG. 6 illustrates a user interface having a digital makeup palette in accordance with an exemplary aspect of the disclosure.
- the user interface may include a digital makeup palette 601 for a particular makeup look 603 and for a particular user experience level 605.
- the digital makeup palette 601 may include buttons for selecting particular digital makeup, of a specific color, coverage, shade, and finish.
- the user experience level 605 may be controlled by a sliding bar for a range over general to precise application.
- the user interface may include buttons for selecting makeup applicator tools 607.
- a digital makeup palette includes one or more particular digital makeup products, which, similar to physical makeup, are of specific color, coverage, shade, and finish. Unlike physical makeup, coverage may be implemented as an opacity filter, shade may be implemented as a range of RGB values, and finish may be a color density or color brightness.
- a digital makeup palette may also be a general purpose makeup palette. Further, a digital makeup palette may be for a particular virtual makeup for a part of a face.
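- One possible (hypothetical) data structure for a digital makeup palette is sketched below; the field names and value conventions are assumptions for illustration rather than part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalMakeup:
    """One virtual makeup in a digital palette."""
    name: str
    color_rgb: tuple      # base display color
    coverage: float       # opacity in [0, 1]
    shade_range: tuple    # (lightest RGB, darkest RGB)
    finish: str           # e.g. "matte", "cream", "frost", "glitter"

@dataclass
class DigitalMakeupPalette:
    """A palette for creating a particular type of makeup look."""
    look: str
    makeups: list = field(default_factory=list)

palette = DigitalMakeupPalette(
    look="natural",
    makeups=[DigitalMakeup("lipstick", (180, 60, 70), 0.6,
                           ((230, 150, 150), (120, 20, 30)), "cream")],
)
```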
- FIG. 7 illustrates a user interface having a container for one or more virtual makeup and a container for one or more makeup applicator tools.
- the virtual makeup may be a product 701 or products obtained from one or more makeup provider websites.
- the virtual makeup products may be stored in a makeup bag for a user 703.
- a user experience level 705 may be controlled by a sliding bar for a range over general to precise application.
- the makeup applicator tools may be stored in a container 707.
- Various makeup applicator tools may be used for applying each particular virtual makeup product. Types of applicator tools may include brushes, sponge makeup applicators, and makeup applicator puffs.
- Brushes may be of various widths, have an angled tip, flat tip or pointed tip. Special brushes, such as mascara brushes have bristles.
- a common sponge applicator is a sponge swab, either single or double tipped. Some sponges are flat, oval shaped. Some sponges may be wedge shaped. Puffs may be of various sizes and materials.
- Some makeup products are in the form of a makeup pencil, e.g., eyebrow pencils, eyeliner pencils, and lip liner pencil. Concealer and highlighter products may have built-in pen-like dispensers.
- the virtual makeup may include applicator tools that may be configured to operate according to actual physical gestures using a stylus, mouse, a physical applicator tool with a built-in motion sensor, or even the user’s finger.
- a physical gesture may be made to cause the virtual brush to apply a brush stroke that is commensurate with the movement and force of a stylus.
- the stylus may be used on a 3D touch surface of a mobile device in which the amount of force on the touch screen produces a line having thickness that is commensurate with the force.
- a stylus may take the form of a makeup applicator and include both a motion sensor and force sensor to detect motion and force of a brush stroke as the user uses the stylus to virtually apply a makeup to a face image.
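- The force-to-thickness behavior of the stylus described above might be sketched as a clamped linear mapping; the pixel bounds and the linearity are assumptions, since the disclosure only states that line thickness is commensurate with force.

```python
def stroke_thickness(force, max_force=1.0, base_px=2.0, max_px=24.0):
    """Map sensed stylus force to a brush-stroke thickness in pixels.

    Forces are clamped to [0, max_force] before being mapped linearly
    between the minimum and maximum stroke widths.
    """
    ratio = max(0.0, min(1.0, force / max_force))
    return base_px + ratio * (max_px - base_px)
```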
- the mobile application 111 running on the mobile device 101 or laptop computer 103 can use the built-in camera function to capture an image of the face of the user.
- the camera 231 is used to capture a video of the user.
- the camera 231 is used to capture several images of the face of the user from slightly different directions and/or in different lighting conditions.
- a previously captured image, images, or video may be uploaded to the mobile application 111. Further, the previously captured image, images, or video may be taken using an external camera device, or may be obtained from an internal storage device of the mobile device or laptop computer, or from an external storage device.
- the mobile application 111 may perform face recognition and identify parts and their locations in the face image including lips, eyes, nose, ears and hair.
- the mobile application 111 may perform image processing operations in order to improve image features, such as to improve lighting. For instance, a user may inadvertently take a self-picture when bright light or sunshine comes from a direction behind the user. The mobile application 111 may brighten the face image of the user. Other image processing operations may be performed to improve the image quality.
- FIG. 8 is a flowchart of the face analysis step in more detail.
- the captured image may be analyzed to determine a face shape.
- the face shape of the captured face of the user may be detected using a machine learning model.
- the machine learning model may be trained to classify face shape using face images with known face shapes.
- Recently, image classification has been performed using a type of neural network that is inspired by how the visual cortex of the human brain works when recognizing objects.
- the neural network is a family of networks known as convolution neural networks (CNN).
- Other approaches have been proposed for image classification and continue to be improved upon.
- Other approaches that may be used for image classification include linear regression, decision tree, random forest, and support vector machine, to name a few.
- the machine learning model may be trained remotely using the machine learning service 109 of the cloud service 105.
- an architecture of a machine learning model that may be used to classify face shape is a CNN.
- FIG. 9 is a block diagram of a CNN for classifying face shape.
- Dimensions and activation functions of the CNN may be varied depending on available processing power and desired accuracy.
- the dimensions include number of channels, number of neurons of each layer and the number of layers.
- Possible activation functions include logistic, rectified linear unit, among others.
- the convolution neural network may be made up of several types of layers.
- a convolution component 903 may be made up of a convolution layer 903a, a pooling layer 903c, and a rectified linear unit layer 903b.
- the convolution layer 903a is for developing a 2-dimensional activation map that detects the spatial position of a feature at all the given spatial positions.
- the pooling layer 903c acts as a form of downsampling.
- the rectified linear unit layer 903b applies an activation function to increase the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer itself.
- a fully connected layer 905 includes neurons that have connections to all the activations amongst the previous layers.
- a loss layer specifies how the network training penalizes the deviation between the predicted and true labels.
- the loss layer 907 detects a class in a set of mutually exclusive classes.
- a type of loss layer is a softmax function, which provides an output value for each of multiple classes.
- the loss layer 907 may be the softmax function.
- the softmax function provides a probability value for each class.
- the classes 909 may include square, rectangular, round, oval, oblong, diamond, triangular, and heart face shapes.
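- The softmax output over the face-shape classes listed above can be illustrated as follows; the logit values are hypothetical stand-ins for the CNN's raw outputs, and the helper names are assumptions for illustration.

```python
import math

FACE_SHAPES = ["square", "rectangular", "round", "oval",
               "oblong", "diamond", "triangular", "heart"]

def softmax(logits):
    """Convert raw network outputs into per-class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_face_shape(logits):
    """Return the most probable face shape and all class probabilities."""
    probs = softmax(logits)
    return FACE_SHAPES[probs.index(max(probs))], probs
```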
- the mobile application 111 may further analyze facial features and landmarks. Similar to face shape, the facial features and landmarks of the captured face of the user may be detected using a machine learning model.
- the machine learning model may be trained to detect facial landmarks.
- a CNN architecture similar to FIG. 9 may be used for face landmark detection. Other approaches to classification may also be used.
- FIG. 10 is a diagram of a deep learning neural network for face landmark detection.
- the deep learning neural network is a convolution neural network.
- residual connections may be included.
- inverted residual structures may be included in which residual connections are made to earlier layers in the network.
- the network is provided as two stages, 1003 and 1005.
- the first stage 1003 is a convolution stage for performing feature extraction.
- the second stage 1005 performs prediction in regions of interest.
- the architecture of the first stage 1003 includes a convolution section 1003a that, provided an input face image 1001, performs convolution and max pooling operations.
- the convolution section 1003a is connected to an inverted residual structure 1003b.
- a mask layer 1003c is connected to the inverted residual structure 1003b.
- the size of the mask layer 1003c is based on the number of landmarks (e.g., 2 × L, where L is the number of landmarks).
- the mask layer 1003c encodes the spatial layout of the input object.
- the architecture of the second stage 1005 includes an inverted residual structure 1005b that is connected to the inverted residual structure 1003b of the first stage 1003. Also, the mask layer 1003c of the first stage 1003 is applied to the results of the inverted residual structure 1005b and provided as input for performing region of interest cropping in ROI and Concatenate Block 1011.
- the ROI and Concatenate Block 1011 is based on the number of channels in the inverted residual structure 1005b and the number of landmarks.
- a predict block 1013 predicts landmarks and approximate locations in the mask layer 1005c.
- the predictions for the regions of interest of the second stage 1005 are combined with the landmarks estimated by mask 1003c for the total image to obtain output landmarks in output layer 1007.
- the landmarks for a face include eyes, nose, lips, cheekbones, areas around the eyes including eye brows, eye lids, as well as hair.
- landmarks may include possible facial anomalies.
- each layer and the number of layers may depend on parameters including the desired accuracy, hardware to perform the machine learning model, and the length of time to train the machine learning model.
- the machine learning model may be trained using the machine learning service 109 of the cloud service 105.
- Analysis of facial features, S803, may further include detection of lip shape S805, eyelid shape S807, and hair style S809.
- the detected landmarks can be used to calculate contours of the lips, eyes, and hair style.
- other facial features such as skin color S811 and skin texture S813 may also be determined from the face image.
- Skin color and skin texture may be determined using image processing techniques. Types of skin tone may include, without limitation, fair, light, medium, and deep. Types of skin texture may include, without limitation, soft, smooth, coarse, and leathery.
- An additional feature of a facial image may be lighting (image brightness).
- image lighting may be determined using image processing techniques.
- Brightness may be defined as a measure of the total amount of perceived light in an image.
- brightness of an image may be increased or decreased from its initial as-captured brightness level.
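- Perceived brightness and a simple brightness adjustment may be sketched as below; the Rec. 601 luma weights are a common convention assumed here, not specified by the disclosure, and the gain-based adjustment is one of several possible approaches.

```python
def image_brightness(pixels):
    """Mean perceived brightness of a list of RGB pixels,
    using the common Rec. 601 luma weights."""
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def adjust_brightness(pixels, gain):
    """Scale every channel by `gain` (clipped to 255) to raise
    or lower the image brightness."""
    return [tuple(min(255, round(c * gain)) for c in px) for px in pixels]
```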
- past look preferences may be retrieved from a database 107.
- Past look preferences can include characteristics of a digital makeup, including color, coverage, shade, finish, and application gesture that was used for a past look.
- Past user preferences may include digital makeup characteristics for a particular part of the face, and can also include a choice of digital makeup that was applied for a particular look.
- the user interface may include a function to select a virtual makeup.
- FIG. 11 is an exemplary user interface for selecting a virtual makeup to apply.
- a user interface screen 1100 may include a message 1101 with instructions for selecting a virtual makeup using a pointer 1103.
- the mobile application 111 may perform a function to activate the selected virtual makeup.
- the virtual makeup may be activated by retrieving characteristics of the virtual makeup, including applicator swipe gesture(s) and typical area(s) of a face where the virtual makeup may be applied.
- data associated with the virtual makeup may include coverage, shade, and finish.
- the mobile application 111 may display a message asking the user if they want a recommendation on how to apply the virtual makeup.
- An example of a user interface to display a request for recommendation message is shown in FIG. 12.
- FIG. 12 is an exemplary user interface for choosing between user applying makeup and the mobile application recommending how to apply makeup.
- the user interface 1200 may display a button 1203 for selecting a recommendation on how to apply the virtual makeup 1205.
- the user interface 1200 may also display, as an alternative, a button 1201 instructing the user to swipe a stylus or mouse to apply the virtual makeup on the face image 1207.
- FIG. 13 is an exemplary user interface on a mobile device 101.
- the user interface may display the face image 1301 and a digital makeup palette 1303.
- a user may select a color 1303b from the digital makeup palette 1303 to apply a virtual makeup 1303a to a specific location 1305 using a swipe gesture of a stylus 1310.
- the screen on the mobile device 101 may be a touch screen that includes a zoom function that can be used to expand or contract the face image 1301 in order to adjust a view of a facial feature.
- the mode of the touch screen may be switched to allow for use of the stylus to apply the virtual makeup to the face image without moving the image.
- the mobile application 111 indicates a location on the face image where the virtual makeup is to be applied.
- FIG. 14 is a diagram for a recommender system.
- the recommender system 1400 may be used for showing how to apply a virtual makeup (S319 in FIG. 3A).
- the recommender system 1400 works off of an indexed database 1405 of image data and makeup filters.
- the recommender system 1400 includes a recommendation engine 1407 that retrieves and ranks recommendations.
- a recommendation may be for the look that the user has input in step S301 and the virtual makeup.
- the recommendations may be retrieved based on user preferences or favorites.
- Personal user preferences may be makeup characteristics that a user has entered when the App 111 is first set up.
- Favorites may be makeup characteristics that a user has flagged as being a favorite.
- Personal preferences and favorites may be for particular parts of a face or for the entire face.
- the recommendation engine 1407 may use a look-feature matrix.
- FIG. 15 illustrates a non-limiting look-feature matrix in accordance with an exemplary aspect of the disclosure.
- the look-feature matrix in FIG. 15 is a partial matrix showing two types of virtual makeup for the sake of brevity. Other types of virtual makeup may be included in the matrix, including, but not limited to, foundation, mascara, concealer, cheek powder, eyebrow pencil, to name a few.
- the look-feature matrix may be stored in the App 111 in the mobile device to be compared to a vector of desired features.
- the desired features may be current user preferences and may take into account the user’s current experience level and a desired look.
- the recommendation engine 1407 may use one or more similarity metrics and a scoring algorithm to rank recommendations.
- the recommendation engine 1407 may generate a set of features that elevate recommendations in order to encourage creativity by changing certain characteristics for a virtual makeup from those that are recommended. For example, if the recommendation engine 1407 ranks a recommendation high among retrieved recommendations, it may then change one or more characteristics in order to increase a similarity score. Alternatively, the recommendation engine 1407 may change one or more characteristics in a retrieved recommendation, such as shade or finish, to one up or one down (e.g., change a shade to one level up or one level down from the stored shade). In one or more embodiments, the recommendation engine 1407 may adjust the application gesture to be more or less precise based on the experience level of the user.
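- One plausible similarity metric for comparing a desired-feature vector against rows of the look-feature matrix is cosine similarity; the numeric vector encoding of looks and the function names are assumptions for illustration, as the disclosure does not specify the metric.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_recommendations(desired, look_feature_matrix):
    """Return look names ranked by similarity to the desired features.

    look_feature_matrix maps a look name to its feature vector.
    """
    scored = [(cosine_similarity(desired, feats), name)
              for name, feats in look_feature_matrix.items()]
    return [name for score, name in sorted(scored, reverse=True)]
```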
- the recommendation engine 1407 may output one or more recommendations to a recommendation user interface (S319).
- the recommendation user interface (S319) may display a sequence of video frames that demonstrate application of a selected recommendation.
- the video frames for the recommendations may be generated using the face image of the user and one or more makeup filters stored in database 1405.
- the indexed database 1405 may provide one or more makeup filters to be used to create the sequence of video frames.
- FIG. 16 illustrates a blending process that may be used to create a face image based on a desired feature and an original feature in the face image.
- the blending of a facial feature is accomplished as follows.
- 1. The desired feature 1601 is recolored, 1603, to match the color of the original feature and obtain a recolored feature 1605.
- 2. The recolored feature 1605 is multiplied by a feature mask 1607.
- 3. The original feature 1609 is multiplied by the inverse 1611 (i.e., one minus each of the mask values, which range from 0 to 1) of the feature mask, and the two products are summed to form the blended feature.
- the border of the original feature may have been determined during the face analysis step, S309.
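- The blending steps above reduce to the per-pixel formula recolored × mask + original × (1 − mask), which may be sketched as follows; the list-of-lists image representation is an assumption for illustration.

```python
def blend_feature(recolored, original, mask):
    """Combine a recolored desired feature with the original feature.

    Implements recolored * mask + original * (1 - mask) per pixel,
    with mask values in [0, 1]: 1 keeps the recolored feature,
    0 keeps the original feature.
    """
    return [
        [m * r + (1.0 - m) * o for r, o, m in zip(rrow, orow, mrow)]
        for rrow, orow, mrow in zip(recolored, original, mask)
    ]
```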
- a sequence of video frames may be generated as an animation to demonstrate how to apply virtual makeup to a particular face part.
- the user may mimic the demonstrated application of the virtual makeup to apply the makeup by making one or more swipes at the facial location of the face image using the stylus or mouse that is configured to draw as a specific type of applicator.
- FIG. 17 is a flowchart for a step of applying virtual makeup in accordance with an exemplary aspect of the disclosure.
- the user may interact with the user interface to select or touch a starting point for applying virtual makeup.
- the user may perform a gesture to apply the virtual makeup.
- the gesture may be a swipe motion, a line draw motion, or a tap motion.
- a swipe motion may be made, for example, in a case of applying mascara to eye lashes.
- a thicker applicator may be used in a swipe motion to apply wider strokes such as for eye shadow.
- a line draw motion may be used, for example, to apply an eye liner.
- a line draw motion with a thicker line may be used to apply lipstick.
- a tap motion may be used to apply a face powder.
- gestures may be analyzed based on level of experience of the user to determine whether the gesture was applied in error, i.e., as a mistake.
- for a novice user, a greater amount of error may be allowed than for an experienced user.
- a gesture that is outside a tolerance amount may be judged as a mistake for an experienced user, whereas the tolerance amount may be greater for a novice user.
- when a gesture falls outside the tolerance amount for the user's experience level, the gesture may be determined as being an error.
- the App 111 determines whether the gesture has been applied in error, i.e., as a mistake.
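- The experience-based tolerance judgment described above can be sketched as follows; the numeric tolerances and the deviation-in-pixels measure are hypothetical, as the disclosure does not specify values.

```python
# Hypothetical tolerances, in pixels of deviation from the intended
# stroke; a novice is allowed a greater amount of error.
GESTURE_TOLERANCE_PX = {
    "beginner": 20.0,
    "experienced": 10.0,
    "expert": 5.0,
    "professional": 2.0,
}

def is_gesture_mistake(deviation_px, experience_level):
    """Judge a gesture as a mistake when its deviation exceeds the
    tolerance for the user's experience level."""
    tolerance = GESTURE_TOLERANCE_PX[experience_level]
    return deviation_px > tolerance
```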
- a notification message may be displayed to notify the user that the gesture may have been applied as a mistake, and/or ask the user to verify that the gesture has been applied satisfactorily.
- the App 111 may provide the user with an option, in S1711, to redo the application of the virtual makeup. When there is no mistake (NO in S1707) or the user chooses not to redo the virtual makeup (NO in S1711), the App 111 goes to the next step S323.
- the areas and swipe movements may be limited or controlled to stay within facial features.
- the mobile application 111 may detect the location as being within a facial feature.
- a swipe may be drawn on the screen, but without drawing outside the boundary of the facial part, for example, as determined in the face analysis step, S309.
- Drawing on the screen may be performed in accordance with characteristics of the makeup product, including coverage, shade, and finish. Drawing on the screen may be performed in accordance with common application gestures and facial areas.
- the mobile application 111 may record in a memory 202 of a mobile device 101, 103, the areas and swipe movements as the user applies the virtual makeup.
- FIG. 18 is a flowchart of a step of recording areas and swipes while applying makeup.
- the mobile application 111 may track and record each step and associated data in a memory, including a location on the face image where the virtual makeup is applied and the number of swipes.
- the mobile application 111 analyzes the recorded locations and swipes of the virtual makeup and characteristics of the virtual makeup in order to estimate problem areas or best features of a user’s face. The locations may be mapped to facial features.
- FIG. 19 is a flowchart of a step of analyzing a user’s steps in applying makeup to estimate problem areas or best features.
- the mobile application 111 may analyze makeup swipes to identify potential problem areas.
- Potential problem areas may include blemishes, scars, age spots, and forms of hyperpigmentation.
- Potential problem areas may be facial areas that a user believes to be a problem, or unwanted feature. In other words, potential problem areas may be areas that a user wishes to cover up or alter in appearance.
- the mobile application 111 may identify a potential problem area by detecting an unusual swipe gesture in a particular location of a facial feature.
- the unusual swipe gesture may include an abrupt change in direction or an abrupt change in force that was not made by mistake.
- the mobile application 111 may identify a potential problem area by way of detecting that the user is applying a different virtual makeup, or alternative color, from the digital makeup palette (i.e., virtual makeup with different coverage characteristic and/or different shade), to a particular facial area.
- the mobile application 111 may analyze makeup swipes to identify best facial features. Best facial features may include cheekbones, eye color, eyelashes, lip shape, or any feature that a user wishes to emphasize.
- the mobile application 111 may detect a best facial feature by detecting a change in application of makeup to a facial feature that is different, by a threshold amount, from an average application of makeup to the same facial feature. For example, the mobile application 111 may detect a best facial feature by detecting application of a color that is of a shade and/or finish that is different from a typical shade and/or finish of the color that would be applied to the facial area. In the case of eye color, the mobile application 111 may detect that eye color is a best facial feature by detecting application of a particular eye shadow color.
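- The threshold-based detection of best facial features might be sketched as below; the integer shade-level scale, the threshold value, and the function names are assumptions for illustration.

```python
def detect_emphasis(applications, typical, threshold=2):
    """Identify facial features a user may consider 'best features'.

    applications and typical map a feature name to a shade level
    (an integer scale assumed here); a difference of `threshold`
    or more levels from the typical application flags the feature.
    """
    return [feature for feature, level in applications.items()
            if abs(level - typical.get(feature, level)) >= threshold]
```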
- the mobile application 111 may compare identified problem areas and best facial features with previous stored recommendations.
- the mobile application 111 may determine that there may be some new problem areas, or that some problem areas are no longer possible problem areas.
- the mobile application 111 may raise the importance of problem areas that have previously been considered as potential problem areas.
- the results of the comparison may be used to adjust the recommendations such that the recommendation engine 1407 will assign a higher score to a recommendation that has a verified problem area.
- Newly identified problem areas and best facial features, or areas that are no longer considered potential problem areas or best facial features, may be used to adjust recommendations when there is sufficient likelihood to support the change.
- a user may apply virtual makeup from the digital makeup palette in a manner that corrects a problem area or that emphasizes best features.
- a problem area may be corrected by applying a filter for blurring an imperfection in a problem area. For example, a blemish may be made less noticeable by blurring the region in the face image containing the blemish.
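As an illustration of such a blurring filter, a plain 3x3 box blur restricted to the problem region could look as follows (a sketch only, not the filter actually used by the mobile application 111):

```python
def box_blur_region(img, x0, y0, x1, y1):
    """Blur only the rectangle [y0:y1, x0:x1] of a 2D grayscale image
    (list of lists), averaging each pixel with its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

skin = [[200] * 3 for _ in range(3)]
skin[1][1] = 20                        # dark blemish pixel
blurred = box_blur_region(skin, 1, 1, 2, 2)
print(blurred[1][1])                   # 180: blemish pulled toward skin tone
print(blurred[0][0])                   # 200: pixels outside the region untouched
```

A production filter would more likely use a Gaussian kernel on color images, but the region-limited averaging idea is the same.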
- potential problem areas may be facial areas that a user believes to be a problem, or unwanted feature.
- Best facial features may include cheekbones, eye color, eyelashes, lip shape, or any feature that a user wishes to emphasize.
- the mobile application 111 may store verified problem areas and verified best facial features and user makeup application as future custom recommendations in the database 1405.
- the user may choose to repeat steps of applying a virtual makeup for another virtual makeup. After all desired virtual makeup has been applied, the user may select, (YES in S333), to save, in S335, the look that has been created in the database 107.
- the user may also choose (YES in S337) to move/publish the look, in S339, that has been created, to a social media platform or other platform having live video.
- the look may be stored as a makeup filter that may be applied to another face image.
- FIG. 20 is an exemplary user interface for storing a makeup look.
- the user interface 2000 may display the finished face image 2001 and provide a button 2003 that is for a function to save the finished face image.
- the finished face image may be stored as the underlying face image and one or more filters that may be applied to the underlying face image to recreate the finished face image.
- the finished face image may be stored as the underlying face image and the recorded swipes of makeup product or products.
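A minimal sketch of this second storage format (underlying image plus recorded swipes); the class and field names are hypothetical, not taken from this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Swipe:
    product_id: str     # which digital makeup product was used
    color: tuple        # RGB shade that was applied
    path: list          # (x, y) points of the recorded gesture

@dataclass
class MakeupLook:
    base_image: str                        # key of the underlying face image
    swipes: list = field(default_factory=list)

    def replay_on(self, new_base):
        """Recreate the look on another base image by replaying the
        recorded swipes (actual rendering is left to the application)."""
        return MakeupLook(base_image=new_base, swipes=list(self.swipes))

look = MakeupLook("face_001.png")
look.swipes.append(Swipe("lipstick_red", (220, 20, 60), [(10, 40), (30, 40)]))
copy = look.replay_on("face_002.png")
print(copy.base_image, len(copy.swipes))   # face_002.png 1
```

Storing swipes rather than flattened pixels is what lets the same look be reapplied to another face image, as described below for makeup filters.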
- the user interface 2000 may further provide a button 2005 that is for a function to move the finished face image to a platform providing live video or still images, such as a social media platform or video conferencing platform. Examples of social media platforms include Facebook, LinkedIn, Instagram, YouTube, Snapchat, and TikTok, to name a few. Examples of video conferencing platforms include Microsoft Teams, FaceTime, Google Hangouts or Google Meet, and Zoom, to name a few.
- the one or more makeup filters for recreating the finished face image may be provided to the social media platform or video conferencing platform.
- the one or more filters may be applied to another base image to obtain a new finished face image.
- the user may forward the digital makeup palette and captured face image to another user.
- S321 to S327 may be performed while the other user performs makeup application.
- the other user may be a person that has a higher level of experience in applying makeup, or a person that the original user believes may create a type of makeup look that the original person may prefer.
- FIG. 21 is a flowchart of a method of custom application of a digital palette.
- the user may be instructed to capture an image, images, or video of the user’s face.
- the camera 231 of the mobile device 101, or an external camera may be used to capture an image or video of the user’s face.
- the mobile application 111 may analyze the captured face of the user.
- FIG. 22 is an exemplary user interface for indicating status of the creation of a custom makeup application.
- FIG. 8, as described above is a flowchart of the face analysis step in more detail.
- FIG. 9, as described above, is a block diagram of a CNN for classifying face shape.
- FIG. 10 is a diagram of a deep learning neural network for face landmark detection.
- one or more makeup filters may be selected/retrieved from the database 107 based on the facial features and past look preferences determined by the face analysis (S2103 and FIG. 8).
- Some stored makeup face filters may be filters that have been previously created by the user (upon selecting "Do it yourself" in S303). Some makeup filters may be for common looks.
- FIG. 23 is a flowchart for a method of selecting makeup filters.
- the face shape from the results of the analysis in S2103 is obtained.
- the landmarks from the results of the analysis in S2103 are obtained.
- features of the skin, hair, eyes, face coloring and lighting are obtained from the analysis in S2103.
- past look preferences for the digital makeup palette may be obtained.
- possible facial filters for the landmarks, the face shape, skin color, hair style, eyelid shape, past preferences are retrieved from the database 107.
- a subset of the retrieved facial filters may be selected.
- Selection criteria may include random selection among the possible facial filters, selection of facial filters that best meet past look preferences, or selection of at least one facial filter that is unlike past look preferences, in order to give the user a custom look while still offering a choice of a different creative look.
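These selection criteria could be sketched as follows; the tag-based preference matching and the scoring scheme are illustrative assumptions, not part of this disclosure:

```python
import random

def select_filters(candidates, preferences, k=3, seed=None):
    """Pick k filters: the best matches to past look preferences, plus
    one deliberately unlike them to offer a different creative look."""
    rng = random.Random(seed)
    scored = sorted(candidates,
                    key=lambda f: -len(set(f["tags"]) & preferences))
    picks = scored[:k - 1]
    unlike = [f for f in scored[k - 1:] if not set(f["tags"]) & preferences]
    picks.append(rng.choice(unlike) if unlike else scored[k - 1])
    return picks

candidates = [
    {"name": "natural glow", "tags": ["natural"]},
    {"name": "smoky eye", "tags": ["bold"]},
    {"name": "soft matte", "tags": ["natural", "soft"]},
]
picks = select_filters(candidates, {"natural"}, k=2, seed=0)
print([f["name"] for f in picks])   # ['natural glow', 'smoky eye']
```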
- the retrieved makeup filters may be overlaid on a face image to obtain one or more custom looks.
- the overlay process may include aligning the makeup filters based on the face shape and facial landmarks.
- the blending process of FIG. 16 may be used to perform the overlay process by creating a face image based on a desired feature and an original feature in the face image.
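Per-pixel alpha blending is one common way to realize such an overlay; as a sketch (the alpha value and colors are illustrative):

```python
def blend_pixel(base, makeup, alpha):
    """Alpha-blend one RGB makeup pixel onto the base face pixel."""
    return tuple(round(alpha * m + (1 - alpha) * b)
                 for b, m in zip(base, makeup))

skin = (210, 170, 150)
blush = (230, 110, 120)
print(blend_pixel(skin, blush, 0.4))   # (218, 146, 138)
print(blend_pixel(skin, blush, 0.0))   # (210, 170, 150): no makeup applied
```

Varying alpha across the filter region gives soft edges, which is the effect a blending process like that of FIG. 16 aims for.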
- the user may select, (YES in S2109), to save, in S2111, the looks created by the mobile application 111 in the database 107.
- the user may also choose (YES in S2113) to move/publish a makeup look, in S2115, that has been created, to a social media platform or video conferencing platform.
- FIG. 24 is an exemplary user interface for storing makeup looks.
- the user interface 2400 may display the finished face images 2401 and provide buttons 2403 that are for a function to save the respective finished face image.
- the finished face image may be stored as the underlying face image and one or more makeup filters that may be applied to the underlying face image to recreate the finished face image.
- the finished face image may be stored as the underlying face image and the recorded swipes of makeup product or products.
- the user interface 2400 may further provide a button (not shown) that is for a function to move the finished face image to a social media platform or a video conferencing platform. Examples of social media platforms include Facebook, LinkedIn, Instagram, Snapchat, YouTube, and TikTok, to name a few. Examples of video conferencing platforms include Microsoft Teams, FaceTime, Google Hangouts or Google Meet, and Zoom.
- a form of machine learning such as reinforcement learning, may be used to learn what the user believes to be a problem area and what areas the user wishes to emphasize as a best facial feature.
- FIG. 25 is a block diagram of a type of reinforcement learning architecture. It is noted that various architectures and algorithms have been developed for reinforcement learning, including deep reinforcement learning, Q-learning, and Deep Q-Networks, to name a few. In this disclosure, a general description of reinforcement learning is provided and should be understood to apply to various approaches to reinforcement learning.
- reinforcement learning is a form of machine learning where the output is not required to be known in advance. Instead, actions output by an actor result in a reward that indicates whether the action was appropriate or not.
- a reinforcement learning system may involve an actor that instructs movement actions in an environment, and the choice of action may result in a reward in the form of a score of a certain value. The movement action places the environment into a new state. The score is fed back to the actor, which makes adjustments to its machine learning component.
- An example movement action may be one in which an actor in the environment makes a move to a new location and performs a task, where the task results in an increase in the actor's score value.
- the increase in score serves as a reinforcement that the movement action was beneficial.
- a next movement action may be one in which the actor in the environment makes a move that does not make it to the new location, and subsequently results in a negative score, or at least does not increase a score value.
- the decrease in score is fed back as a negative effect and the machine learning component may be adjusted to learn that the movement action instructed by the actor was not a good choice given the state of the environment.
- reinforcement learning can continue to adapt as the actor continues to instruct movement actions.
- an agent 2510, via an artificial neural network 2513, interacts with its environment 2520 in discrete time steps. At each time t, the agent 2510 receives an observation, which typically has an associated reward. The agent then chooses an action from a set of available actions, which is subsequently sent to the environment 2520. The environment 2520 moves to a new state and the reward associated with the transition is determined. The goal of a reinforcement learning agent 2510 is to collect as much reward as possible. The agent 2510 can (possibly randomly) choose any action as a function of the history of previous actions.
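As a generic illustration of the agent-environment loop described above, here is tabular Q-learning on a toy environment (this disclosure does not prescribe a specific algorithm; the environment and parameters are illustrative):

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: try actions, observe reward and next state
    from step(), and update value estimates from the reward signal."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (rng.randrange(n_actions) if rng.random() < epsilon
                 else max(range(n_actions), key=lambda i: q[s][i]))
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

# toy environment: action 1 moves toward the goal state and pays off
def step(state, action):
    if action == 1:
        done = state + 1 == 2
        return state + 1, (1.0 if done else 0.0), done
    return state, -0.1, False   # action 0: stay put, small penalty

q = q_learning(3, 2, step)
print(q[0][1] > q[0][0])   # True: moving forward is learned as better
```

The reward feedback plays the same role as the user's verbal yes/no answers in the makeup setting described below.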
- a reinforcement learning system may be arranged such that learning what the user believes to be a problem area and learning what areas the user wishes to emphasize as a best facial feature are provided as two reinforcement learning processes.
- FIG. 26 is a flow diagram of a machine learning system in accordance with an exemplary aspect of the disclosure.
- reinforcement learning generally performs learning through feedback of a reward 2520a.
- the feedback may be provided in the form of voice interaction with the mobile application 111 as the user applies a makeup product to a face image.
- the voice feedback may be provided using a microphone 103a, 241 and the feedback may be provided in response to questions and statements output through an audio circuit 242.
- the reinforcement learning system 2600 may take the form of multiple reinforcement learning models.
- One reinforcement learning model 2603 may detect a problem area based on one, or a series of swipes, 2601, of a makeup product to a face image.
- the reinforcement learning system 2600 may verify the detection of the problem area (i.e., feedback a reward) by asking a question, such as, “are you applying makeup to a problem area?”
- Another reinforcement learning model 2605 may detect a best facial feature based on one, or a series of swipes, 2601 of a makeup product to a face image.
- the reinforcement learning system 2600 may verify the detection of the best facial feature (i.e., feedback a reward) by asking a question, such as, “are you applying makeup to a special facial feature?”
- the reinforcement learning system may utilize information of the location of a problem area or best facial feature to provide a more specific question, such as, “are you applying makeup to a blemish?” or “are you applying makeup to emphasize your eye color?”
- an alternative approach may be to include a machine learning component to initially classify one or a series of swipes as being for a problem area, a best facial feature, or neither, and providing the result of the initial classification to either the reinforcement learning model 2603, the reinforcement learning model 2605, or neither model.
- the response by the user may be used to apply a reward to the reinforcement learning system.
- the reward may be a positive or a negative score depending on the user’s response.
- the score will be used to adjust parameters in the respective machine learning model 2603 or 2605.
- Another approach that performs continuous learning, similar to reinforcement learning, to detect a problem area or a best facial feature is regression analysis.
- An advantage of regression analysis is that it is fast to compute.
- models for nonlinear regression analysis are better suited to data with predictable structure. Makeup swipe data may be difficult to predict clearly, as swipes may be made for reasons other than problem areas or best features.
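As a sketch of why regression is fast to compute, an ordinary least-squares line fit has a closed-form solution that can cheaply be recomputed each time a new observation arrives (the data here is illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # points on y = 2x + 1
print(a, b)   # 2.0 1.0
```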
- the words “a,” “an” and the like generally carry a meaning of “one or more,” unless stated otherwise.
- the terms “approximately,” “approximate,” “about,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween.
- the augmented reality system includes a makeup objective unit including computation circuitry operably coupled to a graphical user interface configured to generate one or more instances of user selectable makeup objectives and to receive user-selected makeup objective information; a makeup palette unit operably coupled to the makeup objective unit, the makeup palette unit including computation circuitry configured to generate at least one digital makeup palette for a digital makeup product in accordance with the user-selected makeup objective information; and a makeup objective visualization unit including computation circuitry configured to generate one or more instances of a virtual try-on in accordance with the user-selected makeup objective information.
- the augmented reality system for makeup of feature (1) in which the computation circuitry of the makeup objective visualization unit is further configured to receive one or more digital images of the user including at least a portion of the user’s face, analyze the user’s face image to identify face parts, track and record, in a memory, at least one gesture by the user that applies the digital makeup product to the image of the user’s face, analyze the at least one gesture to estimate problem areas in the user’s face or to estimate an emphasis on specific facial features, and store the estimated problem areas or estimated emphasized facial features, together with the coverage, shade, and finish that were applied, in the memory.
- the augmented reality system for makeup of features (2) or (3) further including a touch screen, in which the at least one gesture by the user includes one or more swipes on the touch screen, and the computation circuitry of the makeup objective visualization unit is further configured to detect the one or more swipes and apply a selected color to a location in the image of the user’s face.
- the augmented reality system for makeup of feature (4) in which the computation circuitry of the makeup objective visualization unit is further configured to detect the one or more swipes on the touch screen and apply the selected color in an area of the image limited by a boundary of a face part that is at the location in the image of the user’s face.
- the augmented reality system for makeup of features (2) or (3) in which the computation circuitry of the makeup objective visualization unit is further configured to receive a user’s level of experience in applying makeup, detect the one or more swipes on the touch screen, apply the selected color in an area of the image of the user’s face at a location of a face part indicated by the swipes, wherein the face part has a boundary, and analyze the applied color to determine if the one or more swipes are outside a tolerance amount from the boundary, wherein the tolerance amount is based on the user’s level of experience in applying makeup.
- the augmented reality system for makeup of features (4) or (5) in which the touch screen is a three-dimensional touch screen that senses the amount of pressure being applied to the screen, the at least one gesture by the user includes a swipe on the three-dimensional touch screen at a certain pressure on the screen, and the computation circuitry is further configured to detect the one or more swipes and the pressure of the swipes, and apply the selected color to a location in the image of the user’s face at a thickness according to the pressure.
- the augmented reality system for makeup of features (2) or (3) in which the computation circuitry of the makeup objective visualization unit is further configured to analyze the gestures to estimate the problem areas using a problem area reinforcement learning model.
- the augmented reality system for makeup of features (2) or (3) in which the computation circuitry of the makeup objective visualization unit is further configured to analyze the gestures to estimate the emphasis of facial features using a best feature reinforcement learning model.
- the augmented reality system for makeup of features (2) or (3) in which the computation circuitry of the makeup objective visualization unit is further configured to use a gesture identification machine learning model to distinguish between a gesture for a problem area and a gesture for an emphasized facial feature.
- the augmented reality system for makeup of features (2) or (3) in which the computation circuitry of the makeup objective visualization unit is further configured to use an audio output function of a mobile device to ask the user whether they would like a recommendation on how to apply the digital makeup product to the image of the user’s face.
- An augmented reality system for makeup including a makeup objective unit including computation circuitry operably coupled to a graphical user interface configured to generate one or more instances of user selectable makeup objectives and to receive user-selected makeup objective information; a makeup palette unit operably coupled to the makeup objective unit, the makeup palette unit including computation circuitry configured to generate at least one digital makeup palette for a digital makeup product; and a makeup objective visualization unit including computation circuitry configured to analyze a user’s face to determine one or more of face shape, facial landmarks, skin tone, hair color, eye color, lip shape, eyelid shape, hair style and lighting, and automatically create one or more instances of a custom virtual try-on for a user in accordance with the user-selected makeup objective information and the at least one digital makeup palette generated based on the analysis of the user’s face.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180088811.XA CN116830073A (en) | 2020-12-30 | 2021-12-02 | Digital color palette |
KR1020237024133A KR20230117240A (en) | 2020-12-30 | 2021-12-02 | digital makeup palette |
JP2023540040A JP2024506454A (en) | 2020-12-30 | 2021-12-02 | digital makeup palette |
EP21835070.0A EP4272050A1 (en) | 2020-12-30 | 2021-12-02 | Digital makeup palette |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/138,031 US12136173B2 (en) | 2020-12-30 | 2020-12-30 | Digital makeup palette |
US17/138,031 | 2020-12-30 | ||
US17/137,970 | 2020-12-30 | ||
US17/137,970 US11321882B1 (en) | 2020-12-30 | 2020-12-30 | Digital makeup palette |
FR2107923A FR3125611A1 (en) | 2021-07-22 | 2021-07-22 | digital makeup palette |
FRFR2107904 | 2021-07-22 | ||
FRFR2107923 | 2021-07-22 | ||
FR2107904A FR3125612B1 (en) | 2021-07-22 | 2021-07-22 | DIGITAL MAKEUP PALETTE |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022146615A1 true WO2022146615A1 (en) | 2022-07-07 |
Family
ID=79164968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2021/061654 WO2022146615A1 (en) | 2020-12-30 | 2021-12-02 | Digital makeup palette |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4272050A1 (en) |
JP (1) | JP2024506454A (en) |
KR (1) | KR20230117240A (en) |
WO (1) | WO2022146615A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8908904B2 (en) * | 2011-12-28 | 2014-12-09 | Samsung Electrônica da Amazônia Ltda. | Method and system for make-up simulation on portable devices having digital cameras |
US20160093081A1 (en) * | 2014-09-26 | 2016-03-31 | Samsung Electronics Co., Ltd. | Image display method performed by device including switchable mirror and the device |
WO2016054164A1 (en) * | 2014-09-30 | 2016-04-07 | Tcms Transparent Beauty, Llc | Precise application of cosmetic looks from over a network environment |
US20160240005A1 (en) * | 2014-01-31 | 2016-08-18 | Empire Technology Development, Llc | Subject selected augmented reality skin |
US20180075524A1 (en) * | 2016-09-15 | 2018-03-15 | GlamST LLC | Applying virtual makeup products |
CN112036261A (en) * | 2020-08-11 | 2020-12-04 | 海尔优家智能科技(北京)有限公司 | Gesture recognition method and device, storage medium and electronic device |
Also Published As
Publication number | Publication date |
---|---|
KR20230117240A (en) | 2023-08-07 |
EP4272050A1 (en) | 2023-11-08 |
JP2024506454A (en) | 2024-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12136173B2 (en) | Digital makeup palette | |
US11854070B2 (en) | Generating virtual makeup products | |
JP6778877B2 (en) | Makeup parts creation device, makeup parts utilization device, makeup parts creation method, makeup parts usage method, makeup parts creation program, and makeup parts utilization program | |
US11776187B2 (en) | Digital makeup artist | |
US10799010B2 (en) | Makeup application assist device and makeup application assist method | |
TWI773096B (en) | Makeup processing method and apparatus, electronic device and storage medium | |
US20180075524A1 (en) | Applying virtual makeup products | |
US20160357578A1 (en) | Method and device for providing makeup mirror | |
TWI573093B (en) | Method of establishing virtual makeup data, electronic device having method of establishing virtual makeup data and non-transitory computer readable storage medium thereof | |
US9589178B2 (en) | Image processing with facial features | |
US11961169B2 (en) | Digital makeup artist | |
CN108932654A (en) | A kind of virtually examination adornment guidance method and device | |
US11321882B1 (en) | Digital makeup palette | |
CN112083863A (en) | Image processing method and device, electronic equipment and readable storage medium | |
EP4260172A1 (en) | Digital makeup artist | |
WO2022146615A1 (en) | Digital makeup palette | |
US20180181110A1 (en) | System and method of generating a custom eyebrow stencil | |
KR20020069595A (en) | System and method for producing caricatures | |
US20230101374A1 (en) | Augmented reality cosmetic design filters | |
FR3125613A1 (en) | digital makeup artist | |
JP2024537064A (en) | Augmented reality makeup design filters | |
CN115393552A (en) | Beauty makeup interaction platform providing digital makeup trial and makeup method | |
FR3125610A1 (en) | DIGITAL MAKEUP ARTIST |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21835070; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 2023540040; Country of ref document: JP
WWE | Wipo information: entry into national phase | Ref document number: 202180088811.X; Country of ref document: CN
ENP | Entry into the national phase | Ref document number: 20237024133; Country of ref document: KR; Kind code of ref document: A
NENP | Non-entry into the national phase | Ref country code: DE
ENP | Entry into the national phase | Ref document number: 2021835070; Country of ref document: EP; Effective date: 20230731