
WO2022159038A1 - A system and method for generating a 3d avatar - Google Patents

A system and method for generating a 3d avatar Download PDF

Info

Publication number
WO2022159038A1
WO2022159038A1 (PCT/SG2022/050034)
Authority
WO
WIPO (PCT)
Prior art keywords
user
avatar
background
data
images
Prior art date
Application number
PCT/SG2022/050034
Other languages
French (fr)
Inventor
Ee Ling BEH
Kean Lee LIM
Original Assignee
Buzz Arvr Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Buzz Arvr Pte. Ltd.
Publication of WO2022159038A1


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking



Abstract

The present invention provides a system and method for generating a 3D avatar, substantially in real-time. The system and method can be used for a variety of applications, for example, engagement sessions at pre-defined venues, virtual apparel/wearable device fittings, and the like. It should be noted that the pre-defined venues can be imaginary environments, digitally rendered real environments or actual environments. In some aspects, the 3D avatars are able to provide a representation of users in a particular environment.

Description

A SYSTEM AND METHOD FOR GENERATING A 3D AVATAR
Field of the Invention
The present invention relates to a system and method for generating a 3D avatar.
Background
The increasing gamification of user interfaces across digital platforms has led to a prevalence of avatar-based interactions for users on digital platforms, and correspondingly, a widespread acceptance of avatars amongst the users. Currently, many platforms have relied on avatars substantially based on pre-defined template forms, such as Facebook Avatar, Bitmoji, Apple Memoji, and so forth.
It should be appreciated that currently, the avatars generated are only in 2D, and are not used in an environment which enables the avatars to be used in an augmented reality manner. This is due to data processing constraints which prevent the generation of enhanced avatars. In addition, as mentioned previously, the use of predefined template forms limits the extent to which the avatars can be customised, and the extent of interaction between avatars and real-life aspects/features is also limited.
There is also an increasing emphasis on digital environments, for example, a metaverse, a game universe, a simulated world, and the like, whereby avatars are typically used when navigating the digital environments.
Moreover, the increasing acceptance of non-fungible tokens (NFTs) is leading to a practice of valuing avatars using NFTs. This is leading to substantial creative effort being expended to create avatars with appeal to third parties, and consequently providing a way to derive financial gain from the creation of avatars, akin to the creation of an avatar creative industry.
Summary
In a first aspect, there is provided a system for generating a 3D avatar, the system including one or more data processors configured to: capture, at a device, images of a user and a surrounding environment of the user; transmit, from the device, data of the images; receive, at a central server, the data; process, at the central server, the data; initiate, at the device, a background on which the 3D avatar is overlaid; display, at the device, the 3D avatar and the background; and control, at the device, the 3D avatar to enable interaction with the background.
It is preferable that the device is selected from either a user device or a display device.
In a second aspect, there is provided a data processor implemented method for generating a 3D avatar, the method comprising: capturing, at a device, images of a user and a surrounding environment of the user; transmitting, from the device, data of the images; receiving, at a central server, the data; processing, at the central server, the data; initiating, at the device, a background on which the 3D avatar is overlaid; displaying, at the device, the 3D avatar and the background; and controlling, at the device, the 3D avatar to enable interaction with the background.
It is preferable that the device is selected from either a user device or a display device.
In a third aspect, there is provided a user device configured for generating a 3D avatar, the user device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.
There is also provided a display device configured for generating a 3D avatar, the display device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.
Finally, there is provided a central server for generating a 3D avatar, the central server including one or more data processors configured to: receive, from a device, data of images of a user and a surrounding environment of the user; process the data; and transmit, to the device, processed data to enable display of the generated 3D avatar overlaid on a background.
It is preferable that the device is selected from either a user device or a display device.
It will be appreciated that the broad forms of the invention and their respective features can be used in conjunction, interchangeably and/or independently, and reference to separate broad forms is not intended to be limiting.
Brief Description of the Drawings
A non-limiting example of the present invention will now be described with reference to the accompanying drawings, in which:
FIG 1 is a flow chart of an example of a method for generating a 3D avatar;
FIG 2 is a schematic diagram of an example of a system for generating a 3D avatar;
FIG 3 is a schematic diagram showing components of an example user device of the system shown in FIG 2;
FIG 4 is a schematic diagram showing components of an example mass display device of the system shown in FIG 2;
FIG 5 is a schematic diagram showing components of an example central server shown in FIG 2;
FIGs 6A and 6B are examples of a 3D avatar generated using the method of FIG 1;
FIG 7 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a first example background;
FIG 8 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a second example background; and
FIG 9 shows a flow chart of an example of tasks carried out by a user device/display device during the method of FIG 1.
Detailed Description
The present invention provides a system and method for generating a 3D avatar, substantially in real-time. The system and method can be used for a variety of applications, for example, engagement sessions at pre-defined venues, virtual apparel/wearable device fittings, and the like. It should be noted that the pre-defined venues can be imaginary environments, digitally rendered real environments or actual environments. The 3D avatars are modelled substantially on physical attributes and wearables of users, such as, for example, facial features, physique, clothing, accessories and so forth. In some aspects, the 3D avatars are able to provide a representation of users in a particular environment. For the purpose of illustration, it is assumed that the method can be performed at least in part amongst one or more data processing devices such as, for example, a mobile phone, a display device, a central server, or the like. Typically, the central server will be configured to carry out a majority of the processing tasks, with the mobile phone and the display device being configured to display outputs from the central server.
An example of a broad overview of a method 100 for generating a 3D avatar will now be described with reference to FIG 1.
At step 105, at least one image of a user and a surrounding environment of the user is captured. The more images that are captured, the more physical attributes of the user can be determined for use when generating a 3D avatar for the user. It is desirable for images containing frontal and side views of the user to be captured to aid in improving a likeness of the 3D avatar to the user. For example, the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth. In addition, other than physical attributes of the users, the clothing and/or accessories being worn by the users can also be determined from the captured images so that the 3D avatar being generated appears outfitted with similar clothing and/or accessories as the user. Typically, the at least one image is captured with a user device like a camera on a mobile phone, or a camera coupled to a display device.
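By way of illustration only, the following Python sketch shows one way the capture of step 105 could be realised on a user device; the choice of OpenCV and MediaPipe, the prompting loop and the function names are assumptions of this sketch, not part of the disclosure.

```python
import cv2
import mediapipe as mp

def capture_user_images(views=("frontal", "left side", "right side")):
    """Capture one frame per requested view from the default camera."""
    cap = cv2.VideoCapture(0)
    frames = {}
    for view in views:
        print(f"Show your {view} view, then press any key to capture")
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("capture", frame)
            if cv2.waitKey(1) != -1:  # any key confirms this pose
                frames[view] = frame
                break
    cap.release()
    cv2.destroyAllWindows()
    return frames

def extract_face_landmarks(image_bgr):
    """Detect facial points (eyes, nose, jawline, etc.) in one image."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face found in this view
    # Each landmark is a normalised (x, y, z) point on the face surface.
    return [(lm.x, lm.y, lm.z) for lm in result.multi_face_landmarks[0].landmark]
```

The returned landmark list is a stand-in for the facial points (eyes, nose, jawline and so forth) referred to above.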
At step 110, data of the at least one image of the user and surrounding environment is transmitted to a central server. In some embodiments, user credentials to access a third party portal are also provided to the central server from the user device. It should be appreciated that the user credentials are typically usable in the aforementioned manner with the consent of the user. It should be appreciated that the central server can comprise more than one data processing device. An example embodiment of the central server will be provided in a subsequent paragraph.
At step 115, the data of the at least one image of the user and surrounding environment is processed at the central server to generate a 3D avatar. The physical attributes, the clothing and/or accessories of the user that are obtained from the data are used to generate the 3D avatar. It should be noted that the 3D avatar is typically a representation of the user which provides amusement and/or entertainment and/or virtual sampling of goods. Substantial processing is carried out at the central server, which broadly comprises determining the physical attributes, the clothing and/or accessories of the user from the at least one image, and using that information to generate the 3D avatar with at least some likeness to the user while wearing similar clothing and/or accessories. It should be appreciated that the substantial processing relies on both hardware and software of the central server to ensure that the 3D avatar is generated within a short period of time, typically less than five seconds. Most of the data processing to generate the 3D avatar is carried out at the central server, and not at devices configured for showing the 3D avatar. For example, the substantial processing can include machine learning over all the images processed at the central server, such that the 3D avatar can be generated in a predictive manner based on past images that have been processed at the central server for the user, for example, whenever there are insufficient images of the user in a particular item of clothing. The machine learning can also enable an enhanced likeness of the user to be generated in avatar form. Furthermore, the machine learning is also able to aid in shortening the time to generate the 3D avatar.
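A minimal sketch of the transmission of step 110 follows, assuming a hypothetical HTTP endpoint on the central server; the URL, field names and response shape are illustrative.

```python
import cv2
import requests

SERVER_URL = "https://central-server.example.com/avatar"  # hypothetical endpoint

def send_images(frames, user_id):
    """JPEG-encode each captured view and POST it to the central server."""
    files = {}
    for view, frame in frames.items():
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            files[view] = (f"{view}.jpg", buf.tobytes(), "image/jpeg")
    resp = requests.post(
        SERVER_URL,
        files=files,
        data={"user_id": user_id},  # field name is an assumption
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a handle/URL for the generated 3D avatar
```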
In some embodiments, at step 117, the central server is able to use the user credentials of the user to obtain a purchase history from third party platforms (e.g., e-commerce platforms), whereby the purchase history can be from a pre-defined category of goods/services like clothing and/or accessories. The purchase history can be desirable as it can be employed in a product selection to enhance, for example, purchase intent, sales, user engagement, and so forth. This will be evident in a later portion of the description. At step 120, a background on which the 3D avatar is overlaid is selected. For example, the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth.
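A hedged sketch of step 117: with the user's consent, the server exchanges the supplied credentials for purchase history on a third party platform. The endpoint path, bearer-token flow and category filter are assumptions of this sketch; no real platform API is implied.

```python
import requests

def fetch_purchase_history(api_base, access_token, category="apparel"):
    """Request the user's past purchases in a pre-defined goods category."""
    resp = requests.get(
        f"{api_base}/v1/purchases",  # hypothetical third-party endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        params={"category": category},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a list of {item, size, colour, date} records
```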
At step 125, the 3D avatar is able to interact with the selected background on the device configured to show the 3D avatar. It should be appreciated that the 3D avatar interacts with the selected background in accordance with actions/gestures carried out by the user. This enhances the user’s perception of immersiveness in the selected background. For example, the user is able to be clothed/accessorized virtually in relation to the user’s 3D avatar, and the user may correspondingly make purchase decisions based on the virtual trying of clothes/accessories. In addition, the purchase history of the user may be deployed in a product selection, for example, to display related past purchases or designs/prints similar to their 3D avatar appearance, to enhance, for example, purchase intent, sales, user engagement, and so forth. Therefore, behavioural data of the user can also be shown.
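An illustrative sketch of the interaction of step 125, mapping the user's detected body landmarks onto the avatar so it moves with the user; the Avatar object and its set_joint method are hypothetical stand-ins for a real 3D rig.

```python
import cv2
import mediapipe as mp

def drive_avatar(avatar, camera_index=0):
    """Continuously map detected body landmarks onto the avatar's joints."""
    pose = mp.solutions.pose.Pose()
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            for idx, lm in enumerate(result.pose_landmarks.landmark):
                # Forward each normalised landmark to the rig (hypothetical API).
                avatar.set_joint(idx, (lm.x, lm.y, lm.z))
        cv2.imshow("user", frame)
        if cv2.waitKey(1) == 27:  # Esc ends the session
            break
    cap.release()
    cv2.destroyAllWindows()
```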
Finally, at step 130, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback. The recording can be stored at the user device or at the central server.
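A short sketch of the recording of step 130, assuming the composited avatar-on-background frames are available as numpy image arrays; the codec, path and frame size are illustrative.

```python
import cv2

def record_session(frames_iter, path="session.mp4", fps=30, size=(1280, 720)):
    """Write composited avatar-on-background frames out for later playback."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for frame in frames_iter:
        writer.write(cv2.resize(frame, size))  # normalise frame size
    writer.release()
```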
It should be appreciated that the method 100 enables benefits for both users and providers of the method 100. In some embodiments, the providers can be entities that provide a good and/or service to the users.
In relation to the user, the method 100 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories. The level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds. In addition, the users can also choose to monetize the 3D avatars that are generated, for example, as a digital asset with ownership rights being transferrable via NFT/cryptocurrency transactions.
In relation to the provider, the method 100 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that any on-site investment in hardware is minimal for the method 100 to be carried out at any location with connectivity to a data network, the provider also does not need to make a large financial investment to enable the carrying out of the method 100.
An example of a system 200 for generating a 3D avatar will now be described with reference to FIG 2.
In this example, the system 200 includes one or more user devices 220, one or more display devices 230, a communications network 250, a third party platform 280 (e.g., an e-commerce platform), and a central server 260. The one or more user devices 220 and the one or more display devices 230 communicate with the central server 260 via the communications network 250. The communications network 250 can be of any appropriate form, such as the Internet and/or a number of local area networks (LANs). Further details of respective components of the system 200 will be provided in a following portion of the description. It will be appreciated that the configuration shown in FIG 2 is not limiting and for the purpose of illustration only.
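For illustration, the FIG 2 topology can be captured in a small configuration object that the later sketches can share; every value shown here is an assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemConfig:
    """Wiring for the FIG 2 topology; every value is illustrative."""
    central_server_url: str = "https://central-server.example.com"  # hypothetical
    third_party_platform_url: str = "https://shop.example.com"      # hypothetical
    user_device_ids: List[str] = field(default_factory=list)
    display_device_ids: List[str] = field(default_factory=list)

config = SystemConfig(user_device_ids=["phone-001"],
                      display_device_ids=["tv-lobby-01"])
```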
User Device 220
The user device 220 of any of the examples herein may be a handheld computer device such as a smart phone with a capability to download and operate mobile applications, and be connectable to the communications network 250. The user device 220 can also be a VR headset. An exemplary embodiment of the user device 220 is shown in FIG 3. As shown, the user device 220 includes the following components in electronic communication via a bus 311:
1. a display 302;
2. non-volatile memory 303;
3. random access memory ("RAM") 304;
4. data processor(s) 301 ;
5. a transceiver component 305 that includes a transceiver(s);
6. an image capture module 310; and
7. input controls 307.
In some embodiments, an app 309 stored in the non-volatile memory 303 is required to enable the user device 220 to operate in a desired manner. For example, the app 309 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar. In some instances, the app 309 can be a web browser.
Although the components depicted in FIG 3 represent physical components, FIG 3 is not intended to be a hardware diagram; thus many of the components depicted in FIG 3 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 3.
Display Device 230
The display device 230 of any of the examples herein may be a television with a capability to download and operate mobile applications, and be connectable to the communications network 250. An exemplary embodiment of the display device 230 is shown in FIG 4. As shown, the display device 230 includes the following components in electronic communication via a bus 411:
1. a display 402;
2. non-volatile memory 403;
3. random access memory ("RAM") 404;
4. data processor(s) 401 ;
5. a transceiver component 405 that includes a transceiver(s);
6. an image capture module 410; and
7. input controls 407.
In some embodiments, an app 409 stored in the non-volatile memory 403 is required to enable the display device 230 to operate in a desired manner. For example, the app 409 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar. In some instances, the app 409 can be a web browser. In some instances, the user is able to control the display device 230 using another device wirelessly communicating with the display device 230, for example, the user’s mobile phone. The user’s mobile phone may be running the app 409 to provide access to an interface with the display device 230, or a web browser on the mobile phone may provide access to an interface with the display device 230.
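A possible sketch of this remote-control arrangement using WebSockets over the local network; the port number and the JSON command shape are assumptions of this sketch.

```python
import asyncio
import json
import websockets

async def display_device_listener(handle_command, port=8765):
    """Run on the display device 230: apply commands sent from the phone."""
    async def handler(ws):
        async for message in ws:
            handle_command(json.loads(message))  # e.g. {"action": "next_outfit"}
    async with websockets.serve(handler, "0.0.0.0", port):
        await asyncio.Future()  # serve until cancelled

async def send_command(device_ip, command, port=8765):
    """Run on the user's phone: send one control command to the display."""
    async with websockets.connect(f"ws://{device_ip}:{port}") as ws:
        await ws.send(json.dumps(command))
```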
Although the components depicted in FIG 4 represent physical components, FIG 4 is not intended to be a hardware diagram; thus many of the components depicted in FIG 4 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 4.
Central Server 260
The central server 260 is a hardware and software suite comprising preprogrammed logic, algorithms and other means of processing incoming information, in order to send out information which is useful to the objective of the system 200 in which the central server 260 resides. For the sake of illustration, hardware which can be used by the central server 260 will be described briefly herein.
The central server 260 can broadly comprise a database which stores pertinent information, and can process information packets from the user devices 220 and the display devices 230. In some embodiments, the central server 260 can be operated from a commercial hosted service such as Amazon Web Services (TM).
In one possible embodiment, the central server 260 can be represented in a form as shown in FIG 5.
The central server 260 is in communication with a communications network 250, as shown in FIG 5. The central server 260 is able to communicate with the user devices 220, the display devices 230, and/or other processing devices, as required, over the communications network 250. In some instances, the user devices 220 and the display devices 230 communicate via a direct communication channel (LAN or Wi-Fi) with the central server 260.
The components of the central server 260 can be configured in a variety of ways. The components can be implemented entirely by software to be executed on standard computer server hardware, which may comprise one hardware unit or different computer hardware units distributed over various locations, some of which may require the communications network 250 for communication.
In the example shown in FIG 5, the central server 260 is a commercially available computer system based on a 32 bit or a 64 bit Intel architecture, and the processes and/or methods executed or performed by the central server 260 are implemented in the form of programming instructions of one or more software components or modules 502 stored on non-volatile computer-readable storage 503 associated with the central server 260; a minimal sketch of such a server endpoint is given after the FIG 5 discussion below. The central server 260 includes at least one or more of the following standard, commercially available, computer components, all interconnected by a bus 505:
1. random access memory (RAM) 506; and
2. at least one central processing unit (CPU) 507.
Although the components depicted in FIG 5 represent physical components, FIG 5 is not intended to be a hardware diagram; thus many of the components depicted in FIG 5 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 5.
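As flagged above, here is a minimal sketch of a central-server endpoint covering the receive/process/respond loop of steps 110 to 115, using Flask for illustration; the route, the generate_avatar helper and the returned fields are assumptions, not the disclosed implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_avatar(images):
    """Hypothetical stand-in for the heavy ML pipeline of step 115."""
    return {"mesh_url": "/avatars/123/mesh.glb",
            "texture_url": "/avatars/123/texture.png"}

@app.route("/avatar", methods=["POST"])
def create_avatar():
    """Accept captured views from a device and return the generated avatar."""
    images = {name: f.read() for name, f in request.files.items()}
    if not images:
        return jsonify(error="no images supplied"), 400
    avatar = generate_avatar(images)  # attributes, clothing, accessories, etc.
    return jsonify(avatar), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```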
It should be appreciated that the system 200 enables benefits for both users and providers of the system 200, when the system 200 is used to carry out the method 100. In some embodiments, the providers can be entities that provide a good and/or service to the users.
In relation to the user, the system 200 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories. The level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds.
In relation to the provider, the system 200 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that any on-site investment in hardware is minimal for the system 200, the provider also does not need to make a large financial investment to enable the carrying out of the method 100.
Referring to FIGs 6A and 6B, there are shown examples of what a user sees on the user device 220 or the display device 230. A main portion 600 shows the 3D avatar, generated by the method 100 and/or the system 200, dressed in clothing similar to the user’s, while a sub-portion 610 shows a slightly delayed view of the user so that the user’s actions coincide with those of the 3D avatar shown in the main portion 600. It should be noted that FIGs 6A and 6B show a “no-background” situation.
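A small sketch of how the delayed sub-portion 610 could be produced: buffer the camera feed for roughly the avatar-generation latency so the two views stay in step. The delay figure is illustrative.

```python
from collections import deque

class DelayedFeed:
    """Hold camera frames for a fixed number of ticks before release."""
    def __init__(self, delay_frames=15):  # roughly 0.5 s at 30 fps, an assumption
        self.buffer = deque()
        self.delay_frames = delay_frames

    def push(self, frame):
        """Add the newest frame; return the one from delay_frames ticks ago."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None  # buffer still filling at session start
```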
Referring to FIG 7, there is shown another example of what a user sees on the user device 220 or the display device 230. A main portion 700 shows the 3D avatar, generated by the method 100 and/or the system 200, dressed in clothing similar to the user’s, while a sub-portion 710 shows the user interface for interacting with the 3D avatar. In this example, the sub-portion 710 shows a user interface for a user to change attire for the 3D avatar. Main menu 715 shows various types of clothing/accessories that can be changed on the 3D avatar, while sub-menu 720 shows various options available when an item from the main menu 715 is selected. It should be noted that the 3D avatar in the main portion 700 moves around in sync with movements of the user while the user is using the main menu 715 and the sub-menu 720. FIG 7 shows a virtual background.
Referring to FIG 8, there is shown another example of what a user sees on the display device 230. A main portion 800 shows the 3D avatar generated by the method 100 and/or the system 200. It should be noted that FIG 8 shows an actual background that can be the location where the user is located. In addition, FIG 8 shows an instance when the user uses a mobile phone 810 to access an interface to control the display device 230.
Further details will now be provided for various aspects of the method 100 and the system 200.
Referring to FIG 9, there is shown an example of a method 900 for generating a 3D avatar, particularly in relation to a process at the user device 220 or display device 230. At step 905, at least one image of a user and a surrounding environment of the user is captured by the user device 220 or display device 230. The more images that are captured, the more physical attributes of the user can be determined for use when generating a 3D avatar for the user. It is desirable for images containing frontal and side views of the user to be captured to aid in improving a likeness of the 3D avatar to the user. For example, the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth. In addition, other than physical attributes of the users, the clothing and/or accessories being worn by the users can also be determined from the captured images so that the 3D avatar being generated appears outfitted with similar clothing and/or accessories as the user. Typically, the at least one image is captured with a user device 220 like a camera on a mobile phone, or a camera coupled to a display device 230.
At step 910, data of the at least one image of the user and surrounding environment is transmitted to the central server 260. For example, the central server 260 can carry out substantial processing of the at least one image of the user using machine learning, such that the 3D avatar can be generated in a predictive manner based on past images that have been processed at the central server for the user, for example, whenever there are insufficient images of the user in a particular item of clothing. The machine learning can also enable an enhanced likeness of the user to be generated in avatar form. Furthermore, the machine learning is also able to aid in shortening the time to generate the 3D avatar. In some embodiments, user credentials to a third party portal are also provided to the central server 260 from the user device 220. It should be appreciated that the user credentials are typically usable in the aforementioned manner with the consent of the user.
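A hedged sketch of the predictive reuse described above: cache per-user appearance features from past sessions and fall back on them when the current capture is missing a view. The feature extractor producing the vectors is assumed to exist elsewhere.

```python
class AppearanceCache:
    """Per-user store of appearance features gathered over past sessions."""
    def __init__(self):
        self._store = {}  # user_id -> {view_name: feature_vector}

    def update(self, user_id, view, features):
        self._store.setdefault(user_id, {})[view] = features

    def complete(self, user_id, current_views):
        """Fill views missing from the current capture with cached ones."""
        merged = dict(self._store.get(user_id, {}))
        merged.update(current_views)  # fresh captures take precedence
        return merged
```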
At step 915, a background on which the 3D avatar is overlaid is selected at the user device 220 or display device 230. For example, the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth. At step 920, the generated 3D avatar is received from the central server 260 at the user device 220 or display device 230. It should be noted that the 3D avatar is typically a representation of the user which provides amusement and/or entertainment and/or virtual sampling of goods. Most of the data processing to generate the 3D avatar is carried out at the central server 260, and not at devices configured for showing the 3D avatar.
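An illustrative sketch of overlaying the received avatar render on the selected background using standard alpha blending; both inputs are assumed to be numpy image arrays of matching size.

```python
import numpy as np

def overlay_avatar(background_rgb, avatar_rgba):
    """Alpha-blend an RGBA avatar render onto an RGB background frame."""
    alpha = avatar_rgba[..., 3:4].astype(np.float32) / 255.0
    avatar_rgb = avatar_rgba[..., :3].astype(np.float32)
    out = alpha * avatar_rgb + (1.0 - alpha) * background_rgb.astype(np.float32)
    return out.astype(np.uint8)
```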
At step 925, the 3D avatar is able to interact with the selected background on the user device 220 or display device 230. It should be appreciated that the 3D avatar interacts with the selected background in accordance with actions carried out by the user. This enhances the user’s perception of immersiveness in the selected background. For example, the user is able to be clothed/accessorized virtually in relation to the user’s 3D avatar, and the user may correspondingly make purchase decisions based on the virtual trying of clothes/accessories. In addition, the purchase history of the user may be deployed in a product selection, for example, to display related past purchases or designs/prints similar to their 3D avatar appearance, to enhance, for example, purchase intent, sales, user engagement, and so forth. Therefore, behavioural data of the user can also be shown.
Finally, at step 930, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback.
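As a final sketch, the recording in step 930 could simply serialise the interaction timeline for later playback; the JSON layout below is an assumption, as the disclosure leaves the storage format open.

```python
# Sketch of step 930: persisting the interaction of the 3D avatar in the
# selected background for storage and/or future playback. Uses the
# AvatarSession from the previous sketch.
import json
import time

def record_session(session, path="session_recording.json"):
    recording = {
        "recorded_at": time.time(),
        "background": session.background,
        "events": [{"action": action, "animation": animation}
                   for action, animation in session.timeline],
    }
    with open(path, "w") as handle:
        json.dump(recording, handle, indent=2)
    return path
```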
Throughout this specification and the claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated integer or step, or group of integers or steps, but not the exclusion of any other integer or step, or group of integers or steps.
Persons skilled in the art will appreciate that numerous variations and modifications will become apparent. All such variations and modifications which become apparent to persons skilled in the art should be considered to fall within the spirit and scope of the invention as broadly hereinbefore described.

Claims

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A system for generating a 3D avatar, the system including one or more data processors configured to:
capture, at a device, images of a user and a surrounding environment of the user;
transmit, from the device, data of the images;
receive, at a central server, the data;
process, at the central server, the data;
initiate, at the device, a background on which the 3D avatar is overlaid;
display, at the device, the 3D avatar and the background; and
control, at the device, the 3D avatar to enable interaction with the background,
wherein the device is selected from either a user device or a display device.
2. The system of claim 1, the one or more data processors further configured to:
transmit, from the device, user credentials to access a third party portal;
control, at the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and
record, at the device, the 3D avatar interacting with the background.
3. The system of either claim 1 or 2, wherein the images comprise frontal and side views of the user.
4. The system of claim 3, wherein physical attributes, clothing and accessories of the user are obtained from the images.
5. The system of claim 4, wherein processing of the data at the central server enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.
6. The system of claim 5, wherein the processing of the data includes use of machine learning.
7. The system of any of claims 1 to 6, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.
8. The system of any of claims 1 to 7, wherein interaction enhances the user’s perception of immersion in the background.
9. A data processor implemented method for generating a 3D avatar, the method comprising:
capturing, at a device, images of a user and a surrounding environment of the user;
transmitting, from the device, data of the images;
receiving, at a central server, the data;
processing, at the central server, the data;
initiating, at the device, a background on which the 3D avatar is overlaid;
displaying, at the device, the 3D avatar and the background; and
controlling, at the device, the 3D avatar to enable interaction with the background,
wherein the device is selected from either a user device or a display device.
10. The method of claim 9, further comprising:
transmitting, from the device, user credentials to access a third party portal;
controlling, at the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and
recording, at the device, the 3D avatar interacting with the background.
11. The method of either claim 9 or 10, wherein the images comprise frontal and side views of the user.
12. The method of claim 11, wherein physical attributes, clothing and accessories of the user are obtained from the images.
13. The method of claim 12, wherein processing of the data at the central server enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.
14. The method of claim 13, wherein the processing of the data includes use of machine learning.
15. The method of any of claims 9 to 14, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.
16. The method of any of claims 9 to 15, wherein interaction enhances the user’s perception of immersion in the background.
17. A user device configured for generating a 3D avatar, the user device including one or more data processors configured to:
capture images of a user and a surrounding environment of the user;
transmit data of the images;
initiate a background on which the 3D avatar is overlaid;
display the 3D avatar and the background; and
control the 3D avatar to enable interaction with the background.
18. The user device of claim 17, the one or more data processors further configured to:
transmit user credentials to access a third party portal;
control at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and
record the 3D avatar interacting with the background.
19. The user device of either claim 17 or 18, wherein the images comprise frontal and side views of the user.
20. The user device of claim 19, wherein physical attributes, clothing and accessories of the user are obtained from the images.
21. The user device of any of claims 17 to 20, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.
22. The user device of any of claims 17 to 21, wherein interaction enhances the user’s perception of immersion in the background.
23. A display device configured for generating a 3D avatar, the display device including one or more data processors configured to:
capture images of a user and a surrounding environment of the user;
transmit data of the images;
initiate a background on which the 3D avatar is overlaid;
display the 3D avatar and the background; and
control the 3D avatar to enable interaction with the background.
24. The display device of claim 23, the one or more data processors further configured to:
transmit user credentials to access a third party portal;
control at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and
record the 3D avatar interacting with the background.
25. The display device of either claim 23 or 24, wherein the images comprise frontal and side views of the user.
26. The display device of claim 25, wherein physical attributes, clothing and accessories of the user are obtained from the images.
27. The display device of any of claims 23 to 26, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, and a hybrid real-and-virtual environment.
28. The display device of any of claims 23 to 27, wherein interaction enhances the user’s perception of immersion in the background.
29. A central server for generating a 3D avatar, the central server including one or more data processors configured to:
receive, from a device, data of images of a user and a surrounding environment of the user;
process the data; and
transmit, to the device, processed data to enable the generated 3D avatar to be displayed overlaid on a background,
wherein the device is selected from either a user device or a display device.
30. The central server of claim 29, wherein the images comprise frontal and side views of the user.
31. The central server of claim 30, wherein physical attributes, clothing and accessories of the user are obtained from the images.
32. The central server of claim 31, wherein processing of the data enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.
33. The central server of claim 32, wherein the processing of the data includes use of machine learning.
34. The central server of any of claims 29 to 33, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.
35. The central server of any of claims 29 to 34, the one or more data processors further configured to:
receive, from the device, user credentials to access a third party portal; and
transmit, to the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal.

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
SG10202100768V | 2021-01-25 | 2021-01-25 | (not listed)
PCT/SG2022/050034 | 2021-01-25 | 2022-01-25 | A system and method for generating a 3D avatar

Publications (1)

Publication Number | Publication Date
WO2022159038A1 | 2022-07-28

Family ID: 82548451



Citations (5)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
US20130080287A1 * | 2011-09-19 | 2013-03-28 | Sdi Technologies, Inc. | Virtual doll builder
KR20130032620A * | 2011-09-23 | 2013-04-02 | 김용국 | Method and apparatus for providing moving picture using 3D user avatar
US20140033044A1 * | 2010-03-10 | 2014-01-30 | Xmobb, Inc. | Personalized 3D avatars in a virtual social venue
US20150123967A1 * | 2013-11-01 | 2015-05-07 | Microsoft Corporation | Generating an avatar from real time image data
US20150220854A1 * | 2011-05-27 | 2015-08-06 | Ctc Tech Corp. | Creation, use and training of computer-based discovery avatars



Legal Events

Code | Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22742963; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
WWE | WIPO information: entry into national phase (Ref document number: 11202305662Y; Country of ref document: SG)
122 | EP: PCT application non-entry in European phase (Ref document number: 22742963; Country of ref document: EP; Kind code of ref document: A1)