WO2022159038A1 - A system and method for generating a 3d avatar - Google Patents
- Publication number
- WO2022159038A1 (PCT/SG2022/050034)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- avatar
- background
- data
- images
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/655—Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, by importing photos, e.g. of the player
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
- A63F13/428—Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/0643—Electronic shopping: graphical representation of items or shoppers
- G06Q50/01—Social networking
Definitions
- the present invention relates to a system and method for generating a 3D avatar.
- the avatars generated are only in 2D, and are not used in an environment which enables the avatars to be used in an augmented reality manner. This is due to data processing constraints which prevent the generation of enhanced avatars.
- the use of predefined template forms limits the extent to which the avatars can be customised, and the extent of interaction between avatars and real-life aspects/features is also limited.
- NFTs (non-fungible tokens)
- a system for generating a 3D avatar including one or more data processors configured to: capture, at a device, images of a user and a surrounding environment of the user; transmit, from the device, data of the images; receive, at a central server, the data; process, at the central server, the data; initiate, at the device, a background on which the 3D avatar is overlaid; display, at the device, the 3D avatar and the background; and control, at the device, the 3D avatar to enable interaction with the background.
- the device is selected from either a user device or a display device.
- a data processor implemented method for generating a 3D avatar comprising: capturing, at a device, images of a user and a surrounding environment of the user; transmitting, from the device, data of the images; receiving, at a central server, the data; processing, at the central server, the data; initiating, at the device, a background on which the 3D avatar is overlaid; displaying, at the device, the 3D avatar and the background; and controlling, at the device, the 3D avatar to enable interaction with the background.
- the device is selected from either a user device or a display device.
- a user device configured for generating a 3D avatar
- the user device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.
- a display device configured for generating a 3D avatar
- the display device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.
- a central server for generating a 3D avatar
- the central server including one or more data processors configured to: receive, from a device, data of images of a user and a surrounding environment of the user; process the data; and transmit, to the device, processed data to enable display of the generated 3D avatar overlaid on a background.
- the device is selected from either a user device or a display device.
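Taken together, the claims above describe a client/server split: the device captures images and displays the result, while the central server does the processing. The following is a minimal Python sketch of that split only; the class, field, and method names are invented for illustration and are not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Avatar3D:
    mesh_id: str   # identifier for the generated mesh (stand-in)
    outfit: list   # clothing/accessories detected from the images

class CentralServer:
    """Stand-in for the central server that processes image data."""
    def process(self, image_data: bytes) -> Avatar3D:
        # Real attribute extraction and mesh generation are not specified
        # in the claims; return a placeholder avatar.
        return Avatar3D(mesh_id="avatar-001", outfit=["t-shirt", "jeans"])

class Device:
    """A user device or display device, per the claims."""
    def __init__(self, server: CentralServer):
        self.server = server
        self.background = None
        self.avatar = None

    def capture_images(self) -> bytes:
        return b"raw-user-and-environment-frames"  # stand-in for camera data

    def run(self, background: str) -> Avatar3D:
        data = self.capture_images()             # capture
        self.avatar = self.server.process(data)  # transmit / receive / process
        self.background = background             # initiate background
        return self.avatar                       # ready to display and control

device = Device(CentralServer())
avatar = device.run(background="virtual-storefront")
print(avatar.mesh_id)  # → avatar-001
```

The split mirrors the claims: all heavy processing sits behind `CentralServer.process`, and the device only captures, forwards, and displays.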
- FIG 1 is a flow chart of an example of a method for generating a 3D avatar
- FIG 2 is a schematic diagram of an example of a system for generating a 3D avatar
- FIG 3 is a schematic diagram showing components of an example user device of the system shown in FIG 2;
- FIG 4 is a schematic diagram showing components of an example mass display device of the system shown in FIG 2;
- FIG 5 is a schematic diagram showing components of an example central server shown in FIG 2;
- FIGs 6A and 6B are examples of a 3D avatar generated using the method of FIG 1;
- FIG 7 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a first example background
- FIG 8 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a second example background
- FIG 9 shows a flow chart of an example of tasks carried out by a user device/display device during the method of FIG 1.
- the present invention provides a system and method for generating a 3D avatar, substantially in real-time.
- the system and method can be used for a variety of applications, for example, engagement sessions at pre-defined venues, virtual apparel/wearable device fittings, and the like.
- the pre-defined venues can be imaginary environments, digitally rendered real environments or actual environments.
- the 3D avatars are modelled substantially on physical attributes and wearables of users, such as, for example, facial features, physique, clothing, accessories and so forth. In some aspects, the 3D avatars are able to provide a representation of users in a particular environment.
- the method can be performed at least in part amongst one or more data processing devices such as, for example, a mobile phone, a display device, a central server, or the like.
- the central server will be configured to carry out a majority of the processing tasks, with the mobile phone and the display device being configured to display outputs from the central server.
- At step 105, at least one image of a user and a surrounding environment of the user is captured.
- the more images that are captured, the more physical attributes of the user can be determined for use when generating a 3D avatar for the user. It is desirable for images containing frontal and side views of the user to be captured to aid in improving the likeness of the 3D avatar to the user.
- the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth.
- the clothing and/or accessories being worn by the users can also be determined from the captured images so that the 3D avatar being generated appears outfitted with similar clothing and/or accessories as the user.
- the at least one image is captured with a user device like a camera on a mobile phone, or a camera coupled to a display device.
- data of the at least one image of the user and surrounding environment is transmitted to a central server.
- user credentials to access a third party portal are also provided to the central server from the user device.
- the user credentials are typically usable in the aforementioned manner with consent of the user.
- the central server can comprise more than one data processing device. An example embodiment of the central server will be provided in a subsequent paragraph.
- the data of the at least one image of the user and surrounding environment is processed at the central server to generate a 3D avatar. The physical attributes, the clothing and/or accessories of the user that are obtained from the data are used to generate the 3D avatar.
- the 3D avatar is typically a representation of the user which causes amusement and/or entertainment and/or virtual sampling of goods.
- Substantial processing is carried out at the central server, broadly comprising determining the physical attributes, clothing and/or accessories of the user from the at least one image, and using that information to generate the 3D avatar with at least some likeness to the user while wearing similar clothing and/or accessories. It should be appreciated that this processing relies on both hardware and software of the central server to ensure that the 3D avatar is generated within a short period of time, typically less than five seconds. Most of the data processing to generate the 3D avatar is carried out at the central server, and not at devices configured for showing the 3D avatar.
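The server-side flow described above — determine attributes from the images, then generate the avatar within a roughly five-second budget — could be sketched as follows. The attribute set, function names, and avatar representation are illustrative assumptions, not details from the patent:

```python
import time

def extract_attributes(images):
    # Stand-in for the analysis that determines physical attributes and
    # clothing/accessories from the captured images.
    return {"jawline": "oval", "body_frame": "medium", "clothing": ["jacket"]}

def generate_avatar(attributes, budget_s=5.0):
    """Build an avatar description and check the stated <5 s time budget."""
    start = time.monotonic()
    avatar = {"likeness": attributes, "outfit": attributes["clothing"]}
    elapsed = time.monotonic() - start
    if elapsed >= budget_s:
        raise TimeoutError("generation exceeded the stated time budget")
    return avatar

avatar = generate_avatar(extract_attributes(["front.jpg", "side.jpg"]))
print(avatar["outfit"])  # → ['jacket']
```

The explicit budget check reflects the description's emphasis that generation completes in under about five seconds.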
- the processing can include machine learning over all the images processed at the central server, such that the 3D avatar can be generated in a predictive manner based on past images that have been processed at the central server for the user, for example, whenever there are insufficient images of the user in particular clothing.
- the machine learning can also enable enhanced likeness of the user to be generated in avatar form.
- the machine learning is also able to aid in shortening the time to generate the 3D avatar.
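One simple interpretation of the predictive generation described above is to fall back to previously processed images of the same user whenever a needed viewpoint is missing. The sketch below uses invented data shapes and makes no claim about the actual machine-learning model:

```python
def predict_missing_views(current_images, past_images, needed=("front", "side")):
    """Fill missing viewpoints from previously processed images of the same
    user -- a simple stand-in for the learned prediction the text describes."""
    have = {img["view"] for img in current_images}
    filled = list(current_images)
    for view in needed:
        if view not in have:
            # fall back to the most recent past image with that view
            candidates = [img for img in past_images if img["view"] == view]
            if candidates:
                filled.append(candidates[-1])
    return filled

current = [{"view": "front", "ts": 10}]
past = [{"view": "side", "ts": 3}, {"view": "side", "ts": 7}]
views = {img["view"] for img in predict_missing_views(current, past)}
print(sorted(views))  # → ['front', 'side']
```

Reusing past data in this way also suggests how the learning step could shorten generation time: fewer fresh images need to be captured and processed per session.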
- the central server is able to use the user credentials of the user to obtain a purchase history from third party platforms (e.g. e-commerce platforms) at step 117, whereby the purchase history can be from a pre-defined category of goods/services like clothing and/or accessories.
- the purchase history can be desirable as it can be employed in a product selection to enhance, for example, purchase intent, sales, user engagement, and so forth. This will be evident in a later portion of the description.
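As a hedged illustration of how purchase history could be employed in product selection, the sketch below simply ranks catalogue items so that categories the user has bought before appear first. All item data and function names are invented for the example:

```python
def rank_catalogue(catalogue, purchase_history):
    """Order catalogue items so that categories the user has bought before
    appear first -- one simple way purchase history could drive selection."""
    bought_categories = {item["category"] for item in purchase_history}
    # sort() is stable, so within each group the original order is kept;
    # False (already-bought category) sorts before True.
    return sorted(catalogue,
                  key=lambda item: item["category"] not in bought_categories)

catalogue = [{"name": "cap", "category": "hats"},
             {"name": "denim jacket", "category": "jackets"},
             {"name": "scarf", "category": "scarves"}]
history = [{"name": "bomber jacket", "category": "jackets"}]
print(rank_catalogue(catalogue, history)[0]["name"])  # → denim jacket
```

A real system would presumably weigh recency, price band, and avatar appearance as well; this only shows the category-priority idea.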
- a background on which the 3D avatar is overlaid is selected.
- the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth.
- the 3D avatar is able to interact with the selected background on the device configured to show the 3D avatar.
- the 3D avatar interacts with the selected background in accordance with actions/gestures carried out by the user. This enhances the user's perception of immersiveness in the selected background.
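The gesture-driven interaction described above can be pictured as a mapping from detected user gestures to avatar actions in the selected background. The gesture names and actions below are purely illustrative, not from the patent:

```python
# Hypothetical mapping from detected user gestures to avatar actions.
GESTURE_ACTIONS = {
    "wave": "avatar_waves",
    "step_left": "avatar_moves_left",
    "point": "avatar_highlights_item",
}

def apply_gesture(avatar_state, gesture):
    """Return a new avatar state with the action matching the user's gesture;
    unrecognised gestures leave the avatar idling."""
    action = GESTURE_ACTIONS.get(gesture, "avatar_idles")
    return {**avatar_state, "last_action": action}

state = apply_gesture({"pose": "standing"}, "wave")
print(state["last_action"])  # → avatar_waves
```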
- the user is able to be clothed/accessorized virtually in relation to the user’s 3D avatar, and the user may correspondingly make purchase decisions based on the virtual trying of clothes/accessories.
- the purchase history of the user may be deployed in a product selection, for example, to display related past purchases or similar designs/prints matching the 3D avatar appearance, to enhance, for example, purchase intent, sales, user engagement, and so forth. Therefore, behavioural data of the user can also be shown.
- At step 130, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback.
- the recording can be stored at the user device or at the central server.
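A minimal sketch of recording interactions for storage and/or future playback, with the storage location (user device or central server) abstracted behind a writable sink. All names are invented for the illustration:

```python
import io
import json
import time

class InteractionRecorder:
    """Append-only log of avatar/background interaction events; where the
    log lives (device vs. server) is hidden behind the `sink` object."""
    def __init__(self, sink):
        self.sink = sink  # any object with a write(str) method

    def record(self, event, **details):
        entry = {"t": time.time(), "event": event, **details}
        self.sink.write(json.dumps(entry) + "\n")

# In-memory sink stands in for device storage or a server upload stream.
buffer = io.StringIO()
rec = InteractionRecorder(buffer)
rec.record("gesture", name="wave", background="virtual-storefront")
logged = json.loads(buffer.getvalue())
print(logged["event"])  # → gesture
```

One JSON object per line keeps the log trivially replayable in order.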
- the method 100 enables benefits for both users and providers of the method 100.
- the providers can be entities that provide a good and/or service to the users.
- For the users, the method 100 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories.
- the level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds.
- the users can also choose to monetize the 3D avatars that are generated, for example, as a digital asset with ownership rights being transferrable via NFT/cryptocurrency transactions.
- the method 100 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that any on-site investment in hardware is minimal for the method 100 to be carried out at any location with connectivity to a data network, the provider also does not need to make a large financial investment to enable the carrying out of the method 100.
- the system 200 includes one or more user devices 220, one or more display devices 230, a communications network 250, a third party platform 280 (e.g. an e-commerce platform), and a central server 260.
- the one or more user devices 220 and the one or more display devices 230 communicate with the central server 260 via the communications network 250.
- the communications network 250 can be of any appropriate form, such as the Internet and/or a number of local area networks (LANs). Further details of respective components of the system 200 will be provided in a following portion of the description. It will be appreciated that the configuration shown in FIG 2 is not limiting and for the purpose of illustration only.
- the user device 220 of any of the examples herein may be a handheld computer device such as a smart phone with a capability to download and operate mobile applications, and be connectable to the communications network 250.
- the user device 220 can also be a VR headset.
- An exemplary embodiment of the user device 220 is shown in FIG 3. As shown, the user device 220 includes the following components in electronic communication via a bus 311: a display 302; non-volatile memory 303; random access memory (RAM); and a transceiver component 305 that includes a transceiver(s).
- an app 309 stored in the non-volatile memory 303 is required to enable the user device 220 to operate in a desired manner.
- the app 309 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar.
- the app 309 can be a web browser.
- FIG 3 is not intended to be a hardware diagram; thus many of the components depicted in FIG 3 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 3.
- the display device 230 of any of the examples herein may be a television with a capability to download and operate mobile applications, and be connectable to the communications network 250.
- An exemplary embodiment of the display device 230 is shown in FIG 4.
- the display device 230 includes the following components in electronic communication via a bus 411: non-volatile memory 403; random access memory (RAM); and a transceiver component 405 that includes a transceiver(s).
- an app 409 stored in the non-volatile memory 403 is required to enable the display device 230 to operate in a desired manner.
- the app 409 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar.
- the app 409 can be a web browser.
- the user is able to control the display device 230 using another device wirelessly communicating with the display device 230, for example, the user’s mobile phone.
- the user’s mobile phone may be running the app 409 to provide access to an interface with the display device 230, or a web browser on the mobile phone may provide access to an interface with the display device 230.
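The phone-as-remote arrangement above could be sketched as a paired-controller check on the display device. The class and method names are assumptions for illustration only; the patent does not specify a pairing protocol:

```python
class DisplayDevice:
    """Display device that accepts commands only from paired controllers
    (e.g. the user's mobile phone running the app or a web browser)."""
    def __init__(self):
        self.paired = set()
        self.last_command = None

    def pair(self, controller_id):
        self.paired.add(controller_id)

    def handle(self, controller_id, command):
        if controller_id not in self.paired:
            raise PermissionError("controller not paired with this display")
        self.last_command = command

tv = DisplayDevice()
tv.pair("users-phone")
tv.handle("users-phone", "select_background:store")
print(tv.last_command)  # → select_background:store
```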
- FIG 4 is not intended to be a hardware diagram; thus many of the components depicted in FIG 4 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 4.
- the central server 260 is a hardware and software suite comprising preprogrammed logic, algorithms and other means of processing incoming information, in order to send out information which is useful to the objective of the system 200 in which the central server 260 resides.
- hardware which can be used by the central server 260 will be described briefly herein.
- the central server 260 can broadly comprise a database which stores pertinent information, and processes information packets from the user devices 220 and the display devices 230.
- the central server 260 can be operated from a commercial hosted service such as Amazon Web Services (TM).
- the central server 260 can be represented in a form as shown in FIG 5.
- the central server 260 is in communication with the communications network 250, as shown in FIG 2.
- the central server 260 is able to communicate with the user devices 220, the display devices 230, and/or other processing devices, as required, over the communications network 250.
- the user devices 220 and the display devices 230 can communicate with the central server 260 via a direct communication channel, such as a local area network (LAN) or WiFi.
- the components of the central server 260 can be configured in a variety of ways.
- the components can be implemented entirely by software to be executed on standard computer server hardware, which may comprise one hardware unit or different computer hardware units distributed over various locations, some of which may require the communications network 250 for communication.
- the central server 260 is a commercially available computer system based on a 32 bit or a 64 bit Intel architecture, and the processes and/or methods executed or performed by the central server 260 are implemented in the form of programming instructions of one or more software components or modules 502 stored on non-volatile computer-readable storage 503 associated with the central server 260.
- the central server 260 includes at least the following standard, commercially available computer components, all interconnected by a bus 505: random access memory (RAM); and a central processing unit (CPU).
- FIG 5 is not intended to be a hardware diagram; thus many of the components depicted in FIG 5 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 5.
- the system 200 enables benefits for both users and providers of the system 200, when the system 200 is used to carry out the method 100.
- the providers can be entities that provide a good and/or service to the users.
- For the users, the system 200 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories.
- the level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds.
- the system 200 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that any on-site investment in hardware is minimal for the system 200, the provider also does not need to make a large financial investment to enable the carrying out of the method 100.
- In FIGs 6A and 6B, there are shown examples of what a user sees on the user device 220 or the display device 230.
- a main portion 600 shows the 3D avatar, dressed in similar clothing as the user, generated by the method 100 and/or the system 200, while a sub-portion 610 shows the user as captured a short time earlier, so that the user's actions coincide with those of the 3D avatar shown in the main portion 600.
- FIGs 6A and 6B show a “no-background” situation.
- a main portion 700 shows the 3D avatar, dressed in similar clothing as the user, generated by the method 100 and/or the system 200, while a sub-portion 710 shows the user interface for interacting with the 3D avatar.
- the sub-portion 710 shows a user interface for a user to change attire for the 3D avatar.
- Main menu 715 shows various types of clothing/accessories that can be changed on the 3D avatar while sub-menu 720 shows various options available when an item from the main menu 715 is selected.
- the 3D avatar in the main portion 700 moves around in sync with movements of the user while the user is using the main menu 715 and the sub-menu 720.
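A minimal sketch of how a main-menu/sub-menu attire selection could be applied to the avatar's outfit. The wardrobe contents and function names are invented for the example; the patent does not enumerate categories:

```python
# Illustrative main-menu categories (cf. main menu 715) mapped to the
# sub-menu options available for each (cf. sub-menu 720).
WARDROBE = {
    "tops": ["t-shirt", "hoodie", "blazer"],
    "shoes": ["sneakers", "boots"],
}

def select_attire(avatar_outfit, category, option):
    """Return a new outfit with the chosen option applied to the category;
    reject options not offered in that category's sub-menu."""
    if option not in WARDROBE.get(category, []):
        raise ValueError("option not available in this sub-menu")
    return {**avatar_outfit, category: option}

outfit = select_attire({"tops": "t-shirt"}, "tops", "hoodie")
print(outfit["tops"])  # → hoodie
```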
- FIG 7 shows a virtual background.
- In FIG 8, there is shown another example of what a user sees on the display device 230.
- a main portion 800 shows the 3D avatar generated by the method 100 and/or the system 200. It should be noted that FIG 8 shows an actual background that can be the location where the user is located. In addition, FIG 8 shows an instance when the user uses a mobile phone 810 to access an interface to control the display device 230.
- FIG 9 shows a method 900 for generating a 3D avatar, particularly in relation to the process at the user device 220 or display device 230.
- At step 905, at least one image of a user and a surrounding environment of the user is captured by the user device 220 or display device 230.
- the more images that are captured, the more physical attributes of the user can be determined for use when generating a 3D avatar for the user. It is desirable for images containing frontal and side views of the user to be captured to aid in improving the likeness of the 3D avatar to the user.
- the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth.
- the clothing and/or accessories being worn by the users can also be determined from the captured images so that the 3D avatar being generated appears outfitted with similar clothing and/or accessories as the user.
- the at least one image is captured with a user device 220 like a camera on a mobile phone, or a camera coupled to a display device 230.
- data of the at least one image of the user and surrounding environment is transmitted to the central server 260.
- the central server 260 can carry out substantial processing of the at least one image of the user using machine learning, such that the 3D avatar can be generated in a predictive manner based on past images that have been processed at the central server for the user, for example, whenever there are insufficient images of the user in a particular clothing.
- the machine learning can also enable enhanced likeness of the user to be generated in avatar form.
- the machine learning is also able to aid in shortening the time to generate the 3D avatar.
- user credentials to a third party portal are also provided to the central server 260 from the user device 220. It should be appreciated that the user credentials are typically usable in the aforementioned manner with consent of the user.
- a background on which the 3D avatar is overlaid is selected at the user device 220 or display device 230.
- the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth.
- the generated 3D avatar is received from the central server 260 at the user device 220 or display device 230. It should be noted that the 3D avatar is typically a representation of the user which causes amusement and/or entertainment and/or virtual sampling of goods. Most of the data processing to generate the 3D avatar is carried out at the central server 260, and not at devices configured for showing the 3D avatar.
- the 3D avatar is able to interact with the selected background on the user device 220 or display device 230. It should be appreciated that the 3D avatar interacts with the selected background in accordance with actions carried out by the user. This enhances the user's perception of immersiveness in the selected background. For example, the user is able to be clothed/accessorized virtually in relation to the user's 3D avatar, and the user may correspondingly make purchase decisions based on the virtual trying of clothes/accessories.
- the purchase history of the user may be deployed in a product selection, for example, to display related past purchases or similar designs/prints matching the 3D avatar appearance, to enhance, for example, purchase intent, sales, user engagement, and so forth. Therefore, behavioural data of the user can also be shown.
- At step 930, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10202100768V | 2021-01-25 | ||
SG10202100768V | 2021-01-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022159038A1 true WO2022159038A1 (en) | 2022-07-28 |
Family
ID=82548451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SG2022/050034 WO2022159038A1 (en) | 2021-01-25 | 2022-01-25 | A system and method for generating a 3d avatar |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022159038A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130080287A1 (en) * | 2011-09-19 | 2013-03-28 | Sdi Technologies, Inc. | Virtual doll builder |
KR20130032620A (en) * | 2011-09-23 | 2013-04-02 | 김용국 | Method and apparatus for providing moving picture using 3d user avatar |
US20140033044A1 (en) * | 2010-03-10 | 2014-01-30 | Xmobb, Inc. | Personalized 3d avatars in a virtual social venue |
US20150123967A1 (en) * | 2013-11-01 | 2015-05-07 | Microsoft Corporation | Generating an avatar from real time image data |
US20150220854A1 (en) * | 2011-05-27 | 2015-08-06 | Ctc Tech Corp. | Creation, use and training of computer-based discovery avatars |
Worldwide applications
- 2022-01-25: WO PCT/SG2022/050034 patent/WO2022159038A1/en — active, Application Filing
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22742963 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11202305662Y Country of ref document: SG |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22742963 Country of ref document: EP Kind code of ref document: A1 |