
CN107728895B - Virtual object processing method and device and storage medium


Info

Publication number
CN107728895B
Authority
CN
China
Prior art keywords
user
role
virtual
virtual object
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711015768.6A
Other languages
Chinese (zh)
Other versions
CN107728895A (en)
Inventor
班新灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Comic Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Comic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Comic Co Ltd
Priority to CN201711015768.6A
Publication of CN107728895A
Application granted
Publication of CN107728895B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method for processing a virtual object, which comprises the following steps: when a specific operation is detected, determining a role selected by the specific operation, wherein the specific operation is a first touch operation performed on at least one object in the content displayed on the currently displayed page; calling a virtual object corresponding to the determined role; and performing growth maintenance on the virtual object according to behavior data of the user. The invention also provides a processing apparatus and a storage medium for the virtual object.

Description

Virtual object processing method and device and storage medium
Technical Field
The present invention relates to information processing technologies, and in particular, to a method and an apparatus for processing a virtual object, and a storage medium.
Background
An electronic publication is a mass-communication medium whose intellectual or ideological content is edited and processed into digital code, stored on a magnetic, optical, or electronic medium of fixed physical form, and read, displayed, and played back through electronic devices. For example, an electronic publication may be an electronic book, an electronic dictionary, or the like.
With the development of the information age, more and more reading applications (APPs) carrying electronic publications have appeared, and a user can read and view the electronic publications carried in a reading APP by installing the reading APP on a terminal. However, an existing reading APP only enables the user to read and view electronic publications; when the reading APP contains no electronic publication that the user is interested in, the user may give up using the reading APP, so that the access volume of the reading APP decreases and its application value is reduced. Therefore, existing reading APPs are low in interest and provide a poor user experience.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a method, an apparatus, and a storage medium for processing a virtual object, which can improve the application value of a reading APP.
The technical scheme of the embodiment of the invention is realized as follows:
according to an aspect of the embodiments of the present invention, there is provided a method for processing a virtual object, the method including:
when a specific operation is detected, determining a role selected by the specific operation, wherein the specific operation is a first touch operation performed on at least one object in the content displayed on the currently displayed page;
calling a virtual object corresponding to the determined role;
and performing growth maintenance on the virtual object according to the behavior data of the user.
In the above solution, the determining the role selected by the specific operation includes:
determining a touch position where the specific operation is performed;
extracting a role identifier preset to correspond to the touch position;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification.
In the above solution, the determining the role selected by the specific operation includes:
determining an option of a role list on which the specific operation is performed, wherein the role list pops up after an operation performed by the user for triggering a recommendation instruction is detected, and the operation for triggering the recommendation instruction specifically includes: the user opening the application, or the user performing a second touch operation at any position of the display screen;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification corresponding to the option.
In the above scheme, the role identifier included in the role list is specifically determined in the following manner:
after the recommendation instruction is detected, determining the user level according to the user identification of the current user;
screening role identifications corresponding to the user levels, and adding the role identifications to the role list; or,
after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
and screening the role identification corresponding to the user preference, and adding the role identification into the role list.
In the foregoing solution, the calling the virtual object corresponding to the determined role includes:
searching a virtual model corresponding to the role;
and when the virtual model corresponding to the role is found, calling the virtual model to activate the virtual model so that the virtual model generates the virtual object.
In the above solution, the performing growth maintenance on the virtual object according to the behavior data of the user includes at least one of the following modes:
according to the behavior data of the user, when the fact that the user carries out a reading behavior is determined, first virtual growth information corresponding to the reading behavior is obtained;
performing growth maintenance on the virtual object according to the first virtual growth information;
or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior;
performing growth maintenance on the virtual object according to the second virtual growth information;
or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
and performing growth maintenance on the virtual object according to the third virtual growth information.
In the above scheme, the method further comprises:
and if the user behavior aiming at the current application is not detected within the preset duration, sending out prompt information for prompting the user to carry out growth maintenance on the virtual object.
According to another aspect of the embodiments of the present invention, there is provided an apparatus for processing a virtual object, the apparatus including: the device comprises a determining unit, a calling unit and a maintaining unit;
the determining unit is configured to determine a role selected by a specific operation when the specific operation is detected, where the specific operation is a first touch operation performed on at least one object in content displayed on a currently displayed page;
the calling unit is used for calling the virtual object corresponding to the determined role;
and the maintenance unit is used for carrying out growth maintenance on the virtual object according to the behavior data of the user.
In the above scheme, the apparatus further comprises: an extraction unit;
the determining unit is further configured to determine a touch position where the specific operation is performed, and specifically, determine, according to the role identifier extracted by the extracting unit, a role matched with the role identifier as a role selected by the specific operation;
the extracting unit is used for extracting the role identification corresponding to the preset touch position.
In the foregoing solution, the determining unit is further configured to determine an option of a role list on which the specific operation is performed, where the role list pops up after an operation performed by the user for triggering a recommendation instruction is detected, and the operation for triggering the recommendation instruction specifically includes: the user opening the application, or the user performing a second touch operation at any position of the display screen; and the determining unit is specifically further configured to determine, according to the role identifier corresponding to the option, the role matched with the role identifier as the role selected by the specific operation.
In the above scheme, the apparatus further comprises: a screening unit and an adding unit;
the determining unit is further configured to determine a user level according to a user identifier of a current user after the recommending instruction is detected; or after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
the screening unit is used for screening the role identification corresponding to the user level; or screening role identifiers corresponding to the user preferences;
and the adding unit is used for adding the role identifier which is screened by the screening unit and corresponds to the user level or the role identifier which corresponds to the user preference into the role list.
In the above scheme, the apparatus further comprises: a search unit;
the searching unit is used for searching the virtual model corresponding to the role;
the calling unit is specifically configured to, when the searching unit finds the virtual model corresponding to the role, call the virtual model to activate the virtual model, so that the virtual model generates the virtual object.
In the above scheme, the apparatus further comprises:
the acquisition unit is used for acquiring first virtual growth information corresponding to the reading behavior when the fact that the reading behavior is implemented by the user is determined according to the behavior data of the user; or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior; or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
the maintenance unit is specifically configured to perform growth maintenance on the virtual object according to the first virtual growth information, the second virtual growth information, or the third virtual growth information.
In the above scheme, the apparatus further comprises:
and the output unit is used for sending prompt information for prompting a user to perform growth maintenance on the virtual object if the user behavior aiming at the current application is not detected within the preset duration.
According to still another aspect of the embodiments of the present invention, there is provided an apparatus for processing a virtual object, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to execute the steps of any one of the above methods for processing a virtual object when the computer program is executed.
According to a further aspect of embodiments of the present invention, there is provided a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of any one of the above-described methods for processing a virtual object.
In the technical solution of the embodiment of the present invention, a method, an apparatus, and a storage medium for processing a virtual object are provided, where when a specific operation is detected, a role selected by the specific operation is determined, where the specific operation is a first touch operation executed for at least one object in content displayed on a currently displayed page; calling a virtual object corresponding to the determined role; and performing growth maintenance on the virtual object according to the behavior data of the user. Therefore, the user can be prompted to frequently access the current application, so that the user access amount of the current application is increased, and meanwhile, the interest and the enthusiasm of the user for using the application are increased.
Drawings
Fig. 1 is a schematic flowchart illustrating a method for processing a virtual object according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a role list in an embodiment of the present invention;
FIG. 3 is a schematic view illustrating a process of creating a virtual pet according to a character in a cartoon according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating retrieval of a virtual pet from a background according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a virtual object processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a virtual object processing apparatus according to another embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flowchart illustrating a method for processing a virtual object according to an embodiment of the present invention; as shown in fig. 1, the method includes:
Step 101, when a specific operation is detected, determining a role selected by the specific operation, wherein the specific operation is a first touch operation performed on at least one object in the image content displayed on the current display page;
in the embodiment of the invention, the method is mainly applied to the terminal, and the terminal can be a mobile phone, a tablet computer, a desktop computer and other equipment. The terminal is provided with a reading APP with a reading function. The reading APP carries the content of more than one electronic publication.
In the embodiment of the invention, when a user uses the terminal to read, browse and watch the content of the electronic publication loaded by the reading APP, the terminal is triggered to detect the virtual object corresponding to the role in the electronic publication in the current application and generate the detection result. And when the detection result represents that the virtual object is not detected in the current application, the terminal detects whether the specific operation in the current application is triggered. When the specific operation is detected, determining a role selected by the specific operation.
Specifically, determining the role selected by the specific operation can be implemented at least in one way:
the first method is as follows: determining a touch position where the specific operation is performed; extracting a role identifier preset to correspond to the touch position; and determining the role matched with the role identification as the role selected by the specific operation according to the role identification.
For example, when the user uses the terminal to read the content of cartoon 1 and particularly likes character A in cartoon 1, the user may select character A by double-clicking or long-pressing the picture in which character A appears. When the terminal detects the double-click or long-press operation, it determines that the touch position of the double-click or long-press operation is the picture position of character A in the currently displayed picture, extracts the role identifier preset for that picture position (the name of character A or the name of a prop used by character A), determines, according to the role identifier, that the role matched with the role identifier is character A, and takes character A as the role selected by the double-click or long-press operation. That is, the specific operation is currently the double-click or long-press operation.
Or when the terminal detects the double-click or long-press operation, outputting a role adding window to the user for the user to determine whether to select the role again. And determining the role selected by the user by triggering a corresponding instruction on the role adding window by the user.
For example, the role adding window includes an instruction icon "add" and an instruction icon "cancel", and when the user clicks the instruction icon "add", the terminal can detect a role adding operation triggered by the user with respect to the instruction icon "add", and determine a character a corresponding to the role adding operation as a role selected by the user. When the user clicks the command icon 'cancel', the terminal can detect the role cancel operation triggered by the user aiming at the command icon 'cancel', and quit the current role selection window. That is, the specific operation is currently an operation in which the user clicks the corresponding instruction icon in the character adding window.
Alternatively, when the user uses the terminal to read the content of cartoon 1 and particularly likes character A in cartoon 1, the user may perform a double-click or long-press operation at a non-picture position in the currently displayed page. When the terminal detects the double-click or long-press operation triggered at the non-picture position, it determines that the touch position where the double-click or long-press operation is performed is a non-picture position on the current display screen, determines from the content displayed on the current page that the electronic publication currently being read is cartoon 1, outputs to the user a role list matched with the content of cartoon 1, and determines the role selected by the user through a selection operation performed by the user on a preset role identifier in the role list.
For example, the preset role identifiers displayed in the role list include character A, character B, character C, and character D. When the user touches character A in the role list, the terminal can detect a role adding instruction for character A triggered by the user's touch, and then determines character A as the role selected by the current user. That is, the current specific operation is an operation of the user clicking the corresponding role identifier in the role list.
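As a minimal, non-authoritative sketch of the first variant above (selecting a role by touching the character's picture), the following Kotlin code maps a touch position to a preset role identifier and then to a role. All names used here (Region, RoleDatabase, and so on) are illustrative assumptions, not structures defined by this disclosure.

```kotlin
// Hypothetical sketch of mode one: touch position -> preset role identifier -> role.
// All names (Region, RoleDatabase, etc.) are illustrative assumptions.

data class Point(val x: Int, val y: Int)
data class Region(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}
data class Role(val id: String, val name: String)

class RoleDatabase(private val roles: Map<String, Role>) {
    fun findByIdentifier(roleId: String): Role? = roles[roleId]
}

// Picture regions of the current page, each preset with a role identifier
// (e.g. the character's name or the name of a prop the character uses).
class CurrentPage(private val regionToRoleId: Map<Region, String>) {
    fun roleIdentifierAt(touch: Point): String? =
        regionToRoleId.entries.firstOrNull { it.key.contains(touch) }?.value
}

/** Mode one: determine the role selected by a double-click / long-press at `touch`. */
fun determineSelectedRole(page: CurrentPage, touch: Point, db: RoleDatabase): Role? {
    val roleId = page.roleIdentifierAt(touch) ?: return null  // extract the preset identifier
    return db.findByIdentifier(roleId)                        // match the identifier to a role
}

fun main() {
    val db = RoleDatabase(mapOf("characterA" to Role("characterA", "Character A")))
    val page = CurrentPage(mapOf(Region(0, 0, 200, 300) to "characterA"))
    println(determineSelectedRole(page, Point(50, 120), db))  // Role(id=characterA, name=Character A)
}
```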
The second mode is that an option of a role list implemented with the specific operation is determined, where the role list pops up after an operation implemented by a user for triggering a recommendation instruction is detected, and the operation for triggering the recommendation instruction specifically includes: the user opens the application or the user performs a second touch operation at any position of the display screen; and determining the role matched with the role identification as the role selected by the specific operation according to the role identification corresponding to the option.
For example, when a user uses the terminal to read the content in the cartoon 1, and particularly likes the character a in the cartoon 1, the user can perform a click operation at any position in a current display page, at this time, the terminal can detect a recommendation instruction triggered by the click operation performed by the user in the current display page, output a character list matched with the content in the cartoon 1 to the user, and determine a character selected by the user by detecting a selection operation performed by the user on a preset character identifier in the character list.
For example, the preset role identifiers displayed in the role list include character A, character B, character C, and character D. When the user touches character A in the role list, the terminal can detect a role adding instruction for character A triggered by the user's touch, and then determines character A as the role selected by the current user. That is, the current specific operation is an operation of the user clicking the corresponding role identifier in the role list.
Here, the character identifier in the character list may be a character corresponding to a character in the electronic publication, or may be a picture corresponding to a character in the electronic publication, and a specific presentation form of the character list is shown in fig. 2.
Fig. 2 is a schematic diagram of a character list in an embodiment of the present invention, as shown in fig. 2, the character list includes a state diagram (fig. 2a) shown in a character form and a state diagram (fig. 2b) shown in a picture form, where 201 is a specific text content of a cartoon, and 202 is a character list of the cartoon.
When a user carries out clicking operation on the current page of the reading APP, the terminal can detect a recommendation instruction generated by user triggering, and the terminal outputs a role list corresponding to the current reading of the electronic publication to the user. Through the role list, the user can select the virtual pet which the user wants to "feed".
In the embodiment of the present invention, the role identifier included in the role list is specifically determined by the following method:
in the first mode, after the recommendation instruction is detected, the user level is determined according to the user identification of the current user; and screening the role identification corresponding to the user level, and adding the role identification into the role list.
Here, the user identifier may be a user account and/or a user nickname used by the user when registering the reading APP. Or, when the user uses the reading APP, the background server of the reading APP allocates a random account to the user.
When the terminal detects that a user logs in to the current application, the user identifier of the user can be acquired from a location in the current application that manages user information (for example, a user center), and when the user identifier is acquired, the user level corresponding to the user identifier is acquired. In particular, the user level is generated when the user identifier is generated, and the user level is upgraded as the usage duration of the user identifier increases. That is to say, when the user uses the reading APP for the first time, the user level is the lowest level, and as the frequency and duration of content reading through the reading APP increase, the user level gradually rises. For example, the user levels are divided into level one, level two, and level three, where level one is lower than level two, which is lower than level three. When the terminal detects that the user is using the reading APP for the first time, the user level corresponding to the user identifier is level one; when the terminal detects that the cumulative duration for which the user has used the reading APP reaches 20 hours, the user level is raised from level one to level two; when the cumulative duration reaches 40 hours, the user level is raised from level two to level three, and so on, up to the highest user level.
In the embodiment of the invention, the background server of the reading APP can rank the roles in the electronic publication according to a preset role rule. For example, the preset role rule ranks roles according to their importance in the cast, that is, in order: the first male lead, the first female lead, the second male lead, the second female lead, and so on, followed by the first male supporting role, the first female supporting role, the second male supporting role, the second female supporting role, the third male supporting role, the third female supporting role, and so on. The level of a role in the electronic publication can also be determined by collecting comment data of all users on the roles in the electronic publication.
Here, when the user level is level one, the preset role identifiers output by the terminal to the user may be the third male supporting role, the third female supporting role, and roles below them; that is, a user at the lowest level can only select a target role of interest as the virtual pet to be "kept" from role identifiers at or below the third male and female supporting roles. When the user level is level two, the preset role identifiers output by the terminal may be the second male supporting role, the second female supporting role, the third male supporting role, the third female supporting role, and roles below them; that is, a mid-level user can only select a target role of interest as the pet to be maintained from role identifiers at or below the second male and female supporting roles. When the user level is level three, the preset role identifiers output by the terminal may include the first male lead, the first female lead, the second male lead, the second female lead, and so on, as well as the first, second, and third male and female supporting roles and so on; that is, when the user is at the highest level, the terminal outputs all preset role identifiers in the electronic publication to the user, and the user can select any role of interest as the virtual pet to be "fed".
For example, when the terminal detects the recommendation instruction triggered by the user, if the user level is determined to be a middle level according to the user identifier of the current user, the role identifier corresponding to the middle level user is screened from the role database and added to the role list.
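A minimal sketch of this level-based screening is shown below, assuming the level thresholds from the example above (20 and 40 cumulative hours) and a hypothetical per-role minimum level stored alongside each role identifier; the data structures are assumptions for illustration only.

```kotlin
// Hypothetical sketch: derive a user level from cumulative usage time and filter
// role identifiers whose required level is at or below it. The thresholds follow
// the example above (20 h -> level two, 40 h -> level three); everything else is assumed.

data class RoleEntry(val roleId: String, val minUserLevel: Int)

fun userLevelFor(cumulativeHours: Double): Int = when {
    cumulativeHours >= 40 -> 3
    cumulativeHours >= 20 -> 2
    else -> 1
}

fun buildRoleListByLevel(userLevel: Int, roleDatabase: List<RoleEntry>): List<String> =
    roleDatabase.filter { it.minUserLevel <= userLevel }  // screen roles for this level
        .map { it.roleId }                                // add their identifiers to the role list

fun main() {
    val db = listOf(
        RoleEntry("firstMaleLead", 3),
        RoleEntry("secondFemaleSupporting", 2),
        RoleEntry("thirdMaleSupporting", 1),
    )
    println(buildRoleListByLevel(userLevelFor(25.0), db))
    // [secondFemaleSupporting, thirdMaleSupporting] for a level-two user
}
```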
The second method comprises the following steps: after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user; and screening the role identification corresponding to the user preference, and adding the role identification into the role list.
Here, the historical behavior data of the user includes at least: comment behavior data when the user comments on the electronic publication and page turning behavior data when the user turns over pages in the reading process. The page turning behavior data comprises page turning speed and page turning frequency.
For example, when the historical behavior data of the user is the comment behavior data, the terminal determines the role in which the user is interested by extracting keywords from the comment behavior data. For example, the keyword is a name of a role corresponding to a role in the electronic publication, or an exclusive prop name corresponding to a role in the electronic publication. And after the terminal determines the role which is interested by the user, screening out a role identifier corresponding to the role which is interested by the user from a role database, and adding the role identifier into the role list.
For example, when the historical behavior data of the user is page turning behavior data, the terminal extracts page turning speed data or page turning frequency data of each page from the page turning behavior data, and determines a page corresponding to data of which the page turning speed or the page turning frequency is smaller than a preset threshold as a page in which the user is interested. And then extracting keywords in the pages which are interested by the user so as to determine the roles which are interested by the user. For example, the keyword is a name of a role corresponding to a role in the electronic publication, or an exclusive prop name corresponding to a role in the electronic publication. And after the terminal determines the roles which are interested by the user, screening role identifiers corresponding to the roles which are interested by the user from a role database, and adding the role identifiers to the role list.
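The preference-based screening could likewise be sketched as follows, under assumed structures for comment data and page-turning data; the keyword matching and the dwell-time threshold are illustrative placeholders rather than values taken from the patent.

```kotlin
// Hypothetical sketch of building the role list from user preference: infer roles of
// interest from comment keywords and from pages turned slowly, then screen identifiers.

data class Comment(val text: String)
data class PageTurn(val pageId: Int, val secondsOnPage: Double)
data class Role(val roleId: String, val name: String, val propName: String, val pageIds: Set<Int>)

fun rolesFromComments(comments: List<Comment>, roles: List<Role>): List<Role> =
    roles.filter { role ->
        comments.any { it.text.contains(role.name) || it.text.contains(role.propName) }
    }

// Pages the user lingered on (i.e. a page-turning speed below the threshold) are
// treated as pages of interest; roles appearing on them are candidate roles.
fun rolesFromPageTurns(turns: List<PageTurn>, roles: List<Role>, minSecondsOnPage: Double): List<Role> {
    val interestingPages = turns.filter { it.secondsOnPage >= minSecondsOnPage }
        .map { it.pageId }.toSet()
    return roles.filter { role -> role.pageIds.any { it in interestingPages } }
}

fun buildRoleListByPreference(
    comments: List<Comment>,
    turns: List<PageTurn>,
    roles: List<Role>,
): List<String> =
    (rolesFromComments(comments, roles) + rolesFromPageTurns(turns, roles, minSecondsOnPage = 30.0))
        .distinctBy { it.roleId }
        .map { it.roleId }
```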
Step 102, calling a virtual object corresponding to the determined role;
here, the virtual object refers to a virtual pet or a virtual character corresponding to a character in the electronic publication.
In the embodiment of the invention, after the terminal determines the role selected by the user, a role model matched with the role is searched from a role database based on the role, and when the role model matched with the role is searched, the role model is called and activated so as to generate the virtual object by the role model.
For example, when a user reads the cartoon "Detective Conan" in the MIGU reading APP on a terminal and likes the character "Conan" in the cartoon very much, the user can select the character "Conan" by clicking or long-pressing the picture containing Conan in the current picture. The terminal determines, according to the operation triggered by the user, that the role selected by the user is "Conan", searches the role database for the role model corresponding to the character "Conan", and, when the role model is found, calls the role model from the role database and activates it, so that the role model generates a virtual character corresponding to the character "Conan".
In the embodiment of the invention, after the terminal activates the role model, a preset display effect can be configured for the generated virtual character, and the virtual character is output in the specific display area of the reading APP according to the preset display effect. For example, the specific display area may be a lower left corner, a lower right corner, an upper left corner, and an upper right corner of the display area of the terminal, and the specific area position may be set by itself according to a requirement so as not to affect reading of a user.
In the embodiment of the invention, when the role model is activated to generate the virtual object, the virtual object identifier matched with the virtual object can be obtained, and the corresponding relation between the user identifier and the virtual object identifier is established; or establishing the corresponding relation between the role identification and the virtual object identification to generate a corresponding relation table. Therefore, the background server for reading the APP can find the association relation between all users reading a certain electronic publication and the virtual object through the corresponding relation table, so that the popularity of the role in each electronic publication in the reading APP can be evaluated and analyzed, and favorable conditions are provided for the application of the role in the subsequent process. In addition, when the user uses the application to read the content, the terminal can also detect whether the virtual object corresponding to the role in the electronic publication exists in the current application according to the corresponding relation table.
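A hedged sketch of step 102 follows: it assumes a hypothetical in-memory role model store and a correspondence table between user identifiers and virtual object identifiers, mirroring the correspondence relation described above; none of these names come from the disclosure.

```kotlin
// Hypothetical sketch of step 102: look up the role model, activate it to produce the
// virtual object, and record the user-id <-> virtual-object-id correspondence.

data class RoleModel(val roleId: String)
data class VirtualObject(val objectId: String, val roleId: String)

class RoleModelStore(private val models: Map<String, RoleModel>) {
    fun findModel(roleId: String): RoleModel? = models[roleId]
}

class CorrespondenceTable {
    private val userToObject = mutableMapOf<String, String>()
    fun bind(userId: String, objectId: String) { userToObject[userId] = objectId }
    fun objectFor(userId: String): String? = userToObject[userId]
}

fun callVirtualObject(
    roleId: String,
    userId: String,
    store: RoleModelStore,
    table: CorrespondenceTable,
): VirtualObject? {
    val model = store.findModel(roleId) ?: return null  // search the role database
    val virtualObject = activate(model)                 // activate the found model
    table.bind(userId, virtualObject.objectId)          // record the correspondence relation
    return virtualObject
}

// "Activation" is modeled here as simply minting a virtual object id from the role id.
private fun activate(model: RoleModel): VirtualObject =
    VirtualObject(objectId = "pet-${model.roleId}", roleId = model.roleId)
```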
And 103, performing growth maintenance on the virtual object according to the behavior data of the user.
In the embodiment of the invention, when the terminal performs growth maintenance on the virtual object, the terminal detects the user behavior implemented by aiming at the electronic publication, generates a detection result, and acquires the virtual growth information corresponding to the user behavior when the detection result represents that the user behavior is detected. And the terminal performs growth maintenance on the virtual object according to the virtual growth information.
Wherein the user behavior comprises: reading behavior, sharing behavior, watching behavior, and commenting behavior.
For example, when it is determined that the behavior implemented by the user is a reading behavior for an electronic publication based on behavior data of the user, first virtual growth information corresponding to the reading behavior is acquired. And then, performing growth maintenance on the virtual object according to the first virtual growth information.
Here, the first virtual growth information may be "nutrient addition", that is, the virtual object currently needs to be fed to raise its growth value by one level.
For example, when the terminal detects that the duration of the user reading the cartoon at this time exceeds 2 hours, the terminal pops up the first virtual growth information to remind the user to add 'food' and 'water' to the virtual pet, so that the virtual pet rapidly grows to a level in a short time.
And when the behavior implemented by the user is determined to be the reward behavior of the electronic publication according to the behavior data of the user, acquiring second virtual growth information corresponding to the reward behavior, and performing growth maintenance on the virtual object according to the second virtual growth information.
Here, the second virtual growth information may be "physical examination", that is, a health check may currently be performed on the virtual object as required, so as to provide suggestions for the growth of the virtual object, such as suggesting that the virtual object drink more water, eat more food, exercise more, rest more, and the like;
when the behavior implemented by the user is determined to be a comment behavior of the electronic publication according to the behavior data of the user, acquiring third virtual growth information corresponding to the comment behavior; and performing growth maintenance on the virtual object according to the third virtual growth information.
Here, the third virtual growth information may be "beauty care", that is, the virtual object may currently be groomed, for example by trimming its hair, wearing a collar, changing its clothes, or changing its hat.
For example, when the terminal detects that the user commends the currently read cartoon, the terminal pops up the third virtual growth information to remind the user to perform beauty care on the virtual pet. For example, a collar may be worn, or a cap may be changed for the pet.
When determining that the behavior implemented by the user is a sharing behavior of the electronic publication according to the behavior data of the user, acquiring fourth virtual growth information corresponding to the sharing behavior; and performing growth maintenance on the virtual object according to the fourth virtual growth information.
Here, the fourth virtual growth information may be "physical exercise", that is, exercise such as playing football, skipping, running, or the like, may be currently performed on the virtual object.
For example, when the terminal detects that the user shares the currently read cartoon, the terminal pops up the fourth virtual growth information to remind the user to give the virtual pet physical exercise.
In the embodiment of the invention, the growth maintenance states of the virtual object corresponding to the user behaviors can be interchanged. For example, the virtual growth information corresponding to the reward behavior may be "add food", and the virtual growth information corresponding to the sharing behavior may be "beauty care". Which growth maintenance is performed on the virtual object for which user behavior is specifically determined by the user's settings.
In the embodiment of the present invention, the virtual growth information may be popped up directly as text, or a voice prompt may be issued, for example a voice such as "your pet is out of water" or "your pet needs exercise".
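As a minimal sketch of step 103 under assumed names, the code below maps user behaviors to virtual growth information and applies it to the virtual object's state; the default mapping mirrors the examples above and, as just noted, is treated as user-configurable.

```kotlin
// Hypothetical sketch of step 103: map detected user behaviors to virtual growth
// information and apply it to the virtual object's state. Names are illustrative.

enum class UserBehavior { READING, REWARD, COMMENT, SHARING }
enum class GrowthInfo { ADD_NUTRIENT, PHYSICAL_EXAM, BEAUTY_CARE, PHYSICAL_EXERCISE }

data class VirtualObjectState(
    val objectId: String,
    val growthValue: Int = 0,
    val log: List<GrowthInfo> = emptyList(),
)

// Default mapping following the examples in the text; the text notes the mapping
// may be interchanged according to user settings, hence it is passed in explicitly.
val defaultGrowthMapping: Map<UserBehavior, GrowthInfo> = mapOf(
    UserBehavior.READING to GrowthInfo.ADD_NUTRIENT,
    UserBehavior.REWARD to GrowthInfo.PHYSICAL_EXAM,
    UserBehavior.COMMENT to GrowthInfo.BEAUTY_CARE,
    UserBehavior.SHARING to GrowthInfo.PHYSICAL_EXERCISE,
)

fun maintainGrowth(
    state: VirtualObjectState,
    behavior: UserBehavior,
    mapping: Map<UserBehavior, GrowthInfo> = defaultGrowthMapping,
): VirtualObjectState {
    val info = mapping.getValue(behavior)        // acquire the growth info for this behavior
    return state.copy(
        growthValue = state.growthValue + 1,     // each maintained behavior raises growth
        log = state.log + info,
    )
}

fun main() {
    var pet = VirtualObjectState(objectId = "pet-characterA")
    pet = maintainGrowth(pet, UserBehavior.READING)
    pet = maintainGrowth(pet, UserBehavior.COMMENT)
    println(pet)  // growthValue=2, log=[ADD_NUTRIENT, BEAUTY_CARE]
}
```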
By having the user perform growth maintenance on the virtual pet in the reading APP, the user's reading experience can be improved, and the user's interest and enthusiasm can also be increased in the process of reading cartoons or animations.
In the embodiment of the present invention, a place for growth and maintenance of the virtual object may be selected.
For example, when the terminal detects that the user reads the content, a maintenance list of the virtual object may be output to the user, and a growth maintenance place of the virtual object is determined by selecting a scene icon preset in the maintenance list by the user.
Here, the scene icon may be a character or a picture representing a scene.
For example, the scene icons in the maintenance list include: background maintenance and display interface maintenance.
The user selects an icon corresponding to 'background maintenance' in the maintenance list, and the terminal can place the virtual object in a background scene of the reading APP to perform growth maintenance on the virtual object; when a user selects an icon corresponding to 'display interface maintenance' in the maintenance list of the virtual object, the terminal places the virtual object in the current display interface scene of the reading APP to perform growth maintenance on the virtual object.
In the embodiment of the present invention, when the user selects to perform growth maintenance on the virtual object on the display interface, a selection window of the display mode of the virtual object may be output to the user, so that the user can select the display mode of the virtual object. For example, whether the virtual object is displayed in a semi-transparent manner or an opaque manner on the current display interface is selected.
After the user selects the display mode of displaying the virtual object on the current display interface, a selection window of a display area may be output to the user, so that the user can select the display position of the virtual object in the current display interface, for example, the virtual object is displayed in the upper left corner of the current interface, or the virtual object is displayed in the lower right corner of the current interface.
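These display choices could be collected into a small configuration structure, sketched below with assumed option names; the actual places, modes, and positions are whatever the application defines.

```kotlin
// Hypothetical sketch: a configuration object for where and how the virtual object
// is maintained and displayed; option names are illustrative, not from the patent.

enum class MaintenancePlace { BACKGROUND, DISPLAY_INTERFACE }
enum class DisplayMode { SEMI_TRANSPARENT, OPAQUE }
enum class DisplayCorner { TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }

data class VirtualObjectDisplayConfig(
    val place: MaintenancePlace = MaintenancePlace.DISPLAY_INTERFACE,
    val mode: DisplayMode = DisplayMode.SEMI_TRANSPARENT,
    val corner: DisplayCorner = DisplayCorner.BOTTOM_RIGHT,
)

fun describe(config: VirtualObjectDisplayConfig): String = when (config.place) {
    MaintenancePlace.BACKGROUND ->
        "Maintain the pet in the background; call it up on demand."
    MaintenancePlace.DISPLAY_INTERFACE ->
        "Show the pet ${config.mode.name.lowercase().replace('_', ' ')} in the " +
            "${config.corner.name.lowercase().replace('_', ' ')} corner."
}

fun main() {
    println(describe(VirtualObjectDisplayConfig()))
}
```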
In the embodiment of the present invention, the virtual object may also be clicked or long-pressed in the current interface; when the terminal detects that the user clicks or long-presses the virtual object, the personal information of the virtual object is output so that the user can view the current status of the virtual object, such as its growth level, name, skill, weight, height, physical condition, appearance, and the like.
In addition, in embodiments of the present invention, a user may also "feed" a virtual object separately in multiple different electronic publications, or may "feed" a pet together in multiple different electronic publications.
For example, the user "eats" pet 1 in caricature 1, pet 2 in caricature 2, and pet 3 in caricature 3; alternatively, the user "eats" the pet 1 in all of caricature 1, caricature 2, and caricature 3.
In the embodiment of the invention, if the terminal does not detect the user behavior aiming at the current application within the preset duration, the terminal sends out the prompt information for prompting the user to carry out growth maintenance on the virtual object.
Specifically, when the terminal does not detect any user behavior within the preset duration, it indicates that the current user has not accessed the reading APP. At this time, the terminal counts the duration between the current time and the time at which user behavior was last detected, judges whether the duration reaches the growth maintenance duration set for the virtual object, and, when it determines that the duration reaches the growth maintenance duration, outputs prompt information for performing growth maintenance on the virtual object.
For example, suppose the terminal detects no user behavior and the current time is 4 p.m. on October 18, 2017. The terminal acquires the time at which user behavior was last detected before the current time, for example 8 a.m. on October 18, 2017, counts the duration between 8 a.m. and 4 p.m. on October 18, 2017 (for example, 8 hours), and judges whether this duration (8 hours) reaches the growth maintenance duration set for the virtual object. For example, the set growth maintenance duration is 6 hours; since 8 is greater than 6, it is determined that the duration reaches the growth maintenance duration, and prompt information for performing growth maintenance on the virtual object is output.
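The inactivity check could be implemented roughly as below, assuming timestamps from java.time and the 6-hour growth maintenance duration used in the example; how the prompt is actually delivered (pop-up or voice) is abstracted into a callback.

```kotlin
// Hypothetical sketch of the inactivity reminder: if no user behavior has been
// detected for at least the configured growth maintenance duration, emit a prompt.

import java.time.Duration
import java.time.Instant

class GrowthReminder(
    private val maintenanceDuration: Duration = Duration.ofHours(6),        // example value from the text
    private val prompt: (String) -> Unit = { msg -> println(msg) },         // pop-up or voice, abstracted
) {
    private var lastBehaviorAt: Instant = Instant.now()

    fun onUserBehavior(now: Instant = Instant.now()) {
        lastBehaviorAt = now
    }

    /** Called periodically; returns true if a reminder was issued. */
    fun check(now: Instant = Instant.now()): Boolean {
        val idle = Duration.between(lastBehaviorAt, now)
        if (idle < maintenanceDuration) return false
        prompt("Your virtual pet needs growth maintenance (idle for ${idle.toHours()} hours).")
        return true
    }
}

fun main() {
    val reminder = GrowthReminder()
    reminder.onUserBehavior(Instant.parse("2017-10-18T08:00:00Z"))   // last behavior: 8 a.m.
    println(reminder.check(Instant.parse("2017-10-18T16:00:00Z")))   // 4 p.m.: 8 h >= 6 h -> true
}
```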
The prompt information may be that the virtual object pops up on the display screen of the terminal at a preset moment, or a voice reminder is issued. In this way the user can be reminded to feed the pet, which prompts the user to use the reading APP to read cartoons or animations and increases the user's access volume of the reading APP.
In the embodiment of the invention, the virtual object can also interact with the user in a simple way; besides reminding the user to read, the virtual object can actively provide the user with question-answering or guidance services. For example, after the user finishes reading, the virtual object may prompt the user with the update date of the next chapter, say "looking forward to meeting you again in the next chapter", and the like.
FIG. 3 is a schematic view illustrating a process of creating a virtual pet according to a character in a cartoon according to an embodiment of the present invention; as shown in fig. 3, the method comprises the following steps:
step 301, when detecting that a user reads a cartoon in a current application, judging whether the user has 'fed' a virtual pet in the cartoon; having "housed" the virtual pet, step 304 is performed, and the virtual pet is not "housed," step 302 is performed.
Specifically, the terminal determines whether the user is performing cartoon reading in the current application by detecting user behavior. For example, when the user is detected to click a current homepage to enter the operation, the user is determined to perform cartoon reading in the current application; or when the user clicks the operation entered from the reading history record is detected, the user is determined to perform cartoon reading in the current application.
In addition, if a virtual pet has been "fed" in the application, a correspondence table between the pet identifier of the virtual pet and the user identifier or the cartoon identifier has been established in the application, so the terminal can judge whether a virtual pet has been "fed" in the application by looking up the correspondence table.
For example, the correspondence table between the user identifier and the pet identifier is queried to check whether a pet identifier corresponding to the user identifier exists. If so, it is determined that the user has already "fed" a pet in the cartoon, and the pet role does not need to be re-established; if not, it is determined that the user has not "fed" a pet in the cartoon, and a pet role in the cartoon can be established and "fed".
Alternatively, the correspondence table between the cartoon identifier and the pet identifier is queried to check whether a pet identifier corresponding to the cartoon identifier exists; if so, the pet role is not re-established; otherwise, a pet role in the cartoon is established and "fed".
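A minimal sketch of this check, using a hypothetical registry keyed by user identifier and cartoon identifier, is given below; the names and the branch labels are assumptions that simply mirror steps 302 and 304.

```kotlin
// Hypothetical sketch of step 301: look up the correspondence table to decide whether
// a virtual pet has already been "fed" for this user or this cartoon, then branch.

data class PetRecord(val petId: String)

class PetRegistry {
    private val byUser = mutableMapOf<String, PetRecord>()
    private val byCartoon = mutableMapOf<String, PetRecord>()

    fun register(userId: String, cartoonId: String, pet: PetRecord) {
        byUser[userId] = pet
        byCartoon[cartoonId] = pet
    }

    fun lookup(userId: String, cartoonId: String): PetRecord? =
        byUser[userId] ?: byCartoon[cartoonId]
}

fun onUserReadsCartoon(userId: String, cartoonId: String, registry: PetRegistry): String {
    val pet = registry.lookup(userId, cartoonId)
    return if (pet != null) {
        "step 304: perform growth maintenance on ${pet.petId}"   // a pet is already "fed"
    } else {
        "step 302: recommend cartoon roles to the user"          // no pet yet
    }
}

fun main() {
    val registry = PetRegistry()
    println(onUserReadsCartoon("user-1", "cartoon-1", registry))  // step 302 branch
    registry.register("user-1", "cartoon-1", PetRecord("pet-characterA"))
    println(onUserReadsCartoon("user-1", "cartoon-1", registry))  // step 304 branch
}
```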
Step 302: and recommending cartoon roles in the cartoon for the user.
Here, the terminal may further determine a user level by acquiring the user identifier, and recommend a cartoon role corresponding to the user level to the user according to the user level.
The terminal can also determine user preference according to the historical behavior data of the user by collecting the historical behavior data of the user, and then recommend cartoon roles to the user according to the user preference.
Step 303: the caricature role selected by the user is determined to be a virtual pet "housed" by the user in the caricature.
Specifically, according to the cartoon role selected by the user, the identification of the cartoon role is searched in a role database, when the cartoon model corresponding to the identification of the cartoon role is found, the cartoon model is called from the role database and activated, so that the cartoon model generates the virtual pet,
in addition, dynamic patterns can be set for the virtual pet, so that the virtual pet can be dynamically displayed in a display area of the terminal, and the viewing experience and the enjoyment of the user using the application are increased.
Or the virtual pet can be maintained in the background, so that the user can call and check the virtual pet at any time, a reading interface cannot be shielded, and the page turning of the cartoon or the playing of the cartoon cannot be influenced. Moreover, the reading experience of the user can be improved.
When a user wants to call out the virtual pet from the background, the user can right click on the current display interface to call out the list box shown in fig. 4, and click the pet in the list box, so that the terminal can be triggered to call out the virtual pet from the background.
In the embodiment of the invention, after the virtual pet to be "fed" is determined, a care reminder can be set for it, so that the user can be reminded to read when the user has not read the cartoon for a long time. For example, a clock timer may be set, and when it is detected that the user has not accessed the cartoon within 5 to 8 hours, the clock timer is triggered to initiate a reminder to the user. The reminder may be that the virtual pet pops up on the display interface of the terminal at a preset moment, or a voice reminder is issued. In this way, the user can be reminded to perform growth maintenance on the virtual pet, so that the user's enthusiasm for reading cartoons or animations is promoted by way of "feeding" the virtual pet in the cartoon, and the user's access volume is increased.
In addition, in the embodiment of the present invention, the virtual pet may also perform simple interaction with the user, for example, after the user finishes reading, prompting the user with the update date of the next chapter, saying "looking forward to meeting you again in the next chapter", and the like.
Step 304: and according to the behavior data of the user, the virtual pet is grown and maintained.
Here, the behavior data of the user includes at least: data of user reading behavior, user sharing behavior, user comment behavior or user appreciation behavior.
In an optimized embodiment, the currently "fed" virtual pet may be replaced when the user no longer likes it. Specifically, after it is determined that the user has already "fed" a virtual pet in the cartoon, the user is prompted whether to replace the current pet; if the user selects yes, the process jumps from the current display interface to step 302; otherwise, step 304 is executed.
This scheme provides a pet-raising scheme that can improve the user's reading or viewing experience: a cartoon or animation character is made into an in-APP pet, and the pet role grows through the user's reading, viewing, sharing, rewarding, commenting, and other behaviors. In this way, the user's reading experience is improved, and the user's interest and enthusiasm can be increased in the process of reading cartoons or animations. Reminding the user to feed the pet can also prompt the user to read the cartoon or animation, increasing the user's access volume.
Fig. 5 is a schematic composition diagram of an apparatus for processing a virtual object according to an embodiment of the present invention, where the apparatus includes: a determining unit 501, a calling unit 502 and a maintaining unit 503;
the determining unit 501 is configured to determine a role selected by a specific operation when the specific operation is detected, where the specific operation is a first touch operation performed on at least one object in content displayed on a currently displayed page;
the calling unit 502 is configured to call the virtual object corresponding to the determined role;
the maintenance unit 503 is configured to perform growth maintenance on the virtual object according to the behavior data of the user.
In the embodiment of the invention, the device can be a mobile phone, a tablet computer, a desktop computer and other terminals. The terminal is provided with a reading APP with a reading function. The reading APP carries the content of more than one electronic publication.
In the embodiment of the present invention, the apparatus further includes: a detection unit 504.
Specifically, when a user uses the terminal to read, browse, and watch the content of the electronic publication loaded by the reading APP, the terminal may trigger the detection unit 504 to detect the virtual object corresponding to the role in the electronic publication in the current application, and generate a detection result. When the detection result indicates that the detection unit 504 does not detect the virtual object in the current application, the detection unit 504 may detect whether a specific operation in the current application is triggered. When the detection unit 504 detects the specific operation, the determination unit 501 is triggered to determine the role selected by the specific operation.
In the embodiment of the present invention, the apparatus further includes: an extraction unit 505 and an output unit 506.
Specifically, the determining unit 501 determines the role selected by the specific operation, which can be implemented at least in one way:
the first method is as follows: the determination unit 501 determines a touch position where the specific operation is performed; the extracting unit 505 extracts a role identifier preset to correspond to the touch position; the determining unit 501 determines, according to the role identifier, a role matched with the role identifier as the role selected by the specific operation.
For example, when the user uses the terminal to read the content of cartoon 1 and particularly likes character A in cartoon 1, the user may select character A by double-clicking or long-pressing the picture in which character A appears. When the detection unit 504 detects the double-click or long-press operation, it detects the touch position where the operation is performed; the determination unit 501 determines that this touch position is the picture position of character A in the currently displayed picture; the extraction unit 505 is then triggered to extract the role identifier preset for that picture position (the name of character A or the name of a prop used by character A); the determination unit 501 then determines, according to the role identifier, that the role matched with the role identifier is character A, and takes character A as the role selected by the double-click or long-press operation. That is, the specific operation is currently the double-click or long-press operation.
Or, when the detection unit 504 detects the double-click or long-press operation, the output unit 506 is triggered to output a character adding window to the user, so that the user can determine whether to select the character again. The determining unit 501 determines the role selected by the user triggering a corresponding instruction on the role adding window.
For example, the role adding window includes an instruction icon "add" and an instruction icon "cancel". When the user clicks the instruction icon "add", the detecting unit 504 detects the role adding operation triggered for that icon, and the determining unit 501 determines character A, to which the role adding operation corresponds, as the role selected by the user. When the user clicks the instruction icon "cancel", the detection unit 504 detects the role cancel operation triggered for that icon, and the current role selection window is exited. That is, in this case the specific operation is the operation of the user clicking the corresponding instruction icon in the role adding window.
Alternatively, when the user reads the content of comic 1 on the terminal and particularly likes character A in comic 1, the user may perform a double-click or long-press operation at a non-picture position in the currently displayed page. When the detecting unit 504 detects this operation, the determining unit 501 is triggered to determine that the touch position of the double-click or long-press operation is a non-picture position in the current display screen, and determines, according to the content displayed on the current page, that the electronic publication currently being read is comic 1. The output unit 506 is then triggered to output a role list matching the content of comic 1 to the user, and the determining unit 501 determines the selected role through the selection operation performed by the user on a preset role identifier in the role list.
For example, the role list displays preset role identifiers including character A. When the user touches character A in the role list, the detection unit 504 detects the role adding instruction for character A triggered by the touch, and when the detection unit 504 detects the role adding instruction, the determination unit 501 determines character A as the role selected by the current user. That is, in this case the specific operation is the operation of the user clicking the corresponding role identifier in the role list.
In a second mode, the determining unit 501 determines the option of a role list on which the specific operation is performed, where the role list pops up after the detecting unit 504 detects an operation, performed by the user, for triggering a recommendation instruction; the operation for triggering the recommendation instruction specifically includes: the user opening the application, or the user performing a second touch operation at any position of the display screen. The determining unit 501 then determines, according to the role identifier corresponding to the option, the role matched with the role identifier as the role selected by the specific operation.
For example, when the user reads the content of comic 1 on the terminal and particularly likes character A in comic 1, the user may perform a click operation at any position in the currently displayed page. At this time, the detection unit 504 can detect the recommendation instruction triggered by the click operation, and when the recommendation instruction is detected, the output unit 506 is triggered to output a role list matching the content of comic 1 to the user. The determination unit 501 then determines the role selected by the user through the selection operation, detected by the detection unit 504, that the user performs on a preset role identifier in the role list.
For example, the role list displays preset role identifiers including character A. When the user touches character A in the role list, the detection unit 504 detects the role adding instruction for character A triggered by the touch, and the determination unit 501 then determines character A as the role selected by the current user. That is, in this case the specific operation is the operation of the user clicking the corresponding role identifier in the role list.
Here, the role identifier in the role list may be text corresponding to a role in the electronic publication, or may be a picture corresponding to a role in the electronic publication. The specific display form of the role list is as described in the above method description with reference to fig. 2.
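For the second mode, resolving the selected option of the popped-up role list to a role reduces to a lookup by role identifier. The sketch below is purely illustrative; role_for_option and the sample role list are assumed names and data, and the identifier could equally be a picture rather than text.

```python
# Role list popped up after the recommendation instruction is detected.
# Each option pairs a role identifier with the role it stands for.
role_list = [
    {"option": 0, "role_id": "character_A", "role": "Character A"},
    {"option": 1, "role_id": "character_B", "role": "Character B"},
]

def role_for_option(options: list[dict], selected_option: int) -> str:
    """Determine the role matched with the role identifier of the selected option."""
    for entry in options:
        if entry["option"] == selected_option:
            return entry["role"]
    raise ValueError("option not in role list")

print(role_for_option(role_list, 0))  # -> "Character A"
```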
In the embodiment of the present invention, the role identifiers included in the role list are determined in the following manners:
specifically, the apparatus further comprises: a screening unit 507 and an acquisition unit 508.
In a first manner, after the detecting unit 504 detects the recommendation instruction, the determining unit 501 determines a user level according to a user identifier of a current user; the screening unit 507 screens the role identifier corresponding to the user level, and adds the role identifier to the role list.
Here, the user identifier may be the user account and/or user nickname used by the user when registering the reading APP, or a random account allocated to the user by the background server of the reading APP when the user uses the reading APP.
In this embodiment of the present invention, when the detecting unit 504 detects that the user logs in to the current application, the obtaining unit 508 may obtain the user identifier of the user from a location where user information is managed in the current application (for example, a user center), and obtain the user level corresponding to the user identifier at the same time. In particular, the user level is generated when the user identifier is generated, and is upgraded as the usage duration of the user identifier increases. That is, when the user uses the reading APP for the first time, the user level is the lowest level, and as the frequency and duration of reading with the reading APP increase, the user level gradually rises. For example, the user level may be divided into a first level, a second level, and a third level, where the first level is lower than the second level and the second level is lower than the third level. When the terminal detects that the user uses the reading APP for the first time, the user level corresponding to the user identifier is the first level; when the terminal detects that the cumulative duration of the user's use of the reading APP reaches 20 hours, the user level is promoted from the first level to the second level; when the cumulative duration reaches 40 hours, the user level is promoted from the second level to the third level; and so on, up to the highest user level.
In the embodiment of the invention, the background server of the reading APP can grade the roles in the electronic publication according to a role preset rule. For example, the role preset rule ranks roles according to how central they are: the first male lead, the first female lead, the second male lead, the second female lead, ..., then the first male supporting role, the first female supporting role, the second male supporting role, the second female supporting role, the third male supporting role, the third female supporting role, ..., in order. The level of a role in the electronic publication can also be determined by collecting comment data of all users on the roles in the electronic publication.
Here, when the user level is the first level, the preset role identifiers output by the terminal for the user may be the third male supporting role, the third female supporting role, and so on. That is, when the user level is the lowest, the user can only select a target role of interest as the virtual pet to be "kept" from the role identifiers at or below the third supporting roles. When the user level is the second level, the preset role identifiers output by the terminal for the user may be the second male supporting role, the second female supporting role, the third male supporting role, the third female supporting role, and so on; that is, a medium-level user can only select a target role of interest from the role identifiers at or below the second supporting roles. When the user level is the third level, the preset role identifiers output by the terminal for the user may be the first male lead, the first female lead, the second male lead, the second female lead, ..., the first male supporting role, the first female supporting role, the second male supporting role, the second female supporting role, the third male supporting role, the third female supporting role, and so on; that is, when the user level is the highest, the terminal outputs all the preset role identifiers in the electronic publication, and the user can select any role of interest as the virtual pet to be "kept".
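The level thresholds (20 and 40 hours) and the lead/supporting ranking described above can be restated as a small sketch; user_level, ROLE_DB and roles_for_level are hypothetical names, and a real implementation would read these values from the background server.

```python
def user_level(cumulative_hours: float) -> int:
    """Level 1 on first use, level 2 after 20 hours, level 3 after 40 hours."""
    if cumulative_hours >= 40:
        return 3
    if cumulative_hours >= 20:
        return 2
    return 1

# Role database: (role identifier, rank), where a lower rank means a more central role.
ROLE_DB = [
    ("first_male_lead", 1), ("first_female_lead", 1),
    ("second_male_lead", 2), ("second_female_lead", 2),
    ("first_male_supporting", 3), ("first_female_supporting", 3),
    ("second_male_supporting", 4), ("second_female_supporting", 4),
    ("third_male_supporting", 5), ("third_female_supporting", 5),
]

# Minimum rank a user of each level may pick from (level 3 unlocks everything).
MIN_RANK_FOR_LEVEL = {1: 5, 2: 4, 3: 1}

def roles_for_level(level: int) -> list[str]:
    """Screen out the role identifiers that a user of the given level may choose from."""
    return [role_id for role_id, rank in ROLE_DB if rank >= MIN_RANK_FOR_LEVEL[level]]

print(user_level(25))                   # -> 2
print(roles_for_level(user_level(25)))  # second and third supporting roles only
```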
For example, when the detecting unit 504 detects the recommendation instruction triggered by the user, the determining unit 501 determines that the user level is a middle level according to the user identifier of the current user, and the terminal triggers the screening unit 507 to screen the role identifier corresponding to the middle level user from the role database and add the role identifier to the role list.
In a second manner, after the detection unit 504 detects the recommendation instruction, the determination unit 501 determines the user preference according to the historical behavior data of the current user; the screening unit 507 screens out the role identifiers corresponding to the user preference and adds them to the role list.
Here, the historical behavior data of the user includes at least: comment behavior data generated when the user comments on roles in the electronic publication, and page turning behavior data generated when the user turns pages during reading. The page turning behavior data includes page turning speed and page turning frequency.
For example, when the historical behavior data of the user is comment behavior data, the extraction unit 505 extracts keywords from the comment behavior data so that the determining unit 501 can determine the role in which the user is interested. For example, the keyword is the name of a role in the electronic publication, or the name of an exclusive prop corresponding to a role in the electronic publication. When the determining unit 501 determines the role in which the user is interested, the screening unit 507 is triggered to screen out the role identifier corresponding to that role and add it to the role list.
For example, when the historical behavior data of the user is page turning behavior data, the terminal triggers the extracting unit 505 to extract the page turning speed data or page turning frequency data of each page from the page turning behavior data, and the determining unit 501 determines a page whose page turning speed or page turning frequency is smaller than a preset threshold as a page in which the user is interested. The extracting unit 505 is then triggered to extract keywords from that page so that the determining unit 501 can determine the role in which the user is interested. For example, the keyword is the name of a role in the electronic publication, or the name of an exclusive prop corresponding to a role in the electronic publication. When the determining unit 501 determines the role in which the user is interested, the screening unit 507 is triggered to screen out the corresponding role identifier and add it to the role list.
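Both preference signals described above (keyword extraction from comments, and slow page turning) can be sketched as follows. The names roles_from_comments, interesting_pages and the keyword table are assumptions for illustration, and a low page-turning speed is modelled here simply as a long dwell time per page.

```python
# Hypothetical keyword table: role identifier -> role name and exclusive prop name.
ROLE_KEYWORDS = {
    "character_A": ["Character A", "Character A's sword"],
    "character_B": ["Character B"],
}

def roles_from_comments(comments: list[str]) -> set[str]:
    """Infer roles of interest by extracting role/prop keywords from comment text."""
    hits = set()
    for text in comments:
        for role_id, keywords in ROLE_KEYWORDS.items():
            if any(kw.lower() in text.lower() for kw in keywords):
                hits.add(role_id)
    return hits

def interesting_pages(dwell_seconds_per_page: dict[int, float], min_dwell_seconds: float) -> list[int]:
    """Pages the user lingered on (i.e. turned slowly) are treated as pages of interest;
    their keywords are then extracted the same way as comment text."""
    return [page for page, seconds in dwell_seconds_per_page.items()
            if seconds > min_dwell_seconds]

print(roles_from_comments(["Character A is the best!"]))                    # {'character_A'}
print(interesting_pages({1: 3.0, 2: 45.0, 3: 5.0}, min_dwell_seconds=30.0))  # [2]
```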
In the embodiment of the present invention, the apparatus further includes: a lookup unit 509.
Specifically, after the determining unit 501 determines the role selected by the user, the searching unit 509 is triggered to search the role database for the role model matching the role, and when the searching unit 509 finds the role model, the invoking unit 502 is triggered to invoke and activate the role model, so that the role model generates the virtual object.
For example, suppose the user reads the cartoon "celebrating korea" in the reading APP on the terminal and likes a particular character in it. The user can select that character by clicking or long-pressing the picture of the person in the currently displayed picture. When the determination unit 501 determines, according to the operation triggered by the user, that this character is the selected role, the search unit 509 is triggered to search the role database for the role model corresponding to the character; when the searching unit 509 finds the role model, the invoking unit 502 is triggered to invoke the role model from the role database and activate it, so that the role model generates a virtual character corresponding to that role.
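A minimal sketch of the search-then-activate flow, assuming hypothetical classes RoleDatabase, RoleModel and VirtualObject (the patent does not specify these interfaces):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualObject:
    role_id: str
    growth_level: int = 1

@dataclass
class RoleModel:
    role_id: str

    def activate(self) -> VirtualObject:
        # Activating the model makes it generate the virtual character ("virtual pet").
        return VirtualObject(role_id=self.role_id)

class RoleDatabase:
    def __init__(self, models: dict[str, RoleModel]):
        self._models = models

    def find(self, role_id: str) -> Optional[RoleModel]:
        return self._models.get(role_id)

def invoke_virtual_object(db: RoleDatabase, role_id: str) -> Optional[VirtualObject]:
    """Search the role database for the model matching the selected role and activate it."""
    model = db.find(role_id)
    return model.activate() if model else None

db = RoleDatabase({"character_A": RoleModel("character_A")})
print(invoke_virtual_object(db, "character_A"))  # VirtualObject(role_id='character_A', growth_level=1)
```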
In the embodiment of the present invention, the apparatus further includes: a configuration unit 510;
specifically, after the terminal activates the role model, the configuration unit 510 is triggered to configure a preset display effect for the generated virtual character, and the output unit 506 outputs the virtual character in a specific display area of the reading APP according to the preset display effect. For example, the specific display area may be the lower left corner, lower right corner, upper left corner, or upper right corner of the display area of the terminal, and the position of the specific area may be set as required so as not to affect the user's reading.
In the embodiment of the present invention, the apparatus further includes: a building unit 511;
specifically, when the role model is activated and generates a virtual object, the obtaining unit 508 may further obtain a virtual object identifier matching the virtual object, and trigger the establishing unit 511 to establish a correspondence between the user identifier and the virtual object identifier, or between the role identifier and the virtual object identifier, and to generate a correspondence table. In this way, the background server of the reading APP can find, through the correspondence table, the association between all users reading a certain electronic publication and the corresponding virtual objects, so that the popularity of the roles in each electronic publication in the reading APP can be evaluated and analyzed, which provides favorable conditions for subsequent use of these roles. In addition, when the user uses the application to read content, the terminal can also detect, according to the correspondence table, whether a virtual object corresponding to a role in the electronic publication already exists in the current application.
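A minimal sketch of such a correspondence table, under the assumption that it is kept as a simple in-memory structure; the class and method names are invented for illustration.

```python
class CorrespondenceTable:
    """Maps user identifiers and role identifiers to virtual object identifiers."""

    def __init__(self):
        # Each entry records which user "keeps" which virtual object for which role.
        self.bindings: list[tuple[str, str, str]] = []  # (user_id, role_id, virtual_object_id)

    def bind(self, user_id: str, role_id: str, virtual_object_id: str) -> None:
        self.bindings.append((user_id, role_id, virtual_object_id))

    def has_virtual_object(self, user_id: str) -> bool:
        # Lets the terminal check whether a virtual object already exists for this user.
        return any(u == user_id for u, _, _ in self.bindings)

    def role_popularity(self) -> dict[str, int]:
        # How many users keep a virtual object of each role; usable by the backend
        # to evaluate the popularity of roles across the electronic publications.
        counts: dict[str, int] = {}
        for _, role_id, _ in self.bindings:
            counts[role_id] = counts.get(role_id, 0) + 1
        return counts

table = CorrespondenceTable()
table.bind("user_001", "character_A", "pet_42")
print(table.has_virtual_object("user_001"))  # True
print(table.role_popularity())               # {'character_A': 1}
```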
In this embodiment of the present invention, when the maintenance unit 503 performs growth maintenance on the virtual object, the detection unit 504 may further detect a user behavior implemented for the electronic publication, generate a detection result, and when the detection result represents that the user behavior is detected, the terminal triggers the acquisition unit 508 to acquire virtual growth information corresponding to the user behavior. The maintenance unit 503 performs growth maintenance on the virtual object according to the virtual growth information.
Wherein the user behavior includes: reading behavior, reward behavior, sharing behavior, watching behavior, and commenting behavior.
For example, when determining that the behavior implemented by the user is a reading behavior for the electronic publication according to the behavior data of the user, the determining unit 501 triggers the acquiring unit 508 to acquire the first virtual growth information corresponding to the reading behavior. Then, the maintenance unit 503 performs growth maintenance on the virtual object according to the first virtual growth information.
Here, the first virtual growth information may be "nutrient addition", that is, the virtual object needs to be fed to raise its growth value by one level.
For example, when the terminal detects that the duration of the user's current cartoon reading exceeds 2 hours, the terminal pops up the first virtual growth information to remind the user to add "food" and "water" to the virtual pet, so that the virtual pet quickly grows by one level in a short time.
When the determining unit 501 determines, according to the behavior data of the user, that the behavior implemented by the user is a reward behavior for the electronic publication, the obtaining unit 508 is triggered to obtain second virtual growth information corresponding to the reward behavior, and the maintenance unit 503 performs growth maintenance on the virtual object according to the second virtual growth information.
Here, the second virtual growth information may be "physical examination", that is, a health examination may currently be performed on the virtual object as required, so as to provide suggestions for its growth, such as suggesting that the virtual object drink more water, eat more food, exercise more, rest more, and the like;
when the determining unit 501 determines that the behavior implemented by the user is a comment behavior on the electronic publication according to the behavior data of the user, the obtaining unit 508 is triggered to obtain third virtual growth information corresponding to the comment behavior; the maintenance unit 503 performs growth maintenance on the virtual object according to the third virtual growth information.
Here, the third virtual growth information may be "beauty care", that is, the virtual object may currently be groomed, for example by trimming its hair, putting on a collar, changing its clothes, or changing its hat.
For example, when the terminal detects that the user comments on the currently read cartoon, the terminal pops up the third virtual growth information to remind the user to perform beauty care on the virtual pet, for example by putting a collar on the pet or changing its hat.
When the determining unit 501 determines, according to the behavior data of the user, that the behavior implemented by the user is a sharing behavior for the electronic publication, the acquiring unit 508 is triggered to acquire fourth virtual growth information corresponding to the sharing behavior, and growth maintenance is performed on the virtual object according to the fourth virtual growth information.
Here, the fourth virtual growth information may be "physical exercise", that is, the virtual object may currently be exercised, for example by playing football, skipping rope, or running.
For example, when the terminal detects that the user shares the currently read cartoon, the terminal pops up the fourth virtual growth information to remind the user to have the virtual pet do physical exercise.
In the embodiment of the invention, the correspondence between user behaviors and the growth maintenance performed on the virtual object can be interchanged. For example, the virtual growth information corresponding to the reward behavior may instead be "add food", and the virtual growth information corresponding to the sharing behavior may be "beauty care". Which user behavior triggers which kind of growth maintenance on the virtual object is determined according to the user's settings.
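Since the exact pairing is configurable, the default mapping from the examples above can be sketched as a user-overridable dictionary (the names below are illustrative, not the patent's API).

```python
from typing import Optional

# Default mapping from user behavior to virtual growth information,
# matching the examples above; the user may override any entry.
DEFAULT_GROWTH_INFO = {
    "reading": "nutrient addition",     # first virtual growth information
    "reward":  "physical examination",  # second virtual growth information
    "comment": "beauty care",           # third virtual growth information
    "sharing": "physical exercise",     # fourth virtual growth information
}

def growth_info_for(behavior: str, user_overrides: Optional[dict[str, str]] = None) -> str:
    """Return the virtual growth information to apply for a detected user behavior."""
    mapping = {**DEFAULT_GROWTH_INFO, **(user_overrides or {})}
    return mapping[behavior]

# A user who swapped two of the pairings, as the text allows:
print(growth_info_for("sharing", {"sharing": "beauty care", "comment": "physical exercise"}))
# -> "beauty care"
```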
In the embodiment of the present invention, the virtual growth information may be popped up directly as text, or given as a voice prompt, for example a voice saying "your pet is out of water", "your pet should exercise", or similar recommendations.
By having the user perform growth maintenance on the virtual pet in the reading APP, the user's reading experience can be improved, and the user's interest and enthusiasm can also be increased while reading cartoons or animations.
In the embodiment of the present invention, a place for growth maintenance of the virtual object may also be selected.
For example, when the detecting unit 504 detects that the user is reading content, the outputting unit 506 may be triggered to output a maintenance list of the virtual object to the user, and the determining unit 501 determines the growth maintenance place of the virtual object according to the scene icon, preset in the maintenance list, that the user selects.
Here, the scene icon may be a character or a picture representing a scene.
For example, the scene icons in the maintenance list include: background maintenance and display interface maintenance.
When the user selects the icon corresponding to "background maintenance" in the maintenance list, the terminal places the virtual object in a background scene of the reading APP for growth maintenance; when the user selects the icon corresponding to "display interface maintenance" in the maintenance list, the terminal places the virtual object in the current display interface scene of the reading APP for growth maintenance.
In the embodiment of the present invention, when the user chooses to perform growth maintenance on the virtual object on the display interface, a selection window for the display mode of the virtual object may be output to the user, so that the user can select the display mode. For example, the user can select whether the virtual object is displayed semi-transparently or opaquely on the current display interface.
In the embodiment of the present invention, the user may also click or long-press the virtual object in the current interface; when the terminal detects that the user clicks or long-presses the virtual object, the personal information of the virtual object is output so that the user can view its current status, such as: growth level, name, skill, weight, height, physical condition, appearance, and so on.
In addition, in embodiments of the present invention, the user may "keep" a separate virtual object in each of multiple different electronic publications, or may "keep" the same pet across multiple different electronic publications.
For example, the user "eats" pet 1 in caricature 1, pet 2 in caricature 2, and pet 3 in caricature 3; alternatively, the user "eats" the pet 1 in all of caricature 1, caricature 2, and caricature 3.
In the embodiment of the present invention, the apparatus further includes: a statistic unit 512;
when the detection unit 504 does not detect the user behavior within a preset duration, it represents that the current user does not access the reading APP. At this time, the terminal triggers the counting unit 512 to count the duration of the time when the current detection unit 504 detects the user behavior and the last time when the detection unit 504 detects the user behavior; judging whether the duration reaches a growth maintenance duration set for the virtual object; when the determining unit 501 determines that the duration reaches the growth maintenance duration, it triggers the output unit 506 to output a prompt message for performing growth maintenance on the virtual object.
For example, if the terminal does not detect the user behavior and the current time is 4 pm on 10/18/2017, the terminal acquires the time at which the user behavior was last detected before the current time, for example 8 am on 10/18/2017. The duration between 8 am and 4 pm on 10/18/2017, namely 8 hours, is counted, and it is judged whether this duration reaches the growth maintenance duration set for the virtual object, for example 6 hours. Since 8 is greater than 6, it is determined that the duration reaches the growth maintenance duration, and prompt information for performing growth maintenance on the virtual object is output.
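The timing check in this example reduces to a single comparison; the sketch below reproduces the 8-hour versus 6-hour case using only the standard library (should_remind is a hypothetical name).

```python
from datetime import datetime, timedelta

def should_remind(last_behavior_at: datetime, now: datetime,
                  growth_maintenance_duration: timedelta) -> bool:
    """Remind the user to maintain the virtual object when the time since the last
    detected user behavior reaches the growth maintenance duration set for it."""
    return (now - last_behavior_at) >= growth_maintenance_duration

last_seen = datetime(2017, 10, 18, 8, 0)   # last detected user behavior: 8 am, 10/18/2017
now = datetime(2017, 10, 18, 16, 0)        # current time: 4 pm, 10/18/2017
print(should_remind(last_seen, now, timedelta(hours=6)))  # 8 hours >= 6 hours -> True
```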
The prompt information may be the virtual object popping up on the display screen of the terminal at a preset moment, or a voice reminder. In this way, the user can be reminded to feed the pet, which in turn prompts the user to use the reading APP to read cartoons or animations and increases the user's access volume of the reading APP.
In the embodiment of the invention, the virtual object can also interact with the user in simple ways: besides reminding the user to read, it can actively provide help or guidance services for the user. For example, after the user finishes reading, the virtual object may prompt the user with the update date of the next chapter, say "looking forward to meeting you again in the next chapter", and the like.
In the embodiment of the invention, by having the user "keep" the virtual pet generated from a role in the reading APP, the user can be prompted to access the reading APP frequently and to use it for reading and watching cartoons and animations, thereby increasing the user's interest and enthusiasm in using the reading APP, improving the user access volume of the reading APP, and providing a certain commercial value.
An embodiment of the present invention further provides another apparatus for processing a virtual object, where the apparatus includes: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to, when running the computer program, execute: when a specific operation is detected, determining a role selected by the specific operation, where the specific operation is a first touch operation performed on at least one object in the content displayed on the currently displayed page;
calling a virtual object corresponding to the determined role;
and performing growth maintenance on the virtual object according to the behavior data of the user.
The processor, when running the computer program, further executes: determining a touch position where the specific operation is performed;
extracting a role identifier preset to correspond to the touch position;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification.
The processor, when running the computer program, further executes: determining an option of a role list implemented with the specific operation, wherein the role list pops up after detecting an operation implemented by a user for triggering a recommendation instruction, and the operation for triggering the recommendation instruction specifically includes: the user opens the application or the user performs a second touch operation at any position of the display screen;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification corresponding to the option.
The processor, when running the computer program, further executes: after the recommendation instruction is detected, determining the user level according to the user identification of the current user;
screening role identifications corresponding to the user levels, and adding the role identifications to the role list; or,
after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
and screening the role identification corresponding to the user preference, and adding the role identification into the role list.
The processor, when running the computer program, further executes: searching a virtual model corresponding to the role;
and when the virtual model corresponding to the role is found, calling the virtual model to activate the virtual model so that the virtual model generates the virtual object.
The processor, when running the computer program, further executes: according to the behavior data of the user, when the fact that the user carries out a reading behavior is determined, first virtual growth information corresponding to the reading behavior is obtained;
performing growth maintenance on the virtual object according to the first virtual growth information;
or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior;
performing growth maintenance on the virtual object according to the second virtual growth information;
or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
and performing growth maintenance on the virtual object according to the third virtual growth information.
The processor, when running the computer program, further executes: and if the user behavior aiming at the current application is not detected within the preset duration, sending out prompt information for prompting the user to carry out growth maintenance on the virtual object.
Fig. 6 is a schematic structural diagram of a processing apparatus for a virtual object according to another embodiment of the present invention, and the processing apparatus 600 for a virtual object may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver, a game console, a tablet device, a personal digital assistant, an information push server, a content server, or the like. The processing apparatus 600 of the virtual object shown in fig. 6 includes: at least one processor 601, memory 602, at least one network interface 604, and a user interface 603. The various components in the processing device 600 of the virtual object are coupled together by a bus system 605. It is understood that the bus system 605 is used to enable communications among the components. The bus system 605 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 605 in fig. 6.
The user interface 603 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 602 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 602 described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 602 in embodiments of the present invention is used to store various types of data to support the operation of the processing device 600 of virtual objects. Examples of such data include: any computer program for operating on the processing device 600 of virtual objects, such as an operating system 6021 and application programs 6022; music data; animation data; book information; video; and the like. The operating system 6021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 6022 may include various applications, such as a media player (MediaPlayer) and a browser (Browser), for implementing various application services. A program implementing the method of an embodiment of the invention may be included in the application programs 6022.
The method disclosed by the above-mentioned embodiment of the present invention can be applied to the processor 601, or implemented by the processor 601. The processor 601 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601. The Processor 601 may be a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 601 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 602, and the processor 601 reads the information in the memory 602 and performs the steps of the aforementioned methods in conjunction with its hardware.
In an exemplary embodiment, the processing Device 600 of the virtual object may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, Micro Controllers (MCUs), microprocessors (microprocessors), or other electronic components for performing the foregoing methods.
In an exemplary embodiment, the embodiment of the present invention further provides a computer readable storage medium, for example, a memory 602 including a computer program, which is executable by a processor 601 of a processing apparatus 600 of a virtual object to perform the steps of the foregoing method. The computer readable storage medium can be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM; or may be a variety of devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, personal digital assistant, etc.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, performs: when a specific operation is detected, determining a role selected by the specific operation, where the specific operation is a first touch operation performed on at least one object in the content displayed on the currently displayed page;
calling a virtual object corresponding to the determined role;
and performing growth maintenance on the virtual object according to the behavior data of the user.
The computer program, when executed by the processor, further performs: determining a touch position where the specific operation is performed;
extracting a role identifier preset to correspond to the touch position;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification.
The computer program, when executed by the processor, further performs: determining an option of a role list implemented with the specific operation, wherein the role list pops up after detecting an operation implemented by a user for triggering a recommendation instruction, and the operation for triggering the recommendation instruction specifically includes: the user opens the application or the user performs a second touch operation at any position of the display screen;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification corresponding to the option.
The computer program, when executed by the processor, further performs: after the recommendation instruction is detected, determining the user level according to the user identification of the current user;
screening role identifications corresponding to the user levels, and adding the role identifications to the role list; or,
after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
and screening the role identification corresponding to the user preference, and adding the role identification into the role list.
The computer program, when executed by the processor, further performs: searching a virtual model corresponding to the role;
and when the virtual model corresponding to the role is found, calling the virtual model to activate the virtual model so that the virtual model generates the virtual object.
The computer program, when executed by the processor, further performs: according to the behavior data of the user, when the fact that the user carries out a reading behavior is determined, first virtual growth information corresponding to the reading behavior is obtained;
performing growth maintenance on the virtual object according to the first virtual growth information;
or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior;
performing growth maintenance on the virtual object according to the second virtual growth information;
or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
and performing growth maintenance on the virtual object according to the third virtual growth information.
The computer program, when executed by the processor, further performs: and if the user behavior aiming at the current application is not detected within the preset duration, sending out prompt information for prompting the user to carry out growth maintenance on the virtual object.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (16)

1. A method for processing a virtual object, the method comprising:
when a specific operation is detected, determining a role selected by the specific operation, wherein the specific operation is a first touch operation performed on at least one object in the content displayed on the currently displayed page; the content displayed on the currently displayed page is an electronic publication currently read by a user, and the electronic publication is carried in a reading application program;
calling a virtual object corresponding to the determined role; the virtual object is a virtual object corresponding to a role in the electronic publication;
and performing growth maintenance on the virtual object according to the behavior data of the user.
2. The method of claim 1, wherein the determining the role selected by the particular operation comprises:
determining a touch position where the specific operation is performed;
extracting a role identifier preset to correspond to the touch position;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification.
3. The method of claim 1, wherein the determining the role selected by the particular operation comprises:
determining an option of a role list implemented with the specific operation, wherein the role list pops up after detecting an operation implemented by a user for triggering a recommendation instruction, and the operation for triggering the recommendation instruction specifically includes: the user opens the application or the user performs a second touch operation at any position of the display screen;
and determining the role matched with the role identification as the role selected by the specific operation according to the role identification corresponding to the option.
4. The method according to claim 3, wherein the role identifier included in the role list is determined by:
after the recommendation instruction is detected, determining the user level according to the user identification of the current user;
screening role identifications corresponding to the user levels, and adding the role identifications to the role list; or,
after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
and screening the role identification corresponding to the user preference, and adding the role identification into the role list.
5. The method of claim 1, wherein invoking the virtual object corresponding to the determined character comprises:
searching a virtual model corresponding to the role;
and when the virtual model corresponding to the role is found, calling the virtual model to activate the virtual model so that the virtual model generates the virtual object.
6. The method of claim 1, wherein performing growth maintenance on the virtual object according to the behavior data of the user comprises at least one of:
according to the behavior data of the user, when the fact that the user carries out a reading behavior is determined, first virtual growth information corresponding to the reading behavior is obtained;
performing growth maintenance on the virtual object according to the first virtual growth information;
or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior;
performing growth maintenance on the virtual object according to the second virtual growth information;
or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
and performing growth maintenance on the virtual object according to the third virtual growth information.
7. The method of claim 1, further comprising:
and if the user behavior aiming at the current application is not detected within the preset duration, sending out prompt information for prompting the user to carry out growth maintenance on the virtual object.
8. An apparatus for processing a virtual object, the apparatus comprising: the device comprises a determining unit, a calling unit and a maintaining unit;
the determining unit is configured to determine a role selected by a specific operation when the specific operation is detected, where the specific operation is a first touch operation performed on at least one object in content displayed on a currently displayed page; the content displayed on the current display page is an electronic publication currently read by a user, and the electronic publication is borne in a reading application program;
the calling unit is used for calling the virtual object corresponding to the determined role; the virtual object is a virtual object corresponding to a role in the electronic publication;
and the maintenance unit is used for carrying out growth maintenance on the virtual object according to the behavior data of the user.
9. The apparatus of claim 8, further comprising: an extraction unit;
the determining unit is further configured to determine a touch position where the specific operation is performed, and specifically, determine, according to the role identifier extracted by the extracting unit, a role matched with the role identifier as a role selected by the specific operation;
the extracting unit is used for extracting the role identification corresponding to the preset touch position.
10. The apparatus according to claim 8, wherein the determining unit is further configured to determine an option of a role list implemented with the specific operation, where the role list is popped up after detecting an operation implemented by a user for triggering a recommendation instruction, and the operation for triggering a recommendation instruction specifically includes: the user opens the application or the user performs a second touch operation at any position of the display screen; and the role matching module is specifically further used for determining the role matched with the role identifier as the role selected by the specific operation according to the role identifier corresponding to the option.
11. The apparatus of claim 10, further comprising: a screening unit and an adding unit;
the determining unit is further configured to determine a user level according to a user identifier of a current user after the recommending instruction is detected; or after the recommendation instruction is detected, determining user preference according to historical behavior data of the current user;
the screening unit is used for screening the role identification corresponding to the user level; or screening role identifiers corresponding to the user preferences;
and the adding unit is used for adding the role identifier which is screened by the screening unit and corresponds to the user level or the role identifier which corresponds to the user preference into the role list.
12. The apparatus of claim 8, further comprising: a search unit;
the searching unit is used for searching the virtual model corresponding to the role;
the retrieving unit is specifically configured to, when the searching unit finds the virtual model corresponding to the role, retrieve the virtual model to activate the virtual model, so that the virtual model generates the virtual object.
13. The apparatus of claim 8, further comprising:
the acquisition unit is used for acquiring first virtual growth information corresponding to the reading behavior when the fact that the reading behavior is implemented by the user is determined according to the behavior data of the user; or according to the behavior data of the user, when the user is determined to implement the reward behavior, acquiring second virtual growth information corresponding to the reward behavior; or, according to the behavior data of the user, when the fact that the user implements the comment behavior is determined, third virtual growth information corresponding to the comment behavior is obtained;
the maintenance unit is specifically configured to perform growth maintenance on the virtual object according to the first virtual growth information, or according to the second virtual growth information, or according to the third virtual growth information.
14. The apparatus of claim 8, further comprising:
and the output unit is used for sending prompt information for prompting a user to perform growth maintenance on the virtual object if the user behavior aiming at the current application is not detected within the preset duration.
15. An apparatus for processing a virtual object, the apparatus comprising: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor, when executing the computer program, is configured to perform the steps of the method of any of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201711015768.6A 2017-10-25 2017-10-25 Virtual object processing method and device and storage medium Active CN107728895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711015768.6A CN107728895B (en) 2017-10-25 2017-10-25 Virtual object processing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN107728895A CN107728895A (en) 2018-02-23
CN107728895B true CN107728895B (en) 2020-09-01

Family

ID=61212862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711015768.6A Active CN107728895B (en) 2017-10-25 2017-10-25 Virtual object processing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN107728895B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767182A (en) * 2017-10-31 2018-03-06 深圳春沐源控股有限公司 A kind of method and system virtually planted
CN110489680B (en) * 2018-05-11 2023-04-21 腾讯科技(深圳)有限公司 Network resource display method and device
CN109299355B (en) * 2018-08-09 2020-12-04 咪咕数字传媒有限公司 Recommended book list display method and device and storage medium
CN109513209B (en) * 2018-11-22 2020-04-17 网易(杭州)网络有限公司 Virtual object processing method and device, electronic device and storage medium
CN111290729A (en) * 2018-12-07 2020-06-16 阿里巴巴集团控股有限公司 Man-machine interaction method, device and system
CN111784271B (en) * 2019-04-04 2023-09-19 腾讯科技(深圳)有限公司 User guiding method, device, equipment and storage medium based on virtual object
CN111815419B (en) * 2020-07-17 2023-09-15 网易(杭州)网络有限公司 Recommendation method and device for virtual commodity in game and electronic equipment
CN112102662B (en) * 2020-08-11 2023-05-26 苏州承儒信息科技有限公司 Intelligent network education method and system based on virtual pet cultivation
CN111930287B (en) * 2020-08-12 2022-02-18 广州酷狗计算机科技有限公司 Interaction method and device based on virtual object, electronic equipment and storage medium
CN114217689A (en) * 2020-09-04 2022-03-22 本田技研工业(中国)投资有限公司 Control method of virtual role, vehicle-mounted terminal and server
CN114895970B (en) * 2021-01-26 2024-02-27 博泰车联网科技(上海)股份有限公司 Virtual character growth method and related device
CN113407850B (en) * 2021-07-15 2022-08-26 北京百度网讯科技有限公司 Method and device for determining and acquiring virtual image and electronic equipment
CN118051291A (en) * 2024-02-23 2024-05-17 北京字跳网络技术有限公司 Session interface display method and device, electronic equipment, storage medium and program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103100203A (en) * 2011-11-14 2013-05-15 索尼公司 Identification apparatus, control apparatus, identification method, program, and identification system
CN105487677A (en) * 2016-01-25 2016-04-13 深圳市宜康达信息技术有限公司 Child behavior and motion trail monitoring system and method
CN105763946A (en) * 2016-01-29 2016-07-13 广州酷狗计算机科技有限公司 Anchor online prompting method and device

Also Published As

Publication number Publication date
CN107728895A (en) 2018-02-23

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant