
CN106817349B - Method and device for enabling communication interface to generate animation effect in communication process - Google Patents

Method and device for enabling communication interface to generate animation effect in communication process

Info

Publication number
CN106817349B
CN106817349B (application CN201510859600.8A)
Authority
CN
China
Prior art keywords
user
communication
communication interface
avatar
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510859600.8A
Other languages
Chinese (zh)
Other versions
CN106817349A (en)
Inventor
陈军宏
张培养
吴智华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Black Mirror Technology Co Ltd
Original Assignee
Xiamen Black Mirror Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Black Mirror Technology Co ltd filed Critical Xiamen Black Mirror Technology Co ltd
Priority to CN201510859600.8A priority Critical patent/CN106817349B/en
Priority to PCT/CN2016/076590 priority patent/WO2017092194A1/en
Publication of CN106817349A publication Critical patent/CN106817349A/en
Application granted granted Critical
Publication of CN106817349B publication Critical patent/CN106817349B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/26 Devices for calling a subscriber
    • H04M1/27 Devices whereby a plurality of signals may be stored simultaneously
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42136 Administration or customisation of services

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a method and a device for generating an animation effect on a communication interface during communication. The method comprises the following steps: receiving content generated by a user based on the communication during the communication process; identifying that content and generating a corresponding state change instruction from it; and modifying the state of the user information on the communication interface according to the state change instruction, so that the communication interface produces an animation effect. By applying these embodiments, a user's communication interface can produce animation effects corresponding to the communication content exchanged between users during communication.

Description

Method and device for enabling communication interface to generate animation effect in communication process
Technical Field
The present application relates to the field of communications, and in particular, to a method and an apparatus for generating an animation effect on a communication interface during a communication process.
Background
With the rapid development of communication technology and communication equipment, the ways people exchange information have become increasingly diverse. Originally, people could only make simple voice calls through a communication terminal; later, new modes such as text communication and video calls emerged, and in some instant messaging software users can even freely choose the communication mode best suited to the current situation. These communication modes make exchanging information more convenient and greatly enrich users' communication experience.
As communication methods have diversified, the user information presented on a communication interface has become more and more vivid and interesting. In a conventional voice call, for example, only the telephone numbers of one or both parties (or the user names assigned to those numbers) are displayed on the communication interface. When communicating through instant messaging software, however, the interface is no longer a bare communication number: it can display a user image, which may come from a real photo stored on the communication device or from a virtual image created by the user. In a video call, the real picture captured by the device's camera can be displayed on the interface.
Displaying a user image on the communication interface improves the visibility of user communication and enhances the user's experience. However, with this approach the user image, and the information associated with it, remains static during communication. For example, from the moment a user begins a voice or text conversation until it ends, the user image on the interface never changes; likewise, during a video call, the interface displays only the real pictures captured by the device.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present application provide a method and an apparatus for generating an animation effect on a communication interface during communication, so that the communication interface can produce animation effects corresponding to the communication content exchanged between users.
The embodiment of the application adopts the following technical scheme: a method for animating a communication interface during a communication, the method comprising: receiving content generated by a user based on communication in the communication process; identifying the content generated based on the communication, and generating a corresponding state change instruction according to the content generated based on the communication; and modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
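The three claimed steps (receive content, identify it and generate a state change instruction, modify the interface state) can be sketched as a minimal pipeline. All function names and the keyword rules below are illustrative assumptions, not part of the patent:

```python
# Minimal sketch of the claimed three-step method (illustrative names).
def identify(content: str) -> str:
    """Step 2: identify the content and map it to a state change instruction."""
    rules = {"snow": "load_winter_scene", "hug": "play_hug_animation"}
    for keyword, instruction in rules.items():
        if keyword in content.lower():
            return instruction
    return "no_change"

def animate_interface(content: str) -> str:
    """Step 1 receives the content; step 3 applies the resulting instruction."""
    instruction = identify(content)
    # A real implementation would drive the rendering layer here.
    return f"interface applies: {instruction}"

print(animate_interface("It is snowing and cold"))
# interface applies: load_winter_scene
```

Real embodiments would replace the keyword table with speech/text/video recognition, as described later in the detailed description.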
Preferably, the user information includes an animation element on the communication interface of the user, and the modifying the state of the user information on the communication interface according to the state change instruction to enable the communication interface to generate the animation effect includes:
and modifying the animation elements on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
Preferably, the user information includes a user avatar and/or a virtual scene presented on the communication interface of the user, and the modifying the state of the user information on the communication interface according to the state change instruction to generate the animation effect on the communication interface includes:
and modifying the state of the user virtual image and/or the virtual scene on the communication interface according to the state change instruction so as to enable the communication interface to generate an animation effect.
Preferably, the modifying the state of the user avatar and/or the virtual scene on the communication interface according to the state change instruction to generate the animation effect on the communication interface includes:
changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the second party is in response to the state of the avatar and/or the virtual scene of the first party; or,
and changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the first party is in response to the state of the avatar and/or the virtual scene of the second party.
Preferably, the modifying the state of the user avatar on the communication interface according to the state change instruction specifically includes:
and modifying the mouth shape, action, appearance, dressing, body type, sound and/or expression state of the user avatar according to the state change instruction.
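The claim above lists the mutable avatar attributes (mouth shape, action, appearance, dressing, body type, sound, expression). One way to model a state change instruction is as a partial update of an avatar record; this is a hypothetical sketch, not the patent's data model:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    """Toy avatar state; attribute names are illustrative."""
    mouth_shape: str = "closed"
    action: str = "idle"
    dressing: str = "summer clothes"
    expression: str = "neutral"

    def apply(self, instruction: dict) -> None:
        # A state change instruction as a partial update of avatar state;
        # unknown attributes are ignored.
        for attr, value in instruction.items():
            if hasattr(self, attr):
                setattr(self, attr, value)

avatar = Avatar()
avatar.apply({"expression": "laughing", "action": "dancing"})
print(avatar.expression, avatar.action)  # laughing dancing
```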
Preferably, the user avatar and/or virtual scene is an avatar and/or virtual scene selected from a plurality of avatars and virtual scenes provided by the terminal device or a remote server; and/or a user avatar and/or virtual scene determined, from the plurality provided by the terminal device or the remote server, according to information about the external environment; and/or a user avatar and/or virtual scene determined, from the plurality provided by the terminal device or the remote server, according to information stored in advance in the terminal device or the remote server; and/or,
the user avatar and/or virtual scene is an avatar and/or virtual scene created based on the user avatar and/or the real scene where the user is located.
Preferably, the information stored in the terminal device or the remote server includes at least one of:
personal information; mood information; body state information; information relating to the user avatar and/or virtual scene; and information relating to the opposite user who has established the communication relationship with the user.
Preferably, the content generated based on the communication includes:
voice and text generated by the user during communication and/or video content captured by a camera on the terminal device; or operation behavior information of the user selecting an animation on an animation selection interface presented during communication; or operation behavior information of the user touching the communication interface during communication.
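The claim distinguishes three sources of communication-generated content. A dispatcher might normalize each source into one instruction stream; the source names and instruction strings below are illustrative assumptions:

```python
def instruction_from(source: str, payload: str) -> str:
    """Normalize the three claimed content sources into one instruction form."""
    if source in ("voice", "text", "video"):
        return f"recognize:{payload}"        # communication content to analyze
    if source == "animation_selection":
        return f"play:{payload}"             # user picked an animation
    if source == "touch":
        return f"trigger:{payload}"          # user touched the interface
    raise ValueError(f"unknown source: {source}")

print(instruction_from("animation_selection", "dancing"))  # play:dancing
```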
Preferably, the method further comprises:
and after the state of the user information on the communication interface is modified according to the state change instruction so that the communication interface generates an animation effect, storing the animation effect generated by the communication interface in the terminal equipment.
An apparatus for generating an animation effect on a communication interface during communication, the apparatus being located in the terminal devices of two users who establish a communication relationship, the apparatus comprising:
receiving unit, recognition unit and modification unit, wherein:
the receiving unit is used for receiving the content generated by the user based on the communication in the communication process;
the identification unit is used for identifying the content generated based on the communication and generating a corresponding state change instruction according to the content generated based on the communication;
and the modifying unit is used for modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
Preferably, the identification unit includes an identifying subunit and a generating subunit, wherein:
the identifying subunit is used for identifying voice and text generated by the user during communication and/or video content captured by a camera on the terminal device; or,
identifying the operation behavior information of the animation selected on the animation selection interface presented by the user in the communication process; or,
and identifying the operation behavior information of the user touching the communication interface in the communication process.
The generating subunit is used for generating a corresponding state change instruction according to voice and characters generated by a user in the communication process and/or video content captured by a camera of the terminal equipment; or,
generating a corresponding state change instruction according to the operation behavior information of the animation selected by the user on the animation selection interface presented in the communication process; or,
and generating a corresponding state change instruction according to the operation behavior information of the user touching the communication interface in the communication process.
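The claimed apparatus comprises a receiving unit, a recognition unit (with identifying and generating subunits), and a modification unit. Their cooperation can be outlined as classes; the structure and names below are an illustrative sketch under that reading, not the patent's implementation:

```python
class RecognitionUnit:
    """Recognition unit with the two claimed subunits."""

    def identify(self, content: str) -> str:
        return content.strip().lower()           # identifying subunit

    def generate_instruction(self, identified: str) -> str:
        return f"change_state:{identified}"      # generating subunit

class Apparatus:
    """Receiving unit -> recognition unit -> modification unit."""

    def __init__(self) -> None:
        self.recognition = RecognitionUnit()
        self.interface_state = "static"

    def receive(self, content: str) -> str:      # receiving unit
        instruction = self.recognition.generate_instruction(
            self.recognition.identify(content))
        self.modify(instruction)                 # hand off to modification unit
        return instruction

    def modify(self, instruction: str) -> None:  # modification unit
        self.interface_state = instruction

app = Apparatus()
app.receive("Hello")
print(app.interface_state)  # change_state:hello
```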
Preferably, the user information includes an animation element on a communication interface of the user, and the modification unit is configured to:
and modifying the animation elements on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
Preferably, the user information comprises a user avatar and/or a virtual scene presented on a communication interface of the user, the modifying unit is configured to:
and modifying the state of the user virtual image and/or the virtual scene on the communication interface according to the state change instruction so as to enable the communication interface to generate an animation effect.
Preferably, the apparatus further comprises a storage unit for:
and storing the animation effect generated by the communication interface in the terminal equipment.
According to the method and device of the present application, a corresponding state change instruction is generated from the user's communication content during communication, and the communication interface then produces a corresponding animation effect according to that instruction. As a result, the communication interface during voice or text communication is no longer static, and the video call interface is no longer merely the real picture captured by the camera but can also produce animation effects corresponding to the communication content, enriching the ways users communicate.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic specific flowchart of a method for generating an animation effect on a communication interface in a communication process according to embodiment 1 of the present application;
fig. 2 is a schematic structural diagram of an apparatus for generating an animation effect on a communication interface in a communication process according to embodiment 2 of the present application;
fig. 3-1 is a schematic specific flowchart of a method for enabling a call interface to generate an animation effect in a call process of a user in a specific scenario according to embodiment 3 of the present application;
fig. 3-2 is a screenshot in which avatars corresponding to two users establishing a communication relationship are displayed on the same call interface according to embodiment 3 of the present application;
fig. 3-3 is a screenshot of a call interface generating a corresponding animation effect according to call content according to embodiment 3 of the present application;
fig. 4-1 is a schematic specific flowchart of a method for enabling a call interface to generate an animation effect in a call process of a user in a specific scenario according to embodiment 4 of the present application;
fig. 4-2 is a screenshot in which avatars corresponding to two users establishing a communication relationship are displayed on the same call interface according to embodiment 4 of the present application;
fig. 4-3 is a screenshot illustrating a call interface generating a corresponding animation effect according to a user selecting an animation on an animation selection interface according to embodiment 4 of the present application;
fig. 5-1 is a screenshot in which real images corresponding to two users establishing a communication relationship are displayed on the same video call interface according to embodiment 5 of the present application;
fig. 5-2 is a screenshot of a video call interface generating a corresponding animation effect according to communication content according to embodiment 5 of the present application;
fig. 5-3 is a screenshot of a user selecting an animation on a video call interface, provided in embodiment 5 of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example 1
As mentioned above, during voice or text communication using instant messaging software, the communication interface may display a user image, which may come from a real photo stored on the device or from a virtual image created by the user; this image is static and does not produce animation corresponding to the user's communication content. In the case of video communication, although the displayed picture is generally dynamic, it is only the real picture captured by the device's camera, and the interface does not display animation corresponding to the communication content either.
In order to solve the above problem, an embodiment of the present application provides a method for generating an animation effect on a communication interface during a communication process. The method enables a communication interface to generate corresponding animation effects according to communication contents of a user in a communication process of the user, and fig. 1 is a specific flow diagram of the method, and the specific steps of the method are as follows:
step 11: content generated by a user based on the communication during the communication is received.
In this step, a communication relationship is first established between users. The relationship may be established by a dial-up call, or through instant messaging software for voice, text, or video communication, without specific limitation here. During communication, the content generated by the user based on the communication may be communication content such as voice, text, or video, or it may be operation behavior information generated by the user performing a relevant operation, for example selecting a corresponding animation on an animation selection interface, or touching an animation on the communication interface.
Step 12: and identifying the content generated based on the communication, and generating a corresponding state change instruction according to the content generated based on the communication.
In this step, a state change instruction is an instruction that causes the communication interface to produce a corresponding animation effect. The terminal device identifies the communication content the user generates during communication and produces a corresponding state change instruction, in ways that may include, but are not limited to, the following. If the users communicate by voice call, a voice recognition unit in the device recognizes the voice content generated during the call, analyzes its semantics, and generates a corresponding state change instruction from those semantics; the voice content may also be converted into visible text and displayed on the communication interface. If the users communicate by text, after the user inputs text a character recognition unit in the terminal device recognizes it, analyzes its semantics, and generates a corresponding state change instruction. If the users communicate by video call, a video recognition unit in the device generates a corresponding state change instruction from the real picture captured by the device, or a voice recognition unit generates one from the voice content produced during the call.
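The voice/text path above boils down to semantic analysis of recognized content driving an instruction. A toy keyword-based mapper illustrates the idea; a real embodiment would use an ASR engine plus semantic analysis, and the keyword table here is an invented example:

```python
# Illustrative semantics -> state change instruction table.
KEYWORD_INSTRUCTIONS = {
    "snowing": {"scene": "snow", "dressing": "down jacket"},
    "happy":   {"expression": "laughing", "action": "dancing"},
    "hug":     {"action": "hugging"},
}

def semantic_instruction(utterance: str) -> dict:
    """Build one state change instruction from all keywords found."""
    instruction: dict = {}
    for keyword, update in KEYWORD_INSTRUCTIONS.items():
        if keyword in utterance.lower():
            instruction.update(update)
    return instruction

print(semantic_instruction("It is snowing"))
# {'scene': 'snow', 'dressing': 'down jacket'}
```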
Generating the state change instruction from communication content, as above, is only an example. In practical applications, besides instructions generated from the voice, text, or video the user produces during communication, a corresponding state change instruction may also be generated when the user selects an animation on the animation selection interface, or touches an animation on the communication interface. For example, if the user selects "dancing" on the animation selection interface, a state change instruction is generated from that selection operation, making the corresponding animation on the communication interface perform the "dancing" action. Or the user may touch an animation displayed on the communication interface: if an avatar corresponding to the user is displayed and the user wants it to perform a "running" action, the user can touch the avatar's leg, and the avatar completes the "running" action on the interface. Touching the avatar's leg is the process that generates the state change instruction, namely the instruction that makes the user's avatar complete the "running" action.
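The touch example (touching the avatar's leg triggers a "running" action) can be modeled as a hit-region-to-action table. The regions and actions below are illustrative, not specified by the patent:

```python
# Hypothetical mapping from touched avatar region to triggered action.
TOUCH_ACTIONS = {"leg": "running", "arm": "waving", "face": "smiling"}

def on_touch(region: str) -> str:
    """Return the action triggered by touching a given avatar region."""
    return TOUCH_ACTIONS.get(region, "idle")

print(on_touch("leg"))  # running
```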
Step 13: and modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
In this step, the user information may include, but is not limited to, animation elements on the user's communication interface, or the user's corresponding avatar and/or virtual scene. The animation elements are independent of the avatar and virtual scene, and the animation elements, avatars, and virtual scenes may all be 3D or 2D images. There are many situations in which the user information on the communication interface is modified according to the state change instruction; three are mainly described here:
the first case:
After the users establish the communication relationship, the animation element, avatar and/or virtual scene corresponding to a user is loaded onto the communication interface according to the communication content between the users. For example, during a voice call, although the communication relationship is already established, the user's avatar and virtual scene are not displayed at first; but when a user says "it's snowing and cold", the system loads the user's avatar and virtual scene onto the interface according to that content. Here the avatar may wear a down jacket and a hat, and the corresponding virtual scene may be a snowy one; alternatively, the interface may display only the snowy scene or only the user's avatar.
The second case:
before a user establishes a communication relation, binding animation elements or user avatars and/or virtual scenes to be displayed on a communication interface with a telephone number of the user or an account number of instant messaging software, before or after the user establishes the communication relation, searching the animation elements or the user avatars and/or virtual scenes bound with the telephone number or the account number by terminal equipment through the telephone number of the user or the account number of the instant messaging software, displaying the animation elements or the user avatars and/or virtual scenes on the communication interface, and then generating a state change instruction according to a state of the user in a communication process to change the states of the animation elements or the virtual images and/or virtual scenes corresponding to the user on the communication interface.
The above-mentioned avatar and/or virtual scene may be provided by a terminal device or by a remote server, and here, the following ways of obtaining the avatar and/or virtual scene are exemplified:
One way: the user selects an avatar and/or virtual scene from a plurality of avatars and virtual scenes provided by the terminal device or the remote server. For example, when the user logs in to the instant messaging software, the terminal device or remote server offers a plurality of avatars and virtual scenes, from which the user can select one or more to bind to the login account; the next time the user logs in, the terminal device loads them automatically. The avatars and virtual scenes on offer may already have been downloaded and stored on the terminal device or remote server, or may be downloaded only once the user selects them.
Another way: the user's avatar and/or virtual scene is determined, from the plurality provided by the terminal device or the remote server, according to information about the external environment. There are many kinds of external environment information; one example is the weather, which the device may obtain itself, from weather forecast software, or from the internet or GPS. Given the current weather, the terminal device or remote server can automatically search for an avatar and/or virtual scene that matches it. For example, if it is raining when the user establishes the communication relationship, the terminal device or remote server automatically loads, before or after the relationship is established, an avatar that wears a thick coat and holds an umbrella, in a rainy virtual scene.
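The weather-driven selection can be sketched as a preset table keyed by the obtained weather condition. The preset names are illustrative assumptions:

```python
# Hypothetical weather condition -> avatar/scene presets.
WEATHER_PRESETS = {
    "rain":  {"avatar": "thick coat with umbrella", "scene": "rainy street"},
    "snow":  {"avatar": "down jacket and hat", "scene": "snowy field"},
    "sunny": {"avatar": "light clothes", "scene": "park"},
}

def preset_for(weather: str) -> dict:
    """Pick the avatar/scene matching the current weather (default fallback)."""
    return WEATHER_PRESETS.get(weather, {"avatar": "default", "scene": "blank"})

print(preset_for("rain")["avatar"])  # thick coat with umbrella
```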
Yet another way is: the user's avatar and/or virtual scene is determined, from among the plurality of avatars and/or virtual scenes provided by the terminal device or the remote server, according to information stored in advance in the terminal device or the remote server. The stored information may be the user's personal information; for example, if the gender recorded in the terminal is female, the system may automatically load a female avatar onto the user's communication interface. The stored information may also be a mood or physical condition recorded by the user; for example, if the user has recorded a good mood on the terminal device, the avatar loaded on the communication interface according to that mood information may laugh heartily, or its body action may be dancing; if the user has recorded a cold as his or her physical condition, the avatar loaded according to that information may sneeze continuously, and the virtual scene may be a hospital. The stored information may also be information related to the user's avatar and/or virtual scene itself; for example, if the terminal device stores the avatar attribute "high-order", the high-order avatar is loaded when the terminal device loads the avatar for the user. The stored information may also be related information about the opposite user with whom the communication relationship is established, such as that user's name, remark, or contact category. For example, if one user has saved the other user's name in the terminal device as "husband", the avatar displayed on the communication interface may be a lover image, for example wearing couple clothing, and the virtual scene may show a scene at home; for another example, if one user has saved the other user under the category "family" in the device, the interface automatically loads a virtual scene of the family home.
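The contact-based variant amounts to mapping a stored contact record to a scene. A minimal sketch, assuming a dictionary-shaped contact record and illustrative category and scene names:

```python
# Hypothetical sketch: pick a virtual scene from information stored about
# the opposite user; category and scene names are illustrative.

CATEGORY_SCENES = {
    "family":    "home_livingroom",
    "friend":    "cafe",
    "colleague": "office",
}

def scene_for_contact(contact: dict) -> str:
    """Choose a virtual scene from the stored contact category."""
    return CATEGORY_SCENES.get(contact.get("category", ""), "default_scene")

contact = {"name": "husband", "category": "family"}
print(scene_for_contact(contact))  # → home_livingroom
```

The same lookup pattern extends to the name or remark fields described above.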
Yet another way is: an avatar and/or a virtual scene is created based on the user's real appearance and/or the real scene in which the user is located. For example, the terminal device or the remote server processes a photo stored by the user, extracting the user's real image and the real scene in the photo, and generates the corresponding avatar and virtual scene.
There are many ways to modify the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect; three of them are described below as examples.
The first way is as follows: the avatar on the communication interface changes its mouth shape, action, appearance, dressing, body type, sound and/or expression according to the state change instruction. For example, when the user says "I am so happy", the expression of the user's avatar on the interface changes to a hearty laugh and its action changes to dancing; when the user says "I suddenly feel so cold", the dressing of the user's avatar changes accordingly, perhaps from summer clothes to thick winter clothes, and the avatar's action may become shivering. Here, the sound of the avatar may be the avatar "speaking" the user's call content, or a sound effect generated according to a specific application scenario; for example, when the virtual scene changes to a snow scene according to a state change instruction, the user's avatar may automatically say "it's so cold". In addition, the state change instruction can also change the interaction between the avatars of the two users establishing the communication relationship; for example, when one user says "hug", that user's avatar makes a hugging action, and the opposite user's avatar may also make a hugging action as a response. Furthermore, the avatar of one user may change according to the state change instruction while the avatar of the other user does not change, and the interaction between the users' avatars may involve changes of action, sound, expression, and so on.
The second way is as follows: the virtual scene on the communication interface changes according to the state change instruction. The change may be a change of an animation element in the virtual scene, for example a music effect generated according to the instruction, or a change of an element composing the virtual scene, for example the sky in the virtual scene suddenly starting to rain according to the instruction. In addition, the change may involve only the virtual scene, or the user's avatar and the virtual scene may change at the same time or at different times.
The third way is as follows: the animation elements on the communication interface are modified according to the state change instruction, and these elements change independently of the avatars and/or virtual scenes on the interface. For example, even when no avatar and/or virtual scene corresponding to the user appears on the communication interface, the interface can still generate sound effects or other animation elements.
In the above three ways of generating animation effects, a single avatar and/or virtual scene may change, or multiple avatars and/or virtual scenes may change according to the same state change instruction. When multiple animation effects are generated, they may be displayed on the communication interface in the order in which the device received the state change instructions, or displayed simultaneously. For example, when one party establishing the relationship says "hug", the avatars of both users displayed on the interface make the hugging action; the two avatars may hug simultaneously, or the speaker's avatar may make the hugging action first and the other party's avatar then make the hugging action as a response.
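The three ways above can be sketched as a single dispatcher that applies a state change instruction to whichever target it names: the avatar, the virtual scene, or a free-standing animation element. The instruction format and state field names below are illustrative assumptions, not a prescribed protocol:

```python
# Hypothetical sketch of applying a state change instruction to the
# communication interface state; field names are illustrative.

def apply_instruction(interface: dict, instruction: dict) -> dict:
    """Modify avatar, scene and/or free-standing animation elements."""
    target = instruction["target"]
    if target == "avatar":
        # First way: change the avatar's action/expression/dressing, etc.
        interface["avatar"].update(instruction["changes"])
    elif target == "scene":
        # Second way: change the virtual scene or an element in it.
        interface["scene"].update(instruction["changes"])
    elif target == "elements":
        # Third way: animation elements independent of avatar and scene.
        interface["elements"].append(instruction["changes"])
    return interface

state = {"avatar": {"action": "standing"}, "scene": {"weather": "sunny"}, "elements": []}
apply_instruction(state, {"target": "avatar", "changes": {"action": "hug"}})
print(state["avatar"]["action"])  # → hug
```

Sequential versus simultaneous display would then be a matter of how queued instructions are scheduled before being passed to such a dispatcher.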
The third situation:
This situation is a combination of the two situations described above: before the user establishes a communication relationship, the animation elements or the avatar and/or virtual scene to be displayed on the call interface are bound to the user's telephone number or instant messaging account. Before or after the communication relationship is established, the terminal device looks up, by the telephone number or account, the bound animation elements or user avatar and/or virtual scene and displays them on the communication interface. A corresponding state change instruction is then generated from the communication content produced by the user during communication, and the animation elements or the user's avatar and/or virtual scene are reloaded on the communication interface according to that instruction, rather than locally changed on the basis of the previous ones. For example, one party establishing the communication says: "I want to go running." Before speaking, that user's avatar may be a standing image; after speaking, a new avatar, perhaps one wearing sportswear, is reloaded on the interface, and the virtual scene may become a track and field ground.
In the three cases described above, recognizing the communication content may be done in several ways: each terminal establishing the communication relationship recognizes the communication content generated by its own user, generates a corresponding state change instruction, and causes the communication interface to generate an animation effect according to that instruction; or one terminal recognizes the communication content, generates the corresponding state change instruction, and sends it to the other terminal, which executes it to generate the animation effect; or a server on the remote network recognizes the communication content, generates the state change instruction, and sends it to each terminal for execution. For example, when user A and user B communicate, user A's communication interface may display only avatar A and the corresponding virtual scene a of user A. If user A produces communication content, the terminal device of user A recognizes it and generates a corresponding state change instruction, or the server of the remote network recognizes it, generates the instruction, and sends it to user A's terminal device; finally, user A's terminal device makes avatar A and virtual scene a on the communication interface generate an animation effect according to the instruction. Alternatively, user A's communication interface may display only avatar B and the corresponding virtual scene b of user B. If user B produces communication content, user B's terminal device recognizes it and generates a corresponding state change instruction, or the server of the remote network does so and sends the instruction to user B's terminal device; user B's terminal device then sends the instruction to user A's terminal device, and finally user A's terminal device makes avatar B and virtual scene b displayed on its communication interface generate an animation effect according to the received instruction. The transmission mode of the state change instruction may include Bluetooth, NFC, the TCP/IP protocol, UDP, the RTP protocol, the SRTP protocol, the HTTP/HTTPS protocol, and the like.
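For the TCP/IP transmission mode, a state change instruction could be serialized and sent to the peer terminal as a length-prefixed message. The sketch below is an assumption about wire format (JSON with a 4-byte length prefix); the patent does not prescribe one:

```python
# Hypothetical sketch: frame a state change instruction for transmission
# to the opposite terminal over TCP/IP; the wire format is an assumption.
import json
import socket

def frame(instruction: dict) -> bytes:
    """Serialize an instruction as a length-prefixed JSON message."""
    payload = json.dumps(instruction).encode("utf-8")
    return len(payload).to_bytes(4, "big") + payload

def unframe(data: bytes) -> dict:
    """Decode one length-prefixed JSON message back into an instruction."""
    length = int.from_bytes(data[:4], "big")
    return json.loads(data[4:4 + length].decode("utf-8"))

def send_instruction(host: str, port: int, instruction: dict) -> None:
    """Send a framed state change instruction to a peer terminal."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame(instruction))
```

The receiving terminal would read the 4-byte length, read that many payload bytes, and pass the decoded instruction to its animation logic.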
After the communication interface generates the animation effect according to the state change instruction, the animation effect can be stored in the terminal device, so that the user can view previous communication records at any time; presenting communication records to the user as animations increases the fun of the communication process.
By applying the method provided by this embodiment of the application, a corresponding state change instruction is generated from the user's communication content during communication, and the user's communication interface then generates a corresponding animation effect according to that instruction. The communication interface is thus no longer static during communication but can produce animations matching the communication content, which enriches the user's ways of communicating.
Example 2
Based on the method for causing a communication interface to generate an animation effect during communication provided in embodiment 1, this embodiment correspondingly provides a device for causing a communication interface to generate an animation effect during communication; with this device, the communication interface generates a corresponding animation effect according to the user's communication content during communication. The specific structure of the device is shown schematically in fig. 2, and the device comprises:
a receiving unit 21, a recognition unit 22 and a modification unit 23;
a receiving unit 21, which can be used for receiving the content generated by the user based on the communication during the communication process;
an identifying unit 22, configured to identify the content generated based on the communication, and generate a corresponding state change instruction according to the content generated based on the communication;
and the modifying unit 23 may be configured to modify the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
The working process of the device embodiment is as follows: firstly, the receiving unit 21 receives the content generated by the user based on the communication, then the identifying unit 22 identifies the content generated based on the communication, and generates a corresponding state change instruction according to the content, and finally the modifying unit 23 modifies the state of the user information on the communication interface according to the state change instruction, so that the communication interface generates an animation effect.
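The three units of fig. 2 can be sketched as cooperating classes invoked in the order just described. The class and method names, and the keyword-based recognition, are illustrative assumptions standing in for the voice/text recognition the description covers:

```python
# Hypothetical sketch of the device in fig. 2: a receiving unit, an
# identification unit and a modification unit working in sequence.

class ReceivingUnit:
    def receive(self, communication_content: str) -> str:
        # Receive the content the user generates based on communication.
        return communication_content

class IdentificationUnit:
    def identify(self, content: str) -> dict:
        # Generate a state change instruction from the communication content.
        if "hug" in content:
            return {"target": "avatar", "changes": {"action": "hug"}}
        return {"target": "avatar", "changes": {}}

class ModificationUnit:
    def modify(self, interface: dict, instruction: dict) -> dict:
        # Modify the state of the user information on the interface.
        if instruction["target"] == "avatar":
            interface["avatar"].update(instruction["changes"])
        return interface

def process(content: str, interface: dict) -> dict:
    receiving, identifying, modifying = ReceivingUnit(), IdentificationUnit(), ModificationUnit()
    instruction = identifying.identify(receiving.receive(content))
    return modifying.modify(interface, instruction)

ui = process("hug", {"avatar": {"action": "standing"}})
print(ui["avatar"]["action"])  # → hug
```

Each class corresponds to one numbered unit (21, 22, 23), and `process` reproduces the working sequence stated above.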
In the above device embodiments, there are many implementations in which the user generates an animation effect on the communication interface during the communication process, and in one implementation, the identifying unit 22 includes an identifying subunit and a generating subunit, where:
the identification subunit is used for identifying the voice, text and/or video content captured by the camera of the terminal device that the user generates during communication; or,
identifying the operation behavior information of the user selecting an animation on the animation selection interface; or,
identifying the operation behavior information of the user touching an animation on the communication interface.
The generating subunit is used for generating a corresponding state change instruction according to the voice, text and/or video content captured by the camera of the terminal device that the user generates during communication; or,
generating a corresponding state change instruction according to the operation behavior information of the user selecting an animation on the animation selection interface; or,
generating a corresponding state change instruction according to the operation behavior information of the user touching an animation on the communication interface.
In another embodiment, the user information includes animation elements on the user's communication interface, and the modifying unit 23 may be configured to:
and modifying the animation elements on the communication interface according to the state change instruction.
In a further embodiment, the user information comprises a user avatar and/or a virtual scene presented on a communication interface of the user, and the modifying unit 23 may be configured to:
modifying the state of the user avatar and/or the virtual scene on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
in another embodiment, the apparatus further comprises a storage unit, the storage unit is configured to:
and storing the animation effect generated by the communication interface in the terminal equipment.
The beneficial effects obtained by applying the device provided by this embodiment of the application are the same as or similar to those obtained by the method embodiment provided in embodiment 1, and are not repeated here.
Example 3
In order to illustrate the technical solutions and technical features of the present application more clearly, the following describes an example in which related display is performed on the communication interface when a user makes a voice call with a mobile phone in a specific scenario (thereby forming embodiment 3). The specific scenario of this example is: during a voice call between user A and user B, user A says to user B: "You have been so annoying lately, I want to hit you", and the mobile phone interface performs related display according to the speech content. Fig. 3-1 is a flowchart of a method by which the mobile phone interface performs related display according to the users' speech content in this scenario, and the method includes:
step 31: inquiring a virtual image and a related scene corresponding to the number through the telephone number establishing the call relation, and loading the virtual scene and the related scene to a call interface;
the specific scenario in this step is: a user A and a user B carry out voice call through mobile phone dialing, and in the call process, the two parties use earphones or hands-free phones to carry out call, the assumption is that the user A dials to the user B first, and after the user B answers the call, mobile phone interfaces of the two parties can load corresponding virtual images and related scenes of the user A and the user B at the same time. The virtual images and related scenes of the user A and the user B are virtual images and virtual scenes which are selected by the user A and the user B from the virtual images and virtual scenes provided by the mobile phone system in advance, the virtual images and the virtual scenes are bound with the mobile phone number of the user B, after the user B answers the call of the user A, the mobile phone systems of the user B and the user B inquire the virtual images and the virtual scenes corresponding to the number according to the mobile phone numbers of the user A and the mobile phone number of the user B, and the virtual images and the virtual scenes are displayed on the mobile phone interfaces of the user A and the user B through setting (as shown in a figure 3-2), in addition, the virtual scenes displayed on the mobile phone interfaces of the user A and the user B can be the same or different, and are displayed according to the virtual scenes selected by the two users.
Step 32: after the system identifies the call content of the user, calling a relevant animation according to the call content;
As described in step 31, after user B answers the call, the mobile phone interfaces of both parties display the avatars and related scenes of user A and user B. Suppose user A says: "You have been so annoying lately, I want to hit you." User A's mobile phone system receives the call content, converts the voice into visible text through the phone's voice recognition module, and generates a corresponding state change instruction; the system then changes the state of the avatar and/or virtual scene corresponding to the user on the communication interface according to the instruction. Fig. 3-3 is a simulated screenshot of user A's mobile phone interface: under the state change instruction, user A's avatar makes the action of hitting user B's avatar while its mouth animates the spoken sentence, and a dialog box near user A's avatar displays the spoken text. Correspondingly, after user A's avatar hits user B's avatar, the virtual scene displayed on user A's interface changes from the scene user A previously selected to a martial arts arena.
Similarly, the mobile phone system of user B also recognizes user A's voice content. When user A's avatar makes the action of hitting user B's avatar, user B's mobile phone system generates a corresponding state change instruction acting on user B's avatar; as shown in fig. 3-3, user B's avatar is knocked to the ground under this instruction, and the system automatically displays the text "It hurts, I won't dare again" in a dialog box beside user B's avatar as a response to being hit. In addition, the virtual scene of user B's interface may change from the scene selected by user B to the same martial arts arena as user A's, or may keep user B's previously selected scene unchanged, as defined by the user according to personal preference.
By applying the method provided by this embodiment of the application, a corresponding state change instruction is generated from the user's call content during the call, and the call interface then generates a corresponding animation effect according to that instruction, so that the call interface is no longer static during the call but can produce animations matching the call content, enriching the user's ways of communicating.
Example 4
Embodiment 4 provides, on the basis of embodiment 3, another method for causing a communication interface to generate an animation effect during communication. The scenario adopted in this embodiment is consistent with embodiment 3: during a voice call between user A and user B, user A says to user B: "You have been so annoying lately, I want to hit you", and the mobile phone interface performs related display according to the voice content. Fig. 4-1 is a flowchart of a method by which the mobile phone interface performs related display according to the users' voice content in this scenario, and the method includes:
step 41: the two users establishing the conversation relationship select the system to provide the virtual image and the virtual scene to be displayed on the mobile phone interface during the conversation;
the specific scenario in this step is: a user A and a user B carry out voice call through mobile phone dialing, and in the call process, the two parties use earphones or hands-free phones to carry out call, the assumption is that the user A dials to the user B first, after the user B answers the call, the virtual image and the virtual scene provided by the system are displayed on the call interface of the user A and the user B for the user to select, and after the user A and the user B select well, the mobile phone interfaces of the two parties display the virtual image and the virtual scene selected by the user A and the user B.
Step 42: during the call, the user selects an animation provided by the system on the call interface for display;
As described in step 41, after user B answers, the call interface loads the avatars and virtual scenes selected by user A and user B (as shown in fig. 4-2). Suppose that during the call user A says to user B: "You have been so annoying lately, I want to hit you." User A's mobile phone system receives the call content and converts the voice into visible text through the phone's voice recognition module. In addition, the user selects an animation to display on the animation selection interface provided by the system, a corresponding state change instruction is generated from that selection operation, and the system changes the user's avatar and virtual scene according to the instruction. For example, user A's avatar mouths the spoken sentence while the communication content is displayed as text in a dialog box near the avatar. Meanwhile, user A presses the "animation selection" button on the call interface; as shown in fig. 4-3 (user A's mobile phone interface), the lower half of the interface then displays the animations provided for selection. User A selects the "hit the other party" animation; a corresponding state change instruction is generated from this selection, and the interface shows user A's avatar completing the action of hitting user B's avatar (as shown in fig. 4-3). In addition, the system automatically loads a virtual scene corresponding to the selected animation, such as the martial arts arena (as shown in fig. 4-3).
The mobile phone of user B also receives user A's call content. After user A's avatar makes the hitting action, user B's mobile phone system generates a corresponding state change instruction acting on user B's avatar; as shown in fig. 4-3, user B's avatar is knocked down under the instruction, and the system displays the text "It hurts, I won't dare again" in a dialog box beside it as a response. Alternatively, user B may likewise press the animation selection button on his or her interface and select a corresponding animation to interact with user A's avatar; for example, user B can also select the "hit the other party" animation to strike back at user A's avatar. In addition, the virtual scene of user B's interface may change from the scene selected by user B to the same martial arts arena as user A's, or may keep the scene selected by user B unchanged.
The beneficial effects obtained by applying this embodiment of the present application are the same as or similar to those obtained by embodiment 3, and are not repeated here.
Example 5
Embodiments 3 and 4 concern related display on the call interface when a user makes a voice call by dialing. To describe the technical solution of the present application more fully, an example of related display on a video call interface when a user makes a video call with a mobile phone in a specific scene is described below (thereby forming embodiment 5). The specific scene of this example is: during a video call between user A and user B, user A says: "The weather today is particularly sunny." The method specifically includes the following steps. When user A and user B make a video call, suppose user A logs in to some instant messaging software to video-chat with user B; after user B accepts the video chat, the mobile phone interface displays the real picture captured by the camera. For example, fig. 5-1 is a simulated screenshot of user B's mobile phone interface during the video call: the large image is assumed to be the video picture of user A received by user B's phone, and the small image in the frame at the upper right corner is the video picture of user B. Suppose that during the video call user A says "The weather today is particularly sunny"; user B's mobile phone system receives the voice content, converts it into visible text through the phone's voice recognition module, and generates a corresponding state change instruction. As shown in fig. 5-2, user B's mobile phone system calls the related animation to display on the video call interface according to user A's speech content: a virtual animation of the sun and clouds appears over user A's video picture, and a text box is added beside user A's image, showing that user B has received user A's speech content.
By analogy with the method for generating an animation effect on a communication interface provided in embodiment 4, the method for displaying an animation on a video call interface provided in this embodiment may also be as follows: user A says to user B: "The weather today is particularly sunny", then user A presses the animation selection button on the communication interface, an animation selection interface is displayed, and user A selects an animation button on it (as shown in fig. 5-3). The mobile phone system generates a corresponding state change instruction from this operation, user A's mobile phone system transmits the instruction to user B's mobile phone system, and after receiving the state change instruction, user B's mobile phone interface displays the related animation, such as the sunny picture shown in fig. 5-2.
By applying the method provided by this embodiment of the application, a corresponding state change instruction is generated from the user's call content during the video call, and the user's video call interface then generates a corresponding animation effect according to that instruction. The video call interface is thus no longer only the real picture captured by the camera but can also produce animations matching the communication content, enriching the user's ways of communicating.

Claims (12)

1. A method for generating animation effect for communication interface in communication process, the method comprising:
receiving content generated by a user based on communication in the communication process;
identifying the content generated based on the communication, and generating a corresponding state change instruction according to the content generated based on the communication;
modifying the state of the user information on the communication interface according to the state change instruction to enable the communication interface to generate an animation effect;
wherein,
the user information comprises a user virtual image and/or a virtual scene presented on a communication interface of the user, and the step of modifying the state of the user information on the communication interface according to the state change instruction to enable the communication interface to generate an animation effect comprises the following steps:
modifying the state of the user virtual image and/or the virtual scene on the communication interface according to the state change instruction to enable the communication interface to generate an animation effect;
the method for generating the animation effect on the communication interface comprises the following steps that the user virtual image and/or the virtual scene on the communication interface comprises the virtual image and/or the virtual scene of both communication parties, the content generated based on the communication is the content generated based on the communication of the first party of both communication parties, and the state of the user virtual image and/or the virtual scene on the communication interface is modified according to the state change instruction so that the communication interface generates the animation effect, wherein the step of:
changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the second party is in response to the state of the avatar and/or the virtual scene of the first party; or,
and changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the first party is in response to the state of the avatar and/or the virtual scene of the second party.
2. The method of claim 1,
the user information comprises animation elements on the communication interface of the user, and the step of modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect comprises:
modifying the animation elements on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
3. The method of claim 1,
the modifying of the state of the user avatar on the communication interface according to the state change instruction specifically comprises:
modifying the mouth shape, action, appearance, dressing, body type, sound and/or expression state of the user avatar according to the state change instruction.
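The attribute list in claim 3 (mouth shape, action, appearance, dressing, body type, sound, expression) suggests a per-attribute state merge. The following sketch is an assumption about how such a merge might look; the attribute keys and the `modify_avatar` helper are illustrative, not part of the patent.

```python
# Hypothetical attribute-level state merge for a user avatar (claim 3).
# Only recognized avatar attributes are applied; unknown keys are ignored.
AVATAR_ATTRS = {"mouth_shape", "action", "appearance", "dressing",
                "body_type", "sound", "expression"}

def modify_avatar(avatar_state, instruction):
    """Merge the recognized attribute changes carried by a state change
    instruction into the avatar's current state, ignoring unknown keys."""
    updates = {k: v for k, v in instruction.items() if k in AVATAR_ATTRS}
    return {**avatar_state, **updates}

state = {"mouth_shape": "closed", "expression": "neutral", "action": "idle"}
new = modify_avatar(state, {"mouth_shape": "open", "expression": "smile"})
print(new["expression"])  # smile
```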
4. The method of claim 1,
the user avatar and/or virtual scene is an avatar and/or virtual scene selected from a plurality of avatars and virtual scenes provided by the terminal device or a remote server, and/or is determined from the plurality of avatars and/or virtual scenes provided by the terminal device or the remote server according to information about the external environment, and/or is determined from the plurality of avatars and/or virtual scenes provided by the terminal device or the remote server according to information stored in advance in the terminal device or the remote server; and/or,
the user avatar and/or virtual scene is an avatar and/or virtual scene created based on the user's appearance and/or the real scene where the user is located.
5. The method of claim 4,
the information stored in the terminal device or the remote server comprises at least one of the following:
personal information; mood information; physical state information; information relating to the user avatar and/or virtual scene; and information relating to the counterpart user who has established the communication relationship with the user.
6. The method of claim 1,
the content generated based on the communication comprises:
voice and text generated by the user during the communication and/or video content captured by a camera on the terminal device; or operation behavior information of the user selecting an animation on an animation selection interface presented during the communication; or operation behavior information of the user touching the communication interface during the communication.
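The three content categories of claim 6 could each be mapped to a state change instruction by a small dispatcher along the following lines. The `generate_instruction` function, the `kind` field and the instruction format are hypothetical; the patent does not specify a data layout.

```python
# Hypothetical dispatcher turning communication content (claim 6) into a
# state change instruction. The three branches mirror the three content
# categories: media (voice/text/video), animation selection, and touch.
def generate_instruction(content):
    kind = content.get("kind")
    if kind in ("voice", "text", "video"):
        # e.g. a recognized keyword or expression drives the state change
        return {"type": "media", "instruction": content.get("keyword", "talk")}
    if kind == "animation_selection":
        return {"type": "animation", "instruction": content["animation_id"]}
    if kind == "touch":
        return {"type": "touch", "instruction": content["gesture"]}
    raise ValueError(f"unsupported content kind: {kind}")

print(generate_instruction({"kind": "touch", "gesture": "poke"}))
```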
7. The method of claim 1, further comprising:
after the state of the user information on the communication interface is modified according to the state change instruction so that the communication interface generates an animation effect, storing the animation effect generated by the communication interface in the terminal device.
8. An apparatus for enabling a communication interface to generate an animation effect during communication, wherein the apparatus is located in the terminal devices of two users who have established a communication relationship, the apparatus comprising:
a receiving unit, an identification unit and a modification unit, wherein:
the receiving unit is used for receiving, during the communication, the content generated by the user based on the communication;
the identification unit is used for identifying the content generated based on the communication and generating a corresponding state change instruction according to the content generated based on the communication;
the modification unit is used for modifying the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect;
wherein,
the user information comprises a user avatar and/or a virtual scene presented on the communication interface of the user, and the modifying of the state of the user information on the communication interface according to the state change instruction so that the communication interface generates an animation effect comprises:
modifying the state of the user avatar and/or the virtual scene on the communication interface according to the state change instruction so that the communication interface generates an animation effect;
wherein the user avatar and/or virtual scene on the communication interface comprises the avatars and/or virtual scenes of both communication parties, the content generated based on the communication is content generated based on the communication by a first party of the two communication parties, and the modifying of the state of the user avatar and/or the virtual scene on the communication interface according to the state change instruction so that the communication interface generates an animation effect comprises:
changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the second party is a response to the state of the avatar and/or the virtual scene of the first party; or,
changing the state of the avatar and/or the virtual scene of the second party on the communication interface according to the state change instruction, and changing the state of the avatar and/or the virtual scene of the first party on the communication interface according to the state change instruction, wherein the state of the avatar and/or the virtual scene of the first party is a response to the state of the avatar and/or the virtual scene of the second party.
9. The apparatus of claim 8,
the identification unit comprises an identifying subunit and a generating subunit, wherein:
the identifying subunit is used for identifying voice and text generated by the user during the communication and/or video content captured by a camera on the terminal device; or,
identifying operation behavior information of the user selecting an animation on an animation selection interface presented during the communication; or,
identifying operation behavior information of the user touching the communication interface during the communication;
the generating subunit is used for generating a corresponding state change instruction according to voice and text generated by the user during the communication and/or video content captured by the camera of the terminal device; or,
generating a corresponding state change instruction according to operation behavior information of the user selecting an animation on the animation selection interface presented during the communication; or,
generating a corresponding state change instruction according to operation behavior information of the user touching the communication interface during the communication.
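One possible decomposition of the receiving unit, the identification unit (with its identifying and generating subunits) and the modification unit of claims 8 and 9 is sketched below. Class and method names are assumptions made for illustration; the patent defines the units functionally, not as code.

```python
# Hypothetical pipeline mirroring the apparatus of claims 8-9:
# receive content -> identify it -> generate a state change instruction
# -> modify the user information shown on the communication interface.
class ReceivingUnit:
    def receive(self, raw):
        # Content generated by the user based on the communication.
        return raw

class IdentificationUnit:
    def identify(self, content):       # identifying subunit
        return content.strip().lower()
    def generate(self, identified):    # generating subunit
        return {"instruction": identified}

class ModificationUnit:
    def modify(self, interface, instruction):
        interface["avatar_state"] = instruction["instruction"]
        return interface

def handle(raw, interface):
    ident = IdentificationUnit()
    content = ReceivingUnit().receive(raw)
    instr = ident.generate(ident.identify(content))
    return ModificationUnit().modify(interface, instr)

print(handle("  Wave  ", {"avatar_state": "idle"}))  # {'avatar_state': 'wave'}
```

Keeping the identifying and generating steps as separate subunits, as claim 9 does, lets the same instruction format serve all three input categories (media, animation selection, touch).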
10. The apparatus of claim 8,
the user information comprises animation elements on the communication interface of the user, and the modification unit is used for:
modifying the animation elements on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
11. The apparatus of claim 8,
the user information comprises a user avatar and/or a virtual scene presented on the communication interface of the user, and the modification unit is used for:
modifying the state of the user avatar and/or the virtual scene on the communication interface according to the state change instruction so that the communication interface generates an animation effect.
12. The apparatus of claim 8,
the apparatus further comprises a storage unit used for:
storing the animation effect generated by the communication interface in the terminal device.
CN201510859600.8A 2015-11-30 2015-11-30 Method and device for enabling communication interface to generate animation effect in communication process Active CN106817349B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510859600.8A CN106817349B (en) 2015-11-30 2015-11-30 Method and device for enabling communication interface to generate animation effect in communication process
PCT/CN2016/076590 WO2017092194A1 (en) 2015-11-30 2016-03-17 Method and apparatus for enabling communication interface to produce animation effect in communication process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510859600.8A CN106817349B (en) 2015-11-30 2015-11-30 Method and device for enabling communication interface to generate animation effect in communication process

Publications (2)

Publication Number Publication Date
CN106817349A CN106817349A (en) 2017-06-09
CN106817349B true CN106817349B (en) 2020-04-14

Family

ID=58796227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510859600.8A Active CN106817349B (en) 2015-11-30 2015-11-30 Method and device for enabling communication interface to generate animation effect in communication process

Country Status (2)

Country Link
CN (1) CN106817349B (en)
WO (1) WO2017092194A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911643B (en) * 2017-11-30 2020-10-27 维沃移动通信有限公司 Method and device for showing scene special effect in video communication
CN111316203B (en) * 2018-07-10 2022-05-31 微软技术许可有限责任公司 Actions for automatically generating a character
CN110971502B (en) * 2018-09-30 2021-09-28 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for displaying sound message in application program
CN110971747A (en) * 2018-09-30 2020-04-07 华为技术有限公司 Control method for media display and related product
CN109684544A (en) * 2018-12-14 2019-04-26 维沃移动通信有限公司 Wearing recommendation method and terminal device
CN110062116A (en) * 2019-04-29 2019-07-26 上海掌门科技有限公司 Method and apparatus for handling information
CN111949117A (en) * 2019-05-17 2020-11-17 深圳欧博思智能科技有限公司 Equipment state switching method and device, storage medium and sound box
CN111949118A (en) * 2019-05-17 2020-11-17 深圳欧博思智能科技有限公司 Equipment state switching method and device, storage medium and sound box
CN110418010B (en) * 2019-08-15 2021-12-07 咪咕文化科技有限公司 Virtual object control method, equipment and computer storage medium
CN110781346A (en) * 2019-09-06 2020-02-11 天脉聚源(杭州)传媒科技有限公司 News production method, system, device and storage medium based on virtual image
CN113395597A (en) * 2020-10-26 2021-09-14 腾讯科技(深圳)有限公司 Video communication processing method, device and readable storage medium
CN114979789B (en) * 2021-02-24 2024-07-23 腾讯科技(深圳)有限公司 Video display method and device and readable storage medium
CN113325983B (en) * 2021-06-30 2024-09-06 广州酷狗计算机科技有限公司 Virtual image processing method, device, terminal and storage medium
CN115499613A (en) * 2022-08-17 2022-12-20 安徽听见科技有限公司 Video call method and device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1581903A (en) * 2003-08-14 2005-02-16 日本电气株式会社 A portable telephone including an animation function and a method of controlling the same
US7991401B2 (en) * 2006-08-08 2011-08-02 Samsung Electronics Co., Ltd. Apparatus, a method, and a system for animating a virtual scene
CN103905644A (en) * 2014-03-27 2014-07-02 郑明 Generating method and equipment of mobile terminal call interface
CN104468959A (en) * 2013-09-25 2015-03-25 中兴通讯股份有限公司 Method, device and mobile terminal displaying image in communication process of mobile terminal
CN104902212A (en) * 2015-04-30 2015-09-09 努比亚技术有限公司 Video communication method and apparatus

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR20010067993A (en) * 2001-04-13 2001-07-13 장민근 Portable communication system capable of abstraction and inserting background image and method thereof
US20090311993A1 (en) * 2008-06-16 2009-12-17 Horodezky Samuel Jacob Method for indicating an active voice call using animation
CN105100432B (en) * 2015-06-10 2018-02-06 小米科技有限责任公司 Call interface display methods and device

Also Published As

Publication number Publication date
WO2017092194A1 (en) 2017-06-08
CN106817349A (en) 2017-06-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190326

Address after: 361012 3F-A193, Innovation Building C, Software Park, Xiamen Torch High-tech Zone, Xiamen City, Fujian Province

Applicant after: Xiamen Black Mirror Technology Co., Ltd.

Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000

Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD.

GR01 Patent grant