CN109688347A - Multi-screen interaction method, device and electronic equipment
- Publication number: CN109688347A
- Application number: CN201710979621.2A
- Authority
- CN
- China
- Prior art keywords
- terminal
- specified object
- real scene
- scene image
- interaction
- Prior art date
- Legal status: Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present application disclose a multi-screen interaction method, apparatus and electronic device. The method comprises: a first terminal loads interaction material, the interaction material including specified-object material created from a specified object; the first terminal captures a real-scene image; and when the video played on a second terminal reaches a target event related to the specified object, the specified-object material is added to the real-scene image. With the embodiments of the present application, users' participation in the interaction can be improved.
Description
Technical field
This application relates to the technical field of multi-screen interaction, and in particular to a multi-screen interaction method, apparatus and electronic device.
Background
Multi-screen interaction refers to a series of operations, performed over a wireless network connection, for transmitting, parsing, displaying and controlling multimedia content (audio, video, pictures) among different multimedia terminal devices (for example, between a mobile phone and a television), so that the same content can be shown on different terminal devices and the content can be shared among the terminals.
In the prior art, interaction from the television side to the mobile phone side is mostly realized by means of graphic codes. For example, a two-dimensional code related to the program currently being played is usually displayed on the television screen; the user scans the code with a "scan" function of an application installed on the mobile phone, the code is parsed on the mobile phone side, and a specific interaction page is displayed, in which the user can answer questions, take part in a lottery, and so on.
Although this prior-art approach enables interaction between the mobile phone and the television, the concrete form of the interaction is rather rigid and the user's actual degree of participation is not high. Therefore, how to provide richer forms of multi-screen interaction and improve users' participation has become a technical problem to be solved by those skilled in the art.
Summary of the invention
This application provides a multi-screen interaction method, apparatus and electronic device, which can improve users' participation in the interaction.
This application provides the following solutions:
A multi-screen interaction method, comprising:
a first terminal loading interaction material, the interaction material including specified-object material created from a specified object;
capturing a real-scene image; and
when the video played on a second terminal reaches a target event related to the specified object, adding the specified-object material to the real-scene image.
A multi-screen interaction method, comprising:
a first server saving interaction material, the interaction material including specified-object material created from a specified object; and
providing the interaction material to a first terminal, so that the first terminal captures a real-scene image and, when the video played on a second terminal reaches a target event corresponding to the specified object, adds the specified-object material to the real-scene image.
A multi-screen interaction method, comprising:
a second terminal playing a video; and
when the video reaches a target event related to a specified object, playing an acoustic signal of a preset frequency, so that a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.
A multi-screen interaction method, comprising:
a second server receiving, from a first server, information on an acoustic signal of a preset frequency; and
inserting the acoustic signal of the preset frequency into a video at the position where a target event related to a specified object occurs, so that, while the video is played by a second terminal, a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.
A video interaction method, comprising:
a first terminal loading interaction material, the interaction material including specified-object material created from a specified object;
when the video played on the first terminal reaches a target event related to the specified object, jumping to an interaction interface; and
displaying the real-scene image capture result in the interaction interface, and adding the specified-object material to the real-scene image.
A video interaction method, comprising:
a first server saving interaction material, the interaction material including specified-object material created from a specified object; and
providing the interaction material to a first terminal, so that, when the video played on the first terminal reaches a target event related to the specified object, the first terminal jumps to an interaction interface, displays the real-scene image capture result in the interaction interface, and adds the specified-object material to the real-scene image.
An interaction method, comprising:
loading interaction material, the interaction material including specified-object material created from a specified object;
capturing a real-scene image; and
when a target event related to the specified object is detected, adding the specified-object material to the real-scene image.
A multi-screen interaction apparatus, applied to a first terminal, comprising:
a first material loading unit, configured to load interaction material, the interaction material including specified-object material created from a specified object;
a first real-scene image capture unit, configured to capture a real-scene image; and
a first material adding unit, configured to add the specified-object material to the real-scene image when the video played on a second terminal reaches a target event related to the specified object.
A multi-screen interaction apparatus, applied to a first server, comprising:
a first interaction material storage unit, configured to save interaction material, the interaction material including specified-object material created from a specified object; and
a first interaction material providing unit, configured to provide the interaction material to a first terminal, so that the first terminal captures a real-scene image and, when the video played on a second terminal reaches a target event corresponding to the specified object, adds the specified-object material to the real-scene image.
A multi-screen interaction apparatus, applied to a second terminal, comprising:
a video playing unit, configured to play a video; and
an acoustic signal playing unit, configured to play an acoustic signal of a preset frequency when the video reaches a target event related to a specified object, so that a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.
A multi-screen interaction apparatus, applied to a second server, comprising:
an acoustic signal information receiving unit, configured to receive, from a first server, information on an acoustic signal of a preset frequency; and
an acoustic signal information inserting unit, configured to insert the acoustic signal of the preset frequency into a video at the position where a target event related to a specified object occurs, so that, while the video is played by a second terminal, a first terminal learns of the occurrence of the target event by detecting the acoustic signal and adds specified-object material to a captured real-scene image.
A video interaction apparatus, applied to a first terminal, comprising:
a loading unit, configured to load interaction material, the interaction material including specified-object material created from a specified object;
an interface jumping unit, configured to jump to an interaction interface when the video played on the first terminal reaches a target event related to the specified object; and
a material adding unit, configured to display the real-scene image capture result in the interaction interface and add the specified-object material to the real-scene image.
A video interaction apparatus, applied to a first server, comprising:
a second material storage unit, configured to save interaction material, the interaction material including specified-object material created from a specified object; and
a second material providing unit, configured to provide the interaction material to a first terminal, so that, when the video played on the first terminal reaches a target event related to the specified object, the first terminal jumps to an interaction interface, displays the real-scene image capture result in the interaction interface, and adds the specified-object material to the real-scene image.
An interaction apparatus, comprising:
a second material loading unit, configured to load interaction material, the interaction material including specified-object material created from a specified object;
a second real-scene image capture unit, configured to capture a real-scene image; and
a second material adding unit, configured to add the specified-object material to the real-scene image when a target event related to the specified object is detected.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
loading interaction material, the interaction material including specified-object material created from a specified object;
capturing a real-scene image; and
when the video played on a second terminal reaches a target event related to the specified object, adding the specified-object material to the real-scene image.
According to the specific embodiments provided by this application, the application discloses the following technical effects:
With the embodiments of the present application, interaction material can be created from video/animation related to a specified object. During the interaction, a real-scene image of the actual environment where the user is located can be captured, and when the second terminal plays a target event corresponding to the specified object, the specified-object material is added to the real-scene image and displayed. In this way, the user can experience the specified object appearing in his or her own space (for example, in the user's home), which improves the user's participation in the interaction.
Certainly, any product implementing this application does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the system provided by the embodiments of the present application;
Fig. 2 is a flow chart of the first method provided by the embodiments of the present application;
Figs. 3-1 to 3-10 are schematic diagrams of user interfaces provided by the embodiments of the present application;
Fig. 4 is a flow chart of the second method provided by the embodiments of the present application;
Fig. 5 is a flow chart of the third method provided by the embodiments of the present application;
Fig. 6 is a flow chart of the fourth method provided by the embodiments of the present application;
Fig. 7 is a flow chart of the fifth method provided by the embodiments of the present application;
Fig. 8 is a flow chart of the sixth method provided by the embodiments of the present application;
Fig. 9 is a flow chart of the seventh method provided by the embodiments of the present application;
Fig. 10 is a schematic diagram of the first apparatus provided by the embodiments of the present application;
Fig. 11 is a schematic diagram of the second apparatus provided by the embodiments of the present application;
Fig. 12 is a schematic diagram of the third apparatus provided by the embodiments of the present application;
Fig. 13 is a schematic diagram of the fourth apparatus provided by the embodiments of the present application;
Fig. 14 is a schematic diagram of the fifth apparatus provided by the embodiments of the present application;
Fig. 15 is a schematic diagram of the sixth apparatus provided by the embodiments of the present application;
Fig. 16 is a schematic diagram of the seventh apparatus provided by the embodiments of the present application;
Fig. 17 is a schematic diagram of the electronic device provided by the embodiments of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art fall within the scope of protection of the present application.
The embodiments of the present application provide a new multi-screen interaction scheme. In this scheme, the interaction mainly takes place between a user's mobile terminal device such as a mobile phone (referred to as a first terminal in the embodiments of the present application) and a terminal device with a large screen such as a television set (referred to as a second terminal in the embodiments of the present application). Specifically, the interaction can be carried out while the second terminal is playing a program such as a large live-broadcast gala (or, of course, another kind of program). For example, in a live gala program, the organizer may invite entertainment stars to perform; in the prior art, the user can only watch the star performing on stage through the second terminal. In the embodiments of the present application, however, technical means can be used to give the user the experience of "the star coming to my home". In a specific implementation, material related to the performance of a person such as a star can be provided in advance, and during that person's performance segment on the second terminal, material such as a pre-recorded performance video or animation of the person is projected, by way of augmented reality, into the real environment where the user is located. Since the user usually watches the program on a second terminal such as a television at home, the performance video/animation of the person may be projected into the user's home. Although the user still watches the projection result through the screen of the first terminal, because the background of the performance is the real-scene image captured by the user in his or her own environment, compared with watching the on-stage performance on the second terminal, the user can truly feel that the "star" is in his or her home. Certainly, in a specific implementation, besides a designated person, the specified content may also be an animal, or even a commodity, and so on; in the embodiments of the present application these are collectively referred to as a "specified object".
In a specific implementation, from the perspective of system architecture, referring to Fig. 1, the hardware devices involved in the embodiments of the present application may include the aforementioned first terminal and second terminal, and the software involved may be a client of an application installed in the first terminal (or a program built into the first terminal, etc.) and a first server in the cloud. For example, assuming the above interaction is provided during a "Double 11" gala, since the organizer of the "Double 11" gala is usually a company operating an online sales platform (for example, "Mobile Taobao", "Tmall", etc.), the application client and server provided by the online sales platform may supply the technical support for the multi-screen interaction. That is, the user may use a client of an application such as "Mobile Taobao" or "Tmall" to carry out the specific interaction, and data such as the material required during the interaction may be provided by that server. It should be noted that the second terminal exists mainly as a playback terminal, and the content such as the video played on it may be controlled by a back-end second server (for example, a server of the television station); that is, operations such as unified transmission of live video streams may be performed by the second server, after which the video signal is transmitted to each second terminal for playback. In other words, in the multi-screen interaction scenario provided by the embodiments of the present application, the first terminal and the second terminal correspond to different servers.
The specific implementation solutions are described in detail below.
Embodiment one
First, Embodiment one provides a multi-screen interaction method from the perspective of the client. Referring to Fig. 2, the method may specifically include:
S201: a first terminal loads interaction material, the interaction material including specified-object material created from a specified object;
The interaction material is the material needed to generate information content such as virtual images during the augmented-reality interaction. In a specific implementation, the specified object may be information about a designated person, information about a specified commodity, or information about a prop related to an offline game, and so on. Different specified objects may correspond to different interaction scenarios. For example, when the specified object is a designated person, the specific scenario may be a "star comes to your home" activity: while the user is watching a TV program on the television, the "star" performing in the program can be "teleported" into the user's home with this solution.
When the specified object is a specified commodity, the commodity is usually a physical commodity sold in an online sales system. In general the user would need to pay considerable resources to buy it, but during the activity it can be given to the user as a gift, through giveaways, flash sales at very low prices, and so on. During this gift-giving process, "giving gifts across the screen" can be realized in the manner of the embodiments of the present application: content related to the specified commodity is played on the second terminal such as the television and "teleported" to the first terminal such as the user's mobile phone. In addition, an operation option for rushing to purchase the data object associated with the specified commodity can be provided; when a purchase operation is received through the operation option, it is submitted to the server, and the server determines the purchase result. The user thus obtains a chance to rush to purchase, take part in a lottery, and so on, and thereby obtains the corresponding commodity or the opportunity to buy it at a very low price.
In addition, when the specified object is a prop related to an offline game, another form of "giving gifts across the screen" can be used. That is, if the system intends to provide users with non-physical gifts such as coupons or "cash red packets" during the activity, the gift-giving process can be associated with an offline game such as a magic show. For example, while a magic segment is being played on the second terminal such as the television, a certain prop may be used during the performance; with the solution of the embodiments of the present application, the prop can be "teleported" and displayed on the first terminal such as the user's mobile phone, and the user can then perform a claiming operation on the non-physical gift by, for example, clicking the prop. That is, when operation information on the target prop is received, the operation information is submitted to the server, the server determines the reward obtained by this operation and returns it, and the first terminal can then present the obtained reward information, as shown in the sketch below.
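The following is a minimal sketch of the claim flow just described, assuming the client submits the operation to a server that decides the reward. All names here (PropClaimRequest, RewardService, and so on) are illustrative assumptions and not part of the patent or of any real API.

```kotlin
// Sketch of the "claim a teleported prop" flow: the tap is submitted to the server,
// the server decides the reward, and the first terminal shows the result.
data class PropClaimRequest(val userId: String, val propId: String, val timestampMs: Long)
data class PropClaimResult(val success: Boolean, val rewardDescription: String?)

interface RewardService {                       // server-side decision, reached over the network
    fun claim(request: PropClaimRequest): PropClaimResult
}

class PropInteractionController(private val service: RewardService) {
    // Called when the user taps the prop that was "teleported" into the AR view.
    fun onPropTapped(userId: String, propId: String): String {
        val result = service.claim(
            PropClaimRequest(userId, propId, System.currentTimeMillis())
        )
        return if (result.success) {
            "You received: ${result.rewardDescription}"   // shown on the first terminal
        } else {
            "Better luck next time"
        }
    }
}
```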
The specified-object material may include video material obtained by filming the specified object. For example, if the specified object is a designated person, programs performed by the person, such as singing and dancing, can be recorded in advance to obtain the video material. Alternatively, the specified-object material may include a cartoon character modeled on the image of the specified object, together with cartoon material produced on the basis of that cartoon character. For example, when the specified object is a designated person, a cartoon character can be created from the person's image, and cartoon material, including dancing animations, singing animations and so on performed by the cartoon character, can be produced. If "singing" is required, the cartoon character can be dubbed by the designated person, or a song pre-recorded by the designated person can be played, and so on.
The same specified object may correspond to multiple sets of specified-object material; for example, for the same designated person, different performed programs may generate different pieces of person material. That is, when the specified object "enters the home" of a user, the user can select among the multiple sets of material, and the selected material is then used to provide the specific augmented-reality picture.
In addition, the interaction material provided by the first server may also include material for representing a teleportation channel. For example, the material may be generated from a door, a tunnel, a wormhole, a mascot such as the "Tmall" mascot, a transmission light array, and so on. The purpose of this teleportation-channel material is as follows: before the specified-object material is added to the real-scene image, since the specified object itself is performing on the stage of the gala venue but will then "come" into the user's home, in order to enhance the fun and make the change in the specified object's location seem more reasonable, the teleportation-channel material can first be used to play a preset animation effect, creating the atmosphere of the specified object "passing through" the teleportation channel into the user's home, so that the user obtains a better experience. Likewise, when the specified object needs to leave the home after the interaction, the teleportation-channel material can provide an animation of the reverse process, so that the user experiences the specified object leaving the home and the teleportation channel gradually closing. A sketch of how the two animations can be sequenced is given below.
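The following is an illustrative sketch of sequencing the teleportation-channel ("portal") animation around the specified-object material, as described above. The asset names, the PortalPhase states and the AnimationPlayer interface are assumptions made for illustration only.

```kotlin
// Sequencing of portal open → object enter → interaction → object leave → portal close.
enum class PortalPhase { HIDDEN, OPENING, OBJECT_ENTERING, INTERACTING, OBJECT_LEAVING, CLOSING }

interface AnimationPlayer {
    fun play(assetName: String, onFinished: () -> Unit)
    fun remove(assetName: String)
}

class PortalSequencer(private val player: AnimationPlayer) {
    var phase = PortalPhase.HIDDEN
        private set

    fun startEntrance() {
        phase = PortalPhase.OPENING
        player.play("portal_open.anim") {
            phase = PortalPhase.OBJECT_ENTERING
            player.play("object_enter_through_portal.anim") {
                player.remove("portal")          // portal disappears once the object is inside
                phase = PortalPhase.INTERACTING
            }
        }
    }

    fun startExit(onDone: () -> Unit) {
        phase = PortalPhase.OBJECT_LEAVING
        player.play("object_leave_through_portal.anim") {
            phase = PortalPhase.CLOSING
            player.play("portal_close.anim") {   // portal shrinks and disappears
                phase = PortalPhase.HIDDEN
                onDone()
            }
        }
    }
}
```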
Furthermore, the interaction material provided by the first server may also include voice sample material recorded by the specified object. This voice sample material can be used to greet the user when the specified object "enters" the user's home. Information such as the user's name (including a nickname, real name, etc.) can be obtained before the greeting, so as to realize a personalized greeting ("a thousand faces for a thousand people"), for example "XXX, I have come to your home", where "XXX" differs from user to user. The greeting is spoken by the specified object, and to achieve this degree of personalization it cannot be realized simply by pre-recording a single greeting. For this purpose, in the embodiments of the present application, the specified object (in the case of a designated person) can read aloud a specific passage of text in advance, and the voice for each character read aloud is recorded; the passage covers most pronunciation patterns, including initials, finals and tones. In a specific implementation, the passage is typically around one thousand characters and can cover roughly 90% of Chinese character pronunciations. In this way, after a specific greeting is generated from the user's exclusive name when the specified object "enters" the user's home, the corresponding voice can be assembled from the pronunciation information of each character saved in the voice sample material, achieving the effect of the specified object calling out the user's name in greeting.
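A minimal sketch of the personalized-greeting idea described above follows: each character of the greeting is mapped to a clip pre-recorded by the designated person, and the clips are concatenated. The clip store, the greeting phrasing and the raw PCM representation are assumptions; a real implementation would also handle missing characters, prosody and cross-fading between clips.

```kotlin
// Concatenate per-character recordings into a single greeting buffer.
class GreetingSynthesizer(private val clipsByCharacter: Map<Char, ShortArray>) {

    // Build the greeting text for this user, e.g. "小明，我来你家啦" (illustrative wording).
    fun buildGreetingText(userName: String) = "$userName，我来你家啦"

    // Concatenate the per-character PCM clips into one playable buffer.
    fun synthesize(userName: String): ShortArray {
        val text = buildGreetingText(userName)
        val pieces = text.mapNotNull { clipsByCharacter[it] }   // skip characters with no recording
        val out = ShortArray(pieces.sumOf { it.size })
        var offset = 0
        for (piece in pieces) {
            piece.copyInto(out, offset)
            offset += piece.size
        }
        return out   // hand this PCM buffer to the platform audio player
    }
}
```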
Certainly, in practical applications other materials may also be included, which are not enumerated here one by one. In a specific implementation, the data volume of the above interaction material may be relatively large, and loading it on the first terminal may take a long time, so it can be downloaded to the first terminal in advance. For example, after the gala played on the second terminal starts, the user may watch the program on the second terminal while keeping open the gala "main venue" interface provided by the first terminal, ready to interact at any time. The specific "star comes to your home" segment may take place at some moment during the gala, in synchronization with the state of the second terminal. Therefore, as long as the user enters the gala main-venue interface of the first terminal after the gala starts, the relevant interaction material can be downloaded in advance even though the "star comes to your home" activity has not yet officially started, so that once the activity starts the interaction can proceed quickly, avoiding the situation where the activity cannot be joined in time because the interaction material has not yet been downloaded successfully. Certainly, for users who have not entered the main-venue interface of the first terminal in advance but want to participate in the "star comes to your home" activity, the relevant interaction material can also be downloaded on the fly. To avoid such an on-the-fly download taking too long, a degraded scheme can be provided: for example, only the aforementioned specified-object material is downloaded, while the material for representing the teleportation channel, the voice sample material and so on are not; in this case the user will not experience the "teleportation" effect or receive the greeting of the specified object.
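Below is a sketch of the preloading / degraded-download idea described above. The asset kinds and the Downloader interface are illustrative assumptions, not a real SDK.

```kotlin
// Preload everything when the user opens the main-venue page early; fall back to a
// degraded download (specified-object material only) when the user joins late.
enum class AssetKind { SPECIFIED_OBJECT, PORTAL, VOICE_SAMPLES }

interface Downloader {
    fun fetch(kind: AssetKind): Boolean   // returns true when the asset is available locally
}

class InteractionMaterialLoader(private val downloader: Downloader) {
    private val loaded = mutableSetOf<AssetKind>()

    // Called as soon as the user opens the gala "main venue" page, before the segment starts.
    fun preloadAll() {
        for (kind in AssetKind.values()) {
            if (downloader.fetch(kind)) loaded += kind
        }
    }

    // Called when the user joins late: only the essential specified-object material is fetched;
    // portal animation and personalized greeting are skipped (the degraded scheme).
    fun loadDegraded() {
        if (downloader.fetch(AssetKind.SPECIFIED_OBJECT)) loaded += AssetKind.SPECIFIED_OBJECT
    }

    fun canShowPortal() = AssetKind.PORTAL in loaded
    fun canGreetByName() = AssetKind.VOICE_SAMPLES in loaded
}
```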
S202: capture a real-scene image;
In a specific implementation, the first terminal may provide a corresponding activity page for an activity such as "star comes to your home", and an operation option for issuing an interaction request may be provided on the page. For example, Fig. 3-1 shows a schematic diagram of such an activity page, in which prompt information about the specified object can be provided together with a button such as "Start now"; this button serves as the operation option through which the user issues the interaction request, and the user can issue the specific interaction request by clicking the "Start now" button. Certainly, in practical applications the user's interaction request may also be received in other ways; for example, a two-dimensional code may be displayed on the screen of the second terminal, and the user issues the request by scanning the two-dimensional code with the first terminal, and so on.
In a specific implementation, before the formal interaction starts, the operation option such as the "Start now" button may be kept in an inoperable state to prevent the user from clicking prematurely. The text shown on the option may also differ between states; for example, in the inoperable state it may read "The highlight is about to start", and shortly before the interaction begins the text on the button is changed to "Start now". Moreover, in order to build a sense of tension and anticipation for the user and attract more users to click, the button can be displayed with a "breathing" animation: for example, the button shrinks to 70% of its size, returns to its original size after 3 s, shrinks again after another 3 s, and keeps repeating this rhythm.
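As a minimal coroutine sketch of the "breathing" button just described (shrink to 70%, restore after 3 s, repeat), assuming setScale is a callback into whatever UI framework renders the button:

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive

// Loop the breathing rhythm until the surrounding coroutine is cancelled.
suspend fun breatheButton(setScale: (Float) -> Unit) = coroutineScope {
    while (isActive) {
        setScale(0.7f)      // shrink to 70%
        delay(3_000)
        setScale(1.0f)      // restore original size
        delay(3_000)
    }
}
```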
The time point at which the user's interaction request is received may be earlier than the time point at which the specified object formally disappears from the second terminal and "enters the user's home". This is because, after the user issues the interaction request, the client can first carry out some preparation. Specifically, after receiving the user's interaction request, real-scene image capture can be turned on in the first terminal; that is, the camera assembly on the first terminal is started and the terminal enters a live shooting state, in preparation for the subsequent augmented-reality interaction.
In a specific implementation, before real-scene image capture is started, it can first be determined whether the interaction material has been loaded locally on the first terminal; if not, the interaction material is loaded first.
It should be noted that, in the embodiments of the present application, the virtual image presented to the user by way of augmented reality is the specified-object material. In order to make the interaction more realistic, the specified-object material can be displayed on a plane in the real-scene image, for example the ground or the surface of a desk; in this way, if the specified object is a designated person, the person's performance takes place on a plane. Without such treatment, the specified-object material added to the real-scene image may appear to "float" in mid-air; if the material is a dance or song performed by the designated person, the person would seem to perform while "floating" in the air, which degrades the user experience and prevents the user from obtaining a truly immersive feeling.
For this reason, in a preferred embodiment of the present application, the specified-object material added to the real-scene image can be displayed on a plane contained in the image. In a specific implementation, the first terminal performs plane recognition on the real-scene image and then adds the specified-object material onto that plane, avoiding the "floating in the air" phenomenon. In this case, the exact position at which the specified-object material appears can be decided arbitrarily by the first terminal, as long as it lies on a plane. Alternatively, in another implementation, the appearance position of the specified-object material can be selected by the user. Specifically, after starting real-scene image detection, the client can first perform plane detection; after a plane is detected, as shown in Fig. 3-2, a placeable range can be drawn and a movable cursor provided, and the user can be prompted in the interface to move the cursor into the drawn placeable range. Once the user moves the cursor into the placeable range, the color of the cursor can change to indicate that the placement position is available, and the client records the position where the cursor is placed. In a specific implementation, there are many ways to record this position information. For example, the position of the first terminal at a certain moment can be taken as an initial position (for example, the position of the first terminal when the cursor placement is completed), and a coordinate system can be created with that initial position (for example, the geometric center point of the first terminal) as the origin; after the cursor has been placed in the placeable range, the position of the cursor relative to this coordinate system is recorded, and when the specified-object material is subsequently added to the real-scene image, it is added with this position as the reference.
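A simplified sketch of the placement logic described above follows: a coordinate system is anchored at the terminal's pose when placement is confirmed, and the cursor position on a detected plane is recorded in that coordinate system so the specified-object material can later be rendered there. Vec3 and Plane are illustrative stand-ins for whatever types the AR framework actually provides.

```kotlin
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
}

data class Plane(val center: Vec3, val extentX: Float, val extentZ: Float) {
    // Rough "inside the placeable range" test on the plane's local footprint.
    fun contains(p: Vec3) =
        kotlin.math.abs(p.x - center.x) <= extentX && kotlin.math.abs(p.z - center.z) <= extentZ
}

class PlacementController {
    private var origin: Vec3? = null          // terminal position when placement is confirmed
    var anchorInWorld: Vec3? = null
        private set

    fun onCursorConfirmed(cursorWorldPos: Vec3, terminalWorldPos: Vec3, plane: Plane): Boolean {
        if (!plane.contains(cursorWorldPos)) return false   // cursor outside the drawn range
        origin = terminalWorldPos                           // becomes the coordinate origin
        anchorInWorld = cursorWorldPos
        return true
    }

    // Position of the anchor relative to the recorded origin; used when adding the material.
    fun anchorRelativeToOrigin(): Vec3? {
        val o = origin ?: return null
        val a = anchorInWorld ?: return null
        return a - o
    }
}
```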
In addition, as mentioned above, in an optional embodiment, before the specified-object material formally "enters" the real-scene image, the material representing the teleportation channel can be added to the real-scene image. Under the above placement manner, after the user has placed the cursor, the material representing the teleportation channel can be shown at the cursor position. For example, assuming a "teleportation gate" material is used as the teleportation channel, as shown in Fig. 3-3, after the user has placed the cursor, the user can be prompted with "Plane confirmed, tap to place the teleportation gate", and after the user taps the cursor, the "teleportation gate" material is displayed at the corresponding position.
When the specified-object material subsequently starts to be added to the real-scene image, the cursor can disappear, and an animation effect can be provided according to the teleportation-channel material, showing the specified object entering the captured real-scene image through the displayed teleportation channel. For example, Figs. 3-4 and 3-5 show two states in this animation, presenting the effect of a person "entering" the user's home through the teleportation gate. After the specified-object material has entered the real-scene image, the material representing the teleportation channel disappears. At the end of the interaction, the material representing the teleportation channel can be displayed again, an animation of the specified object leaving through the teleportation channel is provided, and after the object has left completely, the material representing the teleportation channel disappears.
S203: when the video played on the second terminal reaches a target event related to the specified object, add the specified-object material to the real-scene image.
In a specific implementation, the starting time of the interaction can be tied to the target event, corresponding to the specified object, played on the second terminal. The so-called target event may be, for example, the start of an interaction event related to the specified object. For instance, in the program played on the second terminal, when the "star comes to your home" segment arrives, a "teleportation gate" (which may be a physical prop, or may be virtual and produced by projection) can be set up on the stage, and the event of the specified object stepping through the "teleportation gate" on the stage serves as the target event. That time point then becomes the starting time of the interaction, and accordingly the first terminal performs the processing of adding the specified-object material to the real-scene image for display.
Since the program on the second terminal is usually broadcast live, synchronization with the time point at which the target event occurs on the second terminal cannot be achieved by pre-setting a time in the first terminal. Moreover, what is played on the second terminal is usually a television signal; although the signal is sent out at the same moment, the time at which it reaches users in different geographical locations may differ. Still taking the event of the specified object stepping through the "teleportation gate" on stage as an example, a user in Beijing may see the event on the second terminal at 21:00:00, while a user in Guangzhou may only see it at 21:00:02, and so on. Therefore, even if staff of the first server send a unified notification message about the target event to each first terminal at the moment they see the event happen at the gala venue, the results actually experienced by users in different regions may differ: some users may feel that the "teleportation" of the specified object connects seamlessly with the event on the second terminal, while others may not; for example, the specified object may not yet have stepped out of the teleportation gate in the television program but has already entered the real-scene image on the mobile phone.
For this reason, in the embodiments of the present application, since the user usually interacts with a mobile terminal such as a mobile phone while watching television, the first terminal and the second terminal are typically located in the same spatial environment and are not far apart. In this case, the first terminal can perceive the target event on the second terminal in the following way: the television program producer inserts, in advance, an acoustic signal of a preset frequency into the video signal to be sent, at the moment the target event occurs. The acoustic signal thus arrives together with the video signal delivered to the user's second terminal. The frequency of the acoustic signal can lie outside the range of human hearing, so that the user does not perceive its presence, while the first terminal can detect it; the first terminal treats the acoustic signal as an identifier of the target event and then carries out the subsequent interaction process. In this way, the identifier of the occurrence of the target event is carried in the video signal itself and delivered to the first terminal through the second terminal, which ensures that the event the user sees on the second terminal connects seamlessly with the image seen on the first terminal, giving a better experience.
As for the acoustic signal, its specific frequency can be determined by the first server and provided by the first server to the second server; while sending the video signal, if the second server finds that a target event related to the specified object occurs, it inserts the acoustic signal at the corresponding position of the video signal. On the other hand, the first server can also inform the first terminal of the frequency of the acoustic signal, so that a connection is established between the first terminal and the second terminal through the acoustic signal. It should be noted that the same gala may contain multiple "star comes to your home" segments corresponding to different specified objects, so acoustic signals of different frequencies can be provided for different specified objects. The first server can provide the correspondence between specified objects and acoustic frequencies to the second server, which adds the acoustic signals according to this correspondence; the correspondence is also provided to the first terminal, which determines, from the frequency of the detected acoustic signal, which specified object the current event corresponds to.
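Below is a hedged sketch of how the first terminal might detect the inaudible marker tone and map its frequency to a specified object, using the Goertzel algorithm on PCM samples taken from the microphone. The patent does not prescribe a particular detection algorithm; the frequencies, threshold and object names are made-up examples.

```kotlin
import kotlin.math.PI
import kotlin.math.cos

class ToneDetector(private val sampleRateHz: Int) {

    // Goertzel power of one target frequency in a block of 16-bit PCM samples.
    private fun goertzelPower(samples: ShortArray, freqHz: Double): Double {
        val k = Math.round(samples.size * freqHz / sampleRateHz).toInt()
        val w = 2.0 * PI * k / samples.size
        val coeff = 2.0 * cos(w)
        var s0 = 0.0; var s1 = 0.0; var s2 = 0.0
        for (sample in samples) {
            s0 = sample + coeff * s1 - s2
            s2 = s1
            s1 = s0
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2
    }

    // Example mapping agreed between first server and first terminal (assumed values).
    private val objectByFrequency = mapOf(
        18_500.0 to "star_A",
        19_000.0 to "star_B"
    )

    // Returns the specified object whose marker tone dominates the block, if any.
    fun detect(samples: ShortArray, powerThreshold: Double = 1e10): String? =
        objectByFrequency.entries
            .map { (freq, obj) -> obj to goertzelPower(samples, freq) }
            .filter { it.second > powerThreshold }
            .maxByOrNull { it.second }
            ?.first
}
```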
After the interaction formally starts, as mentioned above, the animation based on the teleportation channel can serve as the mark of the start of the interaction, after which the specified-object material is added to the real-scene image; if the user has designated a position, it is added at that position. If, by the time the specified-object material is added, the user has moved the first terminal, i.e. its position relative to the initial position has changed, the material may not appear on the display screen of the first terminal even though it has been added. For this situation, since the coordinate system was created from the initial position of the mobile terminal (which no longer changes once determined), technologies such as SLAM (Simultaneous Localization and Mapping) can be used to determine the coordinates of the first terminal in that coordinate system after it has moved, i.e. where the first terminal has moved to and in which direction relative to the initial position; the user can then be guided to move the first terminal in the opposite direction so that the added material appears in the picture of the first terminal. As shown in Fig. 3-6, the user can be guided to move the first terminal by means of an "arrow".
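A small sketch of the guidance idea above: compare the recorded anchor position (where the material was placed) with the terminal's current position reported by the tracking system, and pick an arrow to show. The 0.2 m dead zone and the 2-D simplification are assumptions; a real implementation would also take the camera's orientation into account.

```kotlin
enum class GuideArrow { NONE, LEFT, RIGHT, UP, DOWN }

fun chooseGuideArrow(
    anchorX: Float, anchorY: Float,      // anchor coordinates in the placement coordinate system
    terminalX: Float, terminalY: Float,  // current terminal coordinates from SLAM tracking
    deadZoneMeters: Float = 0.2f
): GuideArrow {
    val dx = anchorX - terminalX         // positive: anchor lies to the right of the terminal
    val dy = anchorY - terminalY         // positive: anchor lies above the terminal
    val ax = kotlin.math.abs(dx)
    val ay = kotlin.math.abs(dy)
    return when {
        ax < deadZoneMeters && ay < deadZoneMeters -> GuideArrow.NONE   // close enough, no arrow
        ax >= ay && dx > 0 -> GuideArrow.RIGHT
        ax >= ay           -> GuideArrow.LEFT
        dy > 0             -> GuideArrow.UP
        else               -> GuideArrow.DOWN
    }
}
```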
As mentioned above, the same specified object may correspond to multiple sets of material, for example dance material, song material, and so on. In this case, before the specified-object material is added to the real-scene image, an option for selecting the specific material can be provided for the user to choose from. While the user is choosing, a short fixed video can be played; for example, the content of this video can be a box that keeps shaking, conveying that the specified object is making preparations such as changing clothes. After the user has selected a specific piece of material, the selected material is added to the real-scene image and displayed. For example, Fig. 3-7 shows, in a specific example, one frame of the display of the selected material, in which the part showing the person is a virtual image, while the background behind the person is the real-scene image captured by the user through the first terminal.
Since the interaction material may also include the voice sample material recorded by the specified object, after the specified-object material is added, the user-name information of the user associated with the first terminal can be obtained, and a dedicated greeting corpus containing the user's name can be generated for that user; the greeting corpus is then converted into voice according to the voice sample material and played. Correspondingly, the specified-object material may contain the specified object's movements, expressions and so on when greeting the user, so that the user feels that it is really the specified object in person greeting him or her. As for the user's name, the corresponding nickname can be determined from the account the current user has logged in with, or the user's real name can be obtained from real-name authentication information provided in advance by the user; in this way, a personalized greeting ("a thousand faces for a thousand people") can be achieved for different users. Certainly, if neither the nickname nor the real name of a user can be obtained, a relatively general form of address can be generated according to the user's gender, age and so on.
In addition, after the specified-object material is added to the real-scene image, a shooting operation option can be provided; when an operation request is received through this option, a corresponding image (a photo or a video, etc.) can be generated by taking a screenshot of, or recording, the image layers. In this way, a group photo with the specified object can be realized. That is to say, when a photo or video is shot, the real-scene image may also contain the real image of a person who wants to be photographed together with the specified object. For example, while a user is interacting, since the interaction usually takes place at home, there may be other people nearby; if they want a group photo with the specified object, they can enter the real-scene capture area of the first terminal so that the first terminal captures their real image, after which the user operates the shooting option to complete the photographing. In a specific implementation, depth-of-field information can also be used to distinguish the front-back positional relationship between the person in the real-scene image and the specified object in the virtual image, so as to further enhance the sense of reality.
Since the interface also contains operation options such as buttons, when taking the screenshot or recording the screen, the image layer used for displaying the operation options can be removed, and the screenshot or recording is performed only on the layer of the real-scene image and the layer of the video/animation, so as to improve the realism of the generated photo or video.
In a specific implementation, the functions of taking a photo and recording a video can be provided through the same operation option, and the user's intention is distinguished by the manner of operation. For example, a click on the option corresponds to taking a photo, while a long press corresponds to recording a video. That is, if the user clicks the option, a screenshot is triggered and a photo is generated; if the user keeps pressing the option, screen recording is triggered and continues until the user releases it. In addition, the length of each recorded video can be limited; for example, each segment is no more than 10 s, so if the user keeps pressing the option for more than 10 s, the recording ends even if the option is still pressed, and a video of at most 10 s is generated.
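The following is an illustrative sketch of one way to drive the shared photo/video option described above: a quick tap produces a screenshot, while a long press records until release or until the 10 s cap is reached. CaptureBackend and the millisecond thresholds are assumptions made for illustration.

```kotlin
interface CaptureBackend {
    fun takeScreenshot()            // composites only the camera layer + content layer
    fun startRecording()
    fun stopRecording()
}

class CaptureButtonController(
    private val backend: CaptureBackend,
    private val longPressThresholdMs: Long = 300,
    private val maxRecordingMs: Long = 10_000
) {
    private var pressedAtMs: Long = 0
    private var recording = false

    fun onPressDown(nowMs: Long) { pressedAtMs = nowMs }

    // Call periodically (e.g. each UI frame) while the button stays pressed.
    fun onPressHeld(nowMs: Long) {
        val held = nowMs - pressedAtMs
        if (!recording && held >= longPressThresholdMs) {
            recording = true
            backend.startRecording()
        }
        if (recording && held >= longPressThresholdMs + maxRecordingMs) {
            recording = false
            backend.stopRecording()          // cap the clip at 10 s even if still pressed
        }
    }

    fun onPressUp(nowMs: Long) {
        if (recording) {
            recording = false
            backend.stopRecording()
        } else if (nowMs - pressedAtMs < longPressThresholdMs) {
            backend.takeScreenshot()         // short tap → photo
        }
    }
}
```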
Furthermore, during the interaction, an operation option for sharing the shot photo or video can also be provided. For example, this option can be located beside the above-mentioned option for shooting photos or videos, with prompt information displayed, such as "Tap to share the wonderful moment". After the user clicks it, sharing entrances to multiple social network platforms can be provided, and the user can choose one of them to share to.
In addition, during the interaction, besides playing the video/animation corresponding to the specified object, or taking a commemorative photo with the specified object and sharing it, other interactive operation options can also be provided. For example, an option for participating in a certain public-welfare activity can be provided; if the user is willing to participate, he or she can simply click the option. The number of people who participate in the public-welfare activity through this channel can be counted by the server associated with the progress of the activity and provided in real time to staff such as the director at the program site, so that the stage setting at the program site changes with the progress of the public-welfare activity, for example gradually turning from desert into oasis.
After the interaction ends, the specified-object material is no longer displayed; of course, the specified object may appear again in the picture of the second terminal. Therefore, in order to better present the "teleportation" process to the user, the teleportation-channel material can be displayed again at this moment. As shown in Fig. 3-8, an animation showing the person leaving through the teleportation channel can be provided; the teleportation-channel material itself then gradually shrinks, and after the object has left completely, the material representing the teleportation channel also disappears from the picture.
After the interaction ends, the interface for real-scene image capture can be exited. At this point, in an optional implementation, a follow-up page can be provided for browsing and sharing the photos or videos obtained by shooting. That is, after the interaction, a follow-up page guides the user to share the photos or videos that were shot. On this page, the photos or videos can be sorted according to the order in which they were shot; for example, as shown in Fig. 3-9, they can be displayed from left to right, from the most recent to the earliest. During the display, the user can click any photo or video to bring up a sharing component interface, as shown in Fig. 3-10, through which the user completes the specific sharing operation.
In short, with the embodiments of the present application, the specified-object material can be loaded; during the interaction, a real-scene image of the actual environment where the user is located is captured, and when the second terminal plays the target event corresponding to the specified object, the specified-object material is added to the real-scene image and displayed. In this way, the user experiences the specified object coming to his or her own space (for example, the user's home), and the user's participation in the interaction is thereby improved.
Embodiment two
Embodiment two, corresponding to Embodiment one, provides a multi-screen interaction method from the perspective of the server. Referring to Fig. 4, the method may specifically include:
S401: a first server saves interaction material, the interaction material including specified-object material created from a specified object;
S402: the interaction material is provided to a first terminal, which captures a real-scene image and, when the video played on a second terminal reaches a target event corresponding to the specified object, adds the specified-object material to the real-scene image.
In a specific implementation, the specified object includes a designated person; certainly, it may also include an animal, a commodity, a prop, and so on.
When the interaction material is provided, video material obtained by filming the specified object can be provided; alternatively, a cartoon character modeled on the image of the specified object, together with cartoon material produced on the basis of the cartoon character, can be provided.
When the specified object is a designated person, voice sample material recorded by the designated person can also be provided.
In a specific implementation, in order to enable the first terminal to perceive the occurrence of the target event on the second terminal more easily, the first server can also provide an acoustic signal of a preset frequency to the second server corresponding to the second terminal, so that the signal is added to the video when the video played on the second terminal reaches the target event corresponding to the specified object, and the first terminal learns of the occurrence of the target event by detecting the acoustic signal of the preset frequency.
In a specific implementation, the server can also compile statistics on the interaction situation of each client. The statistical information can be provided to the second server corresponding to the second terminal, and the second server adds the statistical information to the video played by the second terminal, so that the statistical result can be announced through the second terminal, or the statistical result can influence the stage setting of the gala venue, and so on.
Wherein, due to the embodiment second is that corresponding with embodiment one, before relevant specific implementation may refer to
The record in embodiment one is stated, which is not described herein again.
Embodiment three
Embodiment three provides a multi-screen interaction method from the perspective of the second terminal. Referring to Fig. 5, the method may specifically include:
S501: a second terminal plays a video;
S502: when the video is played to an object event related to a specified object, an acoustic signal of a predetermined frequency is played, so that a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
In specific implementations, different specified objects may correspond to acoustic signals of different frequencies; an illustrative mapping and tone-generation sketch is given below.
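The sketch below shows one way, not the patent's prescribed implementation, for the second-terminal side to map each specified object to its own marker frequency and synthesize a short tone for it. The object identifiers, frequencies and output file name are assumptions.

```python
# Sketch: map specified objects to marker frequencies and write a short tone
# that can later be mixed into the programme audio at the event point.
import wave
import numpy as np

OBJECT_FREQS = {"host_a": 17500.0, "host_b": 18000.0, "mascot": 18500.0}

def write_marker_tone(object_id: str, path: str = "marker.wav",
                      duration_s: float = 0.5, sample_rate: int = 44100) -> None:
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    tone = (0.3 * np.sin(2 * np.pi * OBJECT_FREQS[object_id] * t) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)          # mono
        f.setsampwidth(2)          # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(tone.tobytes())

write_marker_tone("mascot")
```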
Embodiment four
Embodiment four provides a multi-screen interaction method from the perspective of the second server corresponding to the second terminal. Referring to Fig. 6, the method may specifically include:
S601: a second server receives information on an acoustic signal of a predetermined frequency provided by a first server;
S602: the acoustic signal of the predetermined frequency is inserted into the video at the position where an object event related to a specified object occurs, so that while the video is played by a second terminal, a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image. A sketch of this embedding step is given below.
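A sketch of the embedding step under simplifying assumptions: the programme audio is treated as a mono float32 NumPy array and the tone is mixed in at the event's time offset. Real systems would operate on the video container's audio track; the frequency, amplitude and duration are illustrative.

```python
# Sketch: mix a predetermined-frequency tone into the audio at the time of the
# object event, so listening terminals can detect it.
import numpy as np

def embed_marker(audio: np.ndarray, sample_rate: int, event_time_s: float,
                 marker_freq: float = 18000.0, duration_s: float = 0.5,
                 amplitude: float = 0.2) -> np.ndarray:
    start = int(event_time_s * sample_rate)
    n = int(duration_s * sample_rate)
    t = np.arange(n) / sample_rate
    tone = amplitude * np.sin(2 * np.pi * marker_freq * t)
    out = audio.copy()
    out[start:start + n] += tone[:max(0, len(out) - start)]   # clip at end of track
    return out
```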
In specific implementations, the second server may also receive, from the first server, statistical information on the interaction status of the first terminals, add the statistical information to the video, and send it so that it can be played by the second terminal.
Embodiment five
In the foregoing embodiments one to four, multi-screen interaction is realized between the first terminal and the second terminal. In practical applications, however, the user may also watch videos such as a party programme live stream through the first terminal itself; in that case, while watching the video through the first terminal, the user can likewise obtain the "star comes to my home" experience. That is, the same terminal can be used both to watch the video and to interact.
Specifically, embodiment five provides a video interaction method. Referring to Fig. 7, the method may specifically include:
S701: a first terminal loads interaction material, where the interaction material includes a specified object material created according to a specified object;
S702: when the video played in the first terminal reaches an object event related to the specified object, an interactive interface is jumped to;
S703: a real scene image collection result is shown in the interactive interface, and the specified object material is added to the real scene image.
That is, while the user is watching a video through the first terminal, if the video is played to an object event related to the specified object, the first terminal can jump to an interactive interface; in that interface, a real scene image is first collected, and the specified object material is then added to the real scene image. In this way, the user likewise experiences the specified object "coming" from, for example, the party venue into the space where the user is located.
For other specific implementations of embodiment five, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
Embodiment six
Embodiment six corresponds to embodiment five and provides a video interaction method from the perspective of the first server. Referring to Fig. 8, the method may specifically include:
S801: a first server saves interaction material, where the interaction material includes a specified object material created according to a specified object;
S802: the interaction material is provided to a first terminal, so that when the video played in the first terminal reaches an object event related to the specified object, an interactive interface is jumped to, a real scene image collection result is shown in the interactive interface, and the specified object material is added to the real scene image.
Embodiment seven
In the foregoing embodiments, the first terminal such as a mobile phone serves as the executing subject that provides the specific interaction result. In practical applications, the scheme can be extended to other scenarios. For example, besides a mobile phone, a wearable device such as smart glasses may also serve as the first terminal; and besides the video played in the second terminal or the first terminal, the interaction may also accompany a film shown on a cinema screen, or a related event such as a performance, a merchant's promotional campaign or a sports match watched on site. To this end, embodiment seven provides another interaction method. Referring to Fig. 9, the method may specifically include:
S901: a first terminal loads interaction material, where the interaction material includes a specified object material created according to a specified object;
In specific implementations, since the application scenario of this embodiment is not limited, an interface for loading interaction material may be provided before the specific interactive interface. In that interface, a variety of optional interaction materials may be offered, for example, materials related to films currently showing in major cinema chains, or materials related to offline performances, promotional campaigns, matches and other events, from which the user can select and download the materials he or she needs. In addition, since some applications provide an online ticket-booking function, a user may book not only film tickets but also tickets for various performances, matches and the like. Therefore, the interaction material may also be provided according to the user's specific ticketing information; for example, when the user has booked a certain film ticket through an online booking system, if interaction material related to that film exists, the user can be prompted to download it, and so on. It should be noted that, in the embodiments of the present application, the downloaded interaction material can be stored locally in a terminal such as a mobile phone, or downloaded to a terminal such as a wearable device, so that interaction can be carried out more conveniently while watching the film, performance and the like. A hedged sketch of such ticket-based material recommendation is given below.
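The sketch below illustrates that recommendation idea only; the catalogue, booking identifiers and package names are invented placeholders for real server-side data.

```python
# Sketch: given the films or events a user has booked, offer any interaction
# material packages that match. In-memory dictionaries stand in for real data.
MATERIAL_CATALOGUE = {
    "film:galaxy_quest_2": "ar_pack_galaxy_quest_2.zip",
    "match:city_derby":    "ar_pack_city_derby.zip",
}

def suggest_materials(booked_items: list) -> list:
    """Return downloadable material packages matching the user's bookings."""
    return [MATERIAL_CATALOGUE[item] for item in booked_items if item in MATERIAL_CATALOGUE]

print(suggest_materials(["film:galaxy_quest_2", "film:unrelated"]))
```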
In this embodiment, the specified object may likewise refer to a designated person, a commodity, a prop, and so on.
S902: a real scene image is collected;
In general, a wearable device is equipped with a camera and similar components, so the real scene image can be collected by the wearable device; while the user watches the film, performance and the like, the image actually seen through the glasses can be the real scene image collected by those glasses.
S903: when an object event related to the specified object is detected, the specified object material is added to the real scene image.
The specific object event may be the appearance of the specified object in the film, performance, match, promotional campaign and so on. The object event can be detected in a number of ways. In one way, the projection side of the film or the organiser of the performance, match or promotional campaign inserts information such as an acoustic signal at the specific event node, and the wearable device learns of the occurrence of the specific event by detecting that signal. In other implementations, the occurrence of the object event can also be learned of directly by analysing the collected real scene image. For example, when the interaction is carried out with a wearable device, the real scene image collected by the wearable device's camera is usually identical to, or largely overlaps with, what the user actually sees; if the user sees a certain object event occur, the camera can in practice also capture the corresponding event. In addition, a wearable device may also carry a sound collection apparatus, so the occurrence of the target event can be learned of through image analysis, speech analysis and similar means. One possible image-analysis approach is sketched below.
Corresponding to embodiment one, an embodiment of the present application further provides a multi-screen interaction device. Referring to Fig. 10, the device is applied to a first terminal and comprises:
a first material loading unit 1001, configured to load interaction material, where the interaction material includes a specified object material created according to a specified object;
a first real scene image acquisition unit 1002, configured to collect a real scene image;
a first material adding unit 1003, configured to add the specified object material to the real scene image when the video played in the second terminal reaches an object event related to the specified object.
The video played in the second terminal may be a live video stream.
The first terminal receives an acoustic signal of a predetermined frequency issued during the video playback and thereby determines the occurrence of the object event. The acoustic signal may be issued when the video played in the second terminal reaches the object event corresponding to the specified object.
The first material adding unit may specifically be configured to add the specified object material to a plane contained in the real scene image and display it.
In specific implementations, the device may further include:
a placement location determination unit, configured to determine a placement location in the collected real scene image before the specified object material is added;
the first material adding unit may specifically be configured to add the specified object material at the placement location.
The placement location determination unit may specifically be configured to determine a plane position in the collected real scene image, the placement location being located in that plane position.
Specifically, the placement location determination unit may include:
a plane detection subunit, configured to carry out plane detection in the collected real scene image;
a cursor providing subunit, configured to provide a cursor and, according to the detected plane, determine the range within which the cursor can be placed;
a placement location determination subunit, configured to take the position at which the cursor is placed as the placement location.
The placement location determination subunit may specifically include:
a coordinate system establishment subunit, configured to establish a coordinate system with the initial position of the first terminal as the origin;
a coordinate determination subunit, configured to determine the cursor coordinates of the position at which the cursor is placed in the coordinate system;
a position determination subunit, configured to take the cursor coordinates as the placement location. A geometric sketch of this placement logic is given below.
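The following is a minimal geometric sketch, under simplifying assumptions, of how cursor coordinates in a terminal-origin frame could be obtained: the world origin is the first terminal's initial position, the detected plane is treated as horizontal at a known height, and the placement location is where the cursor ray from the current camera pose meets that plane.

```python
# Sketch: intersect the cursor ray with a detected horizontal plane; the result
# is the placement location expressed in the terminal-origin coordinate system.
from typing import Optional
import numpy as np

def placement_on_plane(cam_pos: np.ndarray, cursor_dir: np.ndarray,
                       plane_height: float) -> Optional[np.ndarray]:
    """Intersect the ray (cam_pos + t * cursor_dir) with the plane y = plane_height."""
    if abs(cursor_dir[1]) < 1e-6:
        return None                       # ray parallel to the plane
    t = (plane_height - cam_pos[1]) / cursor_dir[1]
    if t <= 0:
        return None                       # plane is behind the camera
    return cam_pos + t * cursor_dir       # coordinates in the terminal-origin frame

# Example: camera 1.4 m above the floor at the origin, cursor pointing forward
# and slightly down; the material would be anchored at the returned point.
print(placement_on_plane(np.array([0.0, 1.4, 0.0]),
                         np.array([0.0, -0.5, 1.0]), plane_height=0.0))
```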
Specifically, the device may further include:
a change direction determination unit, configured to, after the specified object material is added at the placement location, determine the change direction of the first terminal relative to the initial position when the material does not appear in the interface of the first terminal;
a prompt unit, configured to provide, according to the change direction, a prompt mark in the opposite direction in the interface of the first terminal; one way of computing such a hint is sketched below.
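One assumed way to compute such a hint is sketched here: compare the camera's yaw with the bearing from the camera to the anchor point and prompt the user to turn toward it. The angle conventions and field-of-view value are illustrative assumptions.

```python
# Sketch: decide whether the anchored material is in view, or whether the user
# should be prompted to turn left or right toward it.
import math

def turn_hint(cam_x: float, cam_z: float, cam_yaw_deg: float,
              anchor_x: float, anchor_z: float, fov_deg: float = 60.0) -> str:
    bearing = math.degrees(math.atan2(anchor_x - cam_x, anchor_z - cam_z))
    diff = (bearing - cam_yaw_deg + 180.0) % 360.0 - 180.0   # signed angle in (-180, 180]
    if abs(diff) <= fov_deg / 2:
        return "in view"
    return "turn right" if diff > 0 else "turn left"

print(turn_hint(0.0, 0.0, 0.0, anchor_x=-2.0, anchor_z=1.0))  # prints "turn left"
```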
In specific implementations, the interaction material may further include a material for representing a transfer channel, and the device may further include:
a channel material adding unit, configured to add the material representing the transfer channel to the real scene image after the real scene image is collected.
The first material adding unit may specifically be configured to show, based on the transfer channel material, the process of the specified object entering the captured real scene image through the transfer channel.
In addition, the interaction material may also include a voice sample material recorded by the specified object, and the device may further include:
a user name obtaining unit, configured to obtain the user name information of the user associated with the first terminal;
a greeting corpus generation unit, configured to generate, for the associated user, a greeting corpus including the user name;
a broadcast unit, configured to convert the greeting corpus into voice according to the voice sample material and play it; a sketch of this greeting flow is given below.
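A sketch of that greeting flow follows. Building the greeting corpus is straightforward; the voice-synthesis step that would use the designated person's recorded samples is represented only by a hypothetical placeholder, since the patent does not name a particular TTS engine.

```python
# Sketch: build a personalised greeting corpus and hand it to a (hypothetical)
# synthesis step based on the designated person's voice samples.
def build_greeting(user_name: str, object_name: str = "the star") -> str:
    return f"Hi {user_name}, this is {object_name}, great to drop by your place tonight!"

def play_greeting(user_name: str, sample_dir: str) -> None:
    corpus = build_greeting(user_name)
    # audio = synthesize_with_samples(corpus, sample_dir)  # hypothetical voice step
    # play(audio)                                          # hypothetical playback step
    print("Would synthesize and play:", corpus)

play_greeting("Alex", "./voice_samples")
```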
The specified object material may be multiple sets of materials corresponding to the same specified object, and the device further includes:
a material selection option providing unit, configured to provide an operation option for selecting among the specified object materials;
the first material adding unit may be configured to add the selected specified object material to the real scene image.
In addition, the device may further include:
a shooting option providing unit, configured to provide a shooting operation option while the specified object material is being displayed in the real scene image;
an image generation unit, configured to receive an operation request through the shooting operation option and generate a corresponding image according to the image layers, where the image layers include the real scene image and the specified object material.
The image generation unit may specifically be configured to take a screenshot of, or record, the image layers while excluding the image layer used for showing the operation options, so as to generate the shot image; an illustrative compositing sketch is given below.
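An illustrative compositing sketch, assuming Pillow and synthetic stand-in layers: the real scene layer and the material layer are merged, while any layer holding on-screen operation options is simply left out of the result.

```python
# Sketch: generate the shot by compositing the real scene and material layers,
# excluding UI layers. Pillow is used purely for illustration.
from PIL import Image

def compose_shot(real_scene: Image.Image, material: Image.Image) -> Image.Image:
    """Composite the material layer over the real scene; UI layers are excluded."""
    base = real_scene.convert("RGBA")
    overlay = material.convert("RGBA").resize(base.size)
    return Image.alpha_composite(base, overlay)

# Example with synthetic layers standing in for the camera frame and the material:
scene = Image.new("RGBA", (640, 360), (40, 40, 40, 255))
sticker = Image.new("RGBA", (640, 360), (255, 0, 0, 80))
compose_shot(scene, sticker).save("shot.png")
```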
In specific implementations, the real scene image may further contain the image of a person photographed together with the specified object.
The device may further include:
a sharing option providing unit, configured to provide an operation option for sharing the shot image;
a landing page providing unit, configured to provide a landing page for browsing and sharing the shot image.
In specific implementations, the specified object includes information on a designated person. Alternatively, the specified object includes information on a specified commodity.
Specifically, the device further includes:
a rush-to-purchase option providing unit, configured to provide, after the specified object material is added to the real scene image, an operation option for rushing to purchase the data object associated with the specified commodity;
a submitting unit, configured to, when a rush-to-purchase operation is received through the operation option, submit it to the server side, the purchase result being determined by the server side; a hedged sketch of this submission is given below.
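A hedged sketch of that submission step: the client posts the purchase attempt and lets the server decide the outcome. The endpoint URL and payload fields are assumptions made for illustration.

```python
# Sketch: submit a rush-to-purchase attempt; the server side decides the result.
import requests

def submit_rush_purchase(user_id: str, item_id: str,
                         endpoint: str = "https://example.com/api/rush") -> bool:
    resp = requests.post(endpoint, json={"user_id": user_id, "item_id": item_id}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("won", False)   # server-side decision on the purchase result
```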
In addition, the specified object may include prop information related to an offline game. In that case, the device may further include:
an operation information submitting unit, configured to, after the specified object material is added to the real scene image, submit operation information on a target prop to the server side when such information is received, the server side determining the obtained operation reward information and returning it;
a reward information providing unit, configured to provide the obtained reward information.
The specified object material may include a video material obtained by shooting the specified object. Alternatively, the specified object material may include a cartoon character modeled on the image of the specified object, and a cartoon material produced based on that cartoon character.
Corresponding to embodiment two, an embodiment of the present application further provides a multi-screen interaction device. Referring to Fig. 11, the device is applied to a first server and comprises:
a first interaction material storage unit 1101, configured to save interaction material, where the interaction material includes a specified object material created according to a specified object;
a first interaction material providing unit 1102, configured to provide the interaction material to a first terminal, so that the first terminal collects a real scene image, and when the video played in a second terminal reaches an object event corresponding to the specified object, the specified object material is added to the real scene image.
The specified object may include a designated person.
Specifically, the first interaction material storage unit may be configured to save a video material obtained by shooting the specified object, or to save a cartoon character modeled on the image of the specified object together with a cartoon material produced based on that cartoon character.
When the specified object includes a designated person, the first interaction material storage unit may also be configured to save a voice sample material recorded by the designated person.
In addition, the device may further include:
an acoustic signal information providing unit, configured to provide an acoustic signal of a predetermined frequency to the second server corresponding to the second terminal, so that when the video played in the second terminal reaches the object event corresponding to the specified object, the acoustic signal is added to the video, and the first terminal learns of the occurrence of the object event by detecting the acoustic signal of the predetermined frequency.
Furthermore, the device may include:
a statistics unit, configured to collect statistics on the interaction status of each first terminal;
a statistical information providing unit, configured to supply the statistical information to the second server corresponding to the second terminal, the second server adding the statistical information to the video played by the second terminal.
Corresponding to embodiment three, an embodiment of the present application further provides a multi-screen interaction device. Referring to Fig. 12, the device is applied to a second terminal and comprises:
a video playback unit 1201, configured to play a video;
an acoustic signal broadcast unit 1202, configured to play an acoustic signal of a predetermined frequency when the video is played to an object event related to a specified object, so that a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
Different specified objects may correspond to acoustic signals of different frequencies.
Corresponding to embodiment four, an embodiment of the present application further provides a multi-screen interaction device. Referring to Fig. 13, the device is applied to a second server and comprises:
an acoustic signal information receiving unit 1301, configured to receive information on an acoustic signal of a predetermined frequency provided by a first server;
an acoustic signal information insertion unit 1302, configured to insert the acoustic signal of the predetermined frequency into the video at the position where an object event related to a specified object occurs, so that while the video is played by a second terminal, a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
The device may further include:
a statistical information receiving unit, configured to receive statistical information on the interaction status of the first terminals provided by the first server;
a statistical information broadcast unit, configured to add the statistical information to the video and send it so that it can be played by the second terminal.
Corresponding to embodiment five, an embodiment of the present application further provides a video interaction device. Referring to Fig. 14, the device is applied to a first terminal and comprises:
a loading unit 1401, configured to load interaction material, where the interaction material includes a specified object material created according to a specified object;
an interface jumping unit 1402, configured to jump to an interactive interface when the video played in the first terminal reaches an object event related to the specified object;
a material adding unit 1403, configured to show a real scene image collection result in the interactive interface and add the specified object material to the real scene image.
Corresponding to embodiment six, an embodiment of the present application further provides a video interaction device. Referring to Fig. 15, the device is applied to a first server and comprises:
a second material storage unit 1501, configured to save interaction material, where the interaction material includes a specified object material created according to a specified object;
a second material providing unit 1502, configured to provide the interaction material to a first terminal, so that when the video played in the first terminal reaches an object event related to the specified object, an interactive interface is jumped to, a real scene image collection result is shown in the interactive interface, and the specified object material is added to the real scene image.
Corresponding to embodiment seven, an embodiment of the present application further provides an interaction device. Referring to Fig. 16, the device may comprise:
a second material loading unit 1601, configured to load interaction material, where the interaction material includes a specified object material created according to a specified object;
a second real scene image acquisition unit 1602, configured to collect a real scene image;
a second material adding unit 1603, configured to add the specified object material to the real scene image when an object event related to the specified object is detected.
In addition, an embodiment of the present application further provides an electronic equipment, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
loading interaction material, where the interaction material includes a specified object material created according to a specified object;
collecting a real scene image;
when the video played in a second terminal reaches an object event related to the specified object, adding the specified object material to the real scene image.
Fig. 17 exemplarily illustrates the architecture of the electronic equipment. For example, the device 1700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and so on.
Referring to Fig. 17, the device 1700 may include one or more of the following components: a processing component 1702, a memory 1704, a power supply component 1706, a multimedia component 1708, an audio component 1710, an input/output (I/O) interface 1712, a sensor component 1714 and a communication component 1716.
The processing component 1702 generally controls the overall operation of the device 1700, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 1702 may include one or more processors 1720 to execute instructions, so as to complete all or part of the steps of the video playing method provided by the disclosed technical solution: generating a stream compression request when a preset condition is met and sending it to the server, where the stream compression request records information for triggering the server to obtain a target region of interest, and the stream compression request is used to request the server to preferentially guarantee the code rate of the video content within the target region of interest; and playing, according to the code stream file returned by the server, the video content corresponding to the code stream file, where the code stream file is a video file obtained by the server performing compression processing, according to the stream compression request, on the video content outside the target region of interest. In addition, the processing component 1702 may include one or more modules to facilitate interaction between the processing component 1702 and other components; for example, the processing component 1702 may include a multimedia module to facilitate interaction between the multimedia component 1708 and the processing component 1702.
The memory 1704 is configured to store various types of data to support operation at the device 1700. Examples of such data include instructions for any application or method operating on the device 1700, contact data, phone book data, messages, pictures, videos, and so on. The memory 1704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disc.
The power supply component 1706 provides power to the various components of the device 1700. The power supply component 1706 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 1700.
The multimedia component 1708 includes a screen providing an output interface between the device 1700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1708 includes a front camera and/or a rear camera. When the device 1700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 1710 is configured to output and/or input acoustic signals. For example, the audio component 1710 includes a microphone (MIC); when the device 1700 is in an operation mode such as a call mode, a recording mode or a voice recognition mode, the microphone is configured to receive external acoustic signals. The received acoustic signals may be further stored in the memory 1704 or sent via the communication component 1716. In some embodiments, the audio component 1710 also includes a loudspeaker for outputting acoustic signals.
The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules, which may be a keyboard, a click wheel, buttons and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button and a lock button.
The sensor component 1714 includes one or more sensors for providing state assessments of various aspects of the device 1700. For example, the sensor component 1714 can detect the open/closed state of the device 1700 and the relative positioning of components, such as the display and keypad of the device 1700; it can also detect a position change of the device 1700 or of one of its components, the presence or absence of user contact with the device 1700, the orientation or acceleration/deceleration of the device 1700, and a temperature change of the device 1700. The sensor component 1714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1714 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1714 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1716 is configured to facilitate wired or wireless communication between the device 1700 and other devices. The device 1700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1716 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 1700 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for executing the above methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1704 including instructions, where the instructions can be executed by the processor 1720 of the device 1700 to complete, in the video playing method provided by the disclosed technical solution, the steps of: generating a stream compression request when a preset condition is met and sending it to the server, where the stream compression request records information for triggering the server to obtain a target region of interest, and the stream compression request is used to request the server to preferentially guarantee the code rate of the video content within the target region of interest; and playing, according to the code stream file returned by the server, the video content corresponding to the code stream file, where the code stream file is a video file obtained by the server performing compression processing, according to the stream compression request, on the video content outside the target region of interest. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that the present application can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system or system embodiments are described relatively simply because they are substantially similar to the method embodiments, and the relevant parts may refer to the explanation of the method embodiments. The system and system embodiments described above are merely illustrative; the units described as separate parts may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement them without creative effort.
The multi-screen interaction method, device and electronic equipment provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the explanations of the above embodiments are only intended to help understand the method of the present application and its core idea. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application according to the idea of the present application. In conclusion, the content of this specification should not be construed as a limitation on the present application.
Claims (48)
1. A multi-screen interaction method, characterized by comprising:
loading, by a first terminal, interaction material, wherein the interaction material includes a specified object material created according to a specified object;
collecting a real scene image;
when a video played in a second terminal reaches an object event related to the specified object, adding the specified object material to the real scene image.
2. The method according to claim 1, characterized in that the video played in the second terminal is a live video stream.
3. The method according to claim 1, characterized in that the first terminal receives an acoustic signal of a predetermined frequency issued during the video playback, and determines the occurrence of the object event therefrom.
4. The method according to claim 1, characterized in that the acoustic signal is issued when the video played in the second terminal reaches the object event corresponding to the specified object.
5. The method according to claim 1, characterized in that adding the specified object material to the real scene image comprises:
adding the specified object material to a plane contained in the real scene image and displaying it.
6. The method according to claim 5, characterized in that,
before adding the specified object material, the method further comprises:
determining a placement location in the collected real scene image;
and adding the specified object material to the real scene image comprises:
adding the specified object material at the placement location.
7. The method according to claim 5, characterized in that determining the placement location in the collected real scene image comprises:
determining a plane position in the collected real scene image, the placement location being located in the plane position.
8. The method according to claim 7, characterized in that determining the plane position in the collected real scene image, the placement location being located in the plane position, comprises:
carrying out plane detection in the collected real scene image;
providing a cursor and, according to the detected plane, determining the range within which the cursor can be placed;
taking the position at which the cursor is placed as the placement location.
9. The method according to claim 8, characterized in that taking the position at which the cursor is placed as the placement location comprises:
establishing a coordinate system with the initial position of the first terminal as the origin;
determining the cursor coordinates of the position at which the cursor is placed in the coordinate system;
taking the cursor coordinates as the placement location.
10. The method according to claim 6, characterized by further comprising:
after the specified object material is added at the placement location, when the material does not appear in the interface of the first terminal, determining the change direction of the first terminal relative to the initial position;
providing, according to the change direction, a prompt mark in the opposite direction in the interface of the first terminal.
11. The method according to claim 1, characterized in that the interaction material further includes a material for representing a transfer channel, and after the step of collecting the real scene image, the method further comprises:
adding the material representing the transfer channel to the real scene image.
12. The method according to claim 11, characterized in that adding the specified object material to the real scene image specifically comprises:
showing, based on the transfer channel material, the process of the specified object entering the captured real scene image through the transfer channel.
13. The method according to claim 1, characterized in that the interaction material further includes a voice sample material recorded by the specified object;
the method further comprises:
obtaining the user name information of the user associated with the first terminal;
generating, for the associated user, a greeting corpus including the user name;
converting the greeting corpus into voice according to the voice sample material and playing it.
14. The method according to claim 1, characterized in that the specified object material is multiple sets of materials corresponding to the same specified object, and the method further comprises:
providing an operation option for selecting among the specified object materials;
and adding the specified object material to the real scene image comprises:
adding the selected specified object material to the real scene image.
15. The method according to claim 1, characterized by further comprising:
providing a shooting operation option while the specified object material is being displayed in the real scene image;
receiving an operation request through the shooting operation option, and generating a corresponding image according to the image layers, wherein the image layers include the real scene image and the specified object material.
16. The method according to claim 15, characterized in that generating the shot image according to the image layers comprises:
taking a screenshot of, or recording, the image layers while excluding the image layer used for showing operation options, so as to generate the shot image.
17. The method according to claim 15, characterized in that the real scene image further contains the image of a person photographed together with the specified object.
18. The method according to claim 15, characterized by further comprising:
providing an operation option for sharing the shot image.
19. The method according to claim 15, characterized by further comprising:
providing a landing page for browsing and sharing the shot image.
20. The method according to any one of claims 1 to 19, characterized in that the specified object includes information on a designated person.
21. The method according to any one of claims 1 to 19, characterized in that the specified object includes information on a specified commodity.
22. The method according to claim 21, characterized in that after adding the specified object material to the real scene image, the method further comprises:
providing an operation option for rushing to purchase the data object associated with the specified commodity;
when a rush-to-purchase operation is received through the operation option, submitting it to the server side, the purchase result being determined by the server side.
23. The method according to any one of claims 1 to 19, characterized in that the specified object includes prop information related to an offline game.
24. The method according to claim 23, characterized in that after adding the specified object material to the real scene image, the method further comprises:
when operation information on a target prop is received, submitting the operation information to the server side, the server side determining the obtained operation reward information and returning it;
providing the obtained reward information.
25. The method according to any one of claims 1 to 19, characterized in that the specified object material includes a video material obtained by shooting the specified object.
26. The method according to any one of claims 1 to 19, characterized in that the specified object material includes a cartoon character modeled on the image of the specified object, and a cartoon material produced based on the cartoon character.
27. A multi-screen interaction method, characterized by comprising:
saving, by a first server, interaction material, wherein the interaction material includes a specified object material created according to a specified object;
providing the interaction material to a first terminal, so that the first terminal collects a real scene image, and when a video played in a second terminal reaches an object event corresponding to the specified object, the specified object material is added to the real scene image.
28. The method according to claim 27, characterized in that the specified object includes a designated person.
29. The method according to claim 27, characterized in that saving the interaction material comprises:
saving a video material obtained by shooting the specified object.
30. The method according to claim 27, characterized in that saving the interaction material comprises:
saving a cartoon character modeled on the image of the specified object, and a cartoon material produced based on the cartoon character.
31. The method according to claim 27, characterized in that the specified object includes a designated person, and saving the interaction material further comprises:
saving a voice sample material recorded by the designated person.
32. The method according to claim 27, characterized in that the method further comprises:
providing an acoustic signal of a predetermined frequency to the second server corresponding to the second terminal, so that when the video played in the second terminal reaches the object event corresponding to the specified object, the acoustic signal is added to the video, and the first terminal learns of the occurrence of the object event by detecting the acoustic signal of the predetermined frequency.
33. The method according to claim 27, characterized by further comprising:
collecting statistics on the interaction status of each first terminal;
supplying the statistical information to the second server corresponding to the second terminal, the second server adding the statistical information to the video played by the second terminal.
34. A multi-screen interaction method, characterized by comprising:
playing, by a second terminal, a video;
when the video is played to an object event related to a specified object, playing an acoustic signal of a predetermined frequency, so that a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
35. The method according to claim 34, characterized in that different specified objects correspond to acoustic signals of different frequencies.
36. A multi-screen interaction method, characterized by comprising:
receiving, by a second server, information on an acoustic signal of a predetermined frequency provided by a first server;
inserting the acoustic signal of the predetermined frequency into the video at the position where an object event related to a specified object occurs, so that while the video is played by a second terminal, a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
37. The method according to claim 36, characterized by further comprising:
receiving statistical information on the interaction status of the first terminals provided by the first server;
adding the statistical information to the video and sending it so that it can be played by the second terminal.
38. A video interaction method, characterized by comprising:
loading, by a first terminal, interaction material, wherein the interaction material includes a specified object material created according to a specified object;
when a video played in the first terminal reaches an object event related to the specified object, jumping to an interactive interface;
showing a real scene image collection result in the interactive interface, and adding the specified object material to the real scene image.
39. A video interaction method, characterized by comprising:
saving, by a first server, interaction material, wherein the interaction material includes a specified object material created according to a specified object;
providing the interaction material to a first terminal, so that when a video played in the first terminal reaches an object event related to the specified object, an interactive interface is jumped to, a real scene image collection result is shown in the interactive interface, and the specified object material is added to the real scene image.
40. An interaction method, characterized by comprising:
loading interaction material, wherein the interaction material includes a specified object material created according to a specified object;
collecting a real scene image;
when an object event related to the specified object is detected, adding the specified object material to the real scene image.
41. A multi-screen interaction device, characterized in that it is applied to a first terminal and comprises:
a first material loading unit, configured to load interaction material, wherein the interaction material includes a specified object material created according to a specified object;
a first real scene image acquisition unit, configured to collect a real scene image;
a first material adding unit, configured to add the specified object material to the real scene image when a video played in a second terminal reaches an object event related to the specified object.
42. A multi-screen interaction device, characterized in that it is applied to a first server and comprises:
a first interaction material storage unit, configured to save interaction material, wherein the interaction material includes a specified object material created according to a specified object;
a first interaction material providing unit, configured to provide the interaction material to a first terminal, so that the first terminal collects a real scene image, and when a video played in a second terminal reaches an object event corresponding to the specified object, the specified object material is added to the real scene image.
43. A multi-screen interaction device, characterized in that it is applied to a second terminal and comprises:
a video playback unit, configured to play a video;
an acoustic signal broadcast unit, configured to play an acoustic signal of a predetermined frequency when the video is played to an object event related to a specified object, so that a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
44. A multi-screen interaction device, characterized in that it is applied to a second server and comprises:
an acoustic signal information receiving unit, configured to receive information on an acoustic signal of a predetermined frequency provided by a first server;
an acoustic signal information insertion unit, configured to insert the acoustic signal of the predetermined frequency into the video at the position where an object event related to a specified object occurs, so that while the video is played by a second terminal, a first terminal learns of the occurrence of the object event by detecting the acoustic signal and adds a specified object material to a collected real scene image.
45. A video interaction device, characterized in that it is applied to a first terminal and comprises:
a loading unit, configured to load interaction material, wherein the interaction material includes a specified object material created according to a specified object;
an interface jumping unit, configured to jump to an interactive interface when a video played in the first terminal reaches an object event related to the specified object;
a material adding unit, configured to show a real scene image collection result in the interactive interface and add the specified object material to the real scene image.
46. A video interaction device, characterized in that it is applied to a first server and comprises:
a second material storage unit, configured to save interaction material, wherein the interaction material includes a specified object material created according to a specified object;
a second material providing unit, configured to provide the interaction material to a first terminal, so that when a video played in the first terminal reaches an object event related to the specified object, an interactive interface is jumped to, a real scene image collection result is shown in the interactive interface, and the specified object material is added to the real scene image.
47. An interaction device, characterized by comprising:
a second material loading unit, configured to load interaction material, wherein the interaction material includes a specified object material created according to a specified object;
a second real scene image acquisition unit, configured to collect a real scene image;
a second material adding unit, configured to add the specified object material to the real scene image when an object event related to the specified object is detected.
48. An electronic equipment, characterized by comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions which, when read and executed by the one or more processors, perform the following operations:
loading interaction material, wherein the interaction material includes a specified object material created according to a specified object;
collecting a real scene image;
when a video played in a second terminal reaches an object event related to the specified object, adding the specified object material to the real scene image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710979621.2A CN109688347A (en) | 2017-10-19 | 2017-10-19 | Multi-screen interaction method, device and electronic equipment |
TW107119580A TW201917556A (en) | 2017-10-19 | 2018-06-07 | Multi-screen interaction method and apparatus, and electronic device |
PCT/CN2018/109281 WO2019076202A1 (en) | 2017-10-19 | 2018-10-08 | Multi-screen interaction method and apparatus, and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710979621.2A CN109688347A (en) | 2017-10-19 | 2017-10-19 | Multi-screen interaction method, device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109688347A true CN109688347A (en) | 2019-04-26 |
Family
ID=66173994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710979621.2A Pending CN109688347A (en) | 2017-10-19 | 2017-10-19 | Multi-screen interaction method, device and electronic equipment |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN109688347A (en) |
TW (1) | TW201917556A (en) |
WO (1) | WO2019076202A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI851486B (en) * | 2023-11-27 | 2024-08-01 | 鴻海精密工業股份有限公司 | Method for inspecting images, electronic device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794834A (en) * | 2015-04-04 | 2015-07-22 | 金琥 | Intelligent voice doorbell system and implementation method thereof |
CN105378625A (en) * | 2013-06-25 | 2016-03-02 | 微软技术许可有限责任公司 | Indicating out-of-view augmented reality images |
CN105392022A (en) * | 2015-11-04 | 2016-03-09 | 北京符景数据服务有限公司 | Audio watermark-based information interaction method and device |
CN105810131A (en) * | 2014-12-31 | 2016-07-27 | 吴建伟 | Virtual receptionist device |
CN107016704A (en) * | 2017-03-09 | 2017-08-04 | 杭州电子科技大学 | A kind of virtual reality implementation method based on augmented reality |
CN107172411A (en) * | 2017-04-18 | 2017-09-15 | 浙江传媒学院 | A kind of virtual reality business scenario rendering method under the service environment based on home videos |
CN107167153A (en) * | 2012-10-23 | 2017-09-15 | 华为终端有限公司 | Method for processing navigation information and mobile unit |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160260319A1 (en) * | 2015-03-04 | 2016-09-08 | Aquimo, Llc | Method and system for a control device to connect to and control a display device |
CN106028169B (en) * | 2016-07-04 | 2019-04-12 | 无锡天脉聚源传媒科技有限公司 | A kind of method and device of prize drawing interaction |
CN106792246B (en) * | 2016-12-09 | 2021-03-09 | 福建星网视易信息系统有限公司 | Method and system for interaction of fusion type virtual scene |
CN106730815B (en) * | 2016-12-09 | 2020-04-21 | 福建星网视易信息系统有限公司 | Somatosensory interaction method and system easy to realize |
CN106899870A (en) * | 2017-02-23 | 2017-06-27 | 任刚 | A kind of VR contents interactive system and method based on intelligent television and mobile terminal |
Non-Patent Citations (4)
Title |
---|
D2前端技术论坛: "《优酷视频》", 5 January 2017 * |
HEIX.COM.CN: "《中国AR网》", 16 June 2017 * |
巧克力: "《家核优居》", 12 November 2016 * |
雷锋网: "《搜狐网》", 31 August 2017 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110062290A (en) * | 2019-04-30 | 2019-07-26 | 北京儒博科技有限公司 | Video interactive content generating method, device, equipment and medium |
CN113157178A (en) * | 2021-02-26 | 2021-07-23 | 北京五八信息技术有限公司 | Information processing method and device |
CN113556531A (en) * | 2021-07-13 | 2021-10-26 | Oppo广东移动通信有限公司 | Image content sharing method and device and head-mounted display equipment |
CN113556531B (en) * | 2021-07-13 | 2024-06-18 | Oppo广东移动通信有限公司 | Image content sharing method and device and head-mounted display equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2019076202A1 (en) | 2019-04-25 |
TW201917556A (en) | 2019-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020083021A1 (en) | Video recording method and apparatus, video playback method and apparatus, device, and storage medium | |
KR20230159578A (en) | Presentation of participant responses within a virtual conference system | |
CN113298585B (en) | Method and device for providing commodity object information and electronic equipment | |
CN113411656B (en) | Information processing method, information processing device, computer equipment and storage medium | |
CN108769814A (en) | Video interaction method, device and readable medium | |
WO2019128787A1 (en) | Network video live broadcast method and apparatus, and electronic device | |
CN112181573B (en) | Media resource display method, device, terminal, server and storage medium | |
CN111327916B (en) | Live broadcast management method, device and equipment based on geographic object and storage medium | |
CN114610191B (en) | Interface information providing method and device and electronic equipment | |
CN112261481B (en) | Interactive video creating method, device and equipment and readable storage medium | |
Scheible et al. | MobiToss: a novel gesture based interface for creating and sharing mobile multimedia art on large public displays | |
CN109688347A (en) | Multi-screen interaction method, device and electronic equipment | |
US10248665B1 (en) | Method and system for collecting, and globally communicating and evaluating digital images of sports fans, public displays of affection and miscellaneous groups from entertainment venues | |
CN109729367B (en) | Method and device for providing live media content information and electronic equipment | |
CN113490010A (en) | Interaction method, device and equipment based on live video and storage medium | |
CN111277890A (en) | Method for acquiring virtual gift and method for generating three-dimensional panoramic live broadcast room | |
CN113573092A (en) | Live broadcast data processing method and device, electronic equipment and storage medium | |
CN111382355A (en) | Live broadcast management method, device and equipment based on geographic object and storage medium | |
CN109788364A (en) | Video calling interactive approach, device and electronic equipment | |
KR101968953B1 (en) | Method, apparatus and computer program for providing video contents | |
CN109754275B (en) | Data object information providing method and device and electronic equipment | |
CN109788327B (en) | Multi-screen interaction method and device and electronic equipment | |
CN112261482A (en) | Interactive video playing method, device and equipment and readable storage medium | |
CN109688450A (en) | Multi-screen interaction method, device and electronic equipment | |
CN114302160A (en) | Information display method, information display device, computer equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190426 |