CN103002244B - Interactive video call method and call terminal - Google Patents
Interactive video call method and call terminal
- Publication number
- CN103002244B · CN201110267764.3A · CN201110267764A
- Authority
- CN
- China
- Prior art keywords
- image
- terminal
- framing
- display
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Telephone Function (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the present invention provides an interactive video call method and a call terminal, applied to a first terminal, where the first terminal comprises a first display unit. The method comprises: establishing a communication connection with a second terminal; receiving a second image from the second terminal, the second image being acquired by a second image acquisition unit of the second terminal; displaying the second image; detecting a control command; while maintaining the communication connection, obtaining a first image according to the control command, the first image being different from the second image; and displaying the first image. During a video call, the local device serves as the first terminal and displays a second image on the display unit; the second image is a picture associated with the video call, usually the scene currently captured by the second terminal during the call, so that the user of the first terminal can watch the picture associated with the call in real time, improving the user experience.
Description
Technical Field
The present invention relates to video call technologies, and in particular, to an interactive video call method and a call terminal.
Background
Video calls, also known as visual calls, fall into two modes: over an IP line or over an ordinary telephone line. A video call generally refers to a communication mode in which human voice and images (a user's bust, photos, articles, and the like) are transmitted in real time between mobile phones over the internet and the mobile internet. With the rapid growth of network bandwidth and the development and popularization of hardware devices, the video call market has entered a fast lane of development.
A video call requires two networked terminal devices, such as smartphones, PCs, or tablet computers, and real-time bidirectional audio and video transmission is realized through built-in or third-party software.
The inventor finds that the prior art has the following problems: in video calls, the called party at the first terminal often wants to more actively view the environment and scenery where the calling party is located, but the prior art does not provide similar functionality.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an interactive video call method and a call terminal that enable the called party at a first terminal to actively view the environment and scenery at the calling party's location.
To solve the foregoing technical problem, an embodiment of the present invention provides a display method applied to a first terminal, where the first terminal includes a first display unit, and the method includes: establishing a communication connection with a second terminal; receiving a second image from the second terminal, wherein the second image is acquired by a second image acquisition unit of the second terminal; displaying the second image; detecting a control command; while maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image; and displaying the first image.
In the method, obtaining a first image according to the control command specifically includes: acquiring the first image through a first image acquisition unit of the first terminal; or locally calling an image from the first terminal as the first image.
In the method, the second image comprises a second dynamic image, the second dynamic image is located in a display window, and the display window occupies part or all of the display unit; the first image comprises two parts, wherein the first part is a first dynamic image and comprises a part or all of the second dynamic image displayed in the current display window; and the second part is a first static image and comprises an image presented by the part outside the display window in the display unit.
In the method, a panoramic picture is arranged in the first terminal, and the panoramic picture records the environmental information of the position of the second terminal; a second portion of the first image is padded by a corresponding portion of the panoramic picture.
In the method, the panoramic picture corresponds to a space reference coordinate system; the display window corresponds to a first space coordinate which is positioned in the space reference coordinate system; after the display window moves, the display window corresponds to a second space coordinate, and the second space coordinate is located in the space reference coordinate system; and sending the second space coordinate to the second terminal, and receiving the framing data from the second terminal, wherein the framing data is acquired by the second terminal after correcting the framing coordinate of a framing window in the space reference coordinate system according to the second space coordinate.
In the method, receiving the framing data from the second terminal further includes: receiving a framing coordinate at the time the second terminal acquires the framing data, and comparing the framing coordinate with the second space coordinate; when the two do not match, sending a correction request instruction to the second terminal in real time, so that the second terminal corrects the framing coordinate of the framing window according to the correction request instruction and the second space coordinate.
In the method, after receiving the framing data from the second terminal, the method further includes: displaying the framing data according to the received framing coordinates; or, according to the received framing coordinate and the second spatial coordinate, determining an overlapping region between the framing window and the display window at the current moment, and displaying the framing data corresponding to the overlapping region in the overlapping region.
An electronic device, as a first terminal, comprising: the communication unit is used for establishing communication connection with a second terminal; the image receiving unit is used for receiving a second image from the second terminal, and the second image is acquired by a second image acquisition unit of the second terminal; the image display unit is used for displaying the second image on a first display unit; a control unit for detecting a control command; under the condition of maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image; the image display unit is further used for displaying the first image.
The electronic device further comprises: the image management unit is used for acquiring the first image through a first image acquisition unit of the first terminal; or locally calling an image from the first terminal as the first image.
The electronic device further comprises: and the display window unit occupies part or all of the display unit and is used for displaying a second dynamic image, and the second dynamic image is part or all of the second image.
The technical scheme of the invention has the following beneficial effects: during a video call, the local device serves as the first terminal and displays a second image on the display unit; the second image is a picture associated with the video call, usually the scene currently captured by the second terminal during the call, so that the user of the first terminal can see the picture associated with the call in real time, improving the user experience.
Drawings
FIG. 1 is a schematic flow chart of a display method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a relationship between a display window and a display unit according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating padding of a blank area with a panoramic picture in the first terminal according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an embodiment of switching from a second image to a first image;
FIG. 5 is a schematic diagram illustrating an embodiment of switching from a second image to a first image;
FIG. 6 is a first diagram illustrating multiple calibration between two terminals according to an embodiment of the present invention;
FIG. 7 is a second diagram illustrating multiple calibration between two terminals according to an embodiment of the present invention;
FIG. 8 is a third diagram illustrating multiple calibration between two terminals according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a video call between two terminals according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 11 is a schematic diagram of image interaction between two terminals according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention provides a display method, as shown in fig. 1, applied to a first terminal, where the first terminal includes a first display unit, and the method includes:
step 101, establishing a communication connection with the second terminal;
102, receiving a second image from the second terminal, wherein the second image is acquired by a second image acquisition unit of the second terminal;
step 103, displaying the second image;
step 104, detecting a control command;
under the condition of maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image;
step 105, displaying the first image.
By applying this technical scheme, during a video call the local device serves as the first terminal and displays the second image on the display unit; the second image is a picture associated with the video call, usually the scene currently captured by the second terminal during the call, so that the user of the first terminal can see the picture associated with the call in real time, improving the user experience.
In a preferred embodiment, the obtaining a first image according to the control command specifically includes:
acquiring the first image through a first image acquisition unit of the first terminal; or locally calling an image from the first terminal as the first image.
In a preferred embodiment, as shown in fig. 2, the second image comprises a second dynamic image, the second dynamic image being located in a display window, the display window occupying part or all of the display unit;
the first image comprises two parts, wherein the first part is a first dynamic image and comprises a part or all of the second dynamic image displayed in a display window currently positioned in the display unit; and the second part is a first static image and comprises images presented by parts outside the display window in the display unit.
The display unit here actually refers to the display screen of a terminal; in other words, the area of the display unit is the area of the display screen. The display window, by contrast, is a software concept belonging to an application program: when the position of the display window on the display unit changes, the visual presentation of the second dynamic image inside the window changes accordingly, and the second dynamic image in the moved display window forms the first part of the first image.
An image may be retrieved from a data resource local to the first terminal as the first image. It is also possible to receive an image from the second terminal again as the first image, in which case the image sent by the second terminal is usually the real-time image captured by the second terminal at this moment.
In a preferred embodiment, as shown in fig. 3, a panoramic picture is provided in the first terminal, and the panoramic picture records the environmental information of the location of the second terminal; a second portion of the first image is padded by a corresponding portion of the panoramic picture. The length and width of the panoramic picture are generally greater than the length and width of the display unit.
On the first terminal, as the display window moves, a blank part appears on the display unit where the display window previously was; this blank part is the portion of the display unit outside the display window. The blank part is then padded with the panoramic picture rather than an ordinary background picture, and the padding should follow these principles:
the panoramic picture and the display window are positioned at different service logic layers, the panoramic picture can be generally used as a lower layer of the display window, and the display window is arranged at an upper layer of the panoramic picture and can move;
when the display window is moved, the second dynamic image in the display window is relatively stable relative to the display window, but is displaced relative to the panoramic picture;
if the scenery in the second dynamic image is the same as the corresponding scenery in the panoramic picture, then, while the blank part is padded with the panoramic picture, the scenery in the second dynamic image should always coincide or align with the corresponding scenery in the panoramic picture.
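The padding principle above can be sketched as code. Everything here is a hypothetical illustration (the patent specifies no API): `compose_display` and its parameters are our names, and images are modeled as 2-D grids. The key point is that padded pixels outside the window are indexed in the panorama's own coordinates, so scenery stays coincident across the window border.

```python
def compose_display(panorama, frame, unit_w, unit_h,
                    win_x, win_y, win_w, win_h,
                    pan_off_x, pan_off_y):
    """Return a unit_h x unit_w grid: pixels inside the display window come
    from the live (dynamic) frame; pixels outside it are padded from the
    panorama at the same panorama coordinates, keeping scenery aligned."""
    out = []
    for y in range(unit_h):
        row = []
        for x in range(unit_w):
            if win_x <= x < win_x + win_w and win_y <= y < win_y + win_h:
                row.append(frame[y - win_y][x - win_x])          # dynamic image
            else:
                row.append(panorama[y + pan_off_y][x + pan_off_x])  # panorama pad
        out.append(row)
    return out
```

In this model the panorama sits on a lower logic layer and the window on an upper, movable layer, matching the layering principle stated above.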
This technical scheme can be realized in various ways. In practice, the user of the first terminal can operate on the display window in several modes:
Mode 1: a translation operation, performed by sliding a finger linearly in one direction on the touch screen, or by moving the first terminal body linearly in one direction within the plane of the touch screen;
Mode 2: a rotation operation, performed by sliding a single finger in a rotating motion on the touch screen, or by rotating the first terminal body within the plane of the touch screen;
Mode 3: a combined translation-and-rotation operation, performed by sliding on the touch screen with superimposed linear and rotating motions, or by rotating the first terminal body within the plane of the touch screen while moving it linearly at the same time;
Mode 4: a zoom operation, performed by sliding two fingers toward or away from each other on the touch screen, or by moving the first terminal body perpendicular to the touch screen.
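The four operation modes above each have a touch-gesture trigger and a body-motion trigger. A minimal dispatch table could look like the following; all gesture and motion names are hypothetical labels we invented for illustration, since the patent names only the modes, not any input API.

```python
def classify_operation(touch=None, motion=None):
    """Map a detected touch gesture or body motion to one of modes 1-4."""
    table = {
        ("linear_slide", None): "translate",                 # mode 1, touch
        (None, "linear_move_in_plane"): "translate",         # mode 1, body
        ("rotate_slide", None): "rotate",                    # mode 2, touch
        (None, "rotate_in_plane"): "rotate",                 # mode 2, body
        ("linear_rotate_slide", None): "translate+rotate",   # mode 3, touch
        (None, "rotate_and_move_in_plane"): "translate+rotate",  # mode 3, body
        ("pinch", None): "zoom",                             # mode 4, touch
        (None, "move_normal_to_screen"): "zoom",             # mode 4, body
    }
    return table.get((touch, motion), "none")
```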
The technical features involved in switching from the second image to the first image are described below through various embodiments.
In an application scenario, as shown in fig. 4, when a translation operation on the first terminal or on the display window is detected, the second image is switched to the first image. The first image is the final expected image and is obtained only after multiple interactions between the first terminal and the second terminal; during these interactions, the image displayed on the first terminal is called a reference image.
the first image comprises two parts, wherein the first part is a first dynamic image, and the second part is a first static image; the reference image also comprises two parts, wherein the first part is a reference dynamic image, and the second part is a reference static image;
setting a reference line on the reference image, the reference line being located at one side of the display window, specifically at the interface between the reference dynamic image and the reference static image, with the moving direction marked on the reference line;
transmitting a reference image currently displayed in a first terminal to a second terminal in real time;
after the second terminal receives the reference image, the image seen by the user of the second terminal is consistent with the image seen by the user of the first terminal, so that the translation direction of the second terminal can be determined according to the position of the reference line and the translation direction on the reference line.
And the first terminal and the second terminal obtain a first image required by the first terminal after multiple interactions.
In an application scenario, when a translation operation on the first terminal or on the display window is detected, the second image is switched to the first image. The first image is the final expected image and is obtained after multiple interactions between the first terminal and the second terminal; during these interactions, the image displayed on the first terminal is called a reference image.
when a reference image is displayed on a first terminal, detecting the current operation of a user in real time, and generating an operation prompt according to the operation, wherein the operation prompt indicates the specific implementation mode of the operation at the current moment;
sending the operation prompt to the second terminal, and displaying the operation prompt in a display unit of the second terminal to prompt the next operation;
or, the operation prompt and the reference image are both transmitted to the second terminal in real time.
And the first terminal and the second terminal obtain a first image required by the first terminal after multiple interactions.
In a preferred embodiment, the first terminal establishes a communication connection with the second terminal;
receiving a second image from the second terminal, wherein the second image is acquired by a second image acquisition unit of the second terminal;
displaying the second image;
when a control command is detected and the control command indicates that the second terminal has disconnected the communication connection, the second image remains displayed on the first terminal.
In a preferred embodiment, as shown in fig. 5, the panoramic picture corresponds to a spatial reference coordinate system; the display window corresponds to a first space coordinate which is positioned in the space reference coordinate system;
after the display window moves, the display window corresponds to a second space coordinate, and the second space coordinate is located in the space reference coordinate system;
sending the second space coordinate to the second terminal;
and receiving the framing data from the second terminal, wherein the framing data is acquired by correcting the framing coordinates of a framing window in the spatial reference coordinate system according to the second spatial coordinate by the second terminal.
In an application scene, the technical scheme provided by the embodiment is applied to provide an interactive video call method, which is applied to a first terminal, wherein the first terminal is internally provided with a panoramic picture, the panoramic picture records a scene where a second terminal is located and positions of objects in the scene, and the panoramic picture corresponds to a spatial reference coordinate system; the working process comprises the following steps:
displaying a display window, wherein a first space coordinate corresponding to the display window is positioned in a space reference coordinate system;
after the display window moves, the display window corresponds to a second space coordinate, and the second space coordinate is positioned in the space reference coordinate system;
the second spatial coordinates are transmitted to the second terminal,
and receiving the framing data from the second terminal, wherein the framing data is acquired by correcting the framing coordinates of a framing window in the spatial reference coordinate system according to the second spatial coordinate by the second terminal.
The framing data is obtained as follows: the rear camera of the second terminal is aimed at a certain area of the environment recorded in the panoramic picture and shoots the objects in that area, which then appear in the framing window of the second terminal. Skilled technicians will appreciate that the objects captured by the camera may fully cover the framing window or exceed the size the framing window can accommodate; in general, the framing data is only the data of the environment corresponding to the framing window.
In a preferred embodiment, receiving framing data from the second terminal further comprises: receiving a framing coordinate at the time the second terminal acquires the framing data, and matching the framing coordinate with the second space coordinate;
when the framing coordinate of the framing window in the spatial reference coordinate system does not match the second space coordinate, sending a correction request instruction to the second terminal in real time, so that the second terminal continuously corrects the framing coordinate of the framing window in the spatial reference coordinate system according to the correction request instruction and the second space coordinate.
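Because both terminals express coordinates in the panorama's shared spatial reference coordinate system, the matching step can be as simple as a component-wise comparison within a tolerance. This is only a sketch of one plausible test; the function name and the tolerance value are our assumptions, not from the patent.

```python
def coords_match(framing_coord, space_coord, tol=1.0):
    """Component-wise match of two coordinates in the shared spatial
    reference frame; True when every component differs by at most `tol`."""
    return all(abs(f - s) <= tol for f, s in zip(framing_coord, space_coord))
```

A mismatch result would then trigger the real-time correction request described above.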
The two embodiments described above differ in that:
the first terminal receives the framing data from the second terminal, wherein the framing data is acquired by correcting the framing coordinates of a framing window in the spatial reference coordinate system according to the second spatial coordinate by the second terminal. In this case, the first terminal can display the first image generated based on the finder data, but the first image at this time is not the final image desired by the user of the first terminal, but is only a real-time image at a certain point in the switching process.
In the second, however, when the coordinates do not match, the first terminal sends a correction request instruction to the second terminal; the second terminal adjusts parameters of its second image acquisition unit, such as the viewing angle, extent, and depth of field, in real time according to the instruction, and after multiple rounds of interaction and adjustment between the two terminals, the final image expected by the user of the first terminal is obtained.
In an application scenario, as shown in fig. 6, matching the corrected framing coordinate with the second spatial coordinate requires a multi-round correction process, comprising:
and step 01, receiving a corrected framing coordinate when the second terminal acquires framing data, wherein the corrected framing coordinate is the coordinate of the current framing window in the spatial reference coordinate system.
Step 02, matching the corrected view coordinate with a second space coordinate in a space reference coordinate system corresponding to the panoramic picture; the second spatial coordinates are the coordinates of the display window at the current time within the spatial reference frame.
Step 03, the matching result shown in fig. 6 indicates a clear mismatch, and the second terminal needs to be notified in real time to correct the framing coordinate again.
And step 04, the second terminal continuously corrects the framing coordinates of the framing window in the space reference coordinate system according to the correction request instruction and the second space coordinates.
And step 05, the camera of the second terminal performs framing again, and sends the framing data which is corrected again and displayed in the framing window to the first terminal.
And step 06, the first terminal and the second terminal are repeatedly corrected for many times until the viewing window completely covers the display window as shown in fig. 6. In this case, the display window and the finder window may be the same size or different sizes.
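Steps 01-06 above form an iterative loop that repeats until the framing window fully covers the display window. The sketch below models that loop; `Rect`, `covers`, `get_framing`, and `request_correction` are all hypothetical names we introduce for illustration, and the callbacks stand in for the real-time messages exchanged between the terminals.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """A window rectangle in the panorama's spatial reference frame."""
    x: float
    y: float
    w: float
    h: float

    def covers(self, other):
        """True when this rect fully contains `other` (step 06's exit test)."""
        return (self.x <= other.x and self.y <= other.y and
                self.x + self.w >= other.x + other.w and
                self.y + self.h >= other.y + other.h)

def correct_until_covered(display_win, get_framing, request_correction,
                          max_rounds=10):
    """Repeat steps 01-05: read the second terminal's framing rect; while it
    does not yet cover the display window, send another correction request."""
    framing = get_framing()           # step 01: corrected framing coordinate
    rounds = 0
    while not framing.covers(display_win) and rounds < max_rounds:
        request_correction(display_win)   # steps 03-04: real-time request
        framing = get_framing()           # step 05: re-framed data arrives
        rounds += 1
    return framing
```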
The above embodiments describe the position correction performed between the first terminal and the second terminal after the display window of the first terminal is translated, rotated, or translated and rotated together.
When the display window is zoomed, the degree of zooming corresponds to a zoom multiple, and the panoramic picture is zoomed by the same multiple;
the first terminal sends the zoom multiple to the second terminal, which prompts its user to adjust the framing parameters of the second image acquisition unit; the adjusted second image acquisition unit can then capture the video picture required by the first terminal, and the framing data corresponding to that picture is sent to the first terminal.
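One plausible way the pinch gesture of mode 4 could be turned into the zoom multiple is the ratio of finger distances before and after the gesture. This is purely an illustrative assumption; the patent does not specify how the multiple is derived.

```python
import math

def zoom_multiple(p1_start, p2_start, p1_end, p2_end):
    """Zoom multiple from a two-finger pinch: ratio of the final finger
    distance to the initial finger distance (>1 zooms in, <1 zooms out)."""
    d0 = math.dist(p1_start, p2_start)
    d1 = math.dist(p1_end, p2_end)
    return d1 / d0
```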
In a preferred embodiment, the panoramic picture corresponds to a spatial reference coordinate system; the display window corresponds to a first space coordinate which is positioned in the space reference coordinate system;
after the display window moves, a new picture is formed; to distinguish it from the previous pictures, it is named the display-window displacement picture. Because a communication connection is already established between the first terminal and the second terminal at this point, this picture can be transmitted to the second terminal;
after the second terminal receives the display-window displacement picture, since the second terminal also stores a panoramic picture, the displacement picture can be laid over the second terminal's local panoramic picture, making it visually clear how to adjust the second image acquisition unit so as to finally acquire the first image expected by the first terminal.
In a preferred embodiment, after receiving the framing data from the second terminal, the method further comprises: displaying the framing data according to the received framing coordinates; or determining the overlapping region between the framing window and the display window at the current moment according to the received framing coordinates and the second space coordinates, and displaying the part of the framing data corresponding to the overlapping region. This process is specifically implemented as follows:
Step 1, obtaining the position difference between the second space coordinate corresponding to the display window and the framing coordinate corresponding to the framing window;
Step 2, as shown in fig. 8, setting a reference line in the framing window according to the position difference, the reference line being parallel to a reference edge of the framing window;
Step 3, determining the region enclosed by the reference line and the reference edge within the framing window; the second terminal no longer sends the framing data corresponding to this region to the first terminal.
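The overlapping region between the framing window and the display window can be computed per axis. The sketch below uses a `(x, y, w, h)` rectangle convention of our own choosing; the patent does not fix a representation. With a purely horizontal offset, the edge of the overlap plays the role of the reference line, and the strip outside it is the region whose framing data is dropped.

```python
def overlap_1d(a_start, a_len, b_start, b_len):
    """Return (start, length) of the 1-D overlap; length is 0 if disjoint."""
    start = max(a_start, b_start)
    end = min(a_start + a_len, b_start + b_len)
    return start, max(0, end - start)

def overlap_rect(win_a, win_b):
    """Overlap of two (x, y, w, h) rectangles; w or h may be 0."""
    x, w = overlap_1d(win_a[0], win_a[2], win_b[0], win_b[2])
    y, h = overlap_1d(win_a[1], win_a[3], win_b[1], win_b[3])
    return (x, y, w, h)
```

Only the framing data inside the returned rectangle would then be displayed (or transmitted), matching step 3 above.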
As shown in fig. 9, the second image acquisition unit generally refers to the rear camera of the second terminal, called rear camera B; the second terminal also has a front camera, called front camera B. Similarly, the rear camera of the first terminal is the first image acquisition unit, also called rear camera A, and its front camera is called front camera A.
After the first terminal establishes a communication connection with the second terminal, a video picture shot by the rear camera B of the second terminal is transmitted to the first terminal, and the video picture is displayed in the display window as a second picture.
The front camera B of the second terminal can also shoot a user B of the second terminal, and transmits the head portrait video of the user B to the first terminal, and the first terminal determines whether to display the head portrait video.
Similarly, the front camera a of the first terminal shoots the user a of the first terminal, transmits the head portrait video of the user a to the second terminal, and the second terminal determines whether to display the head portrait video.
Front camera B usually captures the face of the second terminal's user, which, from the first terminal's perspective, is the other party's face. Front camera A of the first terminal usually captures the face of the first terminal's user, which, for the first terminal, is the user's own face.
An embodiment of the present invention provides an electronic device, as shown in fig. 10, where the electronic device serves as a first terminal, and includes:
a communication unit 1001 for establishing a communication connection with a second terminal;
an image receiving unit 1002, configured to receive a second image from the second terminal, where the second image is acquired by a second image acquiring unit of the second terminal;
an image display unit 1003 for displaying the second image on a first display unit;
a control unit 1004 for detecting a control command; under the condition of maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image;
the image display unit 1003 is further configured to display the first image.
By applying this technical scheme, during a video call the local device serves as the first terminal and displays the second image on the display unit; the second image is a picture associated with the video call, usually the scene currently captured by the second terminal during the call, so that the user of the first terminal can see the picture associated with the call in real time, improving the user experience.
In a preferred embodiment, as shown in fig. 11, the first terminal further includes: an image management unit, connected to the image display unit 1003, for acquiring the first image through a first image acquisition unit of the first terminal, or for retrieving a locally stored image from the first terminal as the first image;
and a display window unit, connected to the image display unit 1003, occupying part or all of the display unit and configured to display a second dynamic image, the second dynamic image being part or all of the second image.
In this technical scheme, the first image comprises two parts: the first part is a first dynamic image, comprising the part or all of the second dynamic image displayed in the current display window; the second part is a first static image, comprising the image presented in the portion of the display unit outside the display window.
A panoramic picture management unit stores a panoramic picture that records environmental information about the location of the second terminal; the second part of the first image is filled in by the corresponding portion of the panoramic picture.
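As an illustrative sketch (not part of the patent text; the function name, array shapes, and window layout are assumptions), the two-part composition described above can be expressed as copying the live frame into the display-window region and filling the rest from the stored panorama:

```python
import numpy as np

def compose_first_image(panorama, live_frame, window_rect, display_size):
    """Compose the first image: the display window shows the live (dynamic)
    second image; everything outside it is filled from the panoramic picture."""
    h, w = display_size
    x, y, win_w, win_h = window_rect  # window position inside the display
    # Static part: the region of the panorama that the display covers.
    first_image = panorama[:h, :w].copy()
    # Dynamic part: paste the live frame into the display window.
    first_image[y:y + win_h, x:x + win_w] = live_frame[:win_h, :win_w]
    return first_image

panorama = np.zeros((480, 640, 3), dtype=np.uint8)        # stored panorama
live_frame = np.full((120, 160, 3), 255, dtype=np.uint8)  # frame from terminal B
out = compose_first_image(panorama, live_frame, (100, 50, 160, 120), (480, 640))
```

Here `window_rect` stands in for the display window's position, and the panorama crop plays the role of the first static image.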
In a preferred embodiment, the display unit of the first terminal is a touch screen. In practice, the user of the first terminal can operate on the display window in several ways:
Mode 1: a translation operation, performed by sliding a single finger linearly in one direction on the touch screen, or by moving the first terminal body linearly in space within the plane of the touch screen;
Mode 2: a rotation operation, performed by sliding a single finger in a rotary motion on the touch screen, or by rotating the first terminal body in space within the plane of the touch screen;
Mode 3: a combined translation-and-rotation operation, performed by a sliding gesture that superimposes linear and rotary motion on the touch screen, or by rotating the first terminal body in the plane of the touch screen while simultaneously moving it linearly;
Mode 4: a zoom operation, performed by sliding two fingers toward or away from each other on the touch screen, or by moving the first terminal body in space perpendicular to the touch screen.
The first terminal includes at least one spatial position sensor, such as a gyroscope or an accelerometer. While the first terminal body is being moved, the spatial position sensor detects the motion and converts it into the corresponding control command.
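A minimal sketch of how the four operation modes might be distinguished from touch input (the function name, thresholds, and track format are assumptions; a real implementation would also consult the spatial position sensors and filter noise):

```python
import math

def classify_touch_gesture(tracks):
    """Map per-finger touch tracks to one of the four operation modes.

    `tracks` is a list of finger paths, each a list of (x, y) points.
    Two fingers -> zoom; one finger -> translate / rotate / both,
    decided from net heading change and net displacement.
    """
    if len(tracks) == 2:
        # Two fingers moving toward or away from each other: zoom.
        a0, b0 = tracks[0][0], tracks[1][0]
        a1, b1 = tracks[0][-1], tracks[1][-1]
        return "zoom_in" if math.dist(a1, b1) > math.dist(a0, b0) else "zoom_out"
    path = tracks[0]
    # Heading change along the path separates linear from rotary sliding.
    headings = [math.atan2(q[1] - p[1], q[0] - p[0])
                for p, q in zip(path, path[1:])]
    turn = abs(headings[-1] - headings[0])
    displacement = math.dist(path[0], path[-1])
    if turn < 0.2:          # nearly straight slide
        return "translate"
    if displacement < 10:   # curved slide returning near its start
        return "rotate"
    return "translate_and_rotate"
</n```

The thresholds (0.2 rad, 10 px) are illustrative only.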
In a preferred embodiment, as shown in fig. 11, the first terminal further comprises:
a display window adjusting unit, for acquiring the corresponding second spatial coordinates after the display window moves, the second spatial coordinates being located in a spatial reference coordinate system, wherein the panoramic picture corresponds to the spatial reference coordinate system and the display window corresponds to first spatial coordinates located in that coordinate system;
transmitting the second spatial coordinates to the second terminal;
and receiving framing data from the second terminal, the framing data being acquired by the second terminal after correcting the framing coordinates of a viewfinder window in the spatial reference coordinate system according to the second spatial coordinates.
In a preferred embodiment, as shown in fig. 11, the display window adjusting unit, when receiving the framing data from the second terminal, further: receives the framing coordinates at which the second terminal acquired the framing data, and matches the framing coordinates against the second spatial coordinates;
and when the framing coordinates of the viewfinder window in the spatial reference coordinate system do not match the second spatial coordinates, sends a correction request instruction to the second terminal in real time, so that a viewfinder calculation unit in the second terminal continues to correct the framing coordinates of the viewfinder window in the spatial reference coordinate system according to the correction request instruction and the second spatial coordinates.
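The correction exchange just described can be sketched as a small loop (all names, the tolerance, and the retry limit are assumptions; the patent does not specify a concrete protocol):

```python
def reconcile_viewfinder(second_coord, get_framing, send_correction,
                         tol=1.0, max_rounds=5):
    """Compare the framing coordinates reported with each batch of framing
    data against the display window's second spatial coordinates; while
    they differ by more than `tol`, request correction from the second
    terminal. Returns the framing data once the coordinates match."""
    for _ in range(max_rounds):
        framing_coord, framing_data = get_framing()
        if all(abs(f - s) <= tol for f, s in zip(framing_coord, second_coord)):
            return framing_data          # coordinates match: data is usable
        send_correction(second_coord)    # real-time correction request
    return None                          # no convergence within max_rounds

# Stub second terminal that adopts the requested coordinates immediately.
state = {"coord": (0.0, 0.0), "sent": 0}
def get_framing():
    return state["coord"], "frame-data"
def send_correction(target):
    state["coord"] = target
    state["sent"] += 1
result = reconcile_viewfinder((5.0, 5.0), get_framing, send_correction)
```

The stub converges in one round; a real second terminal would re-aim its viewfinder window and reply with fresh framing data.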
After receiving the framing data from the second terminal, the display window adjusting unit further: displays the framing data according to the received framing coordinates;
or,
determines the overlapping area between the viewfinder window and the display window at the current moment according to the received framing coordinates and the second spatial coordinates, and displays the part of the framing data corresponding to that overlapping area.
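The overlap determination can be illustrated with plain rectangle intersection in the shared spatial reference coordinate system (a sketch; the (x, y, width, height) rectangle convention is an assumption):

```python
def overlap_region(view_rect, display_rect):
    """Compute the overlap between the viewfinder window and the display
    window. Rectangles are (x, y, width, height) in the shared spatial
    reference coordinate system; returns None when they do not overlap."""
    vx, vy, vw, vh = view_rect
    dx, dy, dw, dh = display_rect
    x0, y0 = max(vx, dx), max(vy, dy)
    x1 = min(vx + vw, dx + dw)
    y1 = min(vy + vh, dy + dh)
    if x1 <= x0 or y1 <= y0:
        return None
    return (x0, y0, x1 - x0, y1 - y0)
```

Only the framing data falling inside the returned rectangle would be displayed.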
In one application scenario, applying the technical solutions of the above embodiments, during communication between the first terminal and the second terminal the second image acquisition unit generally refers to the rear camera of the second terminal, called rear camera B; the second terminal also has a front camera, called front camera B. Similarly, the rear camera of the first terminal is called rear camera A and its front camera is called front camera A. The communication process comprises the following steps:
After the first terminal establishes a communication connection with the second terminal, the video picture shot by rear camera B of the second terminal is transmitted to the first terminal and displayed in the display window as the second image.
Front camera B of the second terminal can also capture user B of the second terminal and transmit the portrait video of user B to the first terminal, which determines whether to display that portrait video.
Similarly, front camera A of the first terminal captures user A of the first terminal and transmits the portrait video of user A to the second terminal, which determines whether to display that portrait video.
Front camera B usually captures the face of the second terminal's user, which, from the first terminal's perspective, is the other party's face. Front camera A of the first terminal usually captures the face of the first terminal's user, which, for the first terminal, is the user's own face.
The technical scheme of the invention has the following beneficial effects. During a video call, the electronic device serves as the first terminal and displays the second image on the display unit; the second image is a picture associated with the video call, usually the scene currently being shot by the second terminal. The first terminal can therefore see the picture associated with the call in real time, browse the scenery around the other party during the call, and interactively request the other party to turn the camera toward a desired viewing angle. This improves the dual-camera video call experience and lets the user enjoy a panoramic real-time video with relatively little bandwidth.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (9)
1. A display method, applied to a first terminal, the first terminal comprising a first display unit, wherein a panoramic picture is provided in the first terminal and records environmental information about the location of a second terminal; the method comprising:
establishing a communication connection with the second terminal;
receiving a second image from the second terminal, wherein the second image is acquired by a second image acquisition unit of the second terminal;
displaying the second image;
detecting a control command;
under the condition of maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image;
displaying the first image, the first image comprising two parts, a second part of the first image being filled in by a corresponding portion of the panoramic picture.
2. The method of claim 1, wherein obtaining a first image according to the control command comprises:
acquiring the first image through a first image acquisition unit of the first terminal;
or retrieving a locally stored image from the first terminal as the first image.
3. The method of claim 1, wherein the second image comprises a second dynamic image located in a display window, the display window occupying part or all of the display unit;
the first image comprises two parts: the first part is a first dynamic image comprising the part or all of the second dynamic image displayed in the current display window; the second part is a first static image comprising the image presented in the portion of the display unit outside the display window.
4. The method of claim 1, wherein the panoramic picture corresponds to a spatial reference coordinate system; the display window corresponds to a first space coordinate which is positioned in the space reference coordinate system;
after the display window moves, the display window corresponds to a second space coordinate, and the second space coordinate is located in the space reference coordinate system;
transmitting the second spatial coordinates to the second terminal,
and receiving framing data from the second terminal, wherein the framing data is acquired by the second terminal after correcting framing coordinates of a framing window in the spatial reference coordinate system according to the second spatial coordinates.
5. The method of claim 4, wherein receiving framing data from the second terminal further comprises:
receiving the framing coordinates at which the second terminal acquired the framing data, and comparing the framing coordinates with the second spatial coordinates;
and when they do not match, sending a correction request instruction to the second terminal in real time, the second terminal correcting the framing coordinates of the viewfinder window according to the correction request instruction and the second spatial coordinates.
6. The method of claim 4, further comprising, after receiving framing data from the second terminal:
displaying the framing data according to the received framing coordinates;
or,
and determining the overlapping area between the viewfinder window and the display window at the current moment according to the received framing coordinates and the second spatial coordinates, and displaying, in the overlapping area, the framing data corresponding to it.
7. An electronic device, serving as a first terminal, wherein a panoramic picture is provided in the first terminal and records environmental information about the location of a second terminal; the electronic device comprising:
the communication unit is used for establishing communication connection with a second terminal;
the image receiving unit is used for receiving a second image from the second terminal, and the second image is acquired by a second image acquisition unit of the second terminal;
the image display unit is used for displaying the second image on a first display unit;
a control unit for detecting a control command; under the condition of maintaining the communication connection, obtaining a first image according to the control command, wherein the first image is different from the second image;
the image display unit is further configured to display the first image, the first image comprising two parts, a second part of the first image being filled in by a corresponding portion of the panoramic picture.
8. The electronic device of claim 7, further comprising:
the image management unit is used for acquiring the first image through a first image acquisition unit of the first terminal;
or retrieving a locally stored image from the first terminal as the first image.
9. The electronic device of claim 7, further comprising:
and a display window unit, occupying part or all of the display unit, for displaying a second dynamic image, the second dynamic image being part or all of the second image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110267764.3A CN103002244B (en) | 2011-09-09 | 2011-09-09 | A kind of method of interactive video call and call terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110267764.3A CN103002244B (en) | 2011-09-09 | 2011-09-09 | A kind of method of interactive video call and call terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103002244A CN103002244A (en) | 2013-03-27 |
CN103002244B (en) | 2016-03-30 |
Family
ID=47930320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110267764.3A Active CN103002244B (en) | 2011-09-09 | 2011-09-09 | A kind of method of interactive video call and call terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103002244B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103369138A (en) * | 2013-06-28 | 2013-10-23 | 深圳市有方科技有限公司 | Photo shooting method of digital equipment and digital equipment |
CN104469249B (en) * | 2013-09-16 | 2017-11-03 | 联想(北京)有限公司 | A kind of information processing method and the first electronic equipment |
CN104023197A (en) * | 2014-06-27 | 2014-09-03 | 联想(北京)有限公司 | Information processing method and information processing device |
US10771736B2 (en) * | 2014-06-30 | 2020-09-08 | Microsoft Technology Licensing, Llc | Compositing and transmitting contextual information during an audio or video call |
CN106341641A (en) * | 2015-07-10 | 2017-01-18 | 小米科技有限责任公司 | Video communication method and device |
CN108307140B (en) * | 2015-09-23 | 2021-02-12 | 腾讯科技(深圳)有限公司 | Network call method, device and computer readable storage medium |
CN107015246B (en) | 2016-01-28 | 2021-03-30 | 华为技术有限公司 | Navigation assistance method and terminal based on scene sharing |
CN109873864B (en) * | 2019-02-02 | 2020-12-11 | 视联动力信息技术股份有限公司 | Communication connection establishing method and system based on video networking |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1489389A (en) * | 2003-08-29 | 2004-04-14 | 陈旭光 | Video communication terminal and video communication method |
CN101547333A (en) * | 2009-04-20 | 2009-09-30 | 中兴通讯股份有限公司 | Method and terminal for switching front and back scene during viewable call |
Also Published As
Publication number | Publication date |
---|---|
CN103002244A (en) | 2013-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103002244B (en) | A kind of method of interactive video call and call terminal | |
KR102032347B1 (en) | Image display positioning using image sensor location | |
KR102194094B1 (en) | Synthesis method, apparatus, program and recording medium of virtual and real objects | |
CN111083380B (en) | Video processing method, electronic equipment and storage medium | |
JP6205072B2 (en) | Imaging control apparatus, imaging control method, camera, camera system, and program | |
US20160286131A1 (en) | Method and apparatus for displaying self-taken images | |
CN109660723B (en) | Panoramic shooting method and device | |
CN108495032B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN110602401A (en) | Photographing method and terminal | |
CN103945045A (en) | Method and device for data processing | |
CN110072061B (en) | Interactive shooting method, mobile terminal and storage medium | |
CN109660738B (en) | Exposure control method and system based on double cameras | |
CN104243830B (en) | A kind of method and device for controlling camera rotation | |
CN107948505B (en) | Panoramic shooting method and mobile terminal | |
CN109905603B (en) | Shooting processing method and mobile terminal | |
CN111064895B (en) | Virtual shooting method and electronic equipment | |
CN108632543B (en) | Image display method, image display device, storage medium and electronic equipment | |
CN111145192A (en) | Image processing method and electronic device | |
CN111083371A (en) | Shooting method and electronic equipment | |
US11636571B1 (en) | Adaptive dewarping of wide angle video frames | |
CN110086998B (en) | Shooting method and terminal | |
CN108924422A (en) | A kind of panorama photographic method and mobile terminal | |
CN109218709B (en) | Holographic content adjusting method and device and computer readable storage medium | |
CN108156386B (en) | Panoramic photographing method and mobile terminal | |
EP3599763B1 (en) | Method and apparatus for controlling image display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |