
CN113763472A - Method and device for determining viewpoint width and storage medium - Google Patents


Info

Publication number
CN113763472A
CN113763472A (application CN202111048861.3A)
Authority
CN
China
Prior art keywords
image
mean value
coordinate
pixel mean
width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111048861.3A
Other languages
Chinese (zh)
Other versions
CN113763472B (en)
Inventor
贺曙
徐万良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Future Technology Xiang Yang Co ltd
Original Assignee
Future Technology Xiang Yang Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Future Technology Xiang Yang Co ltd filed Critical Future Technology Xiang Yang Co ltd
Priority to CN202111048861.3A
Publication of CN113763472A
Application granted
Publication of CN113763472B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a method for determining viewpoint width, and related equipment, used to quickly determine the width of the viewpoint corresponding to a device when the screen optical parameters of the device are unknown. The method comprises the following steps: a first device displays a target image in stereoscopic mode, so that a second device can shoot the target image in real time to obtain a first image and a second image, which the second device captures from different positions, and analyze them to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device; if the first device receives a first coordinate recording instruction, it records a first position coordinate; if the first device receives a second coordinate recording instruction, it records a second position coordinate; and the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device.

Description

Method and device for determining viewpoint width and storage medium
[ technical field ]
The application belongs to the field of naked eye 3D, and particularly relates to a method and a device for determining viewpoint width and a storage medium.
[ background of the invention ]
Naked-eye 3D (also known as autostereoscopy) is the general name for technologies that achieve stereoscopic visual effects without the aid of external tools such as polarized glasses.
In a naked-eye 3D system with human-eye tracking, the device captures images through a front camera and tracks the positions of the user's eyes, and then calculates the viewpoint corresponding to the current eye position; this process requires the width of each viewpoint in the naked-eye 3D system to be known.
At present, the viewpoint width is mainly derived through optical design. In actual use, however, one grating is often used on many types of third-party devices, whose screen optical parameters, such as glass thickness, optical-cement thickness, and assembly-gap size, cannot be obtained exactly, so the viewpoint width cannot be determined exactly.
[ summary of the invention ]
The application aims to provide a method, a device, and a storage medium for determining viewpoint width that can quickly determine the width of the viewpoint corresponding to a device when the screen optical parameters of the device are unknown, and then adjust the 3D image or 3D video displayed by the device according to that width, improving the user's viewing experience.
A first aspect of the embodiments of the present application provides a method for determining a viewpoint width, including:
a first device displays a target image in stereoscopic mode, so that a second device shoots the target image in real time to obtain a first image and a second image and analyzes the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first image and the second image are obtained by the second device shooting the target image at different positions, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
if the first device receives a first coordinate recording instruction sent by the second device when the first pixel mean value reaches a first preset value, the first device records a first position coordinate of a position where the second device shoots the first image;
if the first device receives a second coordinate recording instruction sent by the second device when the second pixel mean value reaches a second preset value, the first device records a second position coordinate of a position where the second device shoots the second image;
and the first equipment determines the width of a viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
In one possible design, the determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device includes:
the first equipment determines a first position of the second equipment according to the fitting angle and the first position coordinate, wherein the first position is the position where the second equipment is located when the first pixel mean value reaches the first preset value;
the first equipment determines a second position of the second equipment according to the fitting angle and the second position coordinate, wherein the second position is the position where the second equipment is located when the second pixel mean value reaches the second preset value;
the first device determines a width of the viewpoint from the first location and the second location.
In one possible design, the determining, by the first device, the first position of the second device from the fit angle and the first position coordinate includes:
the first device calculates the first position by the following equation:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, (x0, y0) is the first position coordinate, y is a preset constant, and a is the fitting angle;
the first device determining a second position of the second device according to the fitting angle and the second position coordinate comprises:
the first device calculates the second position by the following equation:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position and (x1, y1) is the second position coordinate;
The first device determining the width of the viewpoint from the first location and the second location comprises:
the first device calculates the width of the viewpoint by the following formula:
VW=abs(X0′-X1′);
wherein VW is the width of the viewpoint and abs is an absolute value function.
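For illustration, the three formulas above can be combined into one short routine. The sketch below is not part of the patent; the function and parameter names are invented for this example, and it assumes the fitting angle a is supplied in radians and that all coordinates share the same units.

```python
import math

def viewpoint_width(p0, p1, y_const, fit_angle_rad):
    """Viewpoint width VW from two recorded positions of the second device.

    p0, p1        -- (x, y) coordinates recorded when the first and second
                     pixel mean values reached their preset values
    y_const       -- the preset constant y used in the patent's formulas
    fit_angle_rad -- fitting angle a of the grating, in radians
    """
    x0, y0 = p0
    x1, y1 = p1
    x0_prime = x0 + (y0 - y_const) * math.tan(fit_angle_rad)  # X0'
    x1_prime = x1 + (y1 - y_const) * math.tan(fit_angle_rad)  # X1'
    return abs(x0_prime - x1_prime)                           # VW
```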
In one possible design, the method further includes:
obtaining the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first equipment according to the width of the grating and the width of the viewpoints;
and adjusting the stereoscopic image displayed when the first equipment operates in the stereoscopic mode according to the arrangement layout of the viewpoints and the change of the positions of human eyes of the user.
A second aspect of the embodiments of the present application provides a method for determining a viewpoint width, including:
a second device shoots, in real time, a target image displayed by a first device in stereoscopic mode to obtain a first image and a second image, wherein the first image and the second image are obtained by the second device shooting the target image at different positions;
the second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
if the first pixel mean value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device so that the first device records a first position coordinate, wherein the first position coordinate is a coordinate of a position where the second device is located when the first image is shot;
if the second pixel mean value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position where the second device is located when the second image is shot.
In one possible design, the analyzing, by the second device, the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device includes:
the second device calculates the first pixel mean value and the second pixel mean value by the following formulas:
aver_pixel = ( Σ_{(i,j) ∈ A} pixel(i, j) ) / (w * h);
wherein aver_pixel is the first pixel mean value or the second pixel mean value, A is the screen region over which the pixels are summed, w is the width of the screen region, and h is the height of the screen region.
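As a minimal sketch of this average (an assumption about data representation, not part of the patent): if the captured frame is available as an 8-bit grayscale NumPy array and the screen region A has already been located, the mean can be computed as follows, with `left` and `top` as illustrative offsets of A inside the frame.

```python
import numpy as np

def screen_region_mean(frame: np.ndarray, left: int, top: int,
                       w: int, h: int) -> float:
    """Mean pixel value over the w-by-h screen region A of a captured frame."""
    region = frame[top:top + h, left:left + w]
    # Sum of all pixels in A divided by the number of pixels, w * h
    return float(region.sum()) / (w * h)
```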
A third aspect of the embodiments of the present application provides an apparatus, where the apparatus is a first apparatus, and the first apparatus includes:
the display unit is used for displaying a target image in a stereo mode, so that a second device can shoot the target image in real time to obtain a first image and a second image, and analyzing the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first image and the second image are obtained by shooting the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
the recording unit is used for recording a first position coordinate of a position where the second device shoots the first image if a first coordinate recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received;
the recording unit is further configured to record a second position coordinate of a position where the second device shoots the second image if a second coordinate recording instruction sent by the second device when the second pixel average value reaches a second preset value is received;
and the determining unit is used for determining the width of the viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
In one possible design, the determining unit is specifically configured to:
determining a first position of the second equipment according to the fitting angle and the first position coordinate, wherein the first position is a position where the second equipment is located when the first pixel mean value reaches the first preset value;
determining a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is a position where the second device is located when the second pixel mean value reaches a second preset value;
determining the width of the viewpoint from the first position and the second position.
In one possible design, the determining, by the determining unit, the first position of the second device according to the fitting angle and the first position coordinate includes:
calculating the first position by the formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, (x0, y0) is the first position coordinate, y is a preset constant, and a is the fitting angle;
the determining unit determining the second position of the second device according to the fitting angle and the second position coordinate includes:
calculating the second position by the formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position and (x1, y1) is the second position coordinate;
The determining unit determining the width of the viewpoint from the first position and the second position includes:
calculating the width of the viewpoint by the following formula:
VW=abs(X0′-X1′);
wherein VW is the width of the viewpoint and abs is an absolute value function.
In one possible design, the determining unit is further configured to:
obtaining the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first equipment according to the width of the grating and the width of the viewpoints;
and adjusting the stereoscopic image displayed when the first equipment operates in the stereoscopic mode according to the arrangement layout of the viewpoints and the change of the positions of human eyes of the user.
A fourth aspect of the embodiments of the present application provides an apparatus, where the apparatus is a second apparatus, and the second apparatus includes:
the shooting unit is used for shooting a target image displayed by first equipment in a three-dimensional mode in real time to obtain a first image and a second image, wherein the first image and the second image are obtained by shooting the target image by the second equipment at different positions;
the analysis unit is used for analyzing the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a first sending unit, configured to send a first coordinate recording instruction to the first device if the first pixel average value reaches a first preset value, so that the first device records a first position coordinate, where the first position coordinate is a coordinate of a position where the second device is located when the first image is captured;
and the second sending unit is configured to send a second coordinate recording instruction to the first device if the second pixel mean value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device, where the second position coordinate is the coordinate of the position where the second device is located when the second image is shot.
In one possible design, the analysis unit is specifically configured to:
the second device calculates the first pixel mean value and the second pixel mean value by the following formulas:
aver_pixel = ( Σ_{(i,j) ∈ A} pixel(i, j) ) / (w * h);
wherein aver_pixel is the first pixel mean value or the second pixel mean value, A is the screen region over which the pixels are summed, w is the width of the screen region, and h is the height of the screen region.
A fifth aspect of the embodiments of the present application provides a computer device, which includes at least one connected processor, a memory and a transceiver, wherein the memory is used for storing program codes, and the processor is used for calling the program codes in the memory to execute the steps of the method for determining viewpoint width in the above aspects.
A sixth aspect of the embodiments of the present application provides a computer storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the steps of the method for determining a viewpoint width as described in the above aspects.
Compared with the related art, in the embodiments provided by the application, when the viewpoint width of the first device is determined, the second device can shoot the first device at different positions to obtain a plurality of images and analyze them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment. The first device then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the fitting angle of the grating. The width of the viewpoint corresponding to the first device can therefore be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can be adjusted according to that width, improving the user's viewing experience.
[ description of the drawings ]
Fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width provided in an embodiment of the present application;
fig. 2 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 4 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 6 is a schematic view of a virtual structure of a first device according to an embodiment of the present application;
fig. 7 is a schematic view of a virtual structure of a second device according to an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of a first device and a second device provided in an embodiment of the present application.
[ detailed description ]
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, the division of modules presented herein is merely a logical division that may be implemented in a practical application in a further manner, such that a plurality of modules may be combined or integrated into another system, or some feature vectors may be omitted, or not implemented, and such that couplings or direct couplings or communicative coupling between each other as shown or discussed may be through some interfaces, indirect couplings or communicative coupling between modules may be electrical or other similar, this application is not intended to be limiting. The modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the present disclosure.
Referring to fig. 1, fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
101. The first device displays a target image in stereoscopic mode, so that the second device shoots the target image to obtain a first image and a second image, and analyzes the first image and the second image to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device.
In this embodiment, when the width of the viewpoint corresponding to a first device needs to be determined, the first device displays a target image in stereoscopic mode, so that a second device captures the target image displayed by the first device in real time to obtain a first image and a second image, and analyzes them to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device. The first image and the second image are obtained by the second device shooting the target image from different positions; the second device is any terminal device with an image capture function and a communication function; and the target image is half black and half a single color, where that color may be any visible color such as white, red, green, or yellow. It can be understood that, when the second device shoots the target image to obtain the first image and the second image, these images may contain content other than the target image. What needs to be determined is therefore the pixel mean value of the screen region, that is, the region of the first device's screen in which the target image is displayed, rather than the pixel mean value of the whole first image. In addition, while displaying the target image, the first device can display the position coordinate of the second device on its screen in real time, or directly display the position coordinate of the second device's camera.
102. If a first coordinate recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received, the first device records a first position coordinate of a position where the second device shoots the first image.
In this embodiment, after analyzing the first image to obtain a first pixel mean value, the second device may determine whether the first pixel mean value reaches a first preset value, and if the first pixel mean value reaches the first preset value, the second device may send a first coordinate recording instruction, and the first device records a first position coordinate of a position where the second device shoots the first image according to the first coordinate recording instruction.
103. And if a second coordinate recording instruction sent by the second equipment when the second pixel mean value reaches a second preset value is received, the first equipment records a second position coordinate of the position where the second equipment shoots the second image.
In this embodiment, after analyzing the second image to obtain a second pixel mean value, the second device may determine whether the second pixel mean value reaches a second preset value, and if the second pixel mean value reaches the second preset value, the second device may send a second coordinate recording instruction, and the first device records a second position coordinate of a position where the second device shoots the second image according to the received coordinate recording instruction.
104. And the first equipment determines the width of the viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
In this embodiment, after recording the first position coordinate and the second position coordinate, the first device may obtain the fitting angle of the grating corresponding to the first device (that is, the angle at which the grating of the 3D film is attached to the first device; how this angle is obtained is not limited here, and it may, for example, be input by the user), and determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle.
In one embodiment, the determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device includes:
the first equipment determines a first position of the second equipment according to the attaching angle and the first position coordinate, wherein the first position is the position where the second equipment is located when the first pixel mean value reaches a first preset value;
the first equipment determines a second position of the second equipment according to the fitting angle and the second position coordinate, wherein the second position is the position where the second equipment is located when the second pixel mean value reaches a second preset value;
the first device determines the width of the viewpoint from the first location and the second location.
In this embodiment, the first device may calculate the first position, that is, the position at which the second device shot the first image when the first pixel mean value reached the first preset value, from the fitting angle and the first position coordinate by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, (x0, y0) is the first position coordinate, y is a preset constant, and a is the fitting angle;
Then, the first device may calculate the second position, that is, the position at which the second device shot the second image when the second pixel mean value reached the second preset value, from the fitting angle and the second position coordinate by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position and (x1, y1) is the second position coordinate;
After the first device calculates the first position and the second position according to the above formulas, it may calculate the width of the viewpoint corresponding to the first device from them by the following formula:
VW=abs(X0′-X1′);
VW is the width of a viewpoint corresponding to the first device, and abs is an absolute value function.
The calculation of the viewpoint width is described below with reference to fig. 2, a schematic diagram of the viewpoint width calculation provided in an embodiment of the present application. In fig. 2, 201 is the first position coordinate (x0, y0), 202 is the second position coordinate (x1, y1), and 203 is the coordinate of the preset constant y in the coordinate system (it is understood that the preset constant y may be set to half the width corresponding to the screen area, or set according to the actual situation; it is not specifically limited). Take the calculation of the first position X0′ as an example: the fitting angle a of the grating is known, so once the first position coordinate (x0, y0) is obtained, the preset constant 203 is converted to the same direction as the Y-axis of the first position coordinate, and the first position is calculated by the formula X0′ = x0 + (y0 - y) * tan(a). The second position is calculated in a similar way, and the formula VW = abs(X0′ - X1′) then gives the absolute value of the difference between the first position and the second position, which is the width of the viewpoint corresponding to the first device.
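As a concrete check of the walk-through above (all numbers here are hypothetical, not taken from the patent), suppose the fitting angle a is 15 degrees, the first position coordinate is (100, 400), the second is (160, 420), and the preset constant y is 360:

```python
import math

a = math.radians(15)                        # hypothetical fitting angle
x0_prime = 100 + (400 - 360) * math.tan(a)  # X0' ≈ 110.72
x1_prime = 160 + (420 - 360) * math.tan(a)  # X1' ≈ 176.08
vw = abs(x0_prime - x1_prime)               # VW  ≈ 65.36
```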
It should be noted that, after obtaining the width of the viewpoint corresponding to the first device, the first device may adjust the 3D image or 3D video it displays when operating in 3D mode based on that width. Specifically, the first device may obtain the width of the grating corresponding to the first device, determine the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoints, and then adjust the stereoscopic image displayed when the first device operates in stereoscopic mode according to that arrangement layout and the changing positions of the user's eyes. That is to say, once the width of the viewpoint is obtained, and since the width of the grating of the first device is known, the arrangement layout of the grating of the 3D film attached to the screen of the first device can be calculated, and the 3D image or 3D video displayed in stereoscopic mode can then be adjusted as the positions of the user's eyes change, providing a better 3D display effect for the user.
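The patent does not spell out the layout computation itself, but as a rough hedged sketch of the idea: once the grating width and the viewpoint width are both known, the number of viewpoints that fit under one grating period could be estimated as their ratio.

```python
def viewpoints_per_period(grating_width: float, viewpoint_width: float) -> int:
    # Rough estimate only; the patent does not specify this formula.
    return max(1, round(grating_width / viewpoint_width))
```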
It should be noted that, although in the above description the position of the first device stays unchanged while the second device changes position, with the first device tracking the position coordinates of the second device's camera and recording them as the second device moves, other arrangements are possible. For example, the position of the second device may stay unchanged while the first device changes position, recording the position coordinates at which the second device captures each image of the first device. The recording scheme is not specifically limited, as long as the position coordinates can be recorded. When the position of the second device stays unchanged and the coordinates are recorded by changing the position of the first device, the specific procedure is as follows:
when the first device displays a target image in a 3D mode, the second device shoots the first device, the screen of the first device displays the position coordinate of the second device (certainly, the position coordinate of a camera of the second device can also be displayed, and the position coordinate is not limited), the first device adjusts the position until receiving a coordinate recording instruction of the second device and records the position coordinate of the second device at the current position, the coordinate recording instruction is sent by the second device when the image obtained by shooting the first device is analyzed to obtain a corresponding pixel average value, and the pixel average value reaches a first preset value or a second preset value; then, continuing to adjust the position until a coordinate recording instruction sent by the second equipment is received again, and recording the position coordinate of the second equipment at the current position, wherein the coordinate recording instruction is sent by the second equipment when the second equipment analyzes the image obtained by shooting the first equipment to obtain a corresponding pixel mean value, and the pixel mean value reaches a first preset value or a second preset value; therefore, the position coordinates of two different positions can be obtained, and the width of the viewpoint is calculated according to the position coordinates and the fitting angle of the two different positions.
In summary, in the embodiments provided by the application, when the viewpoint width of the first device is determined, the second device shoots the first device at different positions to obtain a plurality of images and analyzes them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment. The first device then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the fitting angle of the grating. The viewpoint width corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can be adjusted according to that width, improving the user's viewing experience.
Referring to fig. 3, fig. 3 is a schematic view of an application scenario provided in an embodiment of the present application. In fig. 3, the position of the first device is fixed, the position of the second device changes, and the first device records the position coordinate of the second device whenever the pixel mean value of an image captured by the second device reaches a preset value. As shown in fig. 3, when the viewpoint width of the 3D film disposed on the first device 301 needs to be determined, the first device 301 displays the target image in 3D mode, and the second device captures that image at different positions to obtain corresponding images, analyzes each image to obtain its pixel mean value, and determines whether that value reaches the first preset value or the second preset value. If it does, the second device sends a coordinate recording instruction to the first device 301, which records the position coordinate on receipt. For example, when the second device is at position 302 in fig. 3, the pixel mean value of the captured image of the first device reaches the first preset value; on receiving the coordinate recording instruction, the first device 301 records the position coordinate of the second device at position 302. The second device then changes position, shoots the first device again to obtain another image, analyzes it to obtain its pixel mean value, and determines whether that value reaches the second preset value. If so, the second device sends another coordinate recording instruction to the first device 301, which records the second position coordinate accordingly. For example, when the second device is at position 303 in fig. 3, the pixel mean value of the captured image reaches the second preset value; the second device sends the instruction, and the first device 301 records the position coordinate of the second device at position 303. The first device 301 can then calculate the viewpoint width from the position coordinate at position 302, the position coordinate at position 303, and the fitting angle of the grating corresponding to the first device 301, and adjust the 3D image or 3D video it displays in 3D mode according to that width. In this way, the viewpoint width corresponding to the first device can be determined quickly even when the screen optical parameters of the first device are unknown, and the displayed 3D content can be adjusted accordingly, improving the user's viewing experience.
It should be noted that, after the second device photographs the first device at a given position, it may directly analyze the resulting image to obtain its pixel mean value and determine whether that value reaches the first or the second preset value. If it reaches neither, the second device changes position and repeats the above steps until the pixel mean value of an image captured at some position reaches one of the preset values; at that point it sends a coordinate recording instruction so that the first device returns the corresponding position coordinate. The second device then continues adjusting its position and repeating the steps until the pixel mean value of an image captured at another position reaches the other preset value, and the position coordinates of the two positions are obtained from the first device as the first position coordinate and the second position coordinate.
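The measurement loop just described might be organized as in the following sketch. Everything here is an assumption for illustration: `capture_frame`, `screen_region_mean`, and `send_record_instruction` are hypothetical helpers standing in for the second device's camera, analysis, and communication steps, and the matching tolerance is a placeholder.

```python
def calibrate(presets, capture_frame, screen_region_mean, send_record_instruction):
    """Repeat shoot-and-analyze until both preset pixel means have been hit."""
    remaining = set(presets)  # e.g. {first_preset, second_preset}
    while remaining:
        # The second device is repositioned, then captures a frame of the
        # first device's screen together with the located screen region A.
        frame, region = capture_frame()
        mean = screen_region_mean(frame, *region)
        for preset in list(remaining):
            if abs(mean - preset) < 1.0:  # tolerance is an assumption
                # Ask the first device to record the currently displayed
                # position coordinate of the second device.
                send_record_instruction(preset)
                remaining.remove(preset)
```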
The method for determining the viewpoint width provided by the embodiment of the present application is described above from the perspective of a first device with reference to fig. 1, and is described below from the perspective of a second device with reference to fig. 4.
Referring to fig. 4 in combination, fig. 4 is a schematic diagram of another embodiment of the method for determining a viewpoint width according to the embodiment of the present application, including:
401. the second device shoots the target image displayed by the first device in the stereoscopic mode in real time to obtain a first image and a second image.
In this embodiment, when the width of the viewpoint corresponding to a first device needs to be determined, the second device shoots the target image displayed by the first device in stereoscopic mode in real time to obtain a first image and a second image, which it captures from different positions. The second device is any terminal device with an image capture function and a communication function, and the target image is half black and half a single color, where that color may be any visible color such as white, red, green, or yellow. That is, when the width of the viewpoint corresponding to the first device (i.e., of the viewpoint corresponding to the 3D film covering the screen of the target device) needs to be determined, the first device displays the half-black, half-color image in 3D mode, and the second device photographs the first device at different positions to obtain the first image and the second image.
402. The second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device.
In this embodiment, after the second device photographs the first device at different positions to obtain a first image and a second image, the first image and the second image may be analyzed respectively to obtain a first pixel mean value and a second pixel mean value of a screen region corresponding to the first device, where the first pixel mean value corresponds to the first image, the second pixel mean value corresponds to the second image, and the screen region is a region where the first device displays a target image. Specifically, the second device may calculate the first pixel mean value and the second pixel mean value by the following formulas:
aver_pixel = ( Σ_{(i,j) ∈ A} pixel(i, j) ) / (w * h);
wherein aver_pixel is the first pixel mean value or the second pixel mean value, A is the screen region over which the pixels are summed, w is the width of the screen region, and h is the height of the screen region.
It can be understood that, when the second device captures the target image to obtain the first image and the second image, the first image and the second image may not only include the target image, but may also include other contents, and therefore, what needs to be determined here is the pixel average value of the screen region, which is the display region of the target image in the screen of the first device, instead of the pixel average value of the first image. In addition, when the first device displays the target image, the position coordinate of the second device can be displayed on a screen corresponding to the first device in real time, or the position coordinate of a camera of the second device is directly displayed.
403. And if the first pixel mean value reaches a first preset value, the second equipment sends a first coordinate recording instruction to the first equipment so that the first equipment records the first position coordinate.
In this embodiment, after analyzing the first image to obtain the first pixel mean value, the second device may determine whether it reaches the first preset value. If it does, the second device sends a first coordinate recording instruction to the first device, so that the first device records the first position coordinate according to that instruction; the first position coordinate is the coordinate of the position where the second device was located when it shot the first image. That is to say, the first device displays the position coordinates of the second device in real time; when the second device determines that the first pixel mean value reaches the first preset value, it can send the coordinate recording instruction to the first device, and after receiving it, the first device records the position coordinate of the second device at the moment the first image was shot.
It can be understood that, when the first pixel mean value does not reach the first preset value, the second device changes position, shoots the first device again, and analyzes the new image; this repeats until the pixel mean value of an image shot at some position reaches the first preset value, at which point the second device sends a coordinate recording instruction to the first device so that the first device records the coordinate of that position.
404. And if the second pixel mean value reaches a second preset value, the second equipment sends a second coordinate recording instruction to the first equipment so that the first equipment records a second position coordinate, and determines the width of the viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
In this embodiment, after analyzing the second image to obtain the second pixel mean value, the second device may determine whether it reaches the second preset value. If it does, the second device sends a second coordinate recording instruction to the first device, so that the first device records the second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device; the second position coordinate is the coordinate of the position where the second device was located when it shot the second image. That is to say, the first device displays the position coordinates of the second device in real time; when the second device determines that the second pixel mean value reaches the second preset value, it can send the coordinate recording instruction, and after receiving it, the first device records the position coordinate and calculates the viewpoint width from the two position coordinates and the fitting angle.
It can be understood that, when the second pixel mean value does not reach the second preset value, the second device changes position, shoots the first device again, and analyzes the new image; this repeats until the pixel mean value of an image shot at some position reaches the second preset value, at which point the second device sends a coordinate recording instruction to the first device so that the first device records the coordinate of that position.
It should be noted that, after the second device photographs the first device at a given position, it directly analyzes the resulting image to obtain its pixel mean value and determines whether that value reaches the first or the second preset value. If it reaches neither, the second device changes position and repeats the above steps until the pixel mean value of an image shot at some position reaches the first preset value; it then sends a coordinate recording instruction so that the first device records the corresponding position coordinate. The second device continues to adjust its position and repeat the steps until the pixel mean value of an image shot at another position reaches the second preset value, and again sends a coordinate recording instruction so that the first device records the coordinate of that position and calculates the viewpoint width from the two position coordinates and the fitting angle.
In summary, it can be seen that, in the embodiments provided by the application, when the viewpoint width of the first device is to be determined, the second device shoots the first device at different positions to obtain a plurality of images and analyzes them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the second device sends a coordinate recording instruction to the first device, so that the first device records the position coordinate of the corresponding position according to the instruction and calculates the width of the viewpoint corresponding to the first device from the position coordinates and the fitting angle of the grating corresponding to the first device. The viewpoint width corresponding to the first device can thus be determined quickly without knowing the screen optical parameters of the first device, and the 3D image or 3D video displayed by the first device can be adjusted according to that width, improving the user's viewing experience.
The method for determining the viewpoint width provided by the embodiment of the present application is described above from the perspective of the second device and the first device, respectively, and is described below with reference to fig. 5 from the perspective of the interaction between the first device and the second device.
Referring to fig. 5, fig. 5 is a schematic diagram of another embodiment of the method for determining a viewpoint width according to the embodiment of the present application, including:
501. the first device displays the target image in a 3D mode.
502. And the second equipment shoots the target image displayed by the first equipment in the 3D mode in real time to obtain a first image and a second image.
503. And the second equipment analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first equipment.
It is understood that steps 501 to 503 are similar to steps 401 and 402 in fig. 4, which have already been described in detail there; details are not repeated here.
504. And if the first pixel mean value reaches a first preset value, the second equipment sends a first coordinate recording instruction to the first equipment.
505. The first device records the first position coordinate according to the first coordinate recording instruction.
506. And if the second pixel mean value reaches a second preset value, the second equipment sends a second coordinate recording instruction to the first equipment.
507. And the first equipment records the second position coordinate according to the second coordinate recording instruction.
It is understood that steps 504 to 507 are similar to the coordinate-recording steps in fig. 1 and fig. 4, which have already been described in detail there; details are not repeated here.
508. And the first equipment determines the width of the viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
It is understood that step 508 is similar to step 104 in fig. 1, which has already been described in detail there; details are not repeated here.
In summary, in the embodiments provided by the application, when the viewpoint width of the first device is determined, the second device shoots the first device at different positions to obtain a plurality of images and analyzes them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment, and then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the fitting angle of the grating. The viewpoint width can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can be adjusted according to that width, improving the user's viewing experience.
It should be noted that, in the above embodiments, the first device calculates the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating; alternatively, after recording the first and second position coordinates, the first device may send them to the second device, which calculates the viewpoint width corresponding to the first device from those coordinates and the fitting angle of the grating corresponding to the first device and then sends the result back to the first device. In addition, the analysis of the first image and the second image to obtain the pixel mean values may also be performed by the first device; this is not limited here.
It should be further noted that the pixel mean calculation, the position calculation, and the viewpoint width calculation have been described in detail with reference to fig. 1 to 5; the calculations are the same regardless of which device performs them, and are not repeated here.
The embodiments above describe the method for determining viewpoint width; the devices for determining viewpoint width are described below.
Referring to fig. 6, fig. 6 is a schematic view of a virtual structure of a first device according to an embodiment of the present application, where the first device 600 includes:
a display unit 601, configured to display a target image in a stereo mode, so that a second device captures the target image in real time to obtain a first image and a second image, and analyze the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first image and the second image are obtained by capturing the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a recording unit 602, configured to record a first position coordinate of the position where the second device captured the first image if a first coordinate recording instruction, sent by the second device when the first pixel mean value reaches a first preset value, is received;
the recording unit 602 is further configured to record a second position coordinate of the position where the second device captured the second image if a second coordinate recording instruction, sent by the second device when the second pixel mean value reaches a second preset value, is received;
a determining unit 603, configured to determine, according to the first position coordinate, the second position coordinate, and a fitting angle of a grating corresponding to the first device, a width of a viewpoint corresponding to the first device.
In one possible design, the determining unit 603 is specifically configured to:
determining a first position of the second device according to the fitting angle and the first position coordinate, wherein the first position is the position where the second device is located when the first pixel mean value reaches the first preset value;
determining a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is the position where the second device is located when the second pixel mean value reaches the second preset value;
determining the width of the viewpoint from the first position and the second position.
In one possible design, the determining unit 603 determining the first position of the second device according to the fitting angle and the first position coordinate includes:
calculating the first position by the formula:
X0′=x0+(y0-y)*tan(a);
wherein X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the fitting angle;
the determining unit 603 determining the second position of the second device according to the fitting angle and the second position coordinate includes:
calculating the second position by the formula:
X1′=x1+(y1-y)*tan(a);
wherein X1′ is the second position and the second position coordinate is (x1, y1);
The determining unit 603 determining the width of the viewpoint from the first position and the second position includes:
calculating the width of the viewpoint by the following formula:
VW=abs(X0′-X1′);
wherein VW is the width of the viewpoint and abs is an absolute value function.
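Plugging hypothetical numbers into these formulas (the values below are illustrative only, not taken from the embodiment):

```python
import math

a = math.radians(9.0)       # assumed grating fitting angle
x0, y0 = 100.0, 40.0        # first position coordinate (x0, y0)
x1, y1 = 135.0, 40.0        # second position coordinate (x1, y1)
y = 0.0                     # preset constant y

X0 = x0 + (y0 - y) * math.tan(a)   # X0' = 106.33...
X1 = x1 + (y1 - y) * math.tan(a)   # X1' = 141.33...
VW = abs(X0 - X1)                  # VW = 35.0
```

Because y0 equals y1 in this example, the tan terms cancel and the width reduces to |x0 − x1|; when the two shots are taken at different y positions, the fitting-angle correction matters.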
In one possible design, the determining unit 603 is further configured to:
obtaining the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes.
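The embodiments do not spell out the layout rule itself; purely as an assumed illustration, if the viewpoints are taken to tile one grating period side by side, the count per period could be estimated as follows (the tiling assumption and all names are hypothetical, not the patented method):

```python
def views_per_grating_period(grating_width: float, viewpoint_width: float) -> int:
    """Assumed rule: viewpoints of equal width tile one grating period."""
    if viewpoint_width <= 0 or grating_width <= 0:
        raise ValueError("widths must be positive")
    return max(1, round(grating_width / viewpoint_width))
```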
Referring to fig. 7, fig. 7 is a schematic view of a virtual structure of a second device according to an embodiment of the present application, where the second device 700 includes:
a shooting unit 701, configured to shoot a target image displayed in a stereoscopic mode by a first device in real time to obtain a first image and a second image, where the first image and the second image are obtained by shooting the target image at different positions by a second device;
an analyzing unit 702, configured to analyze the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a first sending unit 703, configured to send a first coordinate recording instruction to the first device if the first pixel mean value reaches a first preset value, so that the first device records a first position coordinate, where the first position coordinate is a coordinate of a position where the second device is located when the first image is captured;
a second sending unit 704, configured to send a second coordinate recording instruction to the first device if the second pixel mean value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device, where the second position coordinate is the coordinate of the position where the second device is located when the second image is captured.
In one possible design, the analysis unit 702 is specifically configured to:
calculate the first pixel mean value and the second pixel mean value by the following formula:
aver_pixel = (1/(w*h)) * Σ_{(i,j)∈A} p(i,j);
wherein aver_pixel is the first pixel mean value or the second pixel mean value, A is the screen region, p(i,j) is the pixel value at position (i,j) within A, w is the width of the screen region, and h is the height of the screen region.
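A minimal sketch of this mean, assuming the screen region has already been located in the captured frame and is available as a grayscale array (the cropping step and the names are assumptions):

```python
import numpy as np

def screen_pixel_mean(frame: np.ndarray, left: int, top: int, w: int, h: int) -> float:
    """Mean pixel value over the w x h screen region A inside `frame`."""
    region = frame[top:top + h, left:left + w]   # crop screen region A
    return float(region.mean())                  # (1/(w*h)) * sum of p(i, j)
```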
Referring to another viewpoint width determining apparatus provided in this embodiment of the present application, the viewpoint width determining apparatus may be a terminal device, and as shown in fig. 8, a terminal device 800 includes:
a receiver 801, a transmitter 802, a processor 803, and a memory 804 (the terminal device 800 may include one or more processors 803; one processor is taken as an example in fig. 8). In some embodiments of the present application, the receiver 801, the transmitter 802, the processor 803, and the memory 804 may be connected by a bus or in other ways; fig. 8 takes a bus connection as an example.
The memory 804 may include a read-only memory and a random access memory, and provides instructions and data to the processor 803. A portion of the memory 804 may also include NVRAM. The memory 804 stores an operating system and operating instructions, executable modules or data structures, or a subset or an expanded set thereof, wherein the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various basic services and for handling hardware-based tasks.
The processor 803 controls the operation of the terminal device, and the processor 803 may also be referred to as a CPU. In a specific application, the various components of the terminal device are coupled together by a bus system, wherein the bus system may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, the various buses are referred to in the figures as a bus system.
The method disclosed in the embodiments of the present application may be applied to the processor 803 or implemented by the processor 803. The processor 803 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 803 or by instructions in the form of software. The processor 803 may be a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 804, and the processor 803 reads the information in the memory 804 and completes the steps of the above method in combination with its hardware.
In this embodiment, the processor 803 is configured to perform the operations of the first device or the second device described in the foregoing embodiments.
The embodiments of the present application further provide a computer-readable medium including computer-executable instructions that cause a server to execute the method for determining a viewpoint width described in the foregoing embodiments; the implementation principle and technical effects are similar and are not repeated here.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly also by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures implementing the same function can vary: analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software implementation is usually preferable. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and include instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for determining a viewpoint width, comprising:
the method comprises the steps that a first device displays a target image in a three-dimensional mode, so that a second device shoots the target image in real time to obtain a first image and a second image, the first image and the second image are analyzed to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, the first image and the second image are obtained by shooting the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
if the first device receives a first coordinate recording instruction sent by the second device when the first pixel mean value reaches a first preset value, the first device records a first position coordinate of a position where the second device shoots the first image;
if the first device receives a second coordinate recording instruction sent by the second device when the second pixel mean value reaches a second preset value, the first device records a second position coordinate of a position where the second device shoots the second image;
and the first equipment determines the width of a viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
2. The method of claim 1, wherein the determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device comprises:
the first device determines a first position of the second device according to the fitting angle and the first position coordinate, wherein the first position is the position where the second device is located when the first pixel mean value reaches the first preset value;
the first device determines a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is the position where the second device is located when the second pixel mean value reaches the second preset value;
the first device determines the width of the viewpoint according to the first position and the second position.
3. The method of claim 2, wherein the first device determining the first position of the second device according to the fitting angle and the first position coordinate comprises:
the first device calculates the first position by the following equation:
X0′=x0+(y0-y)*tan(a);
wherein X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the fitting angle;
the first device determining a second position of the second device according to the fitting angle and the second position coordinate comprises:
the first device calculates the second position by the following equation:
X1′=x1+(y1-y)*tan(a);
wherein X1′ is the second position and the second position coordinate is (x1, y1);
The first device determining the width of the viewpoint according to the first position and the second position comprises:
the first device calculates the width of the viewpoint by the following formula:
VW=abs(X0′-X1′);
wherein VW is the width of the viewpoint and abs is an absolute value function.
4. The method according to any one of claims 1 to 3, further comprising:
obtaining the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes.
5. A method for determining a viewpoint width, comprising:
the method comprises the steps that a second device shoots a target image displayed by a first device in a three-dimensional mode in real time to obtain a first image and a second image, wherein the first image and the second image are obtained by shooting the target image at different positions by the second device;
the second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
if the first pixel mean value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device so that the first device records a first position coordinate, wherein the first position coordinate is a coordinate of a position where the second device is located when the first image is shot;
if the second pixel mean value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records a second position coordinate and the width of the viewpoint corresponding to the first device is determined according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position where the second device is located when the second image is shot.
6. The method of claim 5, wherein the second device analyzing the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device comprises:
the second device calculates the first pixel mean value and the second pixel mean value by the following formula:
aver_pixel = (1/(w*h)) * Σ_{(i,j)∈A} p(i,j);
wherein aver_pixel is the first pixel mean value or the second pixel mean value, A is the screen region, p(i,j) is the pixel value at position (i,j) within A, w is the width of the screen region, and h is the height of the screen region.
7. A device, the device being a first device, the first device comprising:
the display unit is used for displaying a target image in a stereo mode, so that a second device can shoot the target image in real time to obtain a first image and a second image, and analyzing the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first image and the second image are obtained by shooting the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
the recording unit is used for recording a first position coordinate of a position where the second device shoots the first image if a first coordinate recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received;
the recording unit is further configured to record a second position coordinate of the position where the second device shoots the second image if a second coordinate recording instruction sent by the second device when the second pixel mean value reaches a second preset value is received;
and the determining unit is used for determining the width of the viewpoint corresponding to the first equipment according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first equipment.
8. A device, the device being a second device, the second device comprising:
the shooting unit is used for shooting a target image displayed by first equipment in a three-dimensional mode in real time to obtain a first image and a second image, wherein the first image and the second image are obtained by shooting the target image by the second equipment at different positions;
the analysis unit is used for analyzing the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a first sending unit, configured to send a first coordinate recording instruction to the first device if the first pixel mean value reaches a first preset value, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position where the second device is located when the first image is shot;
and a second sending unit, configured to send a second coordinate recording instruction to the first device if the second pixel mean value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position where the second device is located when the second image is shot.
9. A computer device, comprising:
at least one processor, a memory, and a transceiver that are connected, wherein the memory is configured to store program code, and the processor is configured to call the program code in the memory to perform the method for determining a viewpoint width according to any one of claims 1 to 4 or claims 5 to 6.
10. A computer storage medium, comprising:
instructions which, when run on a computer, cause the computer to perform the method for determining a viewpoint width according to any one of claims 1 to 4 or claims 5 to 6.
CN202111048861.3A 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium Active CN113763472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111048861.3A CN113763472B (en) 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111048861.3A CN113763472B (en) 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium

Publications (2)

Publication Number Publication Date
CN113763472A true CN113763472A (en) 2021-12-07
CN113763472B CN113763472B (en) 2024-03-29

Family

ID=78793768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111048861.3A Active CN113763472B (en) 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113763472B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6608622B1 (en) * 1994-10-14 2003-08-19 Canon Kabushiki Kaisha Multi-viewpoint image processing method and apparatus
CN102186091A (en) * 2011-01-25 2011-09-14 天津大学 Grating-based video pixel arrangement method for multi-view stereoscopic mobile phone
US20140092222A1 (en) * 2011-06-21 2014-04-03 Sharp Kabushiki Kaisha Stereoscopic image processing device, stereoscopic image processing method, and recording medium
JP2015125494A (en) * 2013-12-25 2015-07-06 日本電信電話株式会社 Image generation method, image generation device, and image generation program
WO2016032600A1 (en) * 2014-08-29 2016-03-03 Google Inc. Combination of stereo and structured-light processing
CN112925109A (en) * 2019-12-05 2021-06-08 北京芯海视界三维科技有限公司 Multi-view naked eye 3D display screen and naked eye 3D display terminal

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023036218A1 (en) * 2021-09-08 2023-03-16 未来科技(襄阳)有限公司 Method and apparatus for determining width of viewpoint

Also Published As

Publication number Publication date
CN113763472B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
US9521362B2 (en) View rendering for the provision of virtual eye contact using special geometric constraints in combination with eye-tracking
EP2328125A1 (en) Image splicing method and device
JP2000354257A (en) Image processor, image processing method and program provision medium
CN112565589A (en) Photographing preview method and device, storage medium and electronic equipment
JP2010278878A (en) Stereoscopic image device and display image switching method thereof
US9380263B2 (en) Systems and methods for real-time view-synthesis in a multi-camera setup
US20180262749A1 (en) Storing Data Retrieved from Different Sensors for Generating a 3-D Image
CN109726613B (en) Method and device for detection
CN106919246A (en) The display methods and device of a kind of application interface
US20170061695A1 (en) Wearable display apparatus, information processing apparatus, and control method therefor
CN113763472A (en) Method and device for determining viewpoint width and storage medium
CN109005285A (en) augmented reality processing method, terminal device and storage medium
CN113781560A (en) Method and device for determining viewpoint width and storage medium
CN113940622A (en) Visual fusion intersection angle measuring method, device and storage medium
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
JP6004978B2 (en) Subject image extraction device and subject image extraction / synthesis device
CN114786001B (en) 3D picture shooting method and 3D shooting system
JP4018273B2 (en) Video processing apparatus, control method therefor, and storage medium
KR20110025083A (en) 3D image display device and method in 3D image system
KR101718309B1 (en) The method of auto stitching and panoramic image genertation using color histogram
CN114422665A (en) Shooting method based on multiple cameras and related device
CN115988343B (en) Image generation method, device and readable storage medium
CN110211238A (en) Display methods, device, system, storage medium and the processor of mixed reality
KR100584536B1 (en) Image processing device for video communication
CN114584754A (en) 3D display method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant