Disclosure of Invention
The application provides a 3D display method for a 2D image, a device, and a storage medium, which can periodically adjust the target offset between a first image and a second image within a reference offset interval, ensuring that both eyes of a user receive a view and can therefore observe a clear 3D image.
A first aspect of the present application provides a 3D display method of a 2D image, including:
determining a first image and a second image corresponding to a target image, wherein the target image is a 2D image to be subjected to 3D display;
determining a reference offset interval corresponding to a target user who watches the 2D image after 3D display at a preset distance;
periodically adjusting the target offset between the first image and the second image in the reference offset interval;
and interleaving the first image and the second image after the target offset is adjusted so as to display the 2D image in a 3D mode.
In one possible design, the determining a reference offset interval corresponding to a target user viewing the 2D image after 3D display at a preset distance includes:
determining an initial visual parameter corresponding to the target user;
adjusting the offset between a third image and a fourth image corresponding to the first test 3D image;
recording actual visual parameters corresponding to the target user when the shifted third image and the shifted fourth image are displayed after blanking;
determining a first maximum offset and a first minimum offset between a center of the third image and a center of the fourth image when the actual visual parameters match the initial visual parameters;
and determining the reference offset interval according to the first maximum offset and the first minimum offset.
In one possible design, the determining a reference offset interval corresponding to a target user viewing the 2D image after 3D display at a preset distance includes:
shifting at least one image of a fifth image and a sixth image according to the operation instruction of the target user, wherein the fifth image and the sixth image are images corresponding to a second test 3D image;
if a recording instruction of the target user is received, recording a second maximum offset and a second minimum offset between the center of the fifth image and the center of the sixth image according to the recording instruction;
and determining the reference offset interval according to the second maximum offset and the second minimum offset.
In one possible design, the periodically adjusting the target offset between the first image and the second image in the reference offset interval includes:
periodically adjusting the target offset amount in the reference offset interval by the following formula:
Δd=f(t);
where Δd is the target offset, f(t) is a periodic continuous function, and t is a time variable.
In one possible design, the method further includes:
determining a first offset between the position of the first image after adjustment and the position of the first image before adjustment;
determining a second offset between the position of the second image after adjustment and the position of the second image before adjustment;
determining a sum of the first offset and the second offset as the target offset.
A second aspect of the present application provides a terminal device, including:
a first determining unit, configured to determine a first image and a second image corresponding to a target image, where the target image is a 2D image to be subjected to 3D display;
a second determining unit, configured to determine a reference offset interval corresponding to a target user who watches the 2D image after 3D display at a preset distance;
an adjusting unit, configured to periodically adjust a target offset amount between the first image and the second image in the reference offset interval;
and a generating unit, configured to interleave the first image and the second image after the target offset amount is adjusted, so as to perform 3D display on the 2D image.
In one possible design, the second determining unit is specifically configured to:
determining an initial visual parameter corresponding to the target user;
adjusting the offset between a third image and a fourth image corresponding to the first test 3D image;
recording actual visual parameters corresponding to the target user when the shifted third image and the shifted fourth image are displayed after blanking;
determining a first maximum offset and a first minimum offset between a center of the third image and a center of the fourth image when the actual visual parameters match the initial visual parameters;
and determining the reference offset interval according to the first maximum offset and the first minimum offset.
In one possible design, the second determining unit is further specifically configured to:
shifting at least one image of a fifth image and a sixth image according to the operation instruction of the target user, wherein the fifth image and the sixth image are images corresponding to a second test 3D image;
if a recording instruction of the target user is received, recording a second maximum offset and a second minimum offset between the center of the fifth image and the center of the sixth image according to the recording instruction;
and determining the reference offset interval according to the second maximum offset and the second minimum offset.
In one possible design, the adjusting unit is specifically configured to:
periodically adjusting the target offset amount in the reference offset interval by the following formula:
Δd=f(t);
where Δd is the target offset, f(t) is a periodic continuous function, and t is a time variable.
In one possible design, the first determination unit is further configured to:
determining a first offset between the position of the first image after adjustment and the position of the first image before adjustment;
determining a second offset between the position of the second image after adjustment and the position of the second image before adjustment;
determining a sum of the first offset and the second offset as the target offset.
A third aspect of the application provides a computer device comprising at least one processor, a memory, and a transceiver that are connected to one another, wherein the memory is configured to store program code, and the processor is configured to invoke the program code in the memory to perform the steps of the 3D display method for a 2D image according to the first aspect.
A fourth aspect of the present application provides a computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the steps of the method for 3D display of 2D images according to any of the above aspects.
In summary, it can be seen that, in the embodiments provided by the application, compared with the related art, the target offset between the first image and the second image can be periodically adjusted so that the adjusted target offset is always located within the reference offset interval. This ensures that both eyes of a user receive a view when watching a 2D image converted into a 3D image, so that a clear 3D image can be observed.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The terms "first," "second," and the like in the description, in the claims of the present application, and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, such that the embodiments described herein may be practiced in orders other than those specifically illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus. The division of modules presented herein is merely a logical division and may be implemented in another manner in a practical application; for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted or not implemented. The mutual couplings, direct couplings, or communicative connections shown or discussed may be implemented through some interfaces, and indirect couplings or communicative connections between modules may be electrical or in other similar forms, which are not limited in this application. The modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, and may be distributed among a plurality of circuit modules; some or all of the modules may be selected according to actual needs to achieve the purpose of the present disclosure.
The 3D display method, the device and the storage medium of the 2D image can be applied to visual training, conventional 3D video processing and personalized 3D display schemes.
Referring to fig. 1, fig. 1 is a schematic flowchart of a 3D display method for a 2D image provided in an embodiment of the present application, and the method includes:
101. and determining a first image and a second image corresponding to the target image.
In this embodiment, the terminal device may first obtain a target image, where the target image is a 2D image to be subjected to 3D display. The target image may be an independent 2D image, or a 2D image of each frame in a video stream, which is not particularly limited. After obtaining the target image, the terminal device may determine a first image and a second image corresponding to the target image, that is, copy the 2D image to obtain the first image and the second image.
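Step 101 amounts to duplicating the 2D target image into two independent copies. A minimal sketch, assuming NumPy arrays stand in for image frames (the function name is illustrative, not from the embodiment):

```python
import numpy as np

def make_stereo_pair(target_image):
    """Duplicate a 2D target image into a first (left-eye) image and a
    second (right-eye) image, as in step 101. Independent copies are
    returned so that later shifts of one image do not affect the other."""
    first_image = target_image.copy()
    second_image = target_image.copy()
    return first_image, second_image

# Usage: a 4x4 grayscale frame as the target image
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
first, second = make_stereo_pair(frame)
```

The copies matter: the later shifting and interleaving steps modify each image's position independently.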
102. And determining a reference offset section corresponding to a target user who watches the 2D image for 3D display at a preset distance.
In this embodiment, the terminal device may determine a reference offset interval corresponding to a target user viewing the 2D image after 3D display at a preset distance (for example, the distance between the eyes of the target user and the display screen of the terminal device is 40 cm; of course, other distances may also be used, and no specific limitation is made). That is, different users have different reference offset intervals. Two methods for determining the reference offset interval corresponding to the target user are described in detail below.
First, the step in which the terminal device determines a reference offset interval corresponding to a target user who watches the 2D image after 3D display at a preset distance includes the following:
determining an initial visual parameter corresponding to the target user;
adjusting the offset between a third image and a fourth image corresponding to the first test 3D image;
recording actual visual parameters corresponding to the target user when the shifted third image and the shifted fourth image are displayed after blanking;
determining a first maximum offset and a first minimum offset between the center of the third image and the center of the fourth image when the actual visual parameters are matched with the initial visual parameters;
and determining the reference offset interval according to the first maximum offset and the first minimum offset.
In this embodiment, the terminal device may first determine an initial visual parameter corresponding to the target user, where the initial visual parameters are the initial visual parameters of the left eye and the right eye of the target user in a relaxed state (for example, they may be obtained by an eye tracker, or by other methods, which is not particularly limited). The initial visual parameter of the left eye is (x_L0, y_L0, r_L0), the initial visual parameter of the right eye is (x_R0, y_R0, r_R0), and the binocular horizontal pupil-center disparity is distance_x_eyes, where distance_x_eyes = |x_L0 − x_R0|, x is the horizontal coordinate of the pupil, y is the vertical coordinate of the pupil, and r is the pupil radius. Here, the third image is the image corresponding to the left eye, and the fourth image is the image corresponding to the right eye.
Then, the offset between the third image and the fourth image is adjusted. It is understood that adjusting the offset here may mean fixing the third image and shifting the fourth image in a direction away from the third image; of course, the fourth image may instead be fixed and the third image shifted in a direction away from the fourth image, or the third image and the fourth image may be shifted simultaneously in directions away from each other, which is not specifically limited. For convenience of description, the third image is fixed and the fourth image is shifted in a direction away from the third image to calculate the reference offset interval:
Shift the fourth image in the direction away from the third image and record the offset variation distance_x. Blank the fourth image and the third image, then present them again, and record the actual visual parameters (x_L1, y_L1, r_L1) and (x_R1, y_R1, r_R1) corresponding to the target user after presentation. Then determine whether the actual visual parameters match the initial visual parameters (one method is to determine whether the offset variation distance_x is in direct proportion to distance_x_eyes; if so, they match. Other methods may also be used, for example determining whether the difference between the actual visual parameters and the initial visual parameters is smaller than a preset value; if so, they match. No specific limitation is made). If they match, the offset between the center of the third image and the center of the shifted fourth image is determined as one extreme value of the reference offset interval. If not, the fourth image continues to be shifted in the direction away from the third image on the basis of the current offset variation (the distance of each shift may be a preset distance), and the above steps are repeated until the actual visual parameters corresponding to the target user match the initial visual parameters when the fourth image and the third image are displayed again after blanking; at this time, the offset between the center of the fourth image and the center of the third image is one extreme value of the reference offset interval. The above steps are then repeated with the third image fixed and the fourth image shifted in the direction toward the third image to determine the other extreme value of the reference offset interval.
Referring to fig. 2, a manner of determining the reference offset interval is described. Fig. 2 is a schematic view of an embodiment of the 3D display method for a 2D image according to an embodiment of the present application, in which the third image is fixed and the fourth image is moved away from the third image to adjust the offset between them. Here, 201 is the third image and 202 is the fourth image. The fourth image 202 is offset in the direction away from the third image 201 (the direction indicated by arrow 203 in fig. 2) by a preset distance. Then the offset between the center 202A of the fourth image 202 and the center 201A of the third image 201 is recorded, the fourth image 202 and the third image 201 are blanked and presented again, the actual visual parameters corresponding to the target user at re-presentation are recorded, and it is determined whether the actual visual parameters match the initial visual parameters of the target user in the relaxed state. If so, the offset between the center 202A of the shifted fourth image 202 and the center 201A of the third image 201 is determined as one extreme value of the reference offset interval. If not, the fourth image 202 is shifted again in the direction away from the third image 201 on the basis of its current position, by the preset distance (the distance of each shift may be the same preset distance, or the distances may differ), and the above steps are repeated until the actual visual parameters corresponding to the target user match the initial visual parameters when the fourth image 202 and the third image 201 are displayed again after blanking. At this time, the offset between the center 202A of the fourth image 202 and the center 201A of the third image 201 is one extreme value of the reference offset interval.
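The calibration loop above can be sketched as follows. Here `measure_visual_params` is a hypothetical callback standing in for the eye-tracker measurement (blank, re-present the pair at the given offset, return the measured pupil parameters for one eye), and the step size and matching tolerance are assumed values; the proportionality test mentioned in the embodiment could be substituted for the tolerance test:

```python
def calibrate_offset_extreme(measure_visual_params, initial_params,
                             step=1.0, tol=0.5, max_iters=200):
    """Shift the fourth image away from the fixed third image in preset
    steps until the user's measured visual parameters match the initial
    (relaxed-state) parameters; the matching offset is one extreme of
    the reference offset interval."""
    offset = 0.0
    for _ in range(max_iters):
        offset += step  # shift by the preset distance
        actual = measure_visual_params(offset)
        # Match test: every measured parameter within a preset tolerance
        # of the corresponding initial parameter.
        if all(abs(a - i) < tol for a, i in zip(actual, initial_params)):
            return offset
    raise RuntimeError("no matching offset found within max_iters")
```

Running the same loop with the shift direction reversed (toward the third image) would yield the other extreme of the interval.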
Second, the step in which the terminal device determines a reference offset interval corresponding to a target user who watches the 2D image after 3D display at a preset distance includes the following:
shifting at least one of a fifth image and a sixth image according to an operation instruction of a target user, wherein the fifth image and the sixth image are images corresponding to the second test 3D image;
if a recording instruction of the target user is received, recording a second maximum offset and a second minimum offset between the center of the fifth image and the center of the sixth image according to the recording instruction;
and determining the reference offset interval according to the second maximum offset and the second minimum offset.
In this embodiment, the terminal device may display the second test 3D image and send a prompt message, where the prompt message is used to prompt the target user to shift at least one of the fifth image and the sixth image. The target user may then operate on the second test 3D image, and the terminal device may receive an operation instruction of the target user and shift at least one of the fifth image and the sixth image according to the operation instruction (the shift may keep the sixth image fixed and shift the fifth image in a direction away from the sixth image; of course, the fifth image may instead be kept fixed and the sixth image shifted in a direction away from the fifth image, or the fifth image and the sixth image may be shifted simultaneously in directions away from each other, which is not specifically limited) until the target user can no longer perceive the fifth image and the sixth image as forming a 3D image in the brain and sends a recording instruction. The terminal device records the offset between the center of the fifth image and the center of the sixth image according to the recording instruction of the target user; this offset is one extreme value of the reference offset interval. The above steps are then repeated with the fifth image fixed and the sixth image shifted in the direction toward the fifth image to determine the other extreme value of the reference offset interval.
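The manual method reduces to recording the current center-to-center offset each time a recording instruction arrives. A minimal sketch, where the `(action, offset)` event stream is a hypothetical abstraction of the user's operation and recording instructions:

```python
def record_reference_interval(events):
    """Collect the center-to-center offset stored at each 'record'
    action and return (second_minimum_offset, second_maximum_offset),
    the two extremes bounding the reference offset interval."""
    recorded = [offset for action, offset in events if action == "record"]
    if len(recorded) < 2:
        raise ValueError("both extremes must be recorded")
    return min(recorded), max(recorded)
```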
It should be noted that step 101 determines the first image and the second image corresponding to the target image, and step 102 determines the reference offset interval corresponding to the target user who views the 2D image after 3D display at the preset distance; however, there is no limitation on the execution order of the two steps. Step 101 may be executed first, step 102 may be executed first, or the two may be executed simultaneously, which is not specifically limited.
103. And periodically adjusting the target offset between the first image and the second image in a reference offset interval.
In this embodiment, after determining the reference offset interval and the first image and the second image corresponding to the target image, the terminal device may first determine a target offset amount between the first image and the second image, and periodically adjust the target offset amount between the first image and the second image in the reference offset interval.
It can be understood that the terminal device may periodically adjust the target offset amount in the reference offset interval by the following formula:
Δd=f(t);
where Δd is the target offset, f(t) is a periodic continuous function (for example, a cosine function), and t is a time variable. That is, as the periodic continuous function varies, the value of Δd varies, and with it the direction and frequency of the change in the positions of the first image and the second image.
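One concrete choice of the periodic continuous function f(t) — a sketch, not the only function the embodiment permits — is a cosine mapped into the reference offset interval [d_min, d_max]; the period is an assumed parameter:

```python
import math

def target_offset(t, d_min, d_max, period=2.0):
    """Δd = f(t): a cosine oscillating continuously between the two
    extremes of the reference offset interval, so that the adjusted
    target offset always stays within [d_min, d_max]."""
    phase = math.cos(2 * math.pi * t / period)   # in [-1, 1]
    return d_min + (d_max - d_min) * (phase + 1) / 2
```

At t = 0 the offset sits at d_max, at t = period/2 it reaches d_min, and at t = period it returns to d_max, satisfying the requirement that the offset is periodic and never leaves the interval.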
It should be noted that the terminal device may determine the target offset by:
determining a first offset between the position of the adjusted first image and the position of the first image before adjustment;
determining a second offset between the position of the adjusted second image and the position of the second image before adjustment;
the sum of the first offset amount and the second offset amount is determined as a target offset amount.
In this embodiment, when detecting the target offset between the first image and the second image after forming the 3D display image in real time, the terminal device may determine a first offset between the position of the first image after adjustment and its position before adjustment, determine a second offset between the position of the second image after adjustment and its position before adjustment, and determine the sum of the first offset and the second offset as the target offset. Here, the first image and the second image are at their initial positions before adjustment, and at their actual positions when the 3D image is formed after processing.
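A worked sketch of the target-offset computation. Positions are taken as horizontal center coordinates; using the absolute value of each image's shift is an assumption of this illustration, covering the case where the two images move in opposite directions:

```python
def compute_target_offset(first_before, first_after,
                          second_before, second_after):
    """Target offset = (shift of the first image from its initial
    position) + (shift of the second image from its initial position)."""
    first_offset = abs(first_after - first_before)
    second_offset = abs(second_after - second_before)
    return first_offset + second_offset

# Example: first image moved 3 units left, second image 4 units right.
total = compute_target_offset(0.0, -3.0, 0.0, 4.0)
```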
104. And interleaving the first image and the second image after the target offset is adjusted so as to display the 2D image in a 3D mode.
In this embodiment, after adjusting the target offset amount each time, the terminal device may interleave the first image and the second image after adjusting the target offset amount to generate and display a 3D image.
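The embodiment does not fix how the two images are interleaved, since that depends on the display hardware. Assuming a row-interleaved stereo display (even rows to one eye, odd rows to the other), a minimal sketch:

```python
import numpy as np

def interleave_rows(first_image, second_image):
    """Interleave the offset-adjusted pair for a row-interleaved stereo
    display: even rows come from the first image, odd rows from the
    second. A column-interleaved or frame-sequential display would
    interleave along a different axis."""
    interleaved = first_image.copy()
    interleaved[1::2] = second_image[1::2]  # replace odd rows
    return interleaved

# Usage: two 4x2 frames, one all zeros, one all ones
out = interleave_rows(np.zeros((4, 2), dtype=np.uint8),
                      np.ones((4, 2), dtype=np.uint8))
```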
In summary, it can be seen that, in the embodiments provided by the application, the terminal device may periodically adjust the target offset between the first image and the second image so that the adjusted target offset is always located within the reference offset interval. This ensures that both eyes of a user receive a view when watching a 2D image converted into a 3D image, so that the user can observe a clear 3D image.
The embodiments of the present application are described above from the perspective of a 3D display method of a 2D image, and the embodiments of the present application are described below from the perspective of a terminal device.
Referring to fig. 3, fig. 3 is a schematic view of a virtual structure of a terminal device according to an embodiment of the present application, where the terminal device 300 includes:
a first determining unit 301, configured to determine a first image and a second image corresponding to a target image, where the target image is a 2D image to be 3D displayed;
a second determining unit 302, configured to determine a reference offset interval corresponding to a target user viewing the 2D image after 3D display at a preset distance;
an adjusting unit 303, configured to periodically adjust a target offset amount between the first image and the second image in the reference offset interval;
a generating unit 304, configured to interleave the first image and the second image after the target offset amount is adjusted, so as to perform 3D display on the 2D image.
In one possible design, the second determining unit 302 is specifically configured to:
determining an initial visual parameter corresponding to the target user;
adjusting the offset between a third image and a fourth image corresponding to the first test 3D image;
recording actual visual parameters corresponding to the target user when the shifted third image and the shifted fourth image are displayed after blanking;
determining a first maximum offset and a first minimum offset between a center of the third image and a center of the fourth image when the actual visual parameters match the initial visual parameters;
and determining the reference offset interval according to the first maximum offset and the first minimum offset.
In a possible design, the second determining unit 302 is further specifically configured to:
shifting at least one image of a fifth image and a sixth image according to the operation instruction of the target user, wherein the fifth image and the sixth image are images corresponding to a second test 3D image;
if a recording instruction of the target user is received, recording a second maximum offset and a second minimum offset between the center of the fifth image and the center of the sixth image according to the recording instruction;
and determining the reference offset interval according to the second maximum offset and the second minimum offset.
In one possible design, the adjusting unit 303 is specifically configured to:
periodically adjusting the target offset amount in the reference offset interval by the following formula:
Δd=f(t);
where Δd is the target offset, f(t) is a periodic continuous function, and t is a time variable.
In one possible design, the first determining unit 301 is further configured to:
determining a first offset between the position of the adjusted first image and the position of the first image before adjustment;
determining a second offset between the position of the adjusted second image and the position of the second image before the adjustment;
the sum of the first offset amount and the second offset amount is determined as a target offset amount.
Next, another terminal device provided in the embodiment of the present application is introduced, please refer to fig. 4, where fig. 4 is a schematic diagram of a hardware structure of the terminal device provided in the embodiment of the present application, and the terminal device 400 includes:
a receiver 401, a transmitter 402, a processor 403, and a memory 404 (the number of processors 403 in the terminal device 400 may be one or more; one processor is taken as an example in fig. 4). In some embodiments of the present application, the receiver 401, the transmitter 402, the processor 403, and the memory 404 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 4.
Memory 404 may include both read-only memory and random-access memory and provides instructions and data to processor 403. A portion of the memory 404 may also include NVRAM. The memory 404 stores an operating system and operating instructions, executable modules or data structures, or a subset or an expanded set thereof, wherein the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various basic services and for handling hardware-based tasks.
Processor 403 controls the operation of the terminal device, and processor 403 may also be referred to as a CPU. In a specific application, the various components of the terminal device are coupled together by a bus system, wherein the bus system may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, the various buses are referred to in the figures as a bus system.
The 3D display method of the 2D image disclosed in the embodiments of the present application may be applied to the processor 403, or implemented by the processor 403. The processor 403 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method shown in fig. 1 may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 403. The processor 403 may be a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory 404, and the processor 403 reads the information in the memory 404 and completes the steps of the method in combination with its hardware.
The embodiment of the present application further provides a computer-readable medium including computer-executable instructions that cause a computer to execute the 3D display method for a 2D image described in the foregoing embodiments; the implementation principle and technical effects are similar and are not described herein again.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by special-purpose hardware including special-purpose integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and specific hardware structures for implementing the same functions may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, the implementation of a software program is more preferable. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications and substitutions do not depart from the essence of the corresponding technical solutions.