CN115225884A - Interactive reproduction method, system, device and medium for image and sound - Google Patents
- Publication number: CN115225884A (application CN202211059816.2A)
- Authority: CN (China)
- Prior art keywords
- information
- sound
- target object
- image
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N13/398 — Stereoscopic video systems; Multi-view video systems; Image reproducers; Synchronisation thereof; Control thereof
- H04S7/303 — Stereophonic systems; Control circuits for electronic adaptation of the sound field to listener position or orientation; Tracking of listener position or orientation
Abstract
The present invention is applicable to the field of image and sound reproduction, and provides an interactive reproduction method, system, device and medium for image and sound. The interactive reproduction method for image and sound comprises: step S100: acquiring fusion information of a target object, the fusion information comprising image information and sound information; step S200: acquiring control information of the target object, the control information comprising a plurality of control parameters; step S300: completing interactive reproduction of the image and sound of the target object according to the control information and the fusion information. By this method, interactive reproduction of the image information and sound information of the target object under different control parameters can be realized.
Description
Technical Field
The present invention relates to the field of image and sound reproduction, and more particularly, to an interactive reproduction method, system, device, and medium for image and sound.
Background
In the prior art, the places where sound and picture can be seen and heard together are display terminals and cinemas, which bring visual and auditory enjoyment to audiences.
At present, most existing sound presentation methods are mono or stereo, and stereo usually means only two audio channels played through two speakers or headphones. These methods cannot faithfully convey the spatial reality of sound to users, especially in audio-visual applications such as cinemas and games: in practice, audiences at different positions in a cinema hear the same sound effect.
In the prior art, an object may be displayed as a two-dimensional picture or in a three-dimensional (stereoscopic) manner; the three-dimensional manner usually relies on three-dimensional modelling, or on surfaces or point clouds built from data obtained by three-dimensional scanning.
However, the prior art has the following problems:
in the prior art, the fused display of sound and picture is based only on time synchronization. With time as the only basis, sound and picture are not truly displayed as a fused whole; they are in fact presented separately, neither can change in real time as the other changes, and the sound and picture actually presented by an object at the current moment cannot be faithfully displayed.
Disclosure of Invention
The present invention provides an interactive reproduction method, system, device and medium for images and sound to solve the above technical problems in the prior art, and mainly comprises the following four aspects:
a first aspect of the present application provides a method for interactive reproduction of images and sound, comprising the steps of:
step S100: acquiring fusion information of a target object, wherein the fusion information comprises image information and sound information;
step S200: acquiring control information of a target object, wherein the control information comprises various control parameters;
step S300: and finishing interactive reproduction of the image and the sound of the target object according to the control information and the fusion information.
Further, the method for acquiring the fusion information in step S100 is as follows:
step S110: acquiring image information and sound information of a target object under multiple visual angles;
step S120: and establishing a mapping relation between the image information and the sound information under different visual angles on the basis of the visual angles to form fusion information.
Further, the control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter, and a time parameter; and establishing a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter and the time parameter by taking the origin point parameter as an origin point.
Further, step S300 includes:
and acquiring the input control parameters, analyzing the fusion information corresponding to the input control parameters, and interactively reproducing the image and the sound of the target object.
Further, step S300 further includes:
and moving the origin, analyzing the fusion information corresponding to the new origin formed after the origin is moved, and interactively reproducing the image and the sound of the target object.
A second aspect of the present application provides an interactive reproduction system of images and sound, comprising the following modules:
a fusion module: used for acquiring fusion information of a target object, wherein the fusion information comprises image information and sound information;
a control module: used for acquiring control information of the target object, wherein the control information comprises various control parameters;
a reproduction module: used for completing interactive reproduction of the image and sound of the target object according to the control information and the fusion information.
Further, the fusion module is configured to:
acquiring image information and sound information of a target object under multiple visual angles;
based on the visual angle, a mapping relation between the image information and the sound information under different visual angles is established to form fusion information.
Further, the control module includes:
the control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter and a time parameter; and establishing a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter and the time parameter by taking the origin point parameter as an origin point.
A third aspect of the present application provides an electronic device comprising:
one or more processors;
a storage device;
a screen for displaying the images and sounds in the interactive reproduction method of the images and sounds as described above;
one or more application programs, wherein the one or more application programs are stored in the storage device and configured to be executed by the one or more processors, the one or more programs being configured to perform the interactive reproduction method of images and sound as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein program code that is callable by a processor to execute the above-described interactive reproduction method of images and sounds.
Compared with the prior art, the invention at least has the following technical effects:
(1) Image information and sound information of a target object under multiple view angles are acquired, and a mapping relation between the image information and the sound information under different view angles is established on the basis of the view angles to form fusion information. An input view angle is then obtained, the image information and sound information corresponding to it are determined according to the established mapping relation and are fused and reproduced, so that the three-dimensional image information and three-dimensional sound information of the target object under different view angles can be truly presented.
(2) Through the interactive reproduction method of images and sound, by controlling the various control parameters in the control information, the reproduction of actions such as rotation, movement and rolling of the image information and sound information of the target object in any direction at any time can be completed according to the different input control parameters, achieving a display of the sound information and image information of the target object as it would appear in the real world under different actions and delivering a striking visual impact.
(3) By this method, the origin can be moved, and the movement of the target object under the different direction parameters and different view angles corresponding to different origins can be reproduced, so that origins can truly be switched in real time and more image information and sound information of more target objects can be reproduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention or in the description of the prior art will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of an interactive reproduction method of images and sounds in the present invention;
FIG. 2 is a diagram illustrating a mapping relationship between sound information and image information in the present invention;
FIG. 3 is a schematic diagram of an electronic device according to the present invention;
fig. 4 is a schematic diagram of a computer-readable storage medium structure in the present invention.
Detailed Description
The following description provides many different embodiments, or examples, for implementing different features of the invention. The particular examples set forth below are intended as a brief description of the invention and are not intended as limiting the scope of the invention.
The first embodiment is as follows:
as shown in fig. 1-2, an embodiment of the present application provides an interactive reproduction method of images and sound, including the following steps:
step S100: acquiring fusion information of a target object, wherein the fusion information comprises image information and sound information;
step S200: acquiring control information of a target object, wherein the control information comprises various control parameters;
step S300: and finishing interactive reproduction of the image and the sound of the target object according to the control information and the fusion information.
Further, the method for acquiring the fusion information in step S100 is as follows:
step S110: acquiring image information and sound information of a target object under multiple visual angles;
step S120: and establishing a mapping relation between the image information and the sound information under different visual angles on the basis of the visual angles to form fusion information.
In the above scheme, image information and sound information of the target object under different viewing angles must first be acquired; they may be acquired simultaneously or separately. The image information may be acquired in the following manner:
before the acquisition device captures the target object, in order to reduce the influence of redundant interference information during capture, the view-angle boundaries of the target object under different view angles need to be calculated. First, the image presented by the target object under view angle 1 is acquired; then the central position of the target object in that image is calculated, and the corresponding maximum boundary radius centred on that position is determined. The edge of the circle (or ellipse) formed by the maximum boundary radius serves as the view-angle boundary, and a boundary formed in this way completely contains the target object. Further, the maximum boundary radii of the target object under each view angle are compared, and the longest one is selected as the maximum boundary radius of the target, so that the spherical view-angle boundary it forms completely wraps the target object; image information of the target object under each view angle is then acquired on this basis. Of course, the edge of an ellipse (or ellipsoid) formed by combining the maximum boundary radii of any two or more view angles can also serve as the view-angle boundary, within which image information of the target object under each view angle is acquired. The shapes the target object presents at different view angles may be entirely the same, partially the same, or entirely different.
By adopting the image acquisition method, the noise information of the target object outside the visual angle boundary under the corresponding visual angle can be removed in the acquisition of the image information of the target object, so that only the required information is acquired, the influence of other information on the subsequent three-dimensional reproduction is avoided, and the information content of the image is also reduced.
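The boundary computation described above is left abstract in the patent. A minimal sketch, under the assumption (not stated in the source) that the target object is represented as a set of 2-D points in the image plane, could look like:

```python
import math

def view_angle_boundary(points):
    """For the object's 2-D points seen from one view angle, return the
    central position and the maximum boundary radius, i.e. the radius of
    the circle centred on the object's centre that contains all points."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radius = max(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), radius

def target_boundary_radius(per_view_points):
    """Compare the maximum boundary radii over all view angles and keep
    the longest, giving a spherical boundary that fully wraps the object."""
    return max(view_angle_boundary(pts)[1] for pts in per_view_points)
```

For a square object with corners (0,0), (2,0), (2,2), (0,2), the centre is (1,1) and the maximum boundary radius is the half-diagonal.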
It should be noted that the capturing device in the present application may be a camera, a virtual camera, etc., as long as it can capture an image of a target object, and is not limited specifically herein.
Further, image information of the target object under multiple view angles is acquired within the view-angle boundary. The image information comprises initial image information and/or fused image information. The initial image is acquired directly by the acquisition device without any intermediate processing. To obtain the fused image information, the acquisition distance between the acquisition device and the target object under the corresponding view angle is acquired; the acquisition distance is divided into a plurality of preset acquisition distances according to a preset number, and the acquisition device acquires cut image information along the preset acquisition distances; the multiple pieces of cut image information under the same view angle are then fused to form the fused image information.
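The division of the acquisition distance can be sketched as follows; note that the patent does not specify the fusion operation, so the fusion step here is only a hypothetical placeholder that merges per-slice pixel dictionaries, with later slices overriding earlier ones:

```python
def preset_distances(total_distance, n):
    """Divide the acquisition distance into n preset acquisition distances."""
    step = total_distance / n
    return [step * (i + 1) for i in range(n)]

def fuse_cut_images(cut_images):
    """Hypothetical fusion of cut images taken at the preset distances:
    each cut image is a dict of pixel -> value; nearer slices override."""
    fused = {}
    for img in cut_images:
        fused.update(img)
    return fused
```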
The sound information acquisition method of the target object comprises the following steps:
it should be noted that, in the present application, the sound recording device for collecting sound may be a sound recording device for recording only sound, or may be an image pickup device for collecting sound and image at the same time, which is not limited herein.
Sound information of the target object under multiple view angles is acquired; for example, if the view angles to be acquired for the target object comprise view angles 1, 2, 3 and 4, sound information is collected under each of these view angles.
It should be noted that the number of pieces of sound information collected at the same view angle is not less than the number of pieces of image information, because it may be necessary to collect sound information at different positions between the audience and the sound source (the performance area) within the image information of the next view angle.
After the acquisition of the image information and sound information of the target object under multiple view angles is completed, a mapping relation among view angle, image information and sound information needs to be established according to the relations among them. In the present application the view angle serves as the basis: that is, the mapping relation between image information and sound information is established with the view angle as its origin. For example, taking view angle 1 as the basis, the image information and sound information acquired for the target object under view angle 1 are obtained, and mapping relations between the image information and sound information of the target object are established for each of the different view angles respectively. Through the established mapping relation, when any one of view angle, image information and sound information is input, the other two items of information corresponding to the input can be quickly found and determined. For example, once an input view angle is obtained, the image information and sound information corresponding to it can be quickly determined according to the mapping relation among view angle, sound information and image information, and the sound and image information under that view angle is reproduced.
Optionally, the mapping relation among view angle, sound information and image information may be established in three ways: 1) establishment during collection — when image information and sound information are collected for the target object, they can be collected per view angle, with the sound information and picture information collected at the same view angle, so that the collection process itself establishes the mapping relation; 2) later assignment — corresponding image information and sound information can be assigned under the different view angles afterwards; 3) establishment after calculation — the image information and sound information under a certain view angle at a certain moment are calculated from the currently obtained image information and sound information, and the mapping relation between them is then established.
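The view-angle-keyed mapping can be sketched minimally as follows, under the assumption that image and sound information are opaque values and view angles are hashable keys (the class and method names are illustrative, not from the source):

```python
class FusionStore:
    """Mapping among view angle, image information and sound information,
    keyed by view angle: given any one of the three, the other two can be
    looked up."""

    def __init__(self):
        self._by_view = {}

    def record(self, view, image_info, sound_info):
        # Recording per view angle is itself the establishment of the mapping.
        self._by_view[view] = (image_info, sound_info)

    def by_view(self, view):
        return self._by_view[view]

    def view_for_image(self, image_info):
        for view, (img, _) in self._by_view.items():
            if img == image_info:
                return view
        raise KeyError(image_info)

    def view_for_sound(self, sound_info):
        for view, (_, snd) in self._by_view.items():
            if snd == sound_info:
                return view
        raise KeyError(sound_info)
```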
The method in the present application for reproducing the image information and sound information corresponding to an input view angle of the target object is as follows:
the method comprises the steps of respectively obtaining image information and sound information under different visual angles, respectively establishing attribute numbers, storing the image information and the sound information into preset positions of a storage unit according to the attribute numbers, setting a reproduction serial number by a reproduction unit, and finishing three-dimensional reproduction of a target object according to the preset positions and the reproduction serial number.
In the above solution, after the image information and the sound information of the target object are acquired from different viewing angles, the fusion and reproduction of the sound and the image of the target object can be completed according to the acquired image information and sound information, and there are two reproduction modes:
firstly, after the image information of the target object is obtained, the display end can reproduce directly from the obtained image and sound information. This can be understood as fusing and reproducing at the same time as acquiring, so that the image and sound of the three-dimensional object at different moments can be reproduced in real time. No other link is passed through in between, the obtained image information and sound information are not lost, and real information is reproduced directly, realizing true reproduction of the image and sound of the target object.
Secondly, after the image information and sound information of the target object are acquired, an attribute number is first established according to the image information and sound information acquired under the different view angles. The attribute number can be established from the view-angle information, azimuth information (such as longitude and latitude) and time information in the image and sound information; for example, the image information under view angle 1 may be numbered 1-45-3, 001, abc, 1_1_0, 1\1\0 and so on. The rule for establishing the attribute number is not limited, as long as it can represent the image information and sound information under the current view angle at the current moment. The image information and sound information with their different attribute numbers are then stored into preset positions of a storage device, a mapping relation being formed between the attribute numbers and the preset positions, which makes storage convenient and facilitates subsequent calls. Finally, when reproduction is performed, reproduction serial numbers are set for the different view angles of the target object, and the image information and sound information stored at the preset positions are called to the corresponding reproduction positions indicated by the reproduction serial numbers; a mapping relation is formed between the preset positions and the reproduction serial numbers, enabling quick calls, and the fused reproduction of the sound and image of the target object is completed.
It should be noted that the storage device may be an acquisition device, or may be a backend server connected to the acquisition device, and the like, which is not limited herein.
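One possible realization of the attribute-number storage scheme is sketched below; the numbering rule and slot layout are illustrative assumptions, since the patent explicitly leaves the numbering rule open:

```python
def attribute_number(view, azimuth, t):
    """One possible numbering rule (view-azimuth-time, e.g. '1-45-3');
    any rule works as long as it identifies the view angle and moment."""
    return f"{view}-{azimuth}-{t}"

class Storage:
    """Attribute number -> preset position -> stored (image, sound) pair,
    mirroring the two mapping relations described in the text."""

    def __init__(self):
        self._slots = {}   # attribute number -> preset position (slot index)
        self._data = {}    # preset position -> (image_info, sound_info)

    def store(self, attr_no, image_info, sound_info):
        slot = len(self._data)
        self._slots[attr_no] = slot
        self._data[slot] = (image_info, sound_info)
        return slot

    def recall(self, attr_no):
        # Called when a reproduction serial number maps to this attribute.
        return self._data[self._slots[attr_no]]
```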
Preferably, the image information includes: target object information, perspective information, orientation information (e.g., latitude and longitude), and time information.
The sound information includes: audio track, perspective information, orientation information (e.g., latitude and longitude), time information.
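The fields listed for the image and sound information can be sketched as plain records; the field names and types below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ImageInfo:
    target_object: str   # target object information
    view_angle: int      # perspective information
    azimuth: tuple       # orientation information, e.g. (latitude, longitude)
    time: float          # time information

@dataclass
class SoundInfo:
    track: str           # audio track
    view_angle: int
    azimuth: tuple
    time: float
```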
Therefore, with the image and sound fusion reproduction method provided by the present application, image information and sound information of the target object under multiple view angles are acquired, and a mapping relation between the image information and sound information under different view angles is established on the basis of the view angles to form fusion information. An input view angle is obtained, the image information and sound information corresponding to it are determined according to the established mapping relation and are fused and reproduced, so that the three-dimensional image information and three-dimensional sound information of the target object under different view angles can be truly presented.
Further, comprising:
acquiring input sound information, analyzing a visual angle corresponding to the input sound information, and reproducing image information of the visual angle corresponding to the sound information;
acquiring input image information, analyzing a visual angle corresponding to the image information, and reproducing sound information of the visual angle corresponding to the image information.
Further, the sound information includes at least one track:
reproducing image information corresponding to a track when the sound information includes the track;
when the sound information includes a plurality of tracks, an input track is acquired, image information corresponding to the input track is analyzed, and image information corresponding to the input track is reproduced.
By this method, the image information and sound information of the target object under different view angles can be reproduced at any time. Further, the sound information of the target object can change along with changes in the image information during reproduction, for example with view changes such as picture magnification and reduction, distance, rotation and translation in the image information. Meanwhile, the image information of the target object can also change along with the sound information, for example displaying corresponding picture information as the track in the sound information is switched. The image information and sound information of the target object under different view angles can thus be truly reproduced, real fusion between the sound information and the image information is realized, and the target object can be truly displayed.
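The track-driven selection of image information described in the two cases above can be sketched as follows (the dict-based representation of sound information and the track-to-image mapping are assumptions):

```python
def reproduce_for_track(sound_info, track_to_image, requested_track=None):
    """When the sound information carries a single track, reproduce the image
    information corresponding to that track; with several tracks, reproduce
    the image information mapped to the requested (input) track."""
    tracks = sound_info["tracks"]
    if len(tracks) == 1:
        return track_to_image[tracks[0]]
    if requested_track not in tracks:
        raise ValueError("unknown track")
    return track_to_image[requested_track]
```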
After the fusion information of the target object is acquired by the above method, the control information of the target object is acquired, and the interactive reproduction of the image and sound in the fusion information is controlled based on the input control information, wherein
The control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter and a time parameter; and establishing a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter and the time parameter by taking the origin point parameter as an origin point.
The origin parameter is the fixed point about which the target object rotates or moves. The first direction parameter can be understood as the x direction in an x-y-z coordinate system, along which the fusion information of the target object can move, rotate and so on; the second direction parameter can be understood as the y direction, along which the fusion information can likewise move and rotate; and the time parameter can be understood as the z direction, so that the target object can move and rotate at any moment. For example, in the video playback of a concert, playback can be paused at the current moment, and the fused image-and-sound information presented can then be rotated in different directions and moved, realizing interactive reproduction of the image and sound of the target object in an arbitrary direction.
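The rotation of the fused scene about the origin parameter, in the plane spanned by the first and second direction parameters, can be sketched with a standard 2-D rotation; the patent does not prescribe a particular transform, so this is only one consistent choice:

```python
import math

def rotate_about_origin(point, origin, angle_rad):
    """Rotate a point of the fused scene about the origin parameter in the
    x-y plane (first/second direction parameters); the time parameter is a
    third axis along which the playback position moves independently."""
    px, py = point[0] - origin[0], point[1] - origin[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (origin[0] + c * px - s * py,
            origin[1] + s * px + c * py)
```

Rotating the point (1, 0) about the origin (0, 0) by 90 degrees carries it to (0, 1), up to floating-point error.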
Through the interactive reproduction method of images and sound, by controlling the various control parameters in the control information, the reproduction of actions such as rotation, movement and rolling of the image information and sound information of the target object in any direction at any time can be completed according to the different input control parameters, achieving a display of the sound information and image information of the target object as it would appear in the real world under different actions and delivering a striking visual impact.
Further, step S300 includes:
and acquiring the input control parameters, analyzing the fusion information corresponding to the input control parameters, and interactively reproducing the image and the sound of the target object.
Further, step S300 further includes:
and moving the origin, analyzing the fusion information corresponding to the new origin formed after the origin is moved, and interactively reproducing the image and the sound of the target object.
In the above scheme, after the fusion of the image information and sound information of the target object is completed, interactive reproduction can begin and the input control parameters are acquired. If the input control parameters are the first direction parameter and the time parameter, the sound and image information of the target object can be controlled to move in the first direction at the current moment — for example, turned over by 360 degrees — truly reproducing the movement of the target object in the first direction in an all-around manner at the current moment.
In addition, with the origin parameter as the origin, a coordinate system of the view-angle parameter, the first direction parameter and/or the second direction parameter and the time parameter is established, and the origin itself can move. For example, at the current moment the origin can be kept unchanged while the rotation and translation of the sound information and image information of the target object along the first direction parameter, the second direction parameter, or both, are reproduced. Alternatively, by moving the position of the origin at the current moment, the sound information and image information at the new origin formed after the move can be reproduced, as can their rotation and translation along the first direction parameter, the second direction parameter, or both simultaneously.
By this method, the origin can be moved, and the movement of the target object under the different direction parameters and different view angles corresponding to different origins can be reproduced, so that origins can truly be switched in real time and more image information and sound information of more target objects can be reproduced.
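Switching the origin can be sketched as re-expressing scene coordinates relative to the new origin; the 2-D point representation of the fused scene is a simplifying assumption:

```python
def reanchor(points, new_origin):
    """Express the fused scene's coordinates relative to a new origin, e.g.
    switching the pivot of reproduction from one object to another."""
    ox, oy = new_origin
    return [(x - ox, y - oy) for x, y in points]
```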
For a better understanding of the embodiments of the present invention, the following description is given in conjunction with specific cases:
when the target object is a concert, the image information and the sound information of the concert at different visual angles are first acquired. A mapping relation is then established, on the basis of the visual angles, between the image information and the sound information of the concert at each visual angle, so that when visual angle 1 is selected for reproduction, the image information and the sound information at visual angle 1 can be reproduced immediately. Control information is then acquired, such as pausing video playback, and the sound and image at visual angle 1 are reproduced accordingly; the concert scenes at different visual angles can likewise be reproduced. The origin can also be moved: if the origin is the conductor, the concert scenes at different visual angles are reproduced around the conductor; if the origin is changed to the grand piano, they are reproduced around the grand piano. In this way the image information and the sound information of the real target object are reproduced.
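The visual-angle mapping in the concert case can be illustrated with a minimal sketch. The patent does not specify file formats or lookup structures, so all names here (`fusion_info`, `reproduce`, the file names) are assumptions:

```python
# Fusion information: each visual angle maps to its (image, sound) pair,
# so selecting a visual angle retrieves both streams together.
fusion_info = {
    "view_1": {"image": "stage_front.mp4", "sound": "stage_front.wav"},
    "view_2": {"image": "conductor.mp4",   "sound": "conductor.wav"},
}

def reproduce(visual_angle: str, control: str = "play") -> dict:
    # Resolve the fusion information bound to a visual angle and apply a
    # control command such as "play" or "pause".
    entry = fusion_info[visual_angle]
    return {"image": entry["image"], "sound": entry["sound"], "state": control}

# Pausing playback at visual angle 1 still resolves both the image and
# the sound bound to that angle, since they share one mapping entry.
result = reproduce("view_1", control="pause")
```

The point of the shared entry is that image and sound can never drift apart across visual-angle switches: one lookup always yields the matched pair.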
Example two:
the second embodiment of the present application provides an interactive reproduction system for images and sound, which includes the following modules:
a fusion module: used for acquiring fusion information of the target object, wherein the fusion information comprises image information and sound information;
a control module: used for acquiring control information of the target object, wherein the control information comprises a plurality of control parameters;
a reproduction module: used for completing the interactive reproduction of the image and the sound of the target object according to the control information and the fusion information.
Further, the fusion module is configured to:
acquiring image information and sound information of a target object under multiple visual angles;
and establishing a mapping relation between the image information and the sound information under different visual angles on the basis of the visual angles to form fusion information.
Further, in the control module, the control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter, and a time parameter; and a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter, and the time parameter is established with the origin parameter as the origin.
Further, the reproduction module is configured to:
and acquiring the input control parameters, analyzing the fusion information corresponding to the input control parameters, and interactively reproducing the images and the sound of the target object.
Further, the reproduction module is further configured to:
and moving the origin, analyzing the fusion information corresponding to the new origin formed after the origin is moved, and interactively reproducing the image and the sound of the target object.
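The three-module split above (fusion, control, reproduction) might be sketched as follows. The class and method names are illustrative, not taken from the patent, which leaves the module interfaces unspecified:

```python
class FusionModule:
    # Builds fusion information: a mapping from each visual angle to the
    # (image, sound) pair captured at that angle.
    def acquire(self, captures):
        # captures: iterable of (visual_angle, image, sound) triples
        return {view: (image, sound) for view, image, sound in captures}

class ReproductionModule:
    # Resolves fusion information against the current origin and
    # completes the interactive reproduction.
    def __init__(self, fusion_info, origin=(0.0, 0.0, 0.0)):
        self.fusion_info = fusion_info
        self.origin = origin

    def move_origin(self, new_origin):
        # A new origin defines a new coordinate frame; subsequent
        # reproduction resolves fusion information relative to it.
        self.origin = tuple(new_origin)

    def reproduce(self, visual_angle):
        image, sound = self.fusion_info[visual_angle]
        return {"origin": self.origin, "image": image, "sound": sound}
```

A control module would sit between the two, translating user input (direction, time, origin moves) into calls on the reproduction module.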
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described system and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module.
Example three:
an embodiment of the present application provides an electronic device, including:
one or more processors;
a memory;
a screen for displaying the images and sounds in the interactive reproduction method of the images and sounds as described above;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of interactive reproduction of images and sound as described above.
Referring to fig. 3, fig. 3 is a block diagram of an electronic device 1100 according to a third embodiment of the present disclosure. The electronic device 1100 in the present application may include one or more of the following components: memory 1110, processor 1120, screen 1130, and one or more applications, wherein the one or more applications may be stored in memory 1110 and configured to be executed by the one or more processors 1120, the one or more programs configured to perform the methods as described in the aforementioned method embodiments.
The memory 1110 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1110 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1110 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a histogram equalization function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created during use by the electronic device 1100 (such as image matrix data).
The processor 1120 may include one or more processing cores. The processor 1120 connects the various parts of the electronic device 1100 using various interfaces and lines, and performs the various functions of the electronic device 1100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1110 and by calling the data stored in the memory 1110. Optionally, the processor 1120 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1120 may integrate one or a combination of a Central Processing Unit (CPU), a modem, and the like. The CPU mainly handles the operating system, application programs, and the like; the modem handles wireless communication. It is to be understood that the modem may also not be integrated into the processor 1120 and may instead be implemented by a separate communication chip.
Example four:
a fourth embodiment of the present application provides a computer-readable storage medium in which program code is stored, the program code being callable by a processor to execute the above-mentioned interactive reproduction method for images and sound.
Referring to fig. 4, fig. 4 is a block diagram illustrating a computer-readable storage medium according to a fourth embodiment of the present disclosure. The computer readable storage medium 1200 has stored therein a program code 1210, said program code 1210 being invokable by a processor for performing the method described in the above method embodiments.
The computer-readable storage medium 1200 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), a hard disk, or a ROM. Optionally, the computer-readable storage medium 1200 comprises a non-volatile computer-readable storage medium. The computer-readable storage medium 1200 has storage space for the program code 1210 that performs any of the method steps described above. The program code may be read from or written to one or more computer program products. The program code 1210 may, for example, be compressed in a suitable form.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A method for interactive reproduction of images and sound, comprising the steps of:
step S100: acquiring fusion information of a target object, wherein the fusion information comprises image information and sound information;
step S200: acquiring control information of a target object, wherein the control information comprises a plurality of control parameters;
step S300: and finishing interactive reproduction of the image and the sound of the target object according to the control information and the fusion information.
2. The interactive reproduction method of claim 1, wherein the method of acquiring the fusion information in step S100 is as follows:
step S110: acquiring image information and sound information of a target object under multiple visual angles;
step S120: and establishing a mapping relation between the image information and the sound information under different visual angles on the basis of the visual angles to form fusion information.
3. The interactive reproduction method of claim 1, wherein the control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter, and a time parameter; and a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter, and the time parameter is established with the origin parameter as the origin.
4. The interactive reproduction method of claim 3, wherein the step S300 comprises:
acquiring input control parameters, analyzing fusion information corresponding to the input control parameters, and interactively reproducing the image and the sound of the target object.
5. The interactive reproducing method of claim 4, wherein the step S300 further comprises:
and moving the origin, analyzing the fusion information corresponding to the new origin formed after the origin is moved, and interactively reproducing the image and the sound of the target object.
6. An interactive reproduction system for images and sound, comprising the following modules:
a fusion module: used for acquiring fusion information of a target object, wherein the fusion information comprises image information and sound information;
a control module: used for acquiring control information of the target object, wherein the control information comprises a plurality of control parameters;
a reproduction module: used for completing the interactive reproduction of the image and the sound of the target object according to the control information and the fusion information.
7. The interactive reproduction system of claim 6, wherein the fusion module is to:
acquiring image information and sound information of a target object under multiple visual angles;
and establishing a mapping relation between the image information and the sound information under different visual angles on the basis of the visual angles to form fusion information.
8. The interactive reproduction system of claim 6, wherein in the control module: the control parameters comprise an origin parameter, a first direction parameter and/or a second direction parameter, and a time parameter; and a coordinate system of the visual angle parameter, the first direction parameter and/or the second direction parameter, and the time parameter is established with the origin parameter as the origin.
9. An electronic device, comprising:
one or more processors;
a memory;
a screen for displaying the image and sound in the interactive reproduction method of the image and sound according to any one of claims 1 to 5;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method for interactive reproduction of images and sound of any of claims 1-5.
10. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to execute the method for interactive reproduction of images and sound according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211059816.2A CN115225884A (en) | 2022-08-30 | 2022-08-30 | Interactive reproduction method, system, device and medium for image and sound |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115225884A true CN115225884A (en) | 2022-10-21 |
Family
ID=83617260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211059816.2A Pending CN115225884A (en) | 2022-08-30 | 2022-08-30 | Interactive reproduction method, system, device and medium for image and sound |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115225884A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006267672A (en) * | 2005-03-24 | 2006-10-05 | Yamaha Corp | Musical piece data generation system and program |
CN101681088A (en) * | 2007-05-29 | 2010-03-24 | 汤姆森许可贸易公司 | Method of creating and reproducing a panoramic sound image, and apparatus for reproducing such an image |
EP2352290A1 (en) * | 2009-12-04 | 2011-08-03 | Swisscom (Schweiz) AG | Method and apparatus for matching audio and video signals during a videoconference |
CN105389318A (en) * | 2014-09-09 | 2016-03-09 | 联想(北京)有限公司 | Information processing method and electronic equipment |
TW201734948A (en) * | 2016-03-03 | 2017-10-01 | 森翠根科技有限公司 | A method, system and device for generating associated audio and visual signals in a wide angle image system |
CN112351248A (en) * | 2020-10-20 | 2021-02-09 | 杭州海康威视数字技术股份有限公司 | Processing method for associating image data and sound data |
CN113676592A (en) * | 2021-08-02 | 2021-11-19 | Oppo广东移动通信有限公司 | Recording method, recording device, electronic equipment and computer readable medium |
CN113853529A (en) * | 2019-05-20 | 2021-12-28 | 诺基亚技术有限公司 | Apparatus, and associated method, for spatial audio capture |
CN114270877A (en) * | 2019-07-08 | 2022-04-01 | Dts公司 | Non-coincident audiovisual capture system |
CN114648615A (en) * | 2022-05-24 | 2022-06-21 | 四川中绳矩阵技术发展有限公司 | Method, device and equipment for controlling interactive reproduction of target object and storage medium |
CN114926378A (en) * | 2022-04-01 | 2022-08-19 | 浙江西图盟数字科技有限公司 | Method, system, device and computer storage medium for sound source tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2022-10-21