CN114332326A - Data processing method, device, equipment and medium - Google Patents
- Publication number
- CN114332326A (application number CN202111659440.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- virtual scene
- environment
- scene
- image information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a data processing method, apparatus, device, and medium. The method comprises: collecting environment image information in real time; generating a target virtual scene according to the environment image information; and sending the target virtual scene to a target user side so that the target user side reproduces the target virtual scene. This solves the problem that a target virtual scene cannot be updated dynamically in real time: because the target virtual scene is updated according to image information collected from the real environment, the scene changes dynamically with that environment.
Description
Technical Field
The present invention relates to the field of data processing, and in particular, to a data processing method, apparatus, device, and medium.
Background
Virtual reality technology encompasses computer science, electronic information, and simulation technology; its basic principle is that a computer simulates a virtual environment that gives a person a sense of immersion. With the continuous development of social productivity and science and technology, demand for VR (Virtual Reality) technology is growing across industries. VR technology has made great progress and is gradually becoming a new field of science and technology. In existing virtual reality technology, a user who wants to experience a virtual scene through a VR device can only choose among virtual scenes constructed in advance by VR device manufacturers; a target virtual scene cannot be generated according to the user's own needs, so improvement is urgently needed.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, apparatus, device, and medium, which may automatically generate a target virtual scene according to environment image information of a target environment, so that a user may reproduce the target environment through the target virtual scene.
In a first aspect, an embodiment of the present invention provides a data processing method, including:
collecting environmental image information in real time;
generating a target virtual scene according to the environment image information;
and sending the target virtual scene to a target user side so that the target user side reproduces the target virtual scene.
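As an illustration only (the patent claims no specific implementation), the three steps of the first aspect can be sketched in Python; the types, function names, and in-memory "send" are hypothetical stand-ins:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentImage:
    """One collected frame plus the capture metadata named in the claims."""
    pixels: bytes
    capture_angle: float   # acquisition angle of the environment image
    scene_position: tuple  # actual position of the scene in the target environment

@dataclass
class VirtualScene:
    frames: list = field(default_factory=list)

def collect_environment_image_info(camera_feed):
    """S110: collect environment image information in real time."""
    return [EnvironmentImage(*frame) for frame in camera_feed]

def generate_target_virtual_scene(images):
    """S120: generate a target virtual scene from the image information.
    Here we simply order frames by capture angle so adjacent views line up."""
    return VirtualScene(frames=sorted(images, key=lambda i: i.capture_angle))

def send_to_target_clients(scene, clients):
    """S130: send the scene to each target user side for reproduction."""
    for client in clients:
        client.append(scene)  # stand-in for a network send

feed = [(b"img-a", 90.0, (1, 0)), (b"img-b", 0.0, (0, 0))]
client_inbox = []
scene = generate_target_virtual_scene(collect_environment_image_info(feed))
send_to_target_clients(scene, [client_inbox])
```

The ordering rule inside `generate_target_virtual_scene` is one plausible reading; the patent leaves scene construction to the VR scene editor described below.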
In a second aspect, an embodiment of the present invention further provides a data processing apparatus, where the data processing apparatus includes:
the environment image information acquisition module is used for acquiring environment image information in real time;
the target virtual scene generation module is used for generating a target virtual scene according to the environment image information;
and the target virtual scene sending module is used for sending the target virtual scene to a target user side so that the target user side can reproduce the target virtual scene.
In a third aspect, an embodiment of the present invention further provides a data processing apparatus, where the apparatus includes:
one or more processors;
storage means for storing one or more programs;
When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the data processing method according to any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements a data processing method according to any embodiment of the present invention.
The technical scheme provided by this embodiment can collect environment image information of the target environment in real time, generate a target virtual scene according to that information, and transmit the target virtual scene to the target user side so that the target user side can share it. The scheme solves the problems that a virtual scene cannot be constructed according to user requirements, that only virtual scenes provided by VR device manufacturers can be obtained, and that dynamic changes of the target environment cannot be perceived through the generated virtual scene. It can collect, in real time, environment image information of a target environment specified by the user, generate the target virtual scene the user requires, and update that scene in real time as the target environment changes, so that the user can perceive the dynamic changes of the target environment through the target virtual scene.
Drawings
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a data processing method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a data processing method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data processing apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data processing device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention. The method is applicable to data processing of image information, in particular image information collected by a VR device. The method may be executed by a data processing apparatus provided by an embodiment of the present invention, and the apparatus may be implemented in software and/or hardware. The apparatus may be configured in a terminal device; further, the terminal device may be a VR device. The method specifically comprises the following steps:
and S110, acquiring environmental image information in real time.
The environment image information is image information of an environment to be referred to when a user constructs a virtual scene. The environment image information may include, but is not limited to, an environment image, an acquisition angle of the environment image, and an actual position of a scene in the environment image in the target environment. And the environment to be referred to when the user constructs the virtual scene is the target environment.
Optionally, an image acquisition device may be used to collect the environment image information. The image acquisition device may be a camera installed on the VR device (an internal image acquisition device), or a device installed in the environment where the user is currently located (an external image acquisition device). Both internal and external image acquisition devices support panoramic shooting. Each VR device has a device ID (Identity Document), by which the corresponding VR device can be identified.
Specifically, the image acquisition device collects image information of the user's surrounding environment from each angle at the current moment and sends the collected environment image information to the image processing module of the VR device carried by the user. The environment image information may be picture information or video information of the target environment. When it is picture information, the image acquisition device needs to collect environment images at different angles and different focal lengths at the current moment and send them to the image processing module of the VR device. When it is video information, the image acquisition device needs to collect environment images from every angle simultaneously at the current moment and send them to the image processing module of the VR device.
Optionally, if the image acquisition device is an internal one, for example a camera installed on the VR device, the camera can collect environment images of the user's surroundings from each angle: when the user needs the VR device to generate a virtual scene, the VR device collects surrounding environment image information through the camera and passes it to the connected VR device. If the image acquisition device is an external one, for example installed in the environment where the user is currently located, then when the user needs the VR device to generate a virtual scene, the VR device sends an image acquisition request to the image acquisition device server together with its device ID and current position information. After receiving the request, the server determines, according to the device ID, whether the corresponding VR device has permission to acquire environment image information at its current position; if it does, the image acquisition device sends the collected environment image information to the image processing module of the VR device. Image acquisition devices installed in the target environment can collect complete environment image information of the surroundings.
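The server-side permission check described above might look like the following sketch; the permission table, device IDs, and position labels are hypothetical:

```python
# Hypothetical permission table: device ID -> positions it may capture.
ACQUISITION_PERMISSIONS = {
    "vr-001": {"living_room", "office"},
    "vr-002": {"office"},
}

def handle_image_acquisition_request(device_id, position):
    """Grant image acquisition only if the VR device identified by
    device_id holds permission for its reported current position."""
    allowed = ACQUISITION_PERMISSIONS.get(device_id, set())
    return {
        "granted": position in allowed,
        "device_id": device_id,
        "position": position,
    }

ok = handle_image_acquisition_request("vr-001", "living_room")
denied = handle_image_acquisition_request("vr-002", "living_room")
```

On a grant, the real system would then stream the collected environment image information to the requesting device's image processing module.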
Furthermore, the VR equipment can be connected with external image acquisition equipment in a scene frequently involved by a user, and the external image acquisition equipment can periodically send the environmental image information acquired by the external image acquisition equipment to the VR equipment, so that the VR equipment can update the environmental image information in real time according to the environmental image information acquired by the external image acquisition equipment.
And S120, generating a target virtual scene according to the environment image information.
The virtual scene refers to a VR scene constructed by VR equipment. The target virtual scene is a virtual scene which is constructed by a user according to image information of a real environment through VR equipment.
Specifically, after the image processing module of the VR device obtains the environment image information sent by the image acquisition device, it determines the association relation of the environment images according to the acquisition angle of each environment image and the actual position, in the target environment, of the scene in each image. The association relation of the environment images represents the positional relationship among them and can be used to determine which environment images are adjacent to a given one. The image processing module sends the environment image information and the association relation of the environment images to the VR scene editor. After receiving this information, the VR scene editor generates the target virtual scene according to the association relation of the environment images. The generated target virtual scene is a reproduction of the target environment. The VR scene editor can change the target virtual scene in real time according to the environment image information collected in real time by the image acquisition device, so that the scene information of the target virtual scene stays consistent with that of the target environment at all times.
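A minimal sketch of one way such an association relation could be built, assuming each image's actual position is a grid coordinate; the grid model and names are illustrative, not from the patent:

```python
def build_association_relation(images):
    """Map each environment image ID to the IDs of the images captured
    at adjacent positions in the target environment."""
    by_pos = {img["position"]: img["id"] for img in images}
    relation = {}
    for img in images:
        x, y = img["position"]
        neighbours = []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nid = by_pos.get((x + dx, y + dy))
            if nid is not None:
                neighbours.append(nid)
        relation[img["id"]] = sorted(neighbours)
    return relation

images = [
    {"id": "A", "position": (0, 0)},
    {"id": "B", "position": (1, 0)},
    {"id": "C", "position": (0, 1)},
]
relation = build_association_relation(images)
```

The VR scene editor could then stitch the scene by walking this adjacency map outward from the initial viewing angle.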
Optionally, the user may set an initial viewing angle of the virtual scene according to actual needs; for example, the viewing angle of one of the image acquisition devices may be selected as the initial viewing angle. When generating the target virtual scene, the VR scene editor takes the viewing angle of the selected image acquisition device as the initial viewing angle and, based on it, generates the target virtual scene according to the association relation among the environment images.
Optionally, after the image processing module of the VR device obtains the environment image information sent by the image acquisition device, the received environment image information may also be sent to VR scene editors of other VR devices. And VR scene editors of other VR devices can generate the target virtual scene according to the received environment image information.
For example, when the target environment is a living room, the user may install an external image capturing device in the living room, and the VR device held by the user may perform information transmission with the external image capturing device installed in the living room. An external image acquisition device installed in the living room can acquire the environment image information of the living room in real time and send the acquired environment image information of the living room to an image processing module of the VR device in real time, and the VR scene editor generates a virtual scene of the living room according to the information sent by the image processing module. And updating the virtual scene of the living room in real time when the environment image information of the living room changes according to the information sent by the image processing module.
And S130, transmitting the target virtual scene to the target user side so that the target user side reproduces the target virtual scene.
The target user side is a user side of the VR device which needs to obtain the target virtual scene and has the target virtual scene obtaining authority. The target virtual scene is reproduced by using the VR device to show the target scene to the user, so that the user can be placed in the target virtual scene through the VR device.
Specifically, there may be one target user side or a plurality of target user sides. After a target user side acquires the target virtual scene, its user can reproduce the target virtual scene through it. When the environment image information of the target environment changes, the VR scene editor may update the target virtual scene in real time according to that information, so that each target user side can perceive the dynamic changes of the target environment through the reproduced target virtual scene.
Preferably, in this step, the participant can be determined according to the requirement information, the target user side can be determined according to the participant, and the target virtual scene can be sent to the target user side. Specifically, the method can be realized by the following substeps:
and S1301, determining the participants according to the demand information.
The demand information refers to information a user inputs into the VR device according to current needs. The participants are other users, specified by the user who sends the demand, who may obtain the target virtual scene shared by that user. For example, take the user who sends the demand information as the local user: when the local user needs to hold a video conference with colleagues through the VR device, the demand information may be the local user's request to initiate a video conference together with the information of the users who should participate. The user information may be the device ID of the VR device held by each user. The participants are then the video conference participants designated by the local user.
Specifically, function options may be preset for the VR device, for example, the function options may include a video function, a travel function, a shopping function, and the like. The user can select corresponding functions according to own requirements, and after the target functions are determined, the user can also invite other users to be used as participants of the functions selected by the user. The invitation method may be that the user inputs the device IDs of the VR devices of other users who need to be invited through the VR device, and the VR device determines that the other users invited by the user are participants.
S1302, the VR equipment of the participant is used as a target user side.
Specifically, when receiving the demand information sent by the local user and the device ID of the participant input by the local user, the VR device of the local user takes the VR device corresponding to the device ID of the participant as the target user side.
And S1303, sending the target virtual scene to a target user side.
Specifically, the VR device of the local user generates a target virtual scene according to the environment image information, and after determining the target user side according to the demand information of the local user, the target virtual scene may be sent to the target user side, so that the participant can reproduce the target virtual scene through the target user side.
Optionally, after the image processing module of the VR device of the local user acquires the environment image information of the target environment, the environment image information of the target environment may be sent to the target user side through the VR device, and after the target user side receives the environment image information, the target virtual scene is generated according to the environment image information through a VR scene editor in the VR device of the target user side, so that the participant can reproduce the target virtual scene through the target user side.
For example, a local user is in an office environment and needs to hold a video conference with other users. The target scene is then the office scene. The local user's VR device collects image information of the office environment in real time; the local user selects the video function and inputs, on the VR device, the device IDs of the participants who need to join the video conference. The local user's VR device determines the participants' VR devices according to those device IDs and takes them as target user sides. It then sends the generated target virtual scene to the target user sides, and after a target user side receives it, the participant can choose to hold the video conference within the target virtual scene.
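The substeps S1301-S1303 above can be sketched as follows; the registry, device IDs, and field names are hypothetical stand-ins for the patent's device-ID lookup:

```python
def determine_target_clients(demand_info, device_registry):
    """S1301-S1302: resolve the invited participants' device IDs to
    their VR devices, which become the target user sides."""
    return [device_registry[dev_id]
            for dev_id in demand_info["participant_ids"]
            if dev_id in device_registry]

def share_scene(scene, demand_info, device_registry):
    """S1303: send the target virtual scene to every target user side."""
    targets = determine_target_clients(demand_info, device_registry)
    for device in targets:
        device["received_scenes"].append(scene)  # stand-in for transmission
    return targets

registry = {
    "vr-101": {"owner": "colleague-1", "received_scenes": []},
    "vr-102": {"owner": "colleague-2", "received_scenes": []},
}
demand = {"function": "video", "participant_ids": ["vr-101", "vr-102"]}
targets = share_scene("office-scene", demand, registry)
```

Unknown device IDs are simply skipped here; a real system would presumably also verify the acquisition permission described in the first embodiment.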
It can be understood that, in this embodiment, determining the participants according to the demand information allows the user to specify the target user sides whenever the user needs to send the target virtual scene, so that the VR device can share the target virtual scene with the user-specified target user sides.
According to the technical scheme of this embodiment, environment image information of the target environment can be collected in real time according to user requirements, a target virtual scene is generated by the VR device from that information, and the target virtual scene can be transmitted to target user sides so that they can share it. In addition, the target virtual scene can be updated by collecting the environment image information of the target environment in real time. This solves the problems that a virtual scene cannot be constructed according to user requirements, that only virtual scenes provided by VR device manufacturers can be obtained, and that dynamic changes of the target environment cannot be experienced through the generated virtual scene. Environment image information of a target environment specified by the user can be collected in real time according to the user's requirements, the target virtual scene the user requires can be generated and updated in real time as the target environment changes, and the user can perceive the dynamic changes of the target environment through the target virtual scene.
Example two
Fig. 2 is a flowchart of a data processing method according to a second embodiment of the present invention, which is optimized based on the above embodiments, and provides a preferred embodiment of selecting a target scene mode according to an environment image attribute and generating a target virtual scene according to the selected target scene mode. Specifically, as shown in fig. 2, the data processing method provided in this embodiment may include:
and S210, collecting environmental image information in real time.
S220, selecting a target scene mode from the selectable scene modes according to the environment image attribute of the environment image information.
The environment image attribute refers to the attribute of the scene in which the environment image was taken, determined from the environment features extracted from the environment image information, such as a house attribute, street attribute, classroom attribute, or conference attribute. A selectable scene mode is a scene mode, selectable by the user in the VR device, for generating a virtual scene; for example, scene modes may include a noisy mode, quiet mode, music mode, landscape mode, and the like, and the selectable scene modes may be dynamically added and deleted. The target scene mode is the scene mode the user requires; the user can select it according to need, and different environment image attributes allow different scene modes to be selected. For example, if the attribute of the environment image is a conference attribute, the noisy mode cannot be selected. The specific attribute information and scene modes can be set according to actual conditions.
Specifically, after receiving environment image information of a target environment, an image processing module of the VR device performs feature extraction on the environment image by using an image processing algorithm, and determines attributes of the environment image acquired by the image acquisition device according to the extracted environment image features. The attribute of the environment image may be one or more. The VR device can show the selectable scene modes to the user according to the environment image attributes of the collected environment images, the user determines whether the target scene mode required by the user exists according to the displayed selectable scene modes, and if the target scene mode required by the user exists, the user can select the target scene mode.
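One way to model the attribute-dependent filtering of selectable scene modes just described; the exclusion table is illustrative, since the patent leaves the mapping to the implementer:

```python
ALL_MODES = {"noisy", "quiet", "music", "landscape"}

# Hypothetical exclusion table: some modes are unavailable for some
# environment image attributes (e.g. no noisy mode for a conference).
EXCLUDED_MODES = {
    "conference": {"noisy"},
    "classroom": {"noisy", "music"},
}

def selectable_scene_modes(image_attributes):
    """Return the scene modes the user may pick, given the attributes
    extracted from the collected environment images."""
    modes = set(ALL_MODES)
    for attr in image_attributes:
        modes -= EXCLUDED_MODES.get(attr, set())
    return modes

conference_modes = selectable_scene_modes(["conference"])
```

An image carrying several attributes simply accumulates the exclusions of each, which matches the rule that different attributes allow different mode sets.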
Further, each selectable scene mode has corresponding material images. Optionally, the material images are obtained as follows: after obtaining material image information of a target environment, a user can send it to a shared database through a sharing option; the shared database classifies the material image information by mode according to the image features of the environment images, generates a scene mode list, and stores each piece of material image information in the mode list to which it belongs.
Furthermore, the VR device may display a scene mode list corresponding to the target scene mode to the user according to the target scene mode selected by the user, and the user may select material image information in the scene mode list, where the material image information selected by the user may be one or multiple.
And S230, generating a target virtual scene according to the environment image information and the target scene mode.
Specifically, the VR device sends the environment image information collected in real time and the material image information of the target scene mode selected by the user to the image processing module. The image processing module sends the real-time environment image information, the material image information the user selected from the scene mode list, and the association relation of the environment images to the VR scene editor. The VR scene editor then generates the target virtual scene from these three inputs.
And S240, sending the target virtual scene to the target user side so that the target user side reproduces the target virtual scene.
According to the technical scheme of this embodiment, selectable scene modes are provided to the user according to the attributes of the environment images collected by the image acquisition device, so that the user can select a target scene mode to apply to the target virtual scene to be generated. The scheme thus lets the user select the required target scene mode from the selectable ones according to actual needs and personalize the target virtual scene, meeting the user's individual requirements.
EXAMPLE III
Fig. 3 is a flowchart of a data processing method according to a third embodiment of the present invention. This method is optimized based on the above embodiments: for the case where the environment image information includes at least two frames of environment images, a preferred embodiment is given in which a sub virtual scene is generated from each frame of environment image and the sub virtual scenes are combined into the complete target virtual scene. Specifically, as shown in fig. 3, the data processing method provided in this embodiment may include:
s310, collecting environmental image information in real time; wherein the environment image information comprises at least two frames of environment images.
And S320, generating a target virtual scene according to the environment image information.
Optionally, if the target scene range is large, multiple sets of image acquisition devices are required to acquire multiple frames of environmental images at the same time, so as to obtain environmental image information of the target environment in a full scene. At this time, the environment image information may include at least two frames of images.
Illustratively, the step may generate a sub-virtual scene of each frame of the environment image according to each frame of the environment image; and combining the at least two sub-virtual scenes to obtain a target virtual scene.
The sub-virtual scenes refer to local virtual scenes of the target virtual scene, and all the sub-virtual scenes are combined to generate the target virtual scene.
Specifically, groups of image acquisition devices are arranged at regular intervals in the target environment and numbered; with N groups in total (N a positive integer, N ≥ 2), the groups are numbered 1, 2, …, N. Each group can collect the environment image information of one local scene in the target environment, and each such local scene is assigned a scene number consistent with the number of the group that collects its environment sub-images. For example, the first group of image acquisition devices collects the first environment sub-image information of the target environment, the second group collects the second environment sub-image information, and so on.
According to the environment sub-image information collected by each group of image acquisition devices, the VR scene editor generates a virtual scene of the corresponding local scene in the target environment, i.e. a sub virtual scene. From the environment sub-image information collected by the N groups of image acquisition devices, the VR scene editor generates N sub virtual scenes. It then combines the sub virtual scenes in the order of their numbers, obtaining the complete virtual scene, i.e. the target virtual scene.
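Combining the numbered sub virtual scenes can be sketched as a sort-and-concatenate step (the scene contents and field names are illustrative):

```python
def combine_sub_scenes(sub_scenes):
    """Combine per-group sub virtual scenes into the full target virtual
    scene, ordered by the scene number assigned to each acquisition group."""
    ordered = sorted(sub_scenes, key=lambda s: s["scene_number"])
    return [s["content"] for s in ordered]

sub_scenes = [
    {"scene_number": 2, "content": "hallway"},
    {"scene_number": 1, "content": "entrance"},
    {"scene_number": 3, "content": "office"},
]
target_scene = combine_sub_scenes(sub_scenes)
```

Sorting by scene number works even when the editor receives the groups' sub-images out of order, which is likely when N groups stream concurrently.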
Preferably, this step can be realized by the following substeps:
s3201, selecting a target environment image from the at least two frames of environment images.
The target environment image referred to in this step is an environment sub-image selected by the user according to the user's own needs and used for generating the target virtual scene.
Specifically, as described above, groups of image acquisition devices are arranged at regular intervals in the target environment and numbered 1, 2, …, N (N a positive integer, N ≥ 2), each group collecting the environment image information of one local scene, which carries the same scene number as the group. In practical applications, a user may only need a virtual scene of a local scene in the target environment rather than of the full scene. The target environment can therefore be divided in advance into several local scenes; preferably, the number of local scenes matches the number of image acquisition device groups, and all local scenes together form the complete target environment. The user can select the local scenes for which a virtual scene should be generated. The local scenes can be listed in a scene table by scene number, and the user selects target scenes by clicking scene numbers in the table; the user may select one or more local scenes, at most N, and the selected scene numbers may be consecutive or non-consecutive.
The local scenes selected by the user serve as target local scenes. The image acquisition devices in each selected target local scene collect a target environment image of that scene and send the collected target environment image to the image processing module of the VR device.
Optionally, in this step, the target environment image may be selected from the at least two frames of environment images according to the demand information and/or the environment image attribute of each frame of environment image.
The demand information refers to information input into the VR device by a user according to current demands.
Specifically, after receiving the environment image information sent by each group of image acquisition devices, the image processing module extracts features from each environment image using an image processing algorithm, and determines the attributes of each collected environment image from the extracted features. When a user wants to generate a target virtual scene, the demand information can be entered into the VR device as keywords, either by text or by voice. After receiving the demand information, the VR device performs correlation matching between the demand information and the environment image attributes, selects target environment images from the at least two frames of environment images according to the matching result, and passes the selected target environment images to the image processing module of the VR device. The image processing module then sends the target environment images, together with their association relations, to the VR scene editor.
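The correlation matching between demand keywords and per-frame image attributes could be as simple as an overlap score. The scoring rule below is an assumption for illustration; the patent does not specify the matching algorithm.

```python
# Illustrative keyword matching: rank frames by how many demand keywords
# appear among the attribute tags extracted for that frame.
def match_target_images(demand_keywords, frame_attributes):
    """frame_attributes: {frame_id: set of attribute tags per frame}.
    Returns frame ids with at least one match, best match first."""
    demand = set(demand_keywords)
    scored = [(len(demand & attrs), fid)
              for fid, attrs in frame_attributes.items()]
    return [fid for score, fid in sorted(scored, reverse=True) if score > 0]
```

Frames with no overlapping attribute are dropped, so only relevant frames become target environment images.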
And S3202, generating a target virtual scene according to the target environment image.
The VR scene editor generates, from the target environment image of each target local scene collected by each group of image acquisition devices, a virtual scene of the corresponding target local scene, i.e. a sub-virtual scene. The VR scene editor then combines the sub-virtual scenes in the order of the scene numbers of the target local scenes; the combined result is the target virtual scene.
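The combining step reduces to ordering sub-virtual scenes by their scene numbers. A minimal sketch, assuming each sub-scene is keyed by the number of the group that captured it:

```python
# Combine sub-virtual scenes by ascending scene number to form the target
# virtual scene; gaps are allowed (the user may pick non-contiguous scenes).
def combine_sub_scenes(sub_scenes: dict) -> list:
    return [sub_scenes[k] for k in sorted(sub_scenes)]
```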
Generating the target virtual scene from the target environment images selected by the user provides more virtual scene construction schemes and lets the user independently choose target scenes according to his or her needs, making the generation of the target virtual scene more flexible.
S330, the target virtual scene is sent to the target user side, so that the target user side reproduces the target virtual scene.
For example, after the target virtual scene is obtained, marks can be made in it: the user may select a destination in the target virtual scene and mark it, and the VR device can generate an optimal route map from the mark information sent by the user and the user's current location, then highlight the route in the target virtual scene, for example with a graphic mark or a color mark on the route.
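The route-marking idea can be sketched with a breadth-first search from the user's current position to the marked destination. The grid abstraction of the scene is an assumption for illustration; the patent does not specify how the route is computed.

```python
# Hedged sketch: shortest route on a walkability grid via BFS.
from collections import deque


def optimal_route(grid, start, goal):
    """grid: 2-D list, 0 = walkable, 1 = blocked. Returns a cell path or []."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []            # walk predecessors back to the start
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return []                    # destination unreachable
```

The returned cell path is what a VR device could then highlight in the target virtual scene.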
According to the technical scheme of this embodiment, multiple frames of environment image information are collected by the image acquisition devices, a sub-virtual scene of the target environment is generated from each frame of environment image, and the sub-virtual scenes are combined in order to obtain the target virtual scene. This solves the problem that, when the target environment is too large, one frame of environment image information collected by a single group of image acquisition devices cannot cover the full scene of the target environment, so a complete target virtual scene cannot be generated. The achieved effect is that multiple frames of environment images are collected, a corresponding sub-virtual scene is generated from each frame, and a complete target virtual scene is obtained by combining the sub-virtual scenes.
Example four
Fig. 4 is a schematic structural diagram of a data processing apparatus according to a fourth embodiment of the present invention. The apparatus is applicable where a target virtual scene is generated from environment image information and reproduced at a target user side. As shown in Fig. 4, the data processing apparatus includes: an environment image information acquisition module 410, a target virtual scene generation module 420, and a target virtual scene sending module 430.
The environment image information acquisition module 410 is configured to collect environment image information in real time;
the target virtual scene generation module 420 is configured to generate a target virtual scene according to the environment image information;
the target virtual scene sending module 430 is configured to send the target virtual scene to the target user side, so that the target user side reproduces the target virtual scene.
According to this technical scheme, environment image information of the target environment can be collected in real time according to user demand, the target virtual scene is generated by the VR device according to the environment image information, and the target virtual scene can be transmitted between target user sides through the VR devices, so that target user sides can share the target virtual scene. In addition, the target virtual scene can be updated by collecting environment image information of the target environment in real time. This solves the problem that a virtual scene cannot be constructed according to user demand, that only virtual scenes provided by the VR device manufacturer can be obtained, and that the dynamic change of the target environment cannot be experienced through the generated virtual scene. Environment image information of a target environment specified by the user can be collected in real time according to user demand, the target virtual scene the user requires is generated, the target virtual scene can be updated in real time as the target environment changes, and the user can perceive the dynamic change of the target environment from the target virtual scene.
The target virtual scene generation module 420 further includes:
a target scene mode selection unit for selecting a target scene mode from the selectable scene modes according to an environment image attribute of the environment image information; and generating a target virtual scene according to the environment image information and the target scene mode.
Illustratively, the target virtual scene generation module 420 is further configured to: if the environment image information includes at least two frames of environment images, generate a sub-virtual scene from each frame of environment image; and combine the at least two sub-virtual scenes to obtain the target virtual scene.
Illustratively, the target virtual scene generation module 420 further includes:
the target environment image selecting unit is used for selecting a target environment image from at least two frames of environment images when the environment image information comprises at least two frames of environment images; and generating a target virtual scene according to the target environment image.
Further, the target environment image selecting unit is specifically configured to select the target environment image from the at least two frames of environment images according to the demand information and/or the environment image attribute of each frame of environment image.
Illustratively, the target virtual scene sending module 430 further includes:
the participant determining unit is used for determining participants according to the demand information;
a target user side determining unit, configured to take the VR device of each participant as a target user side and send the target virtual scene to the target user side.
The data processing device provided by the embodiment can be applied to the data processing method provided by any of the above embodiments, and has corresponding functions and beneficial effects.
Example five
Fig. 5 is a schematic structural diagram of a data processing apparatus according to a fifth embodiment of the present invention, as shown in fig. 5, the apparatus includes a processor 50, a memory 51, an input device 52, and an output device 53; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in fig. 5; the processor 50, the memory 51, the input device 52 and the output device 53 in the apparatus may be connected by a bus or other means, which is exemplified in fig. 5.
The memory 51 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the data processing method in the embodiment of the present invention. The processor 50 executes various functional applications of the device and data processing, i.e., implements the above-described data processing method, by executing software programs, instructions, and modules stored in the memory 51.
The memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 51 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 51 may further include memory located remotely from the processor 50, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 52 may be used to receive the environment image information and to receive parameter inputs related to the environment image information, user settings, and function control of the device. The output device 53 may include a display device such as a display screen.
The data processing device provided by this embodiment can be applied to the data processing method provided by any of the above embodiments, and has corresponding functions and beneficial effects.
Example six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a data processing method, including:
collecting environmental image information in real time;
generating a target virtual scene according to the environment image information;
and sending the target virtual scene to the target user side so that the target user side reproduces the target virtual scene.
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the data processing method provided by any embodiment of the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly also by hardware alone, though the former is the better implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the data processing method, each included unit and each included module are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A data processing method, comprising:
collecting environmental image information in real time;
generating a target virtual scene according to the environment image information;
and sending the target virtual scene to a target user side so that the target user side reproduces the target virtual scene.
2. The method of claim 1, wherein generating a target virtual scene from the environmental image information comprises:
selecting a target scene mode from selectable scene modes according to the environment image attribute of the environment image information;
and generating a target virtual scene according to the environment image information and the target scene mode.
3. The method of claim 1, wherein if the environment image information includes at least two frames of environment images, generating a target virtual scene according to the environment image information comprises:
generating a sub-virtual scene of each frame of environment image according to each frame of environment image;
and combining the at least two sub-virtual scenes to obtain a target virtual scene.
4. The method of claim 1, wherein if the environment image information includes at least two frames of environment images, generating a target virtual scene according to the environment image information comprises:
selecting a target environment image from the at least two frames of environment images;
and generating a target virtual scene according to the target environment image.
5. The method of claim 4, wherein selecting the target environment image from the at least two frames of environment images comprises:
selecting the target environment image from the at least two frames of environment images according to demand information and/or an environment image attribute of each frame of environment image.
6. The method of claim 1, wherein sending the target virtual scene to a target user side comprises:
determining a participant according to the demand information;
taking the VR equipment of the participant as a target user side;
and sending the target virtual scene to a target user side.
7. A data processing apparatus, comprising:
the environment image information acquisition module is used for acquiring environment image information in real time;
the target virtual scene generation module is used for generating a target virtual scene according to the environment image information;
and the target virtual scene sending module is used for sending the target virtual scene to a target user side so that the target user side can reproduce the target virtual scene.
8. The apparatus of claim 7, wherein the target virtual scene generation module comprises:
a target scene mode selecting unit for selecting a target scene mode from selectable scene modes according to an environment image attribute of the environment image information; and generating a target virtual scene according to the environment image information and the target scene mode.
9. A data processing apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement a data processing method as claimed in any one of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111659440.4A CN114332326A (en) | 2021-12-30 | 2021-12-30 | Data processing method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332326A true CN114332326A (en) | 2022-04-12 |
Family
ID=81017994
Legal Events

Date | Code | Title | Description
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |