
CN112540674A - Virtual environment interaction method and equipment - Google Patents

Virtual environment interaction method and equipment

Info

Publication number
CN112540674A
CN112540674A
Authority
CN
China
Prior art keywords
determining
user
image
eye
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011433036.0A
Other languages
Chinese (zh)
Inventor
时准
莫畏
金雅庆
韩璘
张恒煦
杨嘉明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Jianzhu University
Original Assignee
Jilin Jianzhu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Jianzhu University filed Critical Jilin Jianzhu University
Priority to CN202011433036.0A
Publication of CN112540674A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual environment interaction method and device, applied to a virtual reality device, where the virtual reality device includes a display device, a photographing device and a vibrating device, and the display device includes a plurality of display areas. The method includes the following steps: controlling the photographing device to photograph an eye image of a user; determining a corresponding observation region of the user on the display device according to the eye image; determining a building model corresponding to the observation region; determining material information of the building model; when the distance between the user model and the building model is smaller than a first preset distance, determining a vibration coefficient according to the material information; and when the distance between the user model and the building model is smaller than a second preset distance, controlling the vibrating device to vibrate according to the vibration coefficient. The virtual environment interaction method improves the user's ability to judge the building model and increases the amount of information the user can obtain in the virtual environment.

Description

Virtual environment interaction method and equipment
Technical Field
The application relates to the technical field of virtual reality, in particular to a virtual environment interaction method and equipment.
Background
In the prior art, when a user observes a virtual environment, the user interacts with objects in it. However, because both the user model and the object models in the virtual environment are virtual, the user cannot visually detect interference with an object model while operating the user model. As a result, the user cannot visually determine the surface contour of the object model, and the amount of information the user can obtain from the virtual environment is limited.
Disclosure of Invention
The embodiment of the application provides a virtual environment interaction method and equipment.
In a first aspect, an embodiment of the present application provides a virtual environment interaction method, which is applied to a virtual reality device, where the virtual reality device includes a display device, a photographing device, and a vibrating device, the display device includes a plurality of display areas, and the virtual environment interaction method includes:
controlling the photographing device to photograph an eye image of a user;
determining a corresponding observation region of a user on the display device according to the eye image;
determining a building model corresponding to the observation area;
determining material information of the building model;
when the distance between the user model and the building model is smaller than a first preset distance, determining a vibration coefficient according to the material information;
and when the distance between the user model and the building model is smaller than a second preset distance, controlling the vibration device to vibrate according to the vibration coefficient, wherein the second preset distance is smaller than the first preset distance.
In a second aspect, an embodiment of the present application provides a virtual environment interaction apparatus, which is applied to a virtual reality device, where the virtual reality device includes a display device, a photographing device, and a vibration device, the display device includes a plurality of display areas, and the virtual environment interaction apparatus includes:
the shooting device comprises an acquisition unit, a storage unit and a control unit, wherein the acquisition unit is used for controlling the shooting device to shoot an eye image of a user;
a determining unit, configured to determine a corresponding observation region of the user on the display device according to the eye image;
the determining unit is further configured to determine a building model corresponding to the observation area;
the determining unit is further configured to determine material information of the building model;
the determining unit is further configured to determine a vibration coefficient according to the material information when the distance between the user model and the building model is smaller than a first preset distance;
and the vibration unit is used for controlling the vibration device to vibrate according to the vibration coefficient when the distance between the user model and the building model is smaller than a second preset distance, and the second preset distance is smaller than the first preset distance.
In a third aspect, an embodiment of the present application provides a virtual reality device, including a processor, a memory, a transceiver, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, the photographing device is first controlled to photograph the eye image of the user; a corresponding observation area of the user on the display device is determined according to the eye image; a building model corresponding to the observation area is then determined; material information of the building model is determined; when the distance between the user model and the building model is smaller than a first preset distance, a vibration coefficient is determined according to the material information; and when the distance between the user model and the building model is smaller than a second preset distance, the vibration device is controlled to vibrate according to the vibration coefficient, where the second preset distance is smaller than the first preset distance. The method and the device enable the user to intuitively determine the surface contour and surface material of the object model, increasing the amount of information the user obtains from the virtual environment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a virtual environment interaction method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an eye image provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an eye contour and a display device provided by an embodiment of the present application in a spatial coordinate system;
fig. 4 is a schematic structural diagram of a virtual reality device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a virtual environment interaction apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart of a virtual environment interaction method provided in an embodiment of the present application, and is applied to a virtual reality device, where the virtual reality device includes a display device, a photographing device, and a vibrating device, the display device includes a plurality of display areas, and the virtual environment interaction method includes the following steps.
Step 10, controlling the photographing device to photograph the eye image of the user.
The controlling of the photographing device to photograph the eyes of the user and acquire the eye image includes:
determining, by the photographing device, a brightness of the eye region of the user;
adjusting the f-number of the photographing device according to the brightness;
and photographing the eye region of the user according to the adjusted f-number to acquire the eye image.
For example, to ensure that the photographing device obtains a clear, high-contrast image, the brightness of the light in the photographing environment needs to be determined first when acquiring the eye image. Here, the brightness of the light is determined from the image currently displayed by the display device 230: the brighter the displayed image, the brighter the light, and the darker the displayed image, the darker the light. After the brightness of the photographing environment is determined, the f-number of the photographing device is adjusted according to the brightness, and the photographing device is controlled to photograph the eye region of the user to acquire the eye image.
In this way, controlling the amount of light entering the photographing device ensures that the eye images it acquires have roughly the same brightness, avoiding the extra processing difficulty that would arise if eye images captured at different times had inconsistent brightness.
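A minimal Python sketch of one possible brightness-to-aperture mapping follows. The f-stop table and the linear mapping are illustrative assumptions; the patent only states that the f-number is adjusted according to the brightness.

```python
# Illustrative sketch (assumed mapping, not specified by the patent):
# pick an f-number from the display brightness so that darker scenes
# use a wider aperture and the eye images keep a roughly constant exposure.
def choose_f_number(display_brightness: float) -> float:
    """Map a normalized display brightness in [0, 1] to an f-number."""
    f_stops = [1.8, 2.8, 4.0, 5.6, 8.0]  # hypothetical supported stops
    index = min(int(display_brightness * len(f_stops)), len(f_stops) - 1)
    return f_stops[index]

print(choose_f_number(0.1))  # very dark display -> 1.8 (widest aperture)
print(choose_f_number(0.9))  # bright display    -> 8.0 (smallest aperture)
```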
Step 20, determining the corresponding observation area of the user on the display device 230 according to the eye image.
Wherein the determining the corresponding observation region of the user on the display device 230 according to the eye image includes:
determining an eye contour 210 and an iris contour 220 of the user according to the eye image;
determining the observation direction of the user according to the eye contour 210 and the iris contour 220;
the viewing area of the user on the display device 230 is determined according to the viewing direction.
In an implementation manner of the present application, the determining the eye contour 210 and the iris contour 220 of the user according to the eye image includes:
carrying out gray level processing on the eye image to obtain a first image;
the eye contour 210 and the iris contour 220 of the user are determined from the first image.
Wherein, carrying out gray processing on the eye image to obtain a first image comprises:
and determining color information of pixel points of the eye image, wherein the color information comprises red brightness, green brightness and red brightness.
Determining the gray value of the pixel point according to a first formula and the color information, wherein the first formula is G-R a1+ G a2+ B a3, G represents the gray value of the pixel point, R represents the red brightness of the pixel point, G represents the green brightness of the pixel point, B represents the blue brightness of the pixel point, a1 represents a first reference coefficient, a2 represents a second reference coefficient, a3 represents a third reference coefficient, and a1+ a2+ a3 is 100%.
And after gray values of all pixel points are obtained, determining the first image.
For example, suppose a color pixel point of the eye image has red luminance 210, green luminance 50 and blue luminance 100, with a1 = 30%, a2 = 40% and a3 = 30%; the gray value of the pixel point is then G = 210 × 30% + 50 × 40% + 100 × 30% = 113.
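The first formula is straightforward to express in code. A minimal sketch, using the coefficients from the worked example above:

```python
# Weighted grayscale conversion per the first formula:
# G = R*a1 + G*a2 + B*a3, with a1 + a2 + a3 = 100%.
def gray_value(r: int, g: int, b: int,
               a1: float = 0.30, a2: float = 0.40, a3: float = 0.30) -> int:
    """Gray value of one pixel from its red, green and blue luminances."""
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9, "coefficients must sum to 100%"
    return round(r * a1 + g * a2 + b * a3)

print(gray_value(210, 50, 100))  # worked example above -> 113
```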
When observing in different directions, the user rotates the eyeballs so that the iris points toward the direction to be observed; the object of interest can then be seen, and the observation direction of the user can therefore be determined from the iris position of the human eye.
In an embodiment, the eyeball structure of the human eye includes a sclera and an iris, the color of the sclera differs from that of the iris, and the eyeball structure lies within the eye contour 210. After the gray-scale processing, the eye contour 210 is therefore determined from the first image and a first preset gray value. Because the iris contour 220 lies within the eye contour 210, a circular region inside the eye contour 210 can then be determined from the eye contour 210 and the first preset gray value; this circular region is the iris contour 220 of the human eye.
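As a rough illustration, the thresholding-and-contour step above could be sketched with OpenCV as follows. The preset gray value, the binary threshold direction and the largest-contour heuristics are assumptions for illustration; the patent does not name a specific library or algorithm.

```python
import cv2  # OpenCV

def find_eye_and_iris_contours(first_image, preset_gray=60):
    """Extract candidate eye and iris contours from a grayscale eye image.

    first_image: single-channel (grayscale) image, e.g. from cv2.imread(path, 0).
    preset_gray: stand-in for the first preset gray value of the text.
    """
    # Dark regions (iris, lash line) fall below the preset gray value.
    _, binary = cv2.threshold(first_image, preset_gray, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    eye_contour = contours[0]                                  # outermost region
    iris_contour = contours[1] if len(contours) > 1 else None  # inner circular region
    return eye_contour, iris_contour
```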
In an implementation manner of the present application, the step of determining the viewing direction of the user according to the eye contour 210 and the iris contour 220 includes:
determining a first center position of the eye contour 210 and a second center position of the iris contour 220;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on the plane formed by the x axis and the z axis;
and determining the observation direction according to the first preset position and the second central position.
Wherein the viewing direction refers to a direction in which a user's eyes gaze when viewing the display device 230.
Wherein the determining the first center position of the eye contour 210 and the second center position of the iris contour 220 comprises:
determining a first pixel point and a second pixel point of the eye contour 210 along a first direction, and a third pixel point and a fourth pixel point along a second direction, where the first pixel point and the second pixel point are the two end points of the edge of the eye contour 210 along the first direction, and the third pixel point and the fourth pixel point are the two end points of the edge of the eye contour 210 along the second direction;
determining the intersection point of the line connecting the first pixel point and the second pixel point with the line connecting the third pixel point and the fourth pixel point as the first central position of the eye contour 210.
Referring to fig. 2, in a specific embodiment, the first pixel point is A, the second pixel point is B, the third pixel point is C and the fourth pixel point is D; A and B are connected to obtain a line segment AB, C and D are connected to obtain a line segment CD, and the intersection point of line segment AB and line segment CD is the first central position of the eye contour 210.
Determining the second central position of the iris contour 220 is done in the same way as determining the first central position of the eye contour 210, and is not repeated here.
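A small self-contained sketch of the intersection construction (the point names follow fig. 2; the sample coordinates are made up for illustration):

```python
# Center of a contour as the intersection of the line through its extreme
# points along one direction (A, B) with the line through its extreme points
# along the other direction (C, D).
def line_intersection(a, b, c, d):
    """Intersection of the infinite lines through A-B and C-D (2D points)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = a, b, c, d
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("lines AB and CD are parallel")
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

# Hypothetical extreme points of an eye contour in image coordinates:
print(line_intersection((0, 1), (10, 1), (5, -2), (5, 4)))  # -> (5.0, 1.0)
```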
After the first central position and the second central position are determined, referring to fig. 3, a space coordinate system comprising an x axis, a y axis and a z axis is established by taking the first central position as an origin, wherein the second central position is positioned on the plane formed by the x axis and the z axis. The observation direction is determined according to a second formula, the first preset position and the second central position, where the second formula is

A = arctan(√(x1² + z1²) / |y2|)

where A is the angle between the observation direction and the y axis, x1 is the coordinate of the second central position along the x-axis direction, z1 is the coordinate of the second central position along the z-axis direction, and y2 is the coordinate of the first preset position along the y-axis direction.
In an embodiment, the origin of the space coordinate system is the first central position with coordinates (0, 0, 0), the second central position has coordinates (2, 0, 1), and the first preset position is (0, -5, 0). The observation direction can then be determined from the second formula and these coordinates: the angle between the observation direction and the y axis is A = arctan(√(2² + 1²) / 5) ≈ 24°.
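Expressed in code (a sketch; the second formula above is reconstructed from the variable definitions and the 24° worked example, since the original equation image did not survive):

```python
import math

# Angle between the observation direction and the y axis:
# A = arctan(sqrt(x1**2 + z1**2) / |y2|)
def viewing_angle_deg(x1: float, z1: float, y2: float) -> float:
    """x1, z1: second central position; y2: first preset position along y."""
    return math.degrees(math.atan(math.sqrt(x1**2 + z1**2) / abs(y2)))

# Worked example: second center (2, 0, 1), first preset position (0, -5, 0).
print(round(viewing_angle_deg(2, 1, -5)))  # -> 24
```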
In an implementation manner of the present application, the determining a viewing area of the user on the display device 230 according to the viewing direction includes:
determining a first distance between the display device 230 and a user's eye;
and determining a corresponding observation area of the user on the display device 230 according to the observation direction and the first distance.
Wherein, the first distance may be 10 mm, 20 mm or another value.
In an embodiment, the first distance may be a preset parameter of the virtual reality device.
In another embodiment, the virtual reality apparatus includes a distance measuring sensor, and the first distance is a distance between the display device 230 and a human eye of a user measured by the distance measuring sensor.
After the first distance is determined, the virtual reality apparatus can determine the position of the display device 230 in the space coordinate system, and then determine the corresponding observation area of the user on the display device 230 according to the observation direction and the position of the display device 230 in the space coordinate system.
In an embodiment, if the first distance is 20 mm, the display device 230 is 20 units from the origin along the y-axis direction of the space coordinate system, the observation direction forms an angle of 24° with the y axis, and the straight line along the observation direction passes through the second central position; the intersection point of the observation direction with the display device 230 is then the observation area, whose coordinate position is (10, 20, 5).
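The intersection can be sketched as a ray-plane computation, with the display modeled as the plane y = first distance and coordinates following the example above:

```python
# Observation area as the point where the viewing ray meets the display plane.
# The ray passes through the first preset position and the second central
# position, both expressed in the eye-centered space coordinate system.
def viewing_area(preset, center, first_distance):
    """Point where the ray preset->center meets the plane y = first_distance."""
    px, py, pz = preset
    cx, cy, cz = center
    dx, dy, dz = cx - px, cy - py, cz - pz  # ray direction
    if dy == 0:
        raise ValueError("viewing direction is parallel to the display plane")
    t = (first_distance - py) / dy
    return (px + t * dx, first_distance, pz + t * dz)

# Worked example: preset (0, -5, 0), center (2, 0, 1), display 20 units away.
print(viewing_area((0, -5, 0), (2, 0, 1), 20))  # -> (10.0, 20, 5.0)
```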
Step 30, determining the building model corresponding to the observation area.
In an implementation manner of the present application, the step of determining the building model corresponding to the observation area specifically includes:
acquiring a display image of the observation area;
matching the display image with a prestored building image;
and when the display image is successfully matched with a pre-stored building image, determining that the building model corresponding to the observation area is the building model corresponding to the successfully matched pre-stored building image.
Wherein the display image is an image corresponding to the observation area of the display device 230.
The pre-stored building images are images associated with building models in the virtual reality device. Specifically, each building model is associated with a plurality of pre-stored building images, and each pre-stored building image includes feature information of at least a part of the building model, so that the building model can be determined from the feature information in the pre-stored building image. The feature information includes, but is not limited to, the color, shape or other information of the building that can be used to distinguish the building model.
To determine which building model in the virtual environment the user is looking at, the virtual reality device first acquires the display image in the observation area and then matches the display image against the pre-stored building images. When the display image is successfully matched with a pre-stored building image, the building model corresponding to that pre-stored building image is determined to be the building model the user is currently observing.
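One plausible way to implement the matching is template matching, sketched below with OpenCV. The patent does not specify a matching algorithm, and the model names and image paths are hypothetical.

```python
import cv2  # OpenCV; template matching is one possible choice, not the patent's

# Hypothetical pre-stored building images: {model name: image path}.
PRESTORED = {"tower_model": "tower.png", "bridge_model": "bridge.png"}

def match_building_model(display_image, threshold=0.8):
    """Return the name of the best-matching building model, or None.

    display_image: grayscale image of the observation area.
    threshold: minimum normalized correlation to accept a match.
    """
    best_name, best_score = None, threshold
    for name, path in PRESTORED.items():
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if template is None:
            continue  # skip missing files
        result = cv2.matchTemplate(display_image, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)  # best correlation score
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```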
Step 40, determining the material information of the building model.
Wherein, the material information may be metal, wood or a soft material.
In an implementation manner of the present application, to help the user obtain more model information in the virtual environment: when interacting with a building model in the virtual environment, the user may contact the building model through the user model. Therefore, so that contact feedback can be given to the user faster when the user contacts the building model, after it is determined that the user is observing the building model, the material information of the building model is determined, and vibrations of different degrees are then fed back to the user through the vibrating device according to the material information.
Step 50, when the distance between the user model and the building model is smaller than a first preset distance, determining a vibration coefficient according to the material information.
Step 60, when the distance between the user model and the building model is smaller than a second preset distance, controlling the vibration device to vibrate according to the vibration coefficient, where the second preset distance is smaller than the first preset distance.
Wherein, the first preset distance may be 10mm, 20mm or other lengths.
Wherein the second preset distance may be 5mm, 8mm or other length.
Wherein the vibration information comprises at least one parameter of vibration frequency, vibration amplitude and vibration period.
In an implementation manner of the present application, to help the user determine the material of the building model: when the user model approaches the building model and the distance to the building model becomes smaller than the first preset distance, the vibration coefficient is determined according to the material information; when the user model continues to approach the building model and the distance becomes smaller than the second preset distance, the vibrating device vibrates, so that the user can experience the building model in the virtual environment. It can be seen that different vibration modes allow the user to accurately judge the materials of different building models, which increases the user's ability to judge the building model and the amount of information the user obtains in the virtual environment.
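A minimal sketch of this two-threshold logic follows. The material table, coefficient values and distances are illustrative assumptions; the patent leaves them unspecified.

```python
# Two-threshold vibration feedback: determine the coefficient inside the
# first preset distance, drive the vibrating device inside the second.
VIBRATION_COEFFICIENTS = {"metal": 1.0, "wood": 0.6, "soft": 0.3}  # assumed values

class VibrationController:
    def __init__(self, first_preset=20.0, second_preset=8.0):
        assert second_preset < first_preset
        self.first_preset = first_preset    # mm; determine the coefficient here
        self.second_preset = second_preset  # mm; start vibrating here
        self.coefficient = None

    def update(self, distance_mm: float, material: str) -> float:
        """Return the intensity to drive the vibrating device with (0 = off)."""
        if distance_mm < self.first_preset:
            self.coefficient = VIBRATION_COEFFICIENTS[material]
        if distance_mm < self.second_preset and self.coefficient is not None:
            return self.coefficient  # vibrate according to the material
        return 0.0

controller = VibrationController()
print(controller.update(15.0, "wood"))  # coefficient determined, no vibration -> 0.0
print(controller.update(5.0, "wood"))   # within second preset distance -> 0.6
```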
Referring to fig. 4, fig. 4 is a schematic structural diagram of a virtual reality device provided in an embodiment of the present application, where as shown in the figure, the virtual reality device includes a processor, a memory, a transceiver, and one or more programs, the virtual reality device further includes a display device, a photographing device, and a vibrating device, and the display device includes a plurality of display areas; wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
controlling the photographing device to photograph the eye image of the user;
determining a corresponding observation region of the user on the display device 230 according to the eye image;
determining a building model corresponding to the observation area;
determining material information of the building model;
when the distance between the user model and the building model is smaller than a first preset distance, determining a vibration coefficient according to the material information;
and when the distance between the user model and the building model is smaller than a second preset distance, controlling the vibration device to vibrate according to the vibration coefficient, where the second preset distance is smaller than the first preset distance.
In an implementation manner of the application, in determining the corresponding observation region of the user on the display device 230 according to the eye image, the program includes instructions specifically configured to perform the following steps:
determining an eye contour 210 and an iris contour 220 of the user according to the eye image;
and determining the observation direction of the user according to the eye contour 210 and the iris contour 220.
The viewing area of the user on the display device 230 is determined according to the viewing direction.
In one implementation of the present application, in determining the eye contour 210 and the iris contour 220 of the user according to the eye image, the program includes instructions specifically configured to perform the following steps:
carrying out gray level processing on the eye image to obtain a first image;
the eye contour 210 and the iris contour 220 of the user are determined from the first image.
In one implementation of the present application, in determining the viewing direction of the user based on the eye contour 210 and the iris contour 220, the above program includes instructions specifically configured to:
determining a first center position of the eye contour 210 and a second center position of the iris contour 220;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on the plane formed by the x axis and the z axis;
and determining the observation direction according to the first preset position and the second central position.
In an implementation of the present application, in determining the viewing area of the user on the display device 230 according to the viewing direction, the program comprises instructions specifically configured to:
determining a first distance between the display device 230 and a user's eye;
and determining a corresponding observation area of the user on the display device 230 according to the observation direction and the first distance.
In an implementation of the present application, in determining the building model corresponding to the observation area, the program includes instructions specifically configured to:
acquiring a display image of the observation area;
matching the display image with a prestored building image;
and when the display image is successfully matched with a pre-stored building image, determining that the building model corresponding to the observation area is the building model corresponding to the successfully matched pre-stored building image.
Referring to fig. 5, fig. 5 is a virtual environment interaction apparatus provided in an embodiment of the present application, which is applied to a virtual reality device, where the virtual reality device includes a display device, a photographing device, and a vibrating device, the display device includes a plurality of display areas, and the apparatus includes:
an obtaining unit 310, configured to control the photographing apparatus to photograph an eye image of the user;
a determining unit 320, configured to determine a corresponding observation area of the user on the display device 230 according to the eye image;
the determining unit 320 is further configured to determine a building model corresponding to the observation area;
the determining unit 320 is further configured to determine material information of the building model;
the determining unit 320 is further configured to determine a vibration coefficient according to the material information when a distance between the user model and the building model is smaller than a first preset distance;
and the vibration unit 330 is configured to control the vibration device to vibrate according to the vibration coefficient when the distance between the user model and the building model is smaller than a second preset distance, where the second preset distance is smaller than the first preset distance.
In an implementation manner of the present application, in determining the corresponding observation area of the user on the display device 230 according to the eye image, the determining unit 320 is specifically configured to:
determining an eye contour 210 and an iris contour 220 of the user according to the eye image;
and determining the observation direction of the user according to the eye contour 210 and the iris contour 220.
The viewing area of the user on the display device 230 is determined according to the viewing direction.
In an implementation manner of the present application, in determining the eye contour 210 and the iris contour 220 of the user according to the eye image, the determining unit 320 is specifically configured to:
carrying out gray level processing on the eye image to obtain a first image;
the eye contour 210 and the iris contour 220 of the user are determined from the first image.
In an implementation manner of the present application, in determining the viewing direction of the user according to the eye contour 210 and the iris contour 220, the determining unit 320 is specifically configured to:
determining a first center position of the eye contour 210 and a second center position of the iris contour 220;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on the plane formed by the x axis and the z axis;
and determining the observation direction according to the first preset position and the second central position.
In an implementation manner of the present application, in determining the observation area of the user on the display device 230 according to the observation direction, the determining unit 320 is specifically configured to:
determining a first distance between the display device 230 and a user's eye;
and determining a corresponding observation area of the user on the display device 230 according to the observation direction and the first distance.
In an implementation manner of the present application, in determining the building model corresponding to the observation area, the determining unit 320 is specifically configured to:
acquiring a display image of the observation area;
matching the display image with a prestored building image;
and when the display image is successfully matched with a pre-stored building image, determining that the building model corresponding to the observation area is the building model corresponding to the successfully matched pre-stored building image.
It should be noted that the acquiring unit 310, the determining unit 320, and the vibrating unit 330 may be implemented by a processor.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the virtual reality device in the above method embodiment.
Embodiments of the present application also provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the virtual reality apparatus in the above method. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. The virtual environment interaction method is applied to virtual reality equipment, the virtual reality equipment comprises a display device, a photographing device and a vibrating device, the display device comprises a plurality of display areas, and the virtual environment interaction method comprises the following steps:
controlling the photographing device to photograph an eye image of a user;
determining a corresponding observation region of a user on the display device according to the eye image;
determining a building model corresponding to the observation area;
determining material information of the building model;
when the distance between the user model and the building model is smaller than a first preset distance, determining a vibration coefficient according to the material information;
and when the distance between the user model and the building model is smaller than a second preset distance, controlling the vibration device to vibrate according to the vibration coefficient, wherein the second preset distance is smaller than the first preset distance.
2. The virtual environment interaction method according to claim 1, wherein the determining a corresponding observation region of the user on the display device according to the eye image comprises:
determining an eye contour and an iris contour of the user according to the eye image;
determining the observation direction of the user according to the eye contour and the iris contour;
and determining the observation area of the user on the display device according to the observation direction.
3. The virtual environment interaction method of claim 2, wherein determining the eye contour and the iris contour of the user from the eye image comprises:
carrying out gray level processing on the eye image to obtain a first image;
determining the eye contour and the iris contour of the user according to the first image.
4. The virtual environment interaction method of claim 2, wherein determining the viewing direction of the user according to the eye contour and the iris contour comprises:
determining a first center position of the eye contour and a second center position of the iris contour;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on the plane formed by the x axis and the z axis;
and determining the observation direction according to the first preset position and the second central position.
5. The virtual environment interaction method of claim 2, wherein determining a viewing area of a user on the display device according to the viewing direction comprises:
determining a first distance between the display device and a user's eye;
and determining a viewing area of a user on the display device according to the viewing direction and the first distance.
6. The virtual environment interaction method according to any one of claims 1 to 5, wherein the determining the building model corresponding to the observation region comprises:
acquiring a display image of the observation area;
matching the display image with a prestored building image;
and when the display image is successfully matched with the pre-stored building image, determining that the building model corresponding to the observation area is the building model corresponding to the successfully matched pre-stored building image.
7. A virtual environment interaction apparatus, applied to a virtual reality device, wherein the virtual reality device comprises a display device, a photographing device and a vibrating device, the display device comprises a plurality of display areas, and the virtual environment interaction apparatus comprises:
an acquisition unit, configured to control the photographing device to photograph an eye image of a user;
the determining unit is used for determining an observation area corresponding to the user on the display device according to the eye image;
the determining unit is further configured to determine a building model corresponding to the observation area;
the determining unit is further configured to determine material information of the building model;
the determining unit is further used for determining a vibration coefficient according to the material information when the distance between the user model and the building model is smaller than a first preset distance;
and the vibration unit is used for controlling the vibration device to vibrate according to the vibration coefficient when the distance between the user model and the building model is smaller than a second preset distance, and the second preset distance is smaller than the first preset distance.
8. The apparatus according to claim 7, wherein, in determining the corresponding observation area of the user on the display device according to the eye image, the determining unit is specifically configured to:
determining an eye contour and an iris contour of the user according to the eye image;
determining the observation direction of the user according to the eye contour and the iris contour;
and determining the observation area of the user on the display device according to the observation direction.
9. A virtual reality device comprising a processor, memory, a transceiver, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN202011433036.0A (filed 2020-12-09, priority 2020-12-09): Virtual environment interaction method and equipment. Status: Pending. Publication: CN112540674A.

Priority Applications (1)

Application Number: CN202011433036.0A
Priority Date: 2020-12-09
Filing Date: 2020-12-09
Title: Virtual environment interaction method and equipment

Applications Claiming Priority (1)

Application Number: CN202011433036.0A
Priority Date: 2020-12-09
Filing Date: 2020-12-09
Title: Virtual environment interaction method and equipment

Publications (1)

Publication Number: CN112540674A
Publication Date: 2021-03-23

Family

ID=75019740

Family Applications (1)

Application Number: CN202011433036.0A
Title: Virtual environment interaction method and equipment
Status: Pending

Country Status (1)

Country: CN
Publication: CN112540674A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
US20140236541A1 (en) * 2013-02-15 2014-08-21 Hyoung Seob LEE Virtual reality design apparatus and method thereof
CN105955463A (en) * 2016-04-26 2016-09-21 王立峰 BIM (Building Information Modeling)-based VR (Virtual Reality) virtual feeling system
WO2018084216A1 (en) * 2016-11-01 2018-05-11 株式会社Zweispace Japan Real estate evaluation system, method and program
CN106843475A (en) * 2017-01-03 2017-06-13 京东方科技集团股份有限公司 A kind of method and system for realizing virtual reality interaction
CN106652049A (en) * 2017-01-10 2017-05-10 沈阳比目鱼信息科技有限公司 Full-professional design delivery method for building based on augmented reality technology of mobile terminal
US20180239840A1 (en) * 2017-02-22 2018-08-23 Stellar VDC Commercial, LLC Building model with capture of as built features and experiential data
CN108005341A (en) * 2017-12-15 2018-05-08 苏州桃格思信息科技有限公司 A kind of method and device that augmented reality floor is realized by ultrasonic wave
US20190377330A1 (en) * 2018-05-29 2019-12-12 Praxik, Llc Augmented Reality Systems, Methods And Devices
CN109960411A (en) * 2019-03-19 2019-07-02 上海俊明网络科技有限公司 A kind of tangible formula building materials database of auxiliary VR observation
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhu Ningke et al., "Application of Virtual Reality Technology in Architectural Design", Journal of Beijing Institute of Civil Engineering and Architecture *
Mo Wei et al., "Research on Architectural Design Experience Based on Virtual Reality Technology: Taking the UE4 Engine as an Example", Building Materials & Decoration *

Similar Documents

Publication Publication Date Title
JP6871416B2 (en) Methods and devices for determining facial image quality, electronics and computer storage media
CN109993115B (en) Image processing method and device and wearable device
CN108989678B (en) Image processing method and mobile terminal
CN107172364B (en) Image exposure compensation method and device and computer readable storage medium
CN107911621B (en) Panoramic image shooting method, terminal equipment and storage medium
KR102465654B1 (en) Head mounted display device and method therefor
CN111510623B (en) Shooting method and electronic equipment
CN111552389A (en) Method and device for eliminating fixation point jitter and storage medium
CN107436681A (en) Automatically adjust the mobile terminal and its method of the display size of word
CN110826414A (en) Display control method and device of mobile terminal, terminal and medium
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN105144704B (en) Show equipment and display methods
CN109688325B (en) Image display method and terminal equipment
CN108769636B (en) Projection method and device and electronic equipment
CN106919246A (en) The display methods and device of a kind of application interface
CN110555815B (en) Image processing method and electronic equipment
CN109639981B (en) Image shooting method and mobile terminal
CN113542597B (en) Focusing method and electronic device
CN112540674A (en) Virtual environment interaction method and equipment
CN114757866A (en) Definition detection method, device and computer storage medium
CN108960097B (en) Method and device for obtaining face depth information
CN112540673A (en) Virtual environment interaction method and equipment
CN112581435B (en) Anti-dizziness method and apparatus
CN111610886A (en) Method and device for adjusting brightness of touch screen and computer readable storage medium
JP2013258583A (en) Captured image display, captured image display method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210323