
CN113741695A - Object display method and device - Google Patents

Object display method and device

Info

Publication number
CN113741695A
Authority
CN
China
Prior art keywords
motion
display screen
physical object
virtual
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111037992.1A
Other languages
Chinese (zh)
Inventor
龚江涛
张柳新
韩腾
张晓平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202111037992.1A
Publication of CN113741695A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 — Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412 — Digitisers structurally integrated in a display
    • G06F2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 — Indexing scheme relating to G06F3/01
    • G06F2203/012 — Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an object display method and device. The method comprises: detecting that a physical object touches a display screen, and acquiring the touch position at which the physical object touches the display screen; acquiring motion characteristics of the physical object; constructing a virtual object for representing the physical object; and, taking the touch position as the initial display position of the virtual object on the display screen, displaying the virtual object moving along a motion trajectory that conforms to the motion characteristics. The scheme of the application restores the motion of real-world objects more faithfully within the virtual scene and improves the effect of virtual-real interaction conversion.

Description

Object display method and device
Technical Field
The present application relates to the field of display technologies, and in particular, to an object display method and apparatus.
Background
In scenes such as augmented reality and mixed reality, virtual-real interactive conversion between the real world and a virtual scene is often involved.
A common form of virtual-real interaction conversion is the following: after an object in the real world touches a display screen presenting a virtual scene, the display screen outputs a virtual effect simulating the object hitting the screen. However, current virtual-real interaction conversion merely constructs a virtual effect of the object colliding with the display screen at the collision point; it cannot truly restore the motion of the object in the real world, so the conversion effect is poor.
Disclosure of Invention
The application provides an object display method and device.
The object display method comprises the following steps:
detecting that a physical object touches a display screen, and acquiring a touch position of the physical object touching the display screen;
acquiring the motion characteristics of the physical object;
building a virtual object for representing the physical object;
and displaying the virtual object which moves according to the motion track conforming to the motion characteristics by taking the touch position as the initial display position of the virtual object in the display screen.
In one possible implementation manner, the method further includes:
acquiring morphological characteristics of the physical object;
the constructing a virtual object for representing the physical object, comprising:
and constructing a virtual object matched with the morphological characteristics of the physical object.
In another possible implementation manner, the acquiring morphological characteristics of the physical object includes:
acquiring an acquired object image of the physical object to obtain morphological characteristics of the physical object represented by the object image;
or,
acquiring airflow flow rate data in front of the display screen sensed by an airflow sensor array, and determining morphological characteristics of the physical object based on the airflow flow rate data, wherein a gas injection device is erected at the top end of the display screen and used for injecting air flow downwards from the top end of the display screen so as to generate an air flow layer at the front end of the display screen.
In another possible implementation manner, the acquiring the motion feature of the physical object includes:
acquiring motion characteristics of the physical object when the physical object touches the display screen, wherein the motion characteristics at least comprise the motion speed, the motion direction and the acceleration of the physical object;
the displaying the virtual object which changes according to the motion track conforming to the motion characteristics on the display screen by taking the touch position as the initial display position of the virtual object comprises:
and displaying an image of the virtual object moving according to the movement speed, the movement direction and the acceleration in a display screen by taking the touch position as the initial display position of the virtual object.
In another possible implementation manner, the acquiring the motion feature of the physical object includes:
acquiring a plurality of motion characteristics of the physical object within a set time length from the current moment, wherein the motion characteristics comprise at least one of the motion speed, the motion direction and the motion position of the physical object;
the displaying the virtual object which changes according to the motion track conforming to the motion characteristics on the display screen by taking the touch position as the initial display position of the virtual object comprises:
determining a simulated motion track of the physical object after the physical object touches the display screen by combining a plurality of motion characteristics of the physical object;
and displaying the virtual object which changes according to a simulated motion track on the display screen by taking the touch position as an initial display position of the virtual object.
In yet another possible implementation manner, before displaying the virtual object that moves according to the motion trajectory conforming to the motion characteristic, the method further includes:
determining scene characteristics of a virtual scene displayed by the display screen, wherein the scene characteristics are at least used for representing the influence of the virtual scene on the motion of an object;
the displaying the virtual object moving according to the motion trajectory conforming to the motion characteristic by taking the touch position as the initial display position of the virtual object in the display screen includes:
and displaying the virtual object which moves according to the motion track corresponding to the motion characteristic and the scene characteristic by taking the touch position as the initial display position of the virtual object in the display screen.
In another possible implementation manner, the acquiring the motion feature of the physical object includes:
acquiring motion characteristics of the physical object when the physical object touches the display screen, wherein the motion characteristics at least comprise the motion speed and the motion direction of the physical object;
the displaying the virtual object which changes according to the motion track conforming to the motion characteristics on the display screen by taking the touch position as the initial display position of the virtual object comprises:
determining a motion track of a virtual object in the virtual scene from the touch position according to the motion speed and the motion direction by combining the scene characteristics;
and displaying the virtual object moving according to the simulated motion trail by taking the touch position as the initial display position of the virtual object in the display screen.
In another possible implementation manner, the acquiring the motion feature of the physical object includes:
acquiring airflow flow speed data in front of the display screen, which is sensed by an airflow sensor array;
and determining the motion characteristics of the physical object based on the airflow flow rate data, wherein a gas injection device is erected at the top end of the display screen and is used for injecting an airflow downwards from the top end of the display screen so as to generate an airflow layer at the front end of the display screen.
Wherein, an object display apparatus comprises:
the position determining unit is used for detecting that a physical object touches a display screen and acquiring the touch position of the physical object touching the display screen;
a feature acquisition unit configured to acquire a motion feature of the physical object;
an object construction unit for constructing a virtual object representing the physical object;
and the object display unit is used for displaying the virtual object which moves according to the motion track conforming to the motion characteristics by taking the touch position as the initial display position of the virtual object in the display screen.
In one possible implementation manner, the feature obtaining unit includes:
the multi-feature acquisition unit is used for acquiring a plurality of motion features of the physical object within a set time length from the current moment, wherein the motion features comprise at least one of the motion speed, the motion direction and the motion position of the physical object;
the object display unit includes:
the track determining unit is used for determining a simulated motion track of the physical object after the physical object touches the display screen by combining a plurality of motion characteristics of the physical object;
and the object display subunit is used for displaying the virtual object which changes according to a simulated motion track on the display screen by taking the touch position as an initial display position of the virtual object.
According to the scheme, when the physical object is detected to touch the display screen, the touch position of the physical object touching the display screen is acquired, and the motion characteristics of the physical object can be acquired. On the basis, the touch position is used as the initial display position of the virtual object representing the physical object in the display screen, and the virtual object moving according to the motion track corresponding to the motion characteristic of the physical object is presented in the display screen, so that the motion track of the virtual object simulating the continuous motion of the physical object can be presented in the display screen after the physical object collides with the display screen, the motion effect of the object in the real world can be more truly restored in the virtual scene, and the effect of virtual-real interactive conversion is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an object display method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another object display method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an effect of a physical object before the physical object collides with a display screen in the present application;
FIG. 4 is a schematic diagram illustrating an effect of a physical object colliding with a display screen according to the present application;
FIG. 5 is a schematic diagram illustrating the motion effect of a virtual object displayed on a display screen after a physical object collides with the display screen in the present application;
FIG. 6 is a schematic flowchart of yet another object display method according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a structure of an object display apparatus according to an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The scheme of the application is suitable for scenes such as augmented reality or mixed reality and the like, so that the interaction effect between a physical object in a real scene and a virtual object in a virtual scene is improved.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
As shown in fig. 1, which shows a flowchart of an embodiment of an object display method according to the present application, where the method of the present embodiment is applied to an electronic device, and the electronic device is connected to or has a display screen, the method of the present embodiment may include:
s101, detecting that a physical object touches a display screen, and acquiring a touch position of the physical object touching the display screen.
The physical object refers to an object in the real world; it may be an inanimate object, a person, or the like, without limitation.
There are many possible ways to obtain the touch position. For example, in one possible scenario, if the display screen is a touch screen or has a sensor capable of sensing a touch, the display screen may sense a touch location where a physical object touches the display screen.
In yet another possible implementation, a gas injection device may be mounted at the top end of the display screen and used to inject an air flow downward from the top of the screen, generating an air flow layer at the front end of the display screen. On this basis, airflow velocity data in front of the display screen sensed by an airflow sensor array can be obtained, and the touch position at which the physical object collides with the display screen can be determined based on that data.
The airflow sensor array may be disposed facing the injection direction of the gas injection device, for example at the bottom end of the display screen, facing the injection device. The array can then sense changes in the air flow injected by the device: a physical object colliding with the display screen inevitably disturbs the flow, so the position at which it collides with the screen can be calculated from the airflow change data.
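The embodiment does not spell out how airflow readings map to a coordinate. As a minimal sketch of one plausible scheme, assume a single horizontal row of sensors along the bottom edge, each reading the flow rate of the falling air column above it; a dip in flow rate marks the column an object has blocked. The function name, threshold, and centroid heuristic below are all illustrative assumptions, not the patent's algorithm.

```python
# Illustrative sketch only: assumes one horizontal row of airflow sensors
# along the bottom edge, each reading the flow rate of the air column
# directly above it.

def estimate_touch_x(baseline, current, screen_width, threshold=0.3):
    """Estimate the horizontal touch position from airflow-rate dips.

    baseline: per-sensor flow rates with an undisturbed air layer
    current:  per-sensor flow rates at the moment of the collision
    Returns the x coordinate (same units as screen_width) of the
    centroid of the blocked region, or None if nothing is blocked.
    """
    n = len(baseline)
    # Relative drop per sensor; an object above a sensor blocks the
    # falling air stream and reduces that sensor's reading.
    drops = [(b - c) / b for b, c in zip(baseline, current)]
    blocked = [(i, d) for i, d in enumerate(drops) if d > threshold]
    if not blocked:
        return None
    # Weighted centroid of the blocked sensors approximates the
    # horizontal position where the object crossed the air layer.
    total = sum(d for _, d in blocked)
    centroid_idx = sum(i * d for i, d in blocked) / total
    return (centroid_idx + 0.5) * screen_width / n


baseline = [1.0] * 8
current = [1.0, 1.0, 0.4, 0.2, 0.5, 1.0, 1.0, 1.0]
print(estimate_touch_x(baseline, current, screen_width=1.6))  # ~0.7 m
```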
Of course, in practical applications the collision position may be determined in other ways, for example by capturing images of the physical object's position relative to the display screen with a camera and calculating the collision position from those images; such alternatives are equally applicable to this embodiment.
It will be appreciated that in the present application, the display screen may take many possible forms. For example, in a mixed reality scene, the display screen may be a transparent display screen made of a holographic film, so that a user can see both objects in a virtual scene presented on the display screen and physical objects in the real world through the display screen. Of course, this is merely an example, and the display screen may also be an applicable display screen in an augmented reality scene, and the present application is not limited to this.
And S102, acquiring the motion characteristics of the physical object.
The motion characteristic of the physical object is a characteristic used for reflecting the current motion state of the physical object. The motion feature may further include a feature for reflecting a motion tendency of the object.
For example, the motion characteristics may include at least: the speed and direction of motion of the physical object. In practical application, the motion characteristics may further include one or more of motion-related characteristics such as position and acceleration of the physical object, as required.
The motion characteristics obtained in the present application may be the motion characteristics of the physical object at the moment it collides with the display screen, or the motion characteristics within a set time period before the collision.
For example, in one possible implementation manner, the present application may acquire the current motion characteristic of the physical object when it is detected that the physical object collides with the display screen. In this case, the motion characteristics may include: the speed and direction of motion of the physical object. Of course, in order to accurately judge the motion trend of the physical object after the current time, the motion characteristic may further include an acceleration of the physical object.
In yet another possible implementation manner, the present application obtains a plurality of motion characteristics of the physical object within a set time period from the current time, where the plurality of motion characteristics are motion characteristics of a plurality of different times within the set time period. In this implementation, the motion characteristic may include at least one of a motion speed, a motion direction, and a motion position of the physical object.
In this implementation, the motion characteristics of the physical object may be collected continuously, and the plurality of motion characteristics within the set time period before the current moment obtained from that record. For example, the motion characteristics of the physical object are acquired or calculated at a set frequency, and when the physical object is detected to collide with the display screen, the characteristics recorded within the set time length before the current moment are retrieved.
Since the motion characteristics at a plurality of different moments before the current moment can be obtained in this implementation manner, even if the motion characteristics only include one of a motion speed, a motion direction, and a motion position, the current motion state of the physical object and the motion trajectory of the physical object can be analyzed based on the motion characteristics at the plurality of different moments, and a motion trend of the physical object and the like can also be analyzed.
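As a concrete illustration of turning such a sampled history into motion characteristics, the sketch below uses finite differences over the last three samples; the three-sample window and the scalar acceleration are simplifying assumptions rather than the patent's method.

```python
# Hypothetical helper: derives speed, direction and acceleration from
# positions sampled at a fixed rate, as the embodiment describes
# (features collected continuously at a set frequency).
import math

def motion_features(positions, dt):
    """positions: [(x, y), ...] sampled every dt seconds, oldest first;
    needs at least three samples."""
    (x0, y0), (x1, y1), (x2, y2) = positions[-3:]
    vx_prev, vy_prev = (x1 - x0) / dt, (y1 - y0) / dt
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
    speed = math.hypot(vx, vy)
    direction = math.atan2(vy, vx)                       # radians, screen plane
    accel = (speed - math.hypot(vx_prev, vy_prev)) / dt  # scalar speed change
    return speed, direction, accel

print(motion_features([(0.0, 1.5), (0.2, 1.46), (0.4, 1.38)], dt=0.1))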
It is understood that the motion characteristics of the physical object in front of the display screen can be obtained in various ways, and the present application is not limited in this respect. For example, an image capture device such as a camera may be disposed at the top end of the display screen, or elsewhere in the physical space where the screen is located; the camera may then capture the motion of the physical object directly, or capture feature data from which the motion characteristics of the physical object are calculated.
For another example, similarly to the previous acquisition of the touch position of the physical object, in the case where the gas injection device is mounted on the top end of the display screen, the data of the flow rate of the gas in front of the display screen sensed by the gas flow sensor array may be acquired, and based on this data of the flow rate of the gas, the motion characteristics of the physical object may be determined. For example, the movement characteristics of the physical object across the air flow layer in front of the display screen are calculated according to the air flow size sensed by different air flow sensors in the air flow sensor array and the change of the air flow velocity.
It will be appreciated that, where images can be presented on both sides of the display screen, the gas injection device injecting an air stream downward from the top of the screen can generate an air flow layer on each image-presenting side. References to the front end of the display screen in the embodiments of the present application may therefore be understood as the front of whichever side presents images.
S103, a virtual object for representing the physical object is constructed.
The virtual object is a virtual display object which needs to be displayed in the display screen and is used for representing the physical object.
In order to more visually and intuitively embody the relationship between the virtual object and the physical object and more realistically present the conversion from the physical object to the virtual object in the virtual scene, in the present application, the virtual object may be a virtual image of the physical object, or a display object of the same category as the physical object or having a feature related to the physical object.
For example, in one possible implementation, morphological features of the physical object may be acquired. The morphological features of a physical object represent external characteristics such as its appearance and posture; they may include part or all of the object's shape, structure, pose, and external color. Accordingly, a virtual object matching the morphological features of the physical object may be constructed.
For example, a virtual object matching the morphological features of the physical object may be selected from a stored plurality of virtual objects, or a corresponding virtual object may be drawn according to those features.
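The library-lookup option might look like the toy sketch below; the feature vectors and the squared-distance match are invented for illustration, not part of the patent.

```python
# Sketch of the "select from a stored library" option; the feature
# vector and similarity measure are illustrative assumptions.
def pick_virtual_object(library, shape_features):
    """library: {name: feature_vector}; returns the best-matching name."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda name: distance(library[name], shape_features))

library = {"ball": [1.0, 0.9], "cube": [0.0, 0.5], "dart": [0.3, 0.1]}
print(pick_virtual_object(library, [0.95, 0.85]))  # -> "ball"
```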
And S104, displaying the virtual object which moves according to the motion track conforming to the motion characteristic by taking the touch position as the initial display position of the virtual object in the display screen.
It will be appreciated that in order to be able to translate the movement of a physical object in the real world into movement of the virtual object in the virtual scene and to present the effect of the physical object continuing to move in the virtual scene, the initial position of the virtual object representing the physical object in the display screen should be the position where the physical object touches the display screen.
Correspondingly, in order to represent the continued motion of the physical object within the virtual scene, the presented virtual object is controlled to move according to the motion characteristics of the physical object, simulating the object's motion after it collides with the display screen and presenting that motion effect to the user.
It is understood that the virtual scene presented by the display screen may belong to many scene categories. For a given virtual scene, in order to truly reproduce how an object would move within it, the influence of the virtual scene on the motion of the object needs to be considered in addition to the object's initial motion characteristics. If the virtual scene is a virtual underwater world, for example, the resistance of water as a medium to the object's motion needs to be considered. On this basis, the application may determine scene characteristics of the virtual scene displayed by the display screen, the scene characteristics being used at least to characterize the influence of the virtual scene on the motion of objects.
Correspondingly, the application can also be: and displaying the virtual object which moves according to the motion track corresponding to the motion characteristic and the scene characteristic by taking the touch position as the initial display position of the virtual object in the display screen.
Therefore, when the application detects that the physical object touches the display screen, the touch position of the physical object touching the display screen is obtained, and the motion characteristic of the physical object is obtained. On the basis, the touch position is used as the initial display position of the virtual object representing the physical object in the display screen, and the virtual object moving according to the motion track corresponding to the motion characteristic of the physical object is presented in the display screen, so that the motion track of the virtual object simulating the continuous motion of the physical object can be presented in the display screen after the physical object collides with the display screen, the motion effect of the object in the real world is more truly restored, and the effect of virtual-real interaction conversion is improved.
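As a way of seeing how steps S101 to S104 fit together, the following is a self-contained toy run-through in Python; every class, value, and the simple per-frame update are hypothetical stand-ins for illustration, not the patent's implementation.

```python
# Self-contained toy run-through of S101-S104.
from dataclasses import dataclass

@dataclass
class MotionFeatures:          # S102 output: screen-plane velocity (m/s)
    vx: float
    vy: float

@dataclass
class VirtualObject:           # S103 output: the virtual stand-in
    name: str
    x: float = 0.0
    y: float = 0.0

def display_object(touch_pos, features, name, dt=0.1, steps=3):
    """S104: start the virtual object at the touch position and move it
    along a trajectory continuing the physical object's motion."""
    obj = VirtualObject(name, *touch_pos)
    for _ in range(steps):
        obj.x += features.vx * dt
        obj.y += features.vy * dt
        print(f"{obj.name} at ({obj.x:.2f}, {obj.y:.2f})")
    return obj

# S101 (touch detected at this position) and S102 are assumed done:
display_object(touch_pos=(0.7, 1.2), features=MotionFeatures(2.0, -1.0),
               name="virtual sphere")
```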
It can be understood that, the information content included in the motion feature of the physical object acquired in the present application is different, and the specific manner of displaying the virtual object based on the motion feature of the physical object may also be different.
The solution of the present application is described below in connection with several possible scenarios for obtaining motion characteristics of a physical object.
In one possible case, the method can obtain the motion characteristics of the physical object at the moment it collides with the display screen. In this case, the motion characteristics may include the motion speed and motion direction of the physical object; to simulate more faithfully, through the displayed virtual object, how the physical object would keep moving after the collision, they may also include the object's acceleration at the moment of the collision.
In this case, after the virtual object is constructed, an image in which the virtual object moves at the movement speed, the movement direction, and the acceleration may be displayed in the display screen with the touch position as an initial display position of the virtual object. On the basis, the motion direction, the motion speed and the motion trend based on the acceleration of the virtual object are consistent with the actual motion situation of the physical object, so that a user can feel the motion effect of the physical object after entering the virtual scene of the display screen through the virtual object.
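Under these assumptions, the virtual object's position at any time after the collision follows directly from the three features. A minimal sketch, assuming constant acceleration along the motion direction (the patent does not fix a kinematic model):

```python
# Position after time t under speed/direction/acceleration features,
# with constant acceleration along the motion direction (an assumption).
import math

def position_at(t, touch_pos, speed, direction, accel):
    """direction in radians within the screen plane; returns (x, y)."""
    s = speed * t + 0.5 * accel * t * t   # distance travelled along the path
    x0, y0 = touch_pos
    return x0 + s * math.cos(direction), y0 + s * math.sin(direction)

# Ball hits at (0.7, 1.2) moving down-right at 2 m/s, decelerating:
print(position_at(0.5, (0.7, 1.2), speed=2.0, direction=-0.6, accel=-0.8))
```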
In yet another possible scenario, the present application may control the display of the virtual object based on a plurality of motion characteristics of the physical object within a set time period from the current time.
This is explained below with reference to fig. 2. As shown in fig. 2, which shows another schematic flow chart of an object display method according to the present application, the method of the present embodiment may include:
s201, detecting that a physical object touches a display screen, and acquiring a touch position of the physical object touching the display screen.
This step can be referred to the related description of the previous embodiment, and is not described herein again.
S202, acquiring a plurality of motion characteristics of the physical object within a set time length from the current time.
It will be appreciated that in this embodiment, the motion characteristics of the physical object may be determined periodically or aperiodically, rather than being collected or calculated upon detection of the physical object touching the display screen.
The motion characteristics of the physical object at a plurality of different moments within the set time length before the current moment can be obtained each time, so that when the physical object touches the display screen, motion characteristics at several different moments are available, from which the motion trend of the object after it collides with the display screen can be inferred.
It is understood that, since the motion characteristics of a plurality of time instants exist simultaneously, even if there is only one motion parameter in the motion characteristics, the motion trajectory of the physical object before the physical object collides with the display screen can be restored, and thus, the motion characteristics of the present application may include at least one of the motion speed, the motion direction, and the motion position of the physical object.
S203, the morphological characteristics of the physical object are acquired.
The morphological feature of the physical object may be a feature exhibited by an appearance of the physical object.
In one possible case, a captured object image of the physical object may be acquired, and the morphological features of the physical object determined from it. For example, the object image itself may serve as the morphological feature of the physical object, or external features of the physical object may be extracted from the image.
In a further possible case, if the gas injection device is arranged at the top end of the display screen, the method can also obtain the gas flow speed data in front of the display screen sensed by the gas flow sensor array, and determine the morphological characteristics of the physical object based on the gas flow speed data. Such as the determined structural form of the physical object, etc.
S204, constructing a virtual object matched with the morphological characteristics of the physical object.
For example, in one possible scenario, a virtual object library may be configured in the electronic device, and the virtual object library may include a plurality of virtual objects that are constructed in advance. On the basis, a virtual object matched with the morphological characteristics of the physical object can be selected from a virtual object library to obtain a constructed virtual object. For example, based on the image or the external shape structure of the physical object, a matching virtual object is queried, and the like.
In another possible case, the electronic device may also combine a virtual object matching the physical object by using the existing object components according to the morphological characteristics of the physical object; alternatively, the corresponding virtual object is rendered according to the morphological characteristics of the physical object.
It can be understood that, in the embodiment, the virtual object is constructed based on the morphological characteristics of the physical object as an example, so that the constructed virtual object can represent the morphological characteristics of the physical object more realistically, and when the physical scene is switched with the virtual scene, the physical object in the real world can be presented more realistically through the virtual object. It is understood that the virtual object is constructed in other ways and is also applicable to the present embodiment.
And S205, determining a simulated motion track of the physical object after the physical object touches the display screen by combining a plurality of motion characteristics of the physical object.
For ease of distinction, the motion trajectory predicted for the physical object after it collides with the display screen, obtained by combining a plurality of its motion characteristics from before the collision, is referred to here as the simulated motion trajectory.
It can be understood that, since the plurality of motion characteristics can restore the motion trajectory and motion trend of the physical object before it collides with the display screen, the trajectory of the object after the collision can be inferred by combining the trajectory and trend represented by those characteristics.
For example, assuming that each motion characteristic includes a motion speed of the physical object, the acceleration and the motion direction of the physical object may be determined by combining the motion speeds at a plurality of different time instants, and based on this, part or all of the information such as the motion speed, the motion direction and the acceleration of the physical object after colliding with the display screen may be estimated, and further, the information such as the position and the motion direction of the physical object after colliding with the display screen may be estimated, so as to obtain the simulated motion trajectory.
Of course, if the motion characteristics include various information such as motion direction, motion speed, motion position and acceleration, the combination of the motion characteristics can more accurately determine the simulated motion trajectory.
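One plausible realization of this step is to fit a constant-acceleration model to the pre-impact samples and extend it past the collision time, as sketched below; the quadratic fit and the (t, x, y) sample format are assumptions — the embodiment only requires combining a plurality of motion characteristics.

```python
# Illustrative extrapolation: fits a constant-acceleration model per axis
# with numpy.polyfit and extends it beyond the collision time.
import numpy as np

def simulated_trajectory(samples, future_times):
    """samples: [(t, x, y), ...] recorded before the collision."""
    t, x, y = (np.array(c, dtype=float) for c in zip(*samples))
    fx = np.poly1d(np.polyfit(t, x, deg=2))   # x(t) ~ x0 + vx*t + 0.5*ax*t^2
    fy = np.poly1d(np.polyfit(t, y, deg=2))
    return [(float(fx(tf)), float(fy(tf))) for tf in future_times]

pre_impact = [(0.0, 0.0, 1.5), (0.1, 0.2, 1.46), (0.2, 0.4, 1.38)]
print(simulated_trajectory(pre_impact, future_times=[0.3, 0.4]))
```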
And S206, displaying the virtual object which changes according to the simulated motion track on the display screen by taking the touch position as the initial display position of the virtual object.
It can be understood that, because the simulated motion trajectory is the predicted trajectory of the physical object after its collision with the display screen, controlling the virtual object to move along this trajectory starting from the touch position lets the virtual object reflect the physical object's post-collision motion more truly.
To facilitate an understanding of the benefits of the present application, the following description is provided in connection with an application scenario. Taking a mixed reality scene as an example: the technology introduces real-scene information into the virtual environment, building an interactive feedback loop among the virtual world, the real world, and the user, so as to enhance the user's sense of reality. Mixed reality scenes therefore often involve interactive switching from physical objects in the real scene to virtual objects in the virtual scene.
Fig. 3-5 show a schematic diagram of the present application showing throwing of a sphere in a mixed reality scenario.
In the mixed reality scenes of fig. 3 to 5, the display screen is connected with the main control device or the display screen is a display unit belonging to the main control device.
A camera (not shown in fig. 3 to 5) may be disposed in the physical environment where the display screen is located, and the camera may be connected to the main control device. The camera is used for acquiring images of all physical objects in the physical environment where the display screen is located and transmitting the images to the main control equipment, and the main control equipment can determine morphological characteristics or motion characteristics and the like of the physical objects based on the images of all the physical objects.
A gas injection device may also be provided on top of the display screen, such as the display screen 301 shown in fig. 3 (the device itself is not shown). The device injects an air flow from top to bottom, so that an air flow layer is present at the front end of the display screen, on the user-facing side and parallel to the display plane. It will be appreciated that the injection speed and other parameters can be configured as required so that the air flow layer is invisible to the user.
An airflow sensor array, facing the injection orifice of the gas injection device and able to sense the air flow it injects, is arranged at the bottom end of the display screen (as at the bottom of the screen in fig. 3). The sensor array senses the airflow rate data corresponding to the air flow layer in front of the screen. The master control device can obtain the airflow rate data sensed by the array and determine from it the morphological characteristics, motion characteristics, and so on of any physical object entering the air flow layer.
In practical application, the camera, the gas injection device and the sensor array can be arranged in a physical scene where the display screen is located at the same time.
As can be seen from fig. 3, the user throws a real sphere 302 in the physical space towards the display screen, where the sphere 302 has not yet touched the display screen 301. In this case, the image of the sphere can be captured by the camera, so that the main control device obtains the images of the sphere collected by the camera at different times, and analyzes the position, the moving direction, the moving speed and the like of the sphere at different times.
As can be seen from fig. 4, the sphere 302 touches the display screen 301. To touch the screen, the sphere must pass through the air flow layer at the front end of the display screen, and in doing so it inevitably changes the airflow rate within the layer, so the airflow sensor array inevitably senses a change in the airflow rate data. Correspondingly, the main control device can analyze the collision position, motion characteristics, morphological characteristics, and so on of the object colliding with the display screen from the change in the airflow rate data.
Of course, when the ball collides with the display screen, the collision image of the ball and the display screen can be collected through the camera, and the main control device can analyze the collision position of the ball colliding with the display screen based on the collision image. Similarly, the master control device can analyze the movement characteristics of the ball when the ball collides with the display screen based on the collision image and the image before the ball collides with the display screen.
Once the main control device determines that the object colliding with the display screen is a sphere, it can construct a virtual sphere and, at the same time, predict the motion trajectory to be displayed after the collision by combining one or both of the images collected by the camera and the airflow rate data sensed by the airflow sensor array.
On this basis, as shown in fig. 5, the main control device may display a virtual sphere 303 on the display screen; the virtual sphere starts from the position where the sphere 302 collided with the display screen and moves through the virtual scene along the predicted motion trajectory of the sphere 302.
As can be seen from fig. 5, an image in which the virtual sphere 303 moves along a motion trajectory conforming to the sphere 302 after colliding with the display screen is presented in the display screen, so that a user feels a motion effect that the sphere in the real world enters a virtual scene presented by the display screen after colliding with the display screen, and the reality of interaction between the physical entity and the virtual entity is improved.
In addition, in the application, the air flow layer is formed at the front end of the display screen by spraying the air flow through the air flow spraying device on the top of the display screen, so that the information such as the movement speed and the position of the physical object entering the air flow layer can be determined, and the impact force damage of the physical object to the display screen can be reduced.
Meanwhile, since the air flow layer consists of air flowing from top to bottom, once the physical object enters the layer the air exerts a downward force on it, making the object fall faster and reducing the time during which the virtual object and the physical object exist simultaneously.
As shown in fig. 5, if there is an air flow layer at the front end of the display screen, the sphere 302 will fall down at an accelerated speed due to the impact of the air flow, so as to reduce the time that the sphere is in front of the display screen, thereby reducing the time that the user sees the real sphere and the virtual sphere at the same time, and further improving the reality of the virtual-real conversion.
It will be appreciated that the effects of different media on the resistance etc. to the movement of the physical object will also differ. In order to simulate the motion effect of the physical object in the environment type corresponding to the virtual scene more truly, the motion of the constructed virtual object can be controlled by combining the scene type of the virtual scene.
For convenience of understanding, the following description is made in conjunction with a case, as shown in fig. 6, which shows another flow diagram of the object display method provided by the present application, and the method of this embodiment may include:
s601, detecting that a physical object touches a display screen, acquiring a touch position of the physical object touching the display screen and acquiring motion characteristics of the physical object touching the display screen.
The motion characteristics include at least a motion speed and a motion direction of the physical object.
It can be understood that, when the influence of the environmental scene on the motion of the physical object is considered and the object's motion speed and direction are known, the change in the object's motion trajectory caused by environmental factors such as resistance can be calculated; therefore, the motion characteristics in this embodiment may include only the motion speed and motion direction of the physical object.
Of course, in practical applications, in order to more accurately determine the motion trend, the trajectory, and the like of the physical object, the motion characteristic may further include other motion characteristics such as an acceleration of the physical object, which is not limited.
This embodiment is described using the motion characteristics obtained when the physical object collides with the display screen as an example; motion characteristics acquired within a set time length before the current moment are equally applicable and are not described again here.
S602, determining scene characteristics of the virtual scene displayed on the display screen.
Wherein the scene characteristics are used at least to characterize the effect of the virtual scene on the motion of the object.
The effect on the movement of the object herein may include at least a resistance to the object, and of course, may include other effects, which are not limited thereto.
For example, if the display screen currently outputs an underwater scene, the scene characteristic of the virtual scene is a water-environment characteristic, and correspondingly the influence of the scene characteristic on object motion is the resistance of water to the object's movement.
For another example, assuming a virtual scene of a land environment displayed by the display screen, a scene characteristic of the virtual scene is an air environment characteristic, and an influence of the scene characteristic on the object motion can be considered as a resistance of air to the object.
S603, the morphological characteristics of the physical object are acquired.
It is understood that the sequence of steps S602 and S603 is not limited to that shown in fig. 6, and in practical applications, the two steps may be executed simultaneously or may be interchanged, which is not limited to this.
S604, constructing a virtual object matched with the morphological characteristics of the physical object.
The above steps S603 and S604 can refer to the related description of the previous embodiment, and are not described herein again.
And S605, determining a simulation motion track of the virtual object in the virtual scene from the touch position according to the motion speed and the motion direction by combining the scene characteristics.
For example, combining the resistance that the scene characteristics exert on object motion, the positions of the virtual object at a plurality of moments after the current moment are calculated starting from the touch position, assuming the virtual object moves at the given motion speed and in the given motion direction; this yields the simulated motion trajectory of the virtual object in the virtual scene.
For example, assuming that the scene features represent that the virtual scene is an underwater environment, positions where the virtual object may arrive at a plurality of subsequent moments from the touch position when the virtual object is subjected to resistance of water and moves according to the movement direction and the movement speed are calculated, and then the simulated movement track is obtained.
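As an illustration of this step for the underwater example, the sketch below integrates the trajectory with a simple linear drag term; the drag coefficient, effective gravity, and time step are invented scene-characteristic values, not figures from the patent.

```python
# Toy underwater-scene trajectory: Euler integration with a linear drag
# term standing in for the scene's resistance to object motion.
def underwater_trajectory(touch_pos, vx, vy, drag=2.5, g=-3.0,
                          dt=0.05, steps=4):
    x, y = touch_pos
    path = []
    for _ in range(steps):
        vx += (-drag * vx) * dt            # water resistance opposes motion
        vy += (g - drag * vy) * dt         # buoyancy-reduced gravity + drag
        x, y = x + vx * dt, y + vy * dt
        path.append((round(x, 3), round(y, 3)))
    return path

print(underwater_trajectory((0.7, 1.2), vx=2.0, vy=-1.0))
```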
And S606, displaying the virtual object moving according to the simulated motion track by taking the touch position as the initial display position of the virtual object in the display screen.
In this embodiment, under the condition that the physical object collides with the display screen, not only the motion characteristics of the physical object colliding with the display screen but also the scene characteristics of the virtual scene presented by the display screen are obtained, and on this basis, in combination with the scene characteristics of the virtual scene, the motion trajectory of the virtual object moving in the virtual scene according to the motion characteristics of the physical object colliding with the display screen can be simulated, so that the motion effect of the physical object entering the virtual scene after colliding with the display screen can be more truly presented through the displayed motion of the virtual object.
Corresponding to an object display method of the present application, the present application further provides an object display apparatus, as shown in fig. 7, which shows a schematic structural diagram of an object display apparatus of the present application, and the apparatus of this embodiment may include:
a position determining unit 701, configured to detect that a physical object touches a display screen, and acquire a touch position where the physical object touches the display screen;
a feature obtaining unit 702, configured to obtain a motion feature of the physical object;
an object construction unit 703 for constructing a virtual object representing the physical object;
an object display unit 704, configured to display the virtual object that moves according to the motion trajectory conforming to the motion characteristic with the touch position as an initial display position of the virtual object in the display screen.
In one possible implementation, the apparatus further includes:
a form acquiring unit configured to acquire form features of the physical object;
and an object construction unit, specifically, configured to construct a virtual object that matches morphological features of the physical object.
In an alternative form, the form acquiring unit includes:
and the first form acquisition unit is used for acquiring the acquired object image of the physical object to obtain the form characteristics of the physical object represented by the object image.
Or,
and the second form acquisition unit is used for acquiring airflow speed data in front of the display screen sensed by the airflow sensor array and determining form characteristics of the physical object based on the airflow speed data, and a gas injection device is erected at the top end of the display screen and used for injecting air flow downwards from the top end of the display screen so as to generate an air flow layer at the front end of the display screen.
In another possible implementation manner, the feature obtaining unit includes:
the collision characteristic acquisition unit is used for acquiring the motion characteristics of the physical object when the physical object touches the display screen, wherein the motion characteristics at least comprise the motion speed, the motion direction and the acceleration of the physical object;
and the object display unit is specifically configured to display an image of the virtual object moving according to the movement speed, the movement direction and the acceleration in a display screen by using the touch position as an initial display position of the virtual object.
In another possible implementation manner, the feature obtaining unit includes:
the multi-feature acquisition unit is used for acquiring a plurality of motion features of the physical object within a set time length from the current moment, wherein the motion features comprise at least one of the motion speed, the motion direction and the motion position of the physical object;
the object display unit includes:
the track determining unit is used for determining a simulated motion track of the physical object after the physical object touches the display screen by combining a plurality of motion characteristics of the physical object;
and the object display subunit is used for displaying the virtual object which changes according to a simulated motion track on the display screen by taking the touch position as an initial display position of the virtual object.
In yet another possible implementation manner, the apparatus further includes:
the scene determining unit is used for determining scene characteristics of a virtual scene displayed by the display screen before the virtual object moving according to the motion track conforming to the motion characteristics is displayed by the object displaying unit, wherein the scene characteristics are at least used for representing the influence of the virtual scene on the motion of the object;
the object display unit is specifically configured to display the virtual object that moves according to the motion trajectory corresponding to the motion feature and the scene feature, with the touch position as an initial display position of the virtual object in the display screen.
In an optional manner, the characteristic obtaining unit is specifically configured to obtain a motion characteristic of the physical object when the physical object touches the display screen, where the motion characteristic at least includes a motion speed and a motion direction of the physical object;
an object display unit comprising:
the scene simulation subunit is configured to determine, in combination with the scene features, that the virtual object simulates a motion trajectory in the virtual scene from the touch position according to the motion speed and the motion direction;
and the object display subunit is used for displaying the virtual object moving according to the simulated motion track by taking the touch position as an initial display position of the virtual object in the display screen.
In another possible implementation manner, the feature obtaining unit is specifically configured to:
acquire airflow velocity data in front of the display screen sensed by an airflow sensor array;
and determine the motion characteristics of the physical object based on the airflow velocity data, wherein a gas injection device is mounted at the top end of the display screen and used to inject an air flow downward from the top of the screen so as to generate an air flow layer at the front end of the display screen.
In yet another aspect, the present application further provides an electronic device. Fig. 8 shows a schematic structural diagram of the electronic device, which may be any type of electronic device and includes at least a memory 801 and a processor 802.
The processor 802 is configured to perform the object display method of any one of the above embodiments.
The memory 801 is configured to store the programs needed by the processor to perform its operations.
It is to be understood that the electronic device may further include a display unit 803 and an input unit 804.
Of course, the electronic device may have more or fewer components than those shown in fig. 8; this is not limited herein.
In another aspect, the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the object display method of any one of the above embodiments.
The present application also provides a computer program comprising computer instructions stored in a computer-readable storage medium. When the computer program runs on an electronic device, it performs the object display method of any one of the above embodiments.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. The features described in the embodiments of this specification may also be replaced or combined with one another, so that those skilled in the art can implement or use the present application. Since the device embodiments are substantially similar to the method embodiments, their description is brief; for relevant details, refer to the corresponding parts of the method embodiments.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. The terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An object display method comprising:
detecting that a physical object touches a display screen, and acquiring a touch position at which the physical object touches the display screen;
acquiring motion features of the physical object;
constructing a virtual object for representing the physical object;
and displaying, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object.
2. The method of claim 1, further comprising:
acquiring morphological features of the physical object;
wherein the constructing a virtual object for representing the physical object comprises:
constructing a virtual object matching the morphological features of the physical object.
3. The method of claim 2, wherein the acquiring morphological features of the physical object comprises:
acquiring a captured image of the physical object to obtain the morphological features of the physical object represented by the image;
or,
acquiring airflow velocity data in front of the display screen sensed by an airflow sensor array, and determining the morphological features of the physical object based on the airflow velocity data, wherein a gas injection device mounted at the top of the display screen injects an airflow downward from the top of the display screen so as to form an airflow layer in front of the display screen.
4. The method of claim 1, wherein the acquiring motion features of the physical object comprises:
acquiring the motion features of the physical object at the moment the physical object touches the display screen, wherein the motion features at least include the motion speed, the motion direction, and the acceleration of the physical object;
and the displaying, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object, comprises:
displaying, on the display screen, an image of the virtual object moving according to the motion speed, the motion direction, and the acceleration, with the touch position as the initial display position of the virtual object.
5. The method of claim 1, wherein the acquiring motion features of the physical object comprises:
acquiring a plurality of motion features of the physical object within a set duration preceding the current moment, wherein the motion features comprise at least one of the motion speed, the motion direction, and the motion position of the physical object;
and the displaying, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object, comprises:
determining, by combining the plurality of motion features of the physical object, a simulated motion trajectory of the physical object after it touches the display screen;
and displaying, on the display screen, the virtual object moving according to the simulated motion trajectory, with the touch position as the initial display position of the virtual object.
6. The method of claim 1, further comprising, before the displaying the virtual object moving according to a motion trajectory conforming to the motion features:
determining scene features of a virtual scene displayed on the display screen, wherein the scene features at least represent the influence of the virtual scene on the motion of an object;
wherein the displaying, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object, comprises:
displaying, on the display screen, the virtual object moving according to a motion trajectory corresponding to both the motion features and the scene features, with the touch position as the initial display position of the virtual object.
7. The method of claim 6, wherein the acquiring motion features of the physical object comprises:
acquiring the motion features of the physical object at the moment the physical object touches the display screen, wherein the motion features at least include the motion speed and the motion direction of the physical object;
and the displaying, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object, comprises:
determining, in combination with the scene features, a simulated motion trajectory of the virtual object in the virtual scene, starting from the touch position, according to the motion speed and the motion direction;
and displaying, on the display screen, the virtual object moving according to the simulated motion trajectory, with the touch position as the initial display position of the virtual object.
8. The method of claim 1, wherein the acquiring motion features of the physical object comprises:
acquiring airflow velocity data in front of the display screen sensed by an airflow sensor array;
and determining the motion features of the physical object based on the airflow velocity data, wherein a gas injection device mounted at the top of the display screen injects an airflow downward from the top of the display screen so as to form an airflow layer in front of the display screen.
9. An object display apparatus, comprising:
a position determining unit configured to detect that a physical object touches a display screen and to acquire a touch position at which the physical object touches the display screen;
a feature acquisition unit configured to acquire motion features of the physical object;
an object construction unit configured to construct a virtual object for representing the physical object;
and an object display unit configured to display, on the display screen, the virtual object moving according to a motion trajectory conforming to the motion features, with the touch position as the initial display position of the virtual object.
10. The apparatus of claim 9, wherein the feature acquisition unit comprises:
a multi-feature acquisition unit configured to acquire a plurality of motion features of the physical object within a set duration preceding the current moment, wherein the motion features comprise at least one of the motion speed, the motion direction, and the motion position of the physical object;
and the object display unit comprises:
a trajectory determining unit configured to determine, by combining the plurality of motion features of the physical object, a simulated motion trajectory of the physical object after it touches the display screen;
and an object display subunit configured to display, on the display screen, the virtual object moving according to the simulated motion trajectory, with the touch position as the initial display position of the virtual object.
CN202111037992.1A 2021-09-06 2021-09-06 Object display method and device Pending CN113741695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111037992.1A CN113741695A (en) 2021-09-06 2021-09-06 Object display method and device

Publications (1)

Publication Number Publication Date
CN113741695A (en) 2021-12-03

Family

ID=78735956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111037992.1A Pending CN113741695A (en) 2021-09-06 2021-09-06 Object display method and device

Country Status (1)

Country Link
CN (1) CN113741695A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012091514A2 (en) * 2010-12-31 2012-07-05 (주) 골프존 Apparatus and method for a virtual golf simulation providing a putting mode
US20150199025A1 (en) * 2014-01-15 2015-07-16 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
CN107850936A (en) * 2015-01-28 2018-03-27 Ccp公司 For the method and system for the virtual display for providing physical environment
KR20190022974A (en) * 2017-08-24 2019-03-07 주식회사 에스지엠 Virtual sport simulation device showing improved reality
KR20190060748A (en) * 2019-05-27 2019-06-03 주식회사 에스지엠 Virtual golf simulation device using penetration type screen
US10409363B1 (en) * 2017-03-07 2019-09-10 vGolf, LLC Mixed-reality golf tracking and simulation

Similar Documents

Publication Publication Date Title
US8166421B2 (en) Three-dimensional user interface
KR102110811B1 (en) System and method for human computer interaction
US8959013B2 (en) Virtual keyboard for a non-tactile three dimensional user interface
KR101481880B1 (en) A system for portable tangible interaction
US9684372B2 (en) System and method for human computer interaction
US9927876B2 (en) Remote control with 3D pointing and gesture recognition capabilities
US20120202569A1 (en) Three-Dimensional User Interface for Game Applications
US20120204133A1 (en) Gesture-Based User Interface
US9075434B2 (en) Translating user motion into multiple object responses
EP3275514A1 (en) Virtuality-and-reality-combined interactive method and system for merging real environment
CN106200899A (en) The method and system that virtual reality is mutual are controlled according to user's headwork
CN103645844A (en) Page displaying method and device
CN106716331A (en) Simulating real-time responsiveness for touch displays
JP6810048B2 (en) How to simulate and control virtual balls on mobile devices
US20210042526A1 (en) Information processing apparatus, information processing method, and recording medium
CN106536004A (en) An augmented gaming platform
Spanogianopoulos et al. Human computer interaction using gestures for mobile devices and serious games: A review
CN113741695A (en) Object display method and device
KR101539666B1 (en) Golf simulation method through mobile terminal, recording medium recording program or application according to the same and mobile terminal for executing golf simulation
CN111722711B (en) Augmented reality scene output method, electronic device and computer readable storage medium
JP6732078B2 (en) System, method and non-transitory computer readable medium for integrating haptic overlays in augmented reality
CN107102725B (en) Control method and system for virtual reality movement based on somatosensory handle
KR101360322B1 (en) Apparatus and method for controlling electric boards using multiple hand shape detection and tracking
KR20140046197A (en) An apparatus and method for providing gesture recognition and computer-readable medium having thereon program
Bruno et al. Mixed prototyping environment with different video tracking techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination