
WO2008065458A2 - System and method for moving real objects through operations performed in a virtual environment - Google Patents


Info

Publication number
WO2008065458A2
Authority
WO
WIPO (PCT)
Prior art keywords
real
virtual
objects
environment
physical
Prior art date
Application number
PCT/HU2007/000113
Other languages
French (fr)
Other versions
WO2008065458A3 (en)
Inventor
Ádám DÁLNOKI
Ádám HELYBÉLY
Tamás JUHÁSZ
Viktor KÁLMÁN
Tamás URBANCSEK
Ferenc Vajda
Miklós VOGEL
László VAJTA
Original Assignee
Dalnoki Adam
Helybely Adam
Juhasz Tamas
Kalman Viktor
Urbancsek Tamas
Ferenc Vajda
Vogel Miklos
Vajta Laszlo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalnoki Adam, Helybely Adam, Juhasz Tamas, Kalman Viktor, Urbancsek Tamas, Ferenc Vajda, Vogel Miklos and Vajta Laszlo
Publication of WO2008065458A2
Publication of WO2008065458A3

Classifications

    • A63F 13/428: Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F 13/12
    • A63F 13/30: Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers
    • A63F 13/213: Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/245: Constructional details of input arrangements for video game devices, specially adapted to a particular type of game, e.g. steering wheels
    • A63F 13/335: Interconnection arrangements between game servers and game devices using wide area network [WAN] connections, using the Internet
    • A63F 2300/64: Methods for processing data by generating or executing the game program, for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A63F 2300/69: Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • B25J 5/00: Manipulators mounted on wheels or on carriages
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The system for moving real objects through operations performed in a virtual environment comprises at least one physical environment (110) including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments (110) belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment (110); a virtual environment including a plurality of virtual objects corresponding to said real movable objects in the at least one physical environment (110), said virtual environment being manipulated by the users; a plurality of client devices (120) for controlling the motion of the real objects in the at least one physical environment (110); a server (130) for performing operations in the virtual environment based on user actions received from the users through the client devices (120) and operational rules stored in the server (130); and a communications network (140) for establishing interconnections between the server (130) and the plurality of client devices (120) so as to allow a remote user to remotely move a real movable object in any one of the physical environments (110).

Description

System and method for moving real objects through operations performed in a virtual environment
The present invention generally relates to a virtual reality system used by multiple users, and more particularly relates to a system for moving real objects by multiple users through operations performed in a virtual environment and a method for operating such a system. The invention also relates to a computer program product implementing the method of the invention.
Conventional multiple user virtual reality systems use a server and a plurality of client computers to enable the users to perceive and interact with a computer generated virtual world. In such a virtual reality system, the virtual objects emulate the position, the orientation and the movements of the real objects.
Patent No. EP 1286249 discloses a virtual reality system and method which allows physical environments to have a virtual presence everywhere in real time. The concept of the virtual reality environment makes the presentation and presence of the sounds and sights of an actual and complex physical environment virtually available everywhere in real time through the use of appropriate networks and devices. This system has the drawback that the state of a physical environment is not modified according to the operations in the virtual reality environment; that is, the system provides only a unidirectional mapping of the physical environments into one or more virtual environments, and the physical environments are only virtually presented at the remote locations of the users.
One objective of the present invention is to provide a virtual reality system in which multiple users can move real movable objects in existing physical environments by initiating operations in a virtual environment.
Another objective of the present invention is to provide a virtual reality system in which multiple users can interact with one another through their associated real objects by moving the real objects in a plurality of substantially identical physical environments in real time and in a substantially synchronous manner.
A further objective of the present invention is to provide a virtual reality system capable of detecting the motions actually implemented in the physical environment and of correcting these motions in order to maintain the consistency between the virtual environment and the one or more physical environments at any time.
These and other objectives are achieved by providing a system for moving real objects through operations performed in a virtual environment, the system comprising:
- at least one physical environment including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment;
- a virtual environment including a plurality of virtual objects corresponding to said real movable objects in the at least one physical environment, said virtual environment being manipulated by the users;
- a plurality of client devices for controlling the motion of the real objects in the at least one physical environment;
- a server for performing operations in the virtual environment based on user actions received from the users through the client devices and operational rules stored in the server; and
- a communications network for establishing interconnections between the server and the plurality of client devices so as to allow a remote user to remotely move a real movable object in any one of the physical environments.
The above objectives are further achieved by providing a method for moving real objects through operations to be performed in a virtual environment, the method comprising the steps of:
- providing at least one physical environment including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment;
- generating a virtual environment including a plurality of virtual objects corresponding to said real movable objects of the at least one physical environment;
- providing a first motion control information for the virtual environment to generate an operation on at least one virtual object therein;
- performing the operation on said at least one virtual object in the virtual environment based on said first motion control information and a predetermined set of operational rules;
- generating a second motion control information for at least one real movable object according to the operation performed on the corresponding at least one virtual object in the virtual environment;
- communicating the second motion control information to the respective real movable object(s) in at least one physical environment; and
- moving the real object(s) in at least one physical environment according to said second motion control information.
Finally, the above objectives are achieved by providing a computer program product in a computer-readable medium, comprising instructions that, when executed by a computer, enable the computer to perform the method according to the invention.
These and other advantages of the present invention will be described in more detail by means of preferred embodiments with reference to the accompanying drawings, wherein:
Figure 1 is a functional block diagram of the first embodiment of the system according to the invention.
Figure 2 is a functional block diagram of the second embodiment of the system according to the invention.
Figure 3 illustrates a schematic view of the system arrangement for a multiplayer telepresence robot game application using the system according to the invention.
Figure 4 shows a preferred communications architecture of the system according to the invention.
Figure 5 shows a flow diagram of the method according to the invention.
In the first embodiment of the system according to the invention, as illustrated in Figure 1, the overall system 100 comprises at least one physical environment 110 including real movable objects, typically robots. Each physical environment 110 is associated with a client device 120. The client devices 120 are connected to a server 130 through a communications network 140. A client device 120 is preferably equipped with input means 122 and output means 124. Preferably, each of the physical environments 110 has an associated physical environment model 112, all of which, in the first embodiment of the system according to the invention, reside in and are managed by the server 130. The physical environment model 112 is used to dynamically model the real objects of the physical environment 110. The physical environment model 112 may include, for example, the dynamic model of robot joints or other driving means, the physical tolerances, the inertial phenomena, tools for resolving singularities of robot arms, etc. The physical environment model 112 is used to control those motions of the real movable objects that can be physically implemented.
The most important system-level design consideration is that the operations are first performed virtually, that is, every physical event implies virtual events in the virtual environment. Additional events that are independent of the users' intent may also be generated in the virtual domain. Accordingly, the central server 130 maintains a virtual environment model 136, which is affected by the users through their input actions. Each physical environment 110 is adapted to copy the state of the virtual environment model 136, thus the real objects (e.g. robots) of the physical environments 110 operate as similarly to their virtual models as possible. In the preferred embodiments of the system, the physical environments 110 operate substantially identically to the virtual environment, and thus consistency between the different physical environments 110 can be maintained.
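This virtual-first cycle can be sketched compactly. The following Python fragment is purely illustrative and not part of the disclosure; all class and method names are hypothetical. It shows the essential data flow: user actions mutate only the virtual environment model, and every physical environment then copies the resulting virtual state, so the sites stay mutually consistent by construction.

```python
# Illustrative sketch of the virtual-first update cycle (hypothetical names).

class VirtualEnvironmentModel:
    """Authoritative state: one pose per virtual object."""
    def __init__(self, object_ids):
        self.poses = {oid: (0.0, 0.0, 0.0) for oid in object_ids}  # x, y, heading

    def apply_action(self, object_id, dx, dy, dtheta):
        x, y, th = self.poses[object_id]
        self.poses[object_id] = (x + dx, y + dy, th + dtheta)

class PhysicalEnvironment:
    """Each site mirrors the virtual state; it never originates events."""
    def __init__(self, name):
        self.name = name
        self.mirrored_poses = {}

    def copy_state(self, virtual_model):
        # In the real system this would drive robot servos; here we just mirror.
        self.mirrored_poses = dict(virtual_model.poses)

def update_cycle(virtual_model, physical_envs, user_actions):
    # 1. All user actions are applied to the virtual environment first.
    for object_id, action in user_actions.items():
        virtual_model.apply_action(object_id, *action)
    # 2. Every physical environment synchronizes to the same virtual state,
    #    so the sites stay consistent with each other by construction.
    for env in physical_envs:
        env.copy_state(virtual_model)

if __name__ == "__main__":
    model = VirtualEnvironmentModel(["robot_A", "robot_B"])
    sites = [PhysicalEnvironment("site_1"), PhysicalEnvironment("site_2")]
    update_cycle(model, sites, {"robot_A": (0.1, 0.0, 0.05)})
    print(sites[0].mirrored_poses == sites[1].mirrored_poses)  # True
```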
This approach has several advantages. First, each physical environment 110 has the same chance, as none of them is given any preference. Second, it is impossible to "cheat" at any location of the physical environments 110: as the operational events are triggered in the virtual environment instead of being triggered in the physical environments 110 by physical sensors, the users cannot manually force real motions or other events in the physical environments 110. Finally, in a technical sense, it is much easier to synchronize one or more physical environments to a single virtual environment than to synchronize two or more physical environments to each other.
Due to this approach, the at least one physical environment that includes a plurality of real movable objects may be manipulated by multiple users. Each physical environment comprises at least one real movable object controlled by a local user and at least one real movable object controlled by a remote user through the communications network. Thus a local user of a physical environment can visually track the motion of his own real movable object and the motions of the real movable objects of remote users in the particular physical environment, which makes it possible to present a lively and interactive performance for any user at any location where a physical environment is installed.
In the most preferred embodiment of the system, each local user of a particular physical environment 110 possesses at least one corresponding remote object in one or more remote physical environments 110, and the several real movable objects at different locations belonging to the same user are operated in a synchronized way through the virtual environment. This also means that the operation of all of the physical environments 110 is synchronized to a single virtual environment, thus resulting in a substantially identical operation thereof. It is also contemplated, however, that one user may control multiple real objects, or that one real object may be controlled by multiple users. In this latter case, sharing of such real objects must be controlled by the system.
The server 130 has two basic functional components, namely a virtual environment simulator 132 that manages the events in the virtual environment model 136, and an event simulator 134 that generates the virtual events based on the user actions and other predetermined operational rules stored in the server 130. As is obvious to a person skilled in the art, the server 130 is operated by appropriate computer programs implementing the virtual environment simulator 132, the event simulator 134 and the other system-level functions thereof. In the preferred embodiment of the system, the server 130 operates as a service (or daemon) device and therefore has no common user interface, except a command line option for the system administrator. The virtual environment model 136, which represents the virtual environment, constitutes the core of the server 130. The virtual environment model 136 is a computer simulation model of the virtual environment in which the virtual objects are placed and moved according to the user actions. That is, the virtual environment model 136 is used to kinematically model the real objects of the physical environments 110. The virtual environment model 136 can, in fact, execute any kind of motion for the virtual objects, even motions that are impossible or make no sense in the real physical environment. The virtual environment model 136 also allows, inter alia, certain events in the virtual environment to be detected, e.g. the collision of virtual objects. In a practical implementation of the system according to the invention, constraints may be applied to the motions in the virtual environment; for example, certain moving parts of the virtual objects may be connected rigidly or by rotational joints in order to facilitate the implementation of the virtual motions in the physical environments 110.
In a robotic environment, for example, the use of a kinematic model for the real robots is essential, which means that the virtual environment simulator 132 is allowed to move only the joints of the virtual robots, and the real positions and orientations of the robot segments are evaluated automatically.
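As an illustration of such a kinematic model (a minimal sketch; the disclosure prescribes no implementation), the following fragment derives the segment endpoints of a planar two-joint arm purely from commanded joint angles, mirroring the rule that the simulator moves only the joints while segment poses are evaluated automatically:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Derive segment endpoint positions from joint angles alone.

    The simulator never sets segment positions directly; it commands the
    joints, and the poses of all segments follow from this computation.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Moving only the joints yields the new segment endpoints automatically.
print(forward_kinematics([math.pi / 4, -math.pi / 8], [0.10, 0.08]))
```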
A dynamic model and real physical movement simulations may also be added to the functionality of the server 130 in order to improve the precision of the virtual environment model 136, but the implementation of such components may involve considerable expense, and it is therefore not a primary objective of the present invention.
If interactions between the virtual objects, e.g. collision of virtual robots, are allowed in the virtual environment, detection of such interactions requires the knowledge of the dimensions, as well as the position and/or the orientation of all virtual objects. Therefore, a volumetric model is preferably used for each virtual object in the virtual environment model 136.
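A minimal sketch of such a volumetric test, assuming each virtual object is approximated by a bounding sphere (an illustrative choice; the text does not prescribe a particular volume representation):

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Volumetric collision test: two virtual objects interact when
    their bounding spheres overlap."""
    distance = math.dist(center_a, center_b)
    return distance < radius_a + radius_b

# Two virtual robots, each enclosed in a 0.25 m bounding sphere:
print(spheres_collide((0.0, 0.0, 0.0), 0.25, (0.3, 0.2, 0.0), 0.25))  # True
```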
The event simulator 134 is responsible for generating the virtual events and managing the virtual operations initiated by the users. The event simulator 134 processes the motion control information received from the client devices 120 and also uses a set of predetermined operating rules, e.g. gaming rules for fighting robots, driving rules for a motor race, etc., stored in the server 130. The rules may, however, include some constraints, like limited motion capabilities of the virtual objects and their parts, or inhibited movements for avoiding the overload of a certain joint of an object, which might weaken or even damage the object.
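One such constraint rule might be sketched as follows, assuming for illustration that the limits are expressed as per-joint angle ranges (the `JOINT_LIMITS` table and all names are hypothetical); the commanded motion is clamped before it reaches the virtual model:

```python
JOINT_LIMITS = {"shoulder": (-1.6, 1.6), "elbow": (-2.4, 0.0)}  # radians, assumed

def enforce_joint_limits(commanded_angles):
    """Inhibit movements that would overload a joint by clamping the
    command to the allowed range before it reaches the virtual model."""
    safe = {}
    for joint, angle in commanded_angles.items():
        low, high = JOINT_LIMITS[joint]
        safe[joint] = min(max(angle, low), high)
    return safe

print(enforce_joint_limits({"shoulder": 2.0, "elbow": -1.0}))
# {'shoulder': 1.6, 'elbow': -1.0}
```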
In a preferred embodiment of the system, the motion control information is inputted by the users at the client devices 120, for example through commercial input devices like a joy-stick, a game steering wheel or a console. In an alternative embodiment of the system, the motion control information may be inputted through a real object directly controlled by a particular local user, wherein a position sensing means is used to generate the appropriate motion control information for the server 130.
In a preferred embodiment of the system according to the invention, the virtual environment model 136 is automatically mapped into the physical environment model 112 of each respective physical environment 110. The physical environment model 112 carries additional information on the corresponding physical environment 110 with respect to the virtual environment model 136. Such additional information may include, for example, differences between the appearance of the physical and the virtual environment, the actually implemented operations as compared to the virtual operations, physical constraints in the physical environments 110, the above mentioned parameters of the dynamic modelling, etc. In order to achieve the most precise physical implementation of the virtual motions or events, the actual position and/or orientation and/or operational state of the physical objects needs to be detected and evaluated. To this end, the system 100 also comprises correction means for each physical environment 110 for correcting the motions of the real objects in the physical environments 110 based on the above mentioned additional information associated with the physical environment models 112. Such additional information is obtained as position and/or orientation and/or operational state information from appropriate position sensing means of the respective physical environment 110. Upon processing the position and/or orientation and/or other operational errors, the correction means sends the motion corrections to each client device 120 for the subsequent motion control of the real objects.
In a preferred embodiment of the system according to the invention, the correction means uses parameterized motion primitives for the continuous correction of the motions of the real movable objects, instead of recalculating the trajectory of the motion of the real objects at some intermediate points of the motion.
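The idea can be illustrated with a hypothetical primitive (names and parameterization are assumptions, not part of the disclosure): correcting the motion only re-parameterizes the primitive that is already executing, rather than re-planning the path:

```python
class LinearMovePrimitive:
    """A motion primitive parameterized by its goal point; correcting the
    motion means adjusting the parameters, not recomputing the trajectory."""
    def __init__(self, start, goal, duration):
        self.start, self.goal, self.duration = start, goal, duration

    def position_at(self, t):
        alpha = min(t / self.duration, 1.0)
        return tuple(s + alpha * (g - s) for s, g in zip(self.start, self.goal))

    def apply_correction(self, error):
        # Shift the goal by the measured error so the remaining motion
        # absorbs the deviation continuously.
        self.goal = tuple(g + e for g, e in zip(self.goal, error))

move = LinearMovePrimitive(start=(0.0, 0.0), goal=(1.0, 0.0), duration=2.0)
move.apply_correction(error=(0.0, -0.05))   # camera reports a 5 cm drift
print(move.position_at(1.0))                 # midpoint of the corrected motion
```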
The client devices 120 are used as interfaces between the server 130 and the physical environments 110, and may also be used as input devices for providing the server 130 with user actions to be performed in the virtual environment. Beyond these functions, the client devices 120 contain no intelligence of their own. The client devices 120 are adapted, on the one hand, to receive position and operational status information from the position sensing means of the physical environments 110 and to forward such information to the server 130, and, on the other hand, to receive motion control information generated by the virtual environment simulator 132 of the server 130 and to control the motion of the real objects in the physical environments 110 based on the information received from the server 130.
The communication between the client devices 120 and the server 130 is performed through a communications network 140 that may be any kind of wired or wireless communication network, preferably capable of real-time communication. The communications network 140 may equally be the internet, a mobile telephone network or an intranet.
The data to be transmitted through the communications network 140 may relate to: connection establishment; joint and magnet control commands for a real object (server-to-client communication); game information like results, status, etc. (server-to-client communication); control signals inputted by the users via a joy-stick, a keyboard, etc. (client-to-server communication); and position and/or orientation and/or operational state errors (client-to-server communication).
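By way of illustration, these message categories could be encoded as a small tagged schema. The sketch below uses hypothetical names and fields, since the text defines no wire format:

```python
from dataclasses import dataclass
from enum import Enum, auto

class MessageType(Enum):
    CONNECT = auto()            # connection establishment
    JOINT_COMMAND = auto()      # server -> client: joint/magnet control
    GAME_INFO = auto()          # server -> client: results, status
    USER_INPUT = auto()         # client -> server: joy-stick, keyboard
    STATE_ERROR = auto()        # client -> server: pose/state error

@dataclass
class Message:
    msg_type: MessageType
    payload: dict

# A client device reporting a detected orientation error to the server:
report = Message(MessageType.STATE_ERROR,
                 {"object": "robot_A", "orientation_error_deg": 3.2})
print(report)
```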
The client device 120 is preferably equipped with input means 122 for inputting user actions to be performed in the virtual environment. The input means 122 is typically a joy-stick, a game steering wheel, a console or the like. The client device 120 may also be equipped with output means 124 for outputting information that helps the user to experience the virtual environment. In a preferred embodiment of the system according to the invention, the output means 124 is a graphic display that visually presents the virtual environment to the user. The use of a graphic display as the output means 124 is particularly beneficial in a situation where there is no physical environment 110 adjacent to a user. In this case it is preferred that the display is a three-dimensional (3D) display.
In a preferred embodiment of the system according to the invention, the position sensing means of a physical environment 110 is implemented by means of a camera system. The image processing algorithms used for evaluating the image information received from the camera system as optical feedback reside in the client devices 120. In a robot game application, for example, each foot of the real robots carries an active marker line rectangle, which is observed by the camera system. Some parts of these rectangles are always hidden by other parts of the robots, but some parts can always be seen in a camera view, and this is enough for the image processing algorithms to determine the position and/or orientation of the robots.
The optical position sensing means has to be calibrated, after fixing the camera, using the active landmarks at the corners of the physical environment 110. The active landmarks can be turned on and off all together or one at a time. The calibration detects the geometrical distortion of the physical environment 110 and stores the distortion parameters for further correction. The appropriate apertures have to be set and the necessary threshold levels have to be found as well while performing the calibration.
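One plausible reading of this calibration step, sketched below under the assumptions that the floor is planar and that the four corner landmarks suffice to fit a projective image-to-floor mapping (the text itself does not specify the algorithm, and all numeric values here are made up):

```python
import numpy as np

def calibrate_homography(image_pts, floor_pts):
    """Estimate the image->floor mapping from the four active landmarks
    at the corners of the physical environment (direct linear transform)."""
    A, b = [], []
    for (x, y), (u, v) in zip(image_pts, floor_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def image_to_floor(H, point):
    vec = H @ np.array([point[0], point[1], 1.0])
    return vec[0] / vec[2], vec[1] / vec[2]

# Corner landmarks as seen in the camera image (pixels) and their known
# physical positions on the floor (metres):
H = calibrate_homography([(102, 88), (540, 95), (552, 410), (95, 402)],
                         [(0, 0), (2, 0), (2, 1.5), (0, 1.5)])
print(image_to_floor(H, (320, 250)))  # roughly the centre of the floor
```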
In the case of the above mentioned optical feedback, special algorithms have to be applied by which a client device 120 is capable of locating the active line rectangle that is turned on at a given time. If the rectangles can be turned on and off separately, this locating action is easier to perform. Controlling the motion of the real objects, e.g. robots, may easily be performed by driving servo motors in the real objects. Real-time control without accumulating delay may, however, cause difficulties. The actual state of the servos to be reached when a virtual motion or event is implemented in the physical environment 110 by a real object is determined by the server 130.
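A minimal sketch of one such localization algorithm, assuming the marker is blinked on and off and located by frame differencing (an illustrative technique; the disclosure only says that special algorithms are applied):

```python
def locate_active_marker(frame_on, frame_off, threshold=40):
    """Find the lit marker by differencing a frame with the marker on
    against a frame with it off, then taking the centroid of the
    changed pixels (frames are 2-D lists of grey values)."""
    xs, ys, count = 0, 0, 0
    for row_idx, (row_on, row_off) in enumerate(zip(frame_on, frame_off)):
        for col_idx, (p_on, p_off) in enumerate(zip(row_on, row_off)):
            if p_on - p_off > threshold:
                xs += col_idx; ys += row_idx; count += 1
    if count == 0:
        return None
    return xs / count, ys / count

# Tiny synthetic example: a single bright marker at column 2, row 1.
off = [[10, 10, 10, 10], [10, 10, 10, 10]]
on  = [[10, 10, 10, 10], [10, 10, 200, 10]]
print(locate_active_marker(on, off))  # (2.0, 1.0)
```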
Figure 2 shows the second preferred embodiment of the system according to the invention. This embodiment differs from the first embodiment shown in Figure 1 in that the physical environment models 112 are maintained in the client devices 120, which allows higher performance for the server 130 due to the thus released computing and storage capacity thereof.
In Figure 3, a schematic view of the system arrangement for a multiplayer telepresence humanoid robot game using the system according to the invention is illustrated. In this particular application of the system, two players at different geographical locations are involved in the robot game. The two players interactively play in a virtual environment via the internet in such a way that both of them operate their own humanoid robot. For example, the robots may be KHR-1 humanoid robots produced by Kondo Inc. The robots may stand on an electromagnetic floor that is adapted to control the displacement of the robot legs by either fixing or releasing them magnetically. The stable stance and movements of the robots are controlled by a sophisticated software tool. The actual position and orientation status of the robots is detected by means of a camera system and appropriate optical markers.
The virtual movements of the virtual robots performed in the virtual environment are implemented in the real physical environments at both locations in real time and in a substantially synchronous manner. The players can move their robots by means of a commercial joy-stick. The game rules, e.g. the fighting rules of the robots, are implemented in the server, and the client devices are only used to control the motion of the real robots based on the instructions of the server. Although Figure 3 shows only two players, it is obvious that more than two, i.e. an arbitrary number of, players may participate in such a robot game.
Figure 4 shows a preferred communications architecture of the system according to the invention, which is particularly suitable for the system arrangement of the multiplayer telepresence robot game illustrated in Figure 3. In this communications architecture, a plurality of client devices 310 (only two of them are illustrated), a virtual reality server 320 and a meta server 330 are interconnected through communication links 340 and 342.
The virtual reality server 320 has the same functionality as described above in connection with the first and second embodiments of the system, such as controlling the ongoing robot game, receiving input information from the players, performing player actions in the virtual environment, sending status and scoring information to the players, reporting the final scores to the meta server 330 at the end of the game for high-score management and for updating player statistics, etc. The meta server 330 is responsible for keeping track of signed-in players, helping the players to form pairs, managing multiple virtual reality servers 320, storing player capability information, maintaining high-score/statistics databases and other metadata, and the like. Preferably, the meta server 330 is a centrally managed computer, whereas the virtual reality server 320 is implemented on one of the client devices 310.
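Purely as an illustration of the meta server's bookkeeping role, the sketch below pairs signed-in players and assigns each pair to one of the managed virtual reality servers 320 in round-robin fashion; the data structures and method names are assumptions, not part of the disclosure.

```python
# Illustrative sketch of meta server 330 bookkeeping: player pairing,
# round-robin assignment of VR servers, and high-score management.
from collections import deque

class MetaServer:
    def __init__(self, vr_servers):
        self.waiting = deque()            # players looking for an opponent
        self.vr_servers = deque(vr_servers)
        self.high_scores = {}             # player -> best score so far

    def sign_in(self, player):
        if self.waiting:
            opponent = self.waiting.popleft()
            vr = self.vr_servers[0]
            self.vr_servers.rotate(-1)    # round-robin server assignment
            return ("match", opponent, vr)
        self.waiting.append(player)
        return ("wait",)

    def report_final_score(self, player, score):
        self.high_scores[player] = max(score,
                                       self.high_scores.get(player, score))
```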
In order to achieve optimal performance of the system according to the invention, the communication links 340 between the virtual reality servers 320 and the client devices 310 are real-time UDP/IP connections, whereas the communication links 342 between the meta server 330 and the other components are reliable TCP/IP connections. The system thus has a communications architecture matched to the different timing and data transfer rate requirements imposed on the underlying communication network.
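A minimal sketch of this dual-transport arrangement is given below: latency-sensitive state updates travel over the UDP/IP links 340, while score reporting to the meta server 330 uses a reliable TCP/IP link 342. The host names, port numbers and JSON wire format are assumptions.

```python
# Illustrative sketch of the dual-transport design: UDP for real-time
# motion state, TCP for reliable meta-server traffic.
import json
import socket

# real-time link to the virtual reality server (link 340)
rt_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_state(state, server=("vr-server.example", 9000)):
    """Fire-and-forget state update; a lost datagram is simply superseded
    by the next one, which is acceptable for real-time control."""
    rt_sock.sendto(json.dumps(state).encode("utf-8"), server)

# reliable link to the meta server (link 342)
def report_score(score, meta=("meta-server.example", 9001)):
    """Scores must not be lost, so a connection-oriented link is used."""
    with socket.create_connection(meta, timeout=5.0) as s:
        s.sendall(json.dumps(score).encode("utf-8"))
```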
Figure 5 shows a flow diagram of the method according to the invention. The basic steps of the method will now be described in detail.
The method starts in step S500 with providing at least one physical environment including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment. Also in step S500, a virtual environment is generated in a server, the virtual environment including a plurality of virtual objects corresponding to said real movable objects of the at least one physical environment. In step S502, a first motion control information is provided for the virtual environment to generate an operation on at least one virtual object therein. The first motion control information may be inputted by a user at a client device through, for example, a joy-stick movement or a fire-button press and release. Alternatively, the first motion control information may be provided by a real movable object controlled directly by a user; in this latter case, position sensing means is used to generate the first motion control information for the server. The first motion control information initiates an operation on a virtual object in the virtual environment and, also in step S502, is forwarded to the server in an appropriate format. In step S504, an operation on at least one virtual object in the virtual environment is performed in the server based on the first motion control information and a predetermined set of operational rules stored in the server. The operation on the at least one virtual object may include moving the virtual object of the acting user in the intended way or in an undesired way, moving the real objects of other users through interactions between the real objects of different users, etc. At this stage, the server advances the simulation time of the virtual environment simulator in order to simulate the behaviour of the whole virtual environment and to determine whether any causal event (e.g. a collision of objects with each other or an undesirable/inexecutable motion) occurs in response to the first motion control information. If such an event is detected, the server determines whether the initiated motion may be executed unchanged, may be executed with certain changes, or is not allowed to be performed at all due to the risk of causing damage to any movable or fixed object of the physical environment. If the user action is allowed to be executed, the virtual environment model is updated accordingly: the server switches to a real-time simulation mode, starts to execute the movement in the virtual environment model and commands the physical environment model to synchronously start executing the corresponding movement.
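The decision taken in step S504 may be sketched as follows, assuming a hypothetical simulator API with fork/apply/advance operations and a simple attenuation policy; none of these names are taken from the disclosure.

```python
# Illustrative sketch of the step-S504 decision: advance the simulation
# speculatively, and allow, attenuate or reject the initiated motion
# depending on whether a causal event (e.g. a collision) is predicted.
ALLOW, MODIFY, REJECT = range(3)

def evaluate_action(simulator, action):
    predicted = simulator.fork()        # working copy of the virtual world
    predicted.apply(action)
    predicted.advance()                 # advance the simulation time
    if not predicted.causal_events():   # e.g. collisions, infeasible motion
        return ALLOW, action            # execute unchanged
    safe = action.scaled(0.5)           # try an attenuated motion instead
    retry = simulator.fork()
    retry.apply(safe)
    retry.advance()
    if not retry.causal_events():
        return MODIFY, safe             # execute with certain changes
    return REJECT, None                 # risk of damage: do not execute
```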
In step S506, a second motion control information is generated for at least one real movable object according to the operation performed on the corresponding at least one virtual object in the virtual environment and then, in step S508, the second motion control information is communicated via a communications network from the server to the corresponding client devices for the respective real movable object(s) of at least one physical environment. The client devices are adapted to control the motion of the real objects in the associated physical environment according to the second motion control information received from the server.
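As an illustration of steps S506 and S508, the sketch below samples the trajectory performed by a virtual object into timestamped servo setpoints constituting the second motion control information; the sampling period and the data layout are assumptions. Each resulting packet could then be executed on a client device by a deadline-based routine such as the one sketched earlier.

```python
# Illustrative sketch of steps S506/S508: convert the virtual object's
# trajectory into timestamped setpoint packets for the client devices.
def make_motion_packets(virtual_trajectory, start_time, dt=0.05):
    """virtual_trajectory: list of {servo_id: angle} poses sampled at
    intervals of dt seconds; start_time is an absolute clock value shared
    with the clients so that all environments start synchronously."""
    return [(start_time + i * dt, pose)
            for i, pose in enumerate(virtual_trajectory)]
```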
In step S510, the real objects are then moved in at least one physical environment according to the second motion control information.
If a plurality of substantially identical physical environments are maintained at different locations, the virtual environment represents all of the physical environments, and the corresponding real objects in the different physical environments are preferably moved in a substantially synchronous manner. If the communication platform allows real-time interconnections between the server and the client devices, the real objects are moved in real time with respect to the user actions.
In a preferred embodiment of the method, the virtual environment is displayed for at least one user on a graphic display. Thus, users with no real physical environment around them are also capable of sensing the performance of the operations in the virtual environment and therefore of remotely interacting with the system.
It is appreciated that the method according to the invention further comprises detecting the actual position and/or orientation and/or operational state of the real objects in the physical environments. To this end, the motion of the real movable objects of each of the physical environments is subject to a correction based on the actual position and/or orientation and/or operational state of the real objects as detected by appropriate sensors, e.g. electromagnetic sensors or optical sensors such as a camera system.
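A minimal sketch of such a correction, assuming a turn-then-translate scheme built from parameterized motion primitives (cf. claim 20) and hypothetical tolerance values, is given below.

```python
# Illustrative sketch of the correction step: compare the pose reported
# by the position sensing means with the pose expected from the virtual
# environment model, and compensate the deviation with parameterized
# motion primitives. Gains, tolerances and the primitive interface are
# assumptions.
import math

def correct_pose(expected, observed, issue_primitive,
                 pos_tol=5.0, ang_tol=2.0):
    """expected/observed: (x, y, heading_deg); tolerances in mm and deg."""
    dx = expected[0] - observed[0]
    dy = expected[1] - observed[1]
    dist = math.hypot(dx, dy)
    heading = observed[2]
    if dist >= pos_tol:
        # face the expected position, then translate toward it
        bearing = math.degrees(math.atan2(dy, dx))
        issue_primitive("turn", angle_deg=bearing - heading)
        issue_primitive("walk", distance_mm=dist)
        heading = bearing
    # finally restore the expected heading, normalized to (-180, 180]
    final_turn = (expected[2] - heading + 180.0) % 360.0 - 180.0
    if abs(final_turn) >= ang_tol:
        issue_primitive("turn", angle_deg=final_turn)
```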
It is obvious to a person skilled in the art how the above-described method can be implemented by a specific computer program. Thus, the present invention also relates to a computer program product stored in a computer-readable medium, wherein the computer program product comprises instructions that, when executed by a computer, enable the computer to perform any embodiment of the method according to the invention.
It will also be understood by those skilled in the art that the present invention is not limited to the embodiments described above with reference to the drawings, and that many additions and modifications are possible without departing from the scope of the present invention as defined in the appended claims.

Claims
1. A system for moving real objects through operations performed in a virtual environment, the system (100) comprising:
- at least one physical environment (110) including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments (110) belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment (110);
- a virtual environment including a plurality of virtual objects corresponding to said real movable objects in the at least one physical environment (110), said virtual environment being manipulated by the users;
- a plurality of client devices (120) for controlling the motion of the real objects in the at least one physical environment (110);
- a server (130) for performing operations in the virtual environment based on user actions received from the users through the client devices (120) and operational rules stored in the server (130); and
- a communications network (140) for establishing interconnections between the server (130) and the plurality of client devices (120) so as to allow a remote user to remotely move a real movable object in any one of the physical environments (110).
2. The system according to claim 1, wherein
- the system (100) comprises a plurality of substantially identical physical environments (110) at different locations,
- said virtual environment provides a virtual representation of said plurality of physical environments (110), and
- the corresponding real objects in the different physical environments (110) are moved in a substantially synchronous manner.
3. The system according to claim 1 or 2, wherein the client device (120) comprises input means (122) for inputting motion control information by a user to provide a user action.
4. The system according to claim 3, wherein the input means (122) is a joy-stick or a steering wheel or the like.
5. The system according to any one of claims 1 to 4, wherein the client device (120) comprises a graphic display for displaying the virtual environment for a user.
6. The system according to any one of claims 1 to 5, wherein
- a physical environment (110) comprises position sensing means for detecting the actual position and/or orientation and/or operational state of the real objects therein, and
- for each physical environment (110), the system (100) further comprises correction means for correcting the implemented motion of the real movable objects in the respective physical environment (110) based on the actual position and/or orientation and/or operational state detected by the position sensing means of the respective physical environment (110).
7. The system according to claim 6, wherein said position sensing means comprises electromagnetic sensors for detecting the actual position and/or orientation of the real objects in the physical environment (110).
8. The system according to claim 6, wherein said position sensing means comprises optical sensors for detecting the actual position and/or orientation and/or operational state of the real objects in the physical environment (110).
9. The system according to claim 8, wherein the optical sensors are implemented as a camera system.
10. The system according to any one of claims 1 to 9, wherein the correction means of the at least one physical environment (110) resides in the server (130).
11. The system according to any one of claims 1 to 9, wherein the correction means of the at least one physical environment (110) resides in the associated client device(s) (120).
12. The system according to any one of claims 1 to 11, wherein the communications network (140) is the internet.
13. The system according to any one of claims 1 to 12, wherein the interconnections between the server (130) and the client devices (120) provide real-time communication.
14. A method for moving real objects through operations to be performed in a virtual environment, the method comprising the steps of:
- providing at least one physical environment including a plurality of real movable objects, wherein said plurality of real movable objects of any one of the physical environments belongs to multiple users, at least one of said multiple users being local and at least one of said multiple users being remote with respect to a physical environment;
- generating a virtual environment including a plurality of virtual objects corresponding to said real movable objects of the at least one physical environment;
- providing a first motion control information for the virtual environment to generate an operation on at least one virtual object therein;
- performing the operation on said at least one virtual object in the virtual environment based on said first motion control information and a predetermined set of operational rules;
- generating a second motion control information for at least one real movable object according to the operation performed on the corresponding at least one virtual object in the virtual environment;
- communicating the second motion control information to the respective real movable object(s) of at least one physical environment; and
- moving the real object(s) in at least one physical environment according to said second motion control information.
15. The method of claim 14, wherein:
- a plurality of substantially identical physical environments are provided at different locations, each being represented by the virtual environment, and
- the corresponding real objects in the different physical environments are moved in a substantially synchronous manner.
16. The method according to claim 14 or 15, wherein said first motion control information is provided by a user action inputted by a user.
17. The method according to claim 14 or 15, wherein said first motion control information is provided by a real movable object controlled directly by a user.
18. The method according to any one of claims 14 to 17, further comprising a step of displaying the virtual environment for at least one user.
19. The method according to any one of claims 14 to 18, further comprising the steps of:
- detecting the actual position and/or orientation and/or operational state of the real objects, and
- for each physical environment, correcting the implemented motion of the real movable objects in the respective physical environment based on the actual position and/or orientation and/or operational state of the real objects in the respective physical environment.
20. The method of claim 19, wherein said correction is carried out by the use of parameterized motion primitives.
21. The method according to any one of claims 14 to 20, wherein the motion of the real objects is controlled in real time.
22. A computer program product in a computer-readable medium, comprising instructions that, when executed by a computer, enable the computer to perform the method of any one of claims 14 to 21.