
US10410420B2 - Information processing apparatus, information processing method, and storage medium - Google Patents

Information processing apparatus, information processing method, and storage medium

Info

Publication number
US10410420B2
Authority
US
United States
Prior art keywords
interference
virtual objects
image
orientation
information processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/195,118
Other versions
US20170004653A1 (en)
Inventor
Kazuki Takemoto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Assigned to CANON KABUSHIKI KAISHA. Assignment of assignors interest (see document for details). Assignors: TAKEMOTO, KAZUKI
Publication of US20170004653A1
Application granted
Publication of US10410420B2
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/21Collision detection, intersection

Definitions

  • the present invention relates to a mixed reality system.
  • a representative evaluation item is whether work can be performed without causing a tool to interfere with components other than the target component, using only virtual components, even when no real components are present.
  • the MR system for performing such an evaluation is expected to have a function of presenting whether a virtual object (e.g., a tool) moved by an experiencing person has interfered with another virtual object (a component) and, in the event of interference, where and what type of interference trace remains.
  • Japanese Patent Application Laid-Open No. 2006-302035 discusses a method for recording a group of positions where interference has occurred into a memory, and displaying a trail by linking such groups of positions on the memory with a line segment in a time series manner.
  • an information processing apparatus includes an image acquisition unit configured to acquire a captured image of a real space, a storage unit configured to store model data of a plurality of virtual objects, a position and orientation acquisition unit configured to acquire a position and orientation of each of the plurality of virtual objects, an interference determination unit configured to determine whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and the model data of the plurality of virtual objects, an interference information recording unit configured to record an outline of an interference part where the virtual objects interfere with each other as interference information, based on a result of the interference determination unit, a first generation unit configured to generate a composite image including the captured image, an image of the plurality of virtual objects, and an image representing the outline of the interference part, and an output unit configured to output the generated composite image.
  • interference information indicating a shape of an interfering part can be presented with respect to interference between a virtual object and a target object.
  • FIG. 1 is a block diagram illustrating a configuration of an information processing apparatus in a first exemplary embodiment.
  • FIGS. 2A, 2B, and 2C are schematic diagrams illustrating virtual objects, a current interference part, and past interference parts, respectively, in the first exemplary embodiment.
  • FIG. 3 is a schematic diagram illustrating a method for displaying an interference part in a first modification.
  • FIG. 4 is a flowchart illustrating details of a process to be performed by the information processing apparatus in the first exemplary embodiment.
  • FIG. 5 is a flowchart illustrating details of a process to be performed by an interference information holding unit in the first exemplary embodiment.
  • FIG. 6 is a process flowchart of an information processing apparatus in a second modification.
  • FIG. 7 is a diagram illustrating an example of a hardware configuration of the information processing apparatus in the first exemplary embodiment.
  • FIG. 8 is a table illustrating an example of a data set of interference information in the first exemplary embodiment.
  • FIG. 9 is a flowchart illustrating details of a process to be performed by an interference information holding unit in the first modification.
  • FIG. 10 is a schematic diagram illustrating a result of rendering an image of a virtual space in the second modification.
  • FIG. 7 is a block diagram illustrating hardware for implementing a first exemplary embodiment.
  • FIG. 7 is a hardware block diagram of an information processing apparatus 100 (see FIG. 1 ) in the present exemplary embodiment.
  • a central processing unit (CPU) 710 comprehensively controls each device connected via a bus 700 .
  • the CPU 710 reads out a processing step or a program stored in a read only memory (ROM) 720 , and executes the read-out processing step or program.
  • software including an operating system (OS), processing programs, and device drivers according to the present exemplary embodiment is stored in the ROM 720, and is temporarily loaded into a random access memory (RAM) 730 to be executed by the CPU 710 as appropriate.
  • An input interface (I/F) 740 receives a signal from an external apparatus (such as an image capturing apparatus and an operation apparatus), and inputs the received signal as an input signal in a form processable by the information processing apparatus 100 .
  • An output I/F 750 outputs a signal as an output signal in a form processable by an external apparatus (e.g., a display apparatus such as a head mounted display (HMD)).
  • FIG. 1 is a block diagram illustrating a functional configuration of the information processing apparatus 100 in the present exemplary embodiment.
  • the information processing apparatus 100 in the present exemplary embodiment is connected to an HMD 301 .
  • the information processing apparatus 100 combines an image of a real space acquired from an image capturing unit 107 in the HMD 301 with an image of a virtual space, and outputs a mixed reality image obtained thereby to a display unit 110 in the HMD 301 .
  • the image capturing unit 107 captures an image of a real space, and provides the captured image to the information processing apparatus 100 .
  • the composition unit 109 combines the captured image with an image of a virtual object, and the display unit 110 displays a resultant composite image.
  • the image capturing unit 107 is built in the HMD 301 .
  • the display unit 110 presents the composite image rendered by the composition unit 109 to an experiencing person.
  • a display built in the HMD 301 is used.
  • a vibration presentation unit 113 receives an output indicating occurrence of interference from an interference information notification unit 112 .
  • the vibration presentation unit 113 then presents vibration to notify the experiencing person (user), through the tactile sense, that the interference has occurred.
  • a device that rotates a motor with an eccentric weight attached to its leading end is used as the vibration presentation unit 113.
  • This device is attached to a body part of the experiencing person.
  • the present invention is not limited to this example, and any method may be used if the method stimulates the tactile sense. For example, a weak electric current may be applied to an electrode attached to a fingertip.
  • a sound presentation unit 115 receives, from the interference information notification unit 112, an output indicating occurrence of interference. The sound presentation unit 115 then presents, to the auditory sense of the experiencing person, a sound notifying that the interference has occurred.
  • a speaker provided independently of the information processing apparatus 100 is used.
  • a headphone may be built in the HMD 301 , and the sound may be presented using the headphone.
  • a stationary speaker may be used.
  • a speaker may be built in the information processing apparatus 100 . Any type of sound may be used if the sound can stimulate the auditory sense of the experiencing person. For example, a beep of a predetermined wavelength may be used.
  • the information processing apparatus 100 includes a virtual object database 101 , an interference determination unit 103 , a virtual object rendering unit 104 , an interference part calculation unit 105 , a position and orientation acquisition unit 108 , the composition unit 109 , an interference information holding unit 111 , and the interference information notification unit 112 .
  • the CPU 710 implements each of these functional units by reading a program stored in the ROM 720 into the RAM 730, and executing a process according to each of the flowcharts described below based on the program. Further, if hardware is used as an alternative to software processing by the CPU 710, an arithmetic unit and a circuit corresponding to the process of each functional unit described here may be configured.
  • the virtual object database 101 holds a model of a virtual object.
  • the model of the virtual object includes, for example, a shape, a display color, and an initial position and orientation.
  • two virtual objects 201 and 202 are each taken as an example.
  • the virtual objects 201 and 202 have a rectangular parallelepiped shape and a column shape, respectively, as illustrated in FIG. 2A .
  • the position and orientation acquisition unit 108 acquires an image from the image capturing unit 107 (that is, the position and orientation acquisition unit 108 also functions as an image acquisition unit), and estimates a position and orientation of the image capturing unit 107 (i.e., the position and orientation of the HMD 301) in the real space, and a position and orientation of the virtual object, based on the acquired image (image acquisition).
  • the position and orientation of the image capturing unit 107 is assumed to be a viewpoint position and orientation of the experiencing person.
  • the position and orientation of the image capturing unit 107 is calculated based on an image in which a marker placed in the real space is imaged (image capturing unit position and orientation measurement).
  • as for the position and orientation of the virtual object, for example, an existing real object corresponding to the virtual object is placed in the real space beforehand.
  • a marker is affixed to the existing real object, and a position and orientation of the real object is estimated based on the image in which the marker is imaged.
  • the estimated position and orientation of the existing real object is used as the position and orientation of the virtual object corresponding to the real object.
  • the experiencing person can change the position and orientation of the virtual object by moving the existing real object.
  • a method for acquiring the viewpoint position and orientation of the experiencing person as well as the position and orientation of the virtual object is not limited to this example.
  • a three-dimensional sensor may be affixed to or built in the existing real object and used.
  • the position and orientation of the virtual object relative to the image capturing unit 107 can be measured using a commercial motion capture system.
  • the virtual object may be displayed at a predetermined position and moved by using a device such as a controller, without providing the existing real object corresponding to the virtual object.
  • the interference determination unit 103 determines presence or absence of interference between a plurality of virtual objects based on the position and orientation as well as the shape of each of the virtual objects. Determination of interference between two virtual objects is implemented in the following procedure. One triangular polygon is extracted from each of the virtual objects, and whether the two triangular polygons intersect each other is determined. Such determination is performed for every combination of triangular polygons. If even one intersection is found, the two virtual objects are determined to be interfering with each other. Further, this process is performed for every combination of virtual objects.
  • the way of implementing the interference determination unit 103 in the present exemplary embodiment is not limited to this example.
  • the interference part calculation unit 105 calculates an interference part from the virtual objects determined to be interfering with each other by the interference determination unit 103 .
  • a set of intersecting triangle polygons and a set of lines of intersection are calculated as the interference part (an outline of an interfering part).
  • the way of representing the interference part is not limited to this example.
  • the interference information holding unit 111 holds interference information calculated by the interference part calculation unit 105 (interference information recording).
  • FIG. 8 illustrates an example of the interference information.
  • An item “frame number” represents a number that is incremented by one each time an image captured by the image capturing unit 107 is updated.
  • An item “interference object” represents identification information of a model that identifies each interfering virtual object.
  • a name of each virtual object is registered.
  • An item “time” represents the time when occurrence of interference is actually determined.
  • An item “number of vertexes” is the number of end nodes of the lines of intersection of the polygons in the virtual objects 201 and 202 interfering with each other.
  • An item “vertex position” represents the three-dimensional position of every end node of the lines of intersection.
  • When the vertex positions are rendered as line segments, the circular line in FIG. 2B (an interference part 203 of the virtual objects 201 and 202) is obtained.
  • Items “color” and “line width” each represent a rendering attribute to be used when the interference information is rendered as an image of a virtual object.
  • the color and the line width that are the rendering attributes are not limited to being generated in the event of interference, and parameters may be set beforehand.
  • the virtual object rendering unit 104 generates an image in which the interference information as well as the virtual objects 201 and 202 are rendered. To be more specific, the virtual object rendering unit 104 generates an image based on the model data stored in the virtual object database 101 and the interference information stored in the interference information holding unit 111 .
  • the virtual object rendering unit 104 acquires, from the position and orientation acquisition unit 108 , a position and orientation of a virtual viewpoint for generating an image of a virtual space.
  • as the position and orientation of the virtual viewpoint, for example, position and orientation information of the image capturing unit 107 in a world coordinate system used as a reference may be input.
  • three-dimensional vertex positions of the virtual objects 201 and 202 as well as the interference information are updated based on the position and orientation acquired by the position and orientation acquisition unit 108 .
  • the updated three-dimensional vertexes of the virtual objects and the interference information are projected onto a projection plane that is set based on the virtual viewpoint.
  • FIG. 2C illustrates an image of the virtual space generated by the virtual object rendering unit 104 in the present exemplary embodiment.
  • the virtual objects 201 and 202 as well as an interference part 206 based on the interference information accumulated so far are rendered (interference information image generation).
  • the outline of the interference part 206 is rendered. Further, for example, a specific color may be superimposed on the inside of the outline, thereby making it easy to recognize the interference part 206 .
  • the composition unit 109 combines the image of the real space acquired from the image capturing unit 107 with the image of the virtual space rendered by the virtual object rendering unit 104. This composition process is performed by rendering the image of the virtual space on the image of the real space acquired from the image capturing unit 107, thereby obtaining a composite image. In the composite image, pixels of the image of the real space are displayed as pixels in an area except for an area forming the virtual objects 201 and 202. The composition unit 109 outputs the composite image to the display unit 110 in the HMD 301.
  • the interference information notification unit 112 notifies another notification unit of the current interference state based on the information of the interference information holding unit 111 .
  • whether the interference exists is explicitly presented to the experiencing person by visually displaying the image generated by the virtual object rendering unit 104 based on the virtual objects 201 and 202 as well as the interference information.
  • if a place where interference occurs is outside the visual field of the image capturing unit 107, the experiencing person cannot perceive the interference based on the visual display alone.
  • the experiencing person can perceive the interference occurring outside the visual field, if whether the interference exists is presented to the auditory sense and the tactile sense.
  • FIGS. 2A, 2B, and 2C are schematic diagrams each illustrating an example of the image of the virtual space in the present exemplary embodiment. Further, FIG. 4 is a flowchart illustrating details of a process to be performed in the present exemplary embodiment.
  • FIGS. 2A to 2C and FIG. 4 the process of the present exemplary embodiment will be described in detail, by taking, as an example, the state where the virtual objects 201 and 202 are arranged.
  • In step S801, the virtual object rendering unit 104 reads data of the virtual objects 201 and 202 from the virtual object database 101.
  • In step S803, the position and orientation acquisition unit 108 sends the position and orientation information to the interference determination unit 103 and the virtual object rendering unit 104.
  • In step S806, the interference determination unit 103 determines whether intersection exists for each of the polygons forming the virtual objects 201 and 202, by referring to the virtual objects 201 and 202 stored in the virtual object database 101.
  • FIG. 2B illustrates a state where the virtual objects 201 and 202 interfere with each other, after the marker corresponding to the virtual object 202 is moved by the experiencing person.
  • If no intersection exists (No in step S806), the processing proceeds to step S811.
  • If intersection exists (Yes in step S806), the processing proceeds to step S809.
  • In step S809, the interference part calculation unit 105 calculates interference part information for the virtual objects 201 and 202.
  • the interference part 203, which is a set of lines of intersection of polygons, is generated as the interference part information.
  • In step S810, the interference information holding unit 111 records the data set of the interference information for each combination of the virtual objects 201 and 202.
  • the interference information illustrated in FIG. 8 is recorded. This data set recording is executed each time the image captured by the image capturing unit 107 is updated.
  • the interference information illustrated in FIG. 8 is assumed to represent, in addition to the data set of the combination of the virtual objects 201 and 202, an accumulation of the data sets, each of which is added each time the image captured by the image capturing unit 107 is updated.
  • the process in step S810 will be described in detail below.
  • In step S811, the virtual object rendering unit 104 renders the virtual objects as well as the interference part, as the image of the virtual space.
  • the virtual objects 201 and 202 are rendered based on the information of the triangular polygons.
  • the interference part 206 is rendered as a line segment having two vertexes based on the interference information. The color and the line width of the line segment are rendered with reference to the interference information.
  • FIG. 2C illustrates a result of rendering the virtual objects 201 and 202 as well as the interference part 206 .
  • the interference part 206 includes the interference part information accumulated in the past. Therefore, how the virtual object 202 having the column shape is moved can be confirmed after a task.
  • In step S812, the composition unit 109 combines the image of the real space captured by the image capturing unit 107 with the image of the virtual space, thereby generating a composite image.
  • In step S813, first, the display unit 110 displays the composite image generated by the composition unit 109, and presents the composite image to the experiencing person wearing the HMD 301. Further, the vibration presentation unit 113 and the sound presentation unit 115 present vibration and sound, respectively, to the experiencing person, according to the information of the interference information notification unit 112.
  • In step S815, it is determined whether a termination command is transmitted from the experiencing person via an input unit such as a keyboard. If no termination command is transmitted (No in step S815), the processing returns to step S802. If the termination command is transmitted (Yes in step S815), the processing ends.
  • FIG. 5 is a flowchart illustrating the details of the process for recording the interference information in step S 810 .
  • rendering performance may decrease due to an accumulation of highlights of the interference parts, or it may be difficult to view the interference part due to unnecessary highlight display.
  • such an issue is addressed by adopting a method for deleting (erasing) the data sets of the interference part, starting from the oldest one, if the number of the data sets of the interference part is equal to or more than a threshold set beforehand by the experiencing person. This will be described in detail with reference to the flowchart of FIG. 5; an illustrative code sketch of this recording logic is given after this list.
  • In step S901, the interference information holding unit 111 counts the data sets by referring to the past interference information, and determines whether the total number of the data sets (the past data sets) is equal to or more than the threshold set beforehand by the experiencing person. If the total number of the data sets is equal to or more than the threshold (Yes in step S901), the processing proceeds to step S905. If the total number of the data sets is not equal to or more than the threshold (No in step S901), the processing proceeds to step S910.
  • In step S905, the interference information holding unit 111 retrieves the oldest data set from the past interference information, and deletes the retrieved oldest data set.
  • the present invention is not limited to the deletion of the oldest data set.
  • the data sets at and before a predetermined time may be all deleted, or the data set having a large number of vertexes may be deleted first.
  • In step S910, the interference information holding unit 111 adds the current interference information calculated by the interference part calculation unit 105, as a new data set.
  • the display load can be reduced and the interference part can be easily viewed by providing the threshold for preventing the number of data sets from being equal to or more than a certain number.
  • the first exemplary embodiment is described above using the example in which the threshold is provided to prevent the number of data sets from being equal to or more than the certain number, and if the number of data sets is equal to or more than the certain number, the data sets are deleted from the memory. However, the data set may be excluded from the displayed image without being deleted from the memory.
  • the first exemplary embodiment employs the method for representing the interference part 206 by linking the lines of intersection of the polygons, as the method for representing the interference part 206 of the virtual objects 201 and 202 .
  • the present invention is not limited to representing the interference part by linking the lines of intersection of the polygons interfering with each other, and another method may be adopted to represent the interference part.
  • FIG. 3 schematically illustrates work for performing a task of bringing a tool 305 that is a virtual object, close to a door 310 that is another virtual object. The task is performed by the experiencing person wearing the HMD 301 .
  • a configuration for implementing the present modification may be provided by changing the process of the interference information holding unit 111 in the first exemplary embodiment.
  • the interference information holding unit 111 records the lines of intersection of the polygons interfering with each other, as the vertex positions of the data set of the interference information.
  • polygon information indicating the shape of the tool (virtual object) 305, which the experiencing person wearing the HMD 301 moves by hand, is recorded as the vertex positions.
  • if the data set is recorded each time the image captured by the image capturing unit 107 is updated and all the interference parts 206 are displayed on the door 310, the number of the interference parts 206 becomes too large, which complicates the display.
  • a method for restricting the display based on a barycentric position in the data set is employed to prevent the display from becoming complicated. This method will be described using a flowchart of FIG. 9 .
  • the present invention is not limited to the triangular polygon with regard to the shape for representing the virtual object.
  • a method for parametrically representing the model data may be employed.
  • FIG. 9 illustrates details of a process, which corresponds to the process in step S 810 in FIG. 4 , in the present modification. Only a part different from the first exemplary embodiment will be described below.
  • In step S925, the interference information holding unit 111 calculates the average of the three-dimensional vertex positions in the interference information calculated by the interference part calculation unit 105, and takes this average as a barycentric position.
  • the barycentric position is associated with the data set and stored.
  • the interference information holding unit 111 performs the following process.
  • the interference information holding unit 111 refers to a barycentric position group associated with the past interference information, and compares the barycentric position group with the barycentric position calculated by the interference information holding unit 111 . Based on this comparison, the interference information holding unit 111 determines whether a distance between the barycentric position calculated by the interference information holding unit 111 and at least one of the barycentric positions in the past interference information is equal to or less than a predetermined threshold. If the distance is equal to or less than the threshold (Yes in step S 930 ), the processing ends without adding the data set as the interference information.
  • the data set is not added if a close barycentric position is present in the past interference information.
  • If the distance is not equal to or less than the threshold (No in step S930), the data set is added as the interference information.
  • the confirmation of the interference state after the task can be complemented by replacing the representation of the interference part with an easier-to-view representation.
  • the current interference part information is not added to the data sets if the position where the interference has occurred is close to a previously recorded interference position.
  • the current interference part information may be added to the data sets, without being included in the image to be displayed.
  • the present invention is not limited to determining whether to record the interference information based on the position of the interference part in the space.
  • whether to record the interference information is determined according to reliability of the position and orientation output from the position and orientation acquisition unit 108 . A process for this determination will be described.
  • a configuration for implementing the present modification may be provided as follows.
  • Information indicating the reliability of the position and orientation is input into the interference information holding unit 111 , from the position and orientation acquisition unit 108 of the above first modification.
  • the interference information holding unit 111 then performs a process for determining whether to record the interference information according to the reliability.
  • the position and orientation acquisition unit 108 may store the reliability of the current position and orientation measuring result in the process for calculating the position and orientation in step S803 in FIG. 4, for example.
  • the reliability is defined, for example, as 1 if the marker is recognized, 0 if the marker is not recognized, and 0.5 if the ratio of a region occupied by the marker to the captured image is below 10%.
  • FIG. 6 illustrates details of the process performed by the interference information holding unit 111 . The same processes as those in the first modification will not be described.
  • In step S915, the interference information holding unit 111 acquires (derives) the positioning reliability, and determines whether the positioning reliability is equal to or less than a threshold. If the positioning reliability is equal to or less than the threshold (Yes in step S915), the processing proceeds to step S920. If the positioning reliability is not equal to or less than the threshold (No in step S915), the processing proceeds to step S910.
  • In step S920, the interference information holding unit 111 instructs the virtual object rendering unit 104 to display a message for notifying that the interference information is not recorded because the current positioning reliability is low.
  • the virtual object rendering unit 104 may display a display message 250 in front of the virtual object image, in the virtual object rendering process in step S811 in FIG. 4.
  • FIG. 10 illustrates a schematic diagram of an image including the display message 250 and displayed by the virtual object rendering unit 104 .
  • incorrect interference information is excluded by not recording the interference part when the reliability of the position and orientation measurement is low.
  • the confirmation of the interference state after the task can be complemented by such exclusion.
  • the virtual objects 201 and 202 are represented by the rectangular parallelepiped shape and the column shape, respectively. Further, the position and orientation of the virtual object is allocated to the position and orientation of the marker, and the experiencing person intentionally moves the virtual object by hand.
  • the present invention is not limited to moving the virtual object by hand, and another method may be employed.
  • the method may measure the position and orientation of the body part of the experiencing person, and allocate this position and orientation to the virtual object.
  • a body part such as an elbow or a knee of the experiencing person is measured by a position and orientation sensor.
  • interference of not only the tool 305 but also the body part of the experiencing person may be detected during the work and presented.
  • the position and orientation acquisition unit 108 may obtain a position and orientation of a body part such as a knee of the experiencing person, and output the position and orientation to the interference determination unit 103 and the virtual object rendering unit 104 .
  • model data corresponding to the shape of the body part of the experiencing person may be recorded beforehand in the virtual object database 101 , and output to the interference determination unit 103 and the virtual object rendering unit 104 .
  • the vertexes of the line segment representing the past interference part are held as the vertex positions, as illustrated in FIG. 8.
  • the present invention is not limited to holding the past interference part as the vertex positions.
  • the past interference part may be determined by calculation at the occurrence of the interference.
  • the polygon information as well as the position and orientation of each of the virtual objects 201 and 202 may be added to the interference information in FIG. 8 . Further, the end nodes of the lines of intersection of the polygons may be calculated from the data set of the past interference information, each time the image of the virtual space is generated.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
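
The recording behavior of the interference information holding unit 111 described above — pruning the oldest data sets when a threshold is exceeded (FIG. 5), skipping data sets whose barycenter lies close to an already recorded one (FIG. 9), and skipping recording when the positioning reliability is low (FIG. 6) — can be summarized in one short sketch. The following Python fragment is illustrative only and is not part of the patent; the class and parameter names (InterferenceHoldingUnit, max_records, min_barycenter_distance, reliability_threshold) and all default values are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np


@dataclass
class InterferenceRecord:
    """One data set of interference information (compare FIG. 8)."""
    frame_number: int
    objects: tuple                 # e.g., ("201", "202")
    time: float
    vertex_positions: np.ndarray   # (N, 3) end nodes of the lines of intersection
    color: tuple = (255, 0, 0)
    line_width: float = 2.0

    @property
    def barycenter(self) -> np.ndarray:
        # Average of the vertex positions (step S925 of the first modification).
        return self.vertex_positions.mean(axis=0)


class InterferenceHoldingUnit:
    """Sketch of the recording logic of FIGS. 5, 9 and 6 (names are assumptions)."""

    def __init__(self, max_records: int = 100,
                 min_barycenter_distance: float = 0.05,
                 reliability_threshold: float = 0.5):
        self.records: List[InterferenceRecord] = []
        self.max_records = max_records
        self.min_barycenter_distance = min_barycenter_distance
        self.reliability_threshold = reliability_threshold

    def add(self, record: InterferenceRecord,
            reliability: Optional[float] = None) -> bool:
        # Second modification (FIG. 6): skip recording when reliability is low;
        # the caller may then display a "not recorded" message (message 250).
        if reliability is not None and reliability <= self.reliability_threshold:
            return False

        # First modification (FIG. 9): skip if a close barycenter is already recorded.
        for past in self.records:
            if np.linalg.norm(past.barycenter - record.barycenter) \
                    <= self.min_barycenter_distance:
                return False

        # First embodiment (FIG. 5): delete the oldest data sets beyond the threshold.
        while len(self.records) >= self.max_records:
            self.records.pop(0)

        self.records.append(record)   # step S910: add the current data set
        return True
```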

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An apparatus includes an image acquisition unit configured to acquire a captured image of a real space, a position and orientation acquisition unit configured to acquire a position and orientation of each of a plurality of virtual objects, an interference determination unit configured to determine whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and model data of the plurality of virtual objects, an interference information recording unit configured to record an outline of an interference part where the virtual objects interfere with each other as interference information, based on a result of the interference determination unit, and a generation unit configured to generate a composite image including the captured image, an image of the plurality of virtual objects, and an image representing the outline of the interference part.

Description

BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to a mixed reality system.
Description of the Related Art
In recent years, reductions in the time and cost of evaluation using prototypes have been expected in the design and manufacturing fields. Meanwhile, a mixed reality (MR) system has been introduced. The MR system virtually evaluates ease of assembly and maintainability by using layout (shape and design) data created by a computer aided design (CAD) system. A representative evaluation item is whether work can be performed without causing a tool to interfere with components other than the target component, using only virtual components, even when no real components are present.
The MR system for performing such an evaluation is expected to have a function of presenting whether a virtual object (e.g., a tool) moved by an experiencing person has interfered with another virtual object (a component) and, in the event of interference, where and what type of interference trace remains.
Japanese Patent Application Laid-Open No. 2006-302035 discusses a method for recording a group of positions where interference has occurred into a memory, and displaying a trail by linking such groups of positions on the memory with a line segment in a time series manner.
However, in the method discussed in Japanese Patent Application Laid-Open No. 2006-302035, only sequential linking with the line segment is performed. Therefore, it is difficult to determine to what extent the interference trace has remained in the component.
SUMMARY OF THE INVENTION
According to an aspect of the present invention, an information processing apparatus includes an image acquisition unit configured to acquire a captured image of a real space, a storage unit configured to store model data of a plurality of virtual objects, a position and orientation acquisition unit configured to acquire a position and orientation of each of the plurality of virtual objects, an interference determination unit configured to determine whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and the model data of the plurality of virtual objects, an interference information recording unit configured to record an outline of an interference part where the virtual objects interfere with each other as interference information, based on a result of the interference determination unit, a first generation unit configured to generate a composite image including the captured image, an image of the plurality of virtual objects, and an image representing the outline of the interference part, and an output unit configured to output the generated composite image.
According to the present application, interference information indicating a shape of an interfering part can be presented with respect to interference between a virtual object and a target object.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a configuration of an information processing apparatus in a first exemplary embodiment.
FIGS. 2A, 2B, and 2C are schematic diagrams illustrating virtual objects, a current interference part, and past interference parts, respectively, in the first exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a method for displaying an interference part in a first modification.
FIG. 4 is a flowchart illustrating details of a process to be performed by the information processing apparatus in the first exemplary embodiment.
FIG. 5 is a flowchart illustrating details of a process to be performed by an interference information holding unit in the first exemplary embodiment.
FIG. 6 is a process flowchart of an information processing apparatus in a second modification.
FIG. 7 is a diagram illustrating an example of a hardware configuration of the information processing apparatus in the first exemplary embodiment.
FIG. 8 is a table illustrating an example of a data set of interference information in the first exemplary embodiment.
FIG. 9 is a flowchart illustrating details of a process to be performed by an interference information holding unit in the first modification.
FIG. 10 is a schematic diagram illustrating a result of rendering an image of a virtual space in the second modification.
DESCRIPTION OF THE EMBODIMENTS
Exemplary embodiments to which the present invention is applied will be described in detail below with reference to the attached drawings.
FIG. 7 is a block diagram illustrating hardware for implementing a first exemplary embodiment.
Specifically, FIG. 7 is a hardware block diagram of an information processing apparatus 100 (see FIG. 1) in the present exemplary embodiment. In FIG. 7, a central processing unit (CPU) 710 comprehensively controls each device connected via a bus 700. The CPU 710 reads out a processing step or a program stored in a read only memory (ROM) 720, and executes the read-out processing step or program. Software including an operating system (OS), processing programs, and device drivers according to the present exemplary embodiment is stored in the ROM 720, and is temporarily loaded into a random access memory (RAM) 730 to be executed by the CPU 710 as appropriate. An input interface (I/F) 740 receives a signal from an external apparatus (such as an image capturing apparatus or an operation apparatus), and inputs the received signal as an input signal in a form processable by the information processing apparatus 100. An output I/F 750 outputs a signal as an output signal in a form processable by an external apparatus (e.g., a display apparatus such as a head mounted display (HMD)).
FIG. 1 is a block diagram illustrating a functional configuration of the information processing apparatus 100 in the present exemplary embodiment.
The information processing apparatus 100 in the present exemplary embodiment is connected to an HMD 301. The information processing apparatus 100 combines an image of a real space acquired from an image capturing unit 107 in the HMD 301 with an image of a virtual space, and outputs a mixed reality image obtained thereby to a display unit 110 in the HMD 301.
The image capturing unit 107 captures an image of a real space, and provides the captured image to the information processing apparatus 100. The composition unit 109 combines the captured image with an image of a virtual object, and the display unit 110 displays a resultant composite image. In the present exemplary embodiment, the image capturing unit 107 is built in the HMD 301.
The display unit 110 presents the composite image rendered by the composition unit 109 to an experiencing person. In the present exemplary embodiment, a display built in the HMD 301 is used.
A vibration presentation unit 113 receives an output indicating occurrence of interference from an interference information notification unit 112. The vibration presentation unit 113 then presents vibration to notify the experiencing person (user), through the tactile sense, that the interference has occurred. In the present exemplary embodiment, a device that rotates a motor with an eccentric weight attached to its leading end is used as the vibration presentation unit 113. This device is attached to a body part of the experiencing person. However, the present invention is not limited to this example, and any method may be used if the method stimulates the tactile sense. For example, a weak electric current may be applied to an electrode attached to a fingertip.
A sound presentation unit 115 receives, from the interference information notification unit 112, an output indicating occurrence of interference. The sound presentation unit 115 then presents, to the auditory sense of the experiencing person, a sound notifying that the interference has occurred. In the present exemplary embodiment, a speaker provided independently of the information processing apparatus 100 is used. For example, a headphone may be built in the HMD 301, and the sound may be presented using the headphone. Further, a stationary speaker may be used. Furthermore, a speaker may be built in the information processing apparatus 100. Any type of sound may be used if the sound can stimulate the auditory sense of the experiencing person. For example, a beep of a predetermined wavelength may be used.
The information processing apparatus 100 includes a virtual object database 101, an interference determination unit 103, a virtual object rendering unit 104, an interference part calculation unit 105, a position and orientation acquisition unit 108, the composition unit 109, an interference information holding unit 111, and the interference information notification unit 112.
The CPU 710 implements each of these functional units by reading a program stored in the ROM 720 into the RAM 730, and executing a process according to each of the flowcharts described below based on the program. Further, if hardware is used as an alternative to software processing by the CPU 710, an arithmetic unit and a circuit corresponding to the process of each functional unit described here may be configured.
The virtual object database 101 holds a model of a virtual object. The model of the virtual object includes, for example, a shape, a display color, and an initial position and orientation. In the present exemplary embodiment, two virtual objects 201 and 202 are each taken as an example. The virtual objects 201 and 202 have a rectangular parallelepiped shape and a column shape, respectively, as illustrated in FIG. 2A.
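To make the description of the virtual object database 101 concrete, the model data it holds (shape, display color, and initial position and orientation) could be represented roughly as follows. This is a hypothetical sketch, not the patent's data format; the field names and the single-triangle placeholder are assumptions.
```python
from dataclasses import dataclass, field
import numpy as np


@dataclass
class VirtualObjectModel:
    """One entry of the virtual object database 101 (illustrative only)."""
    name: str                       # e.g., "201" (rectangular parallelepiped) or "202" (column)
    triangles: np.ndarray           # (M, 3, 3): M triangular polygons, 3 vertices, xyz
    display_color: tuple = (200, 200, 200)
    initial_pose: np.ndarray = field(default_factory=lambda: np.eye(4))  # model-to-world


# A single-triangle placeholder standing in for virtual object 202.
model_202 = VirtualObjectModel(
    name="202",
    triangles=np.array([[[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]]),
)
```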
The position and orientation acquisition unit 108 acquires an image from the image capturing unit 107 (that is, the position and orientation acquisition unit 108 also functions as an image acquisition unit), and estimates a position and orientation of the image capturing unit 107 (i.e., the position and orientation of the HMD 301) in the real space, and a position and orientation of the virtual object, based on the acquired image (image acquisition). In the present exemplary embodiment, the position and orientation of the image capturing unit 107 is assumed to be the viewpoint position and orientation of the experiencing person. The position and orientation of the image capturing unit 107 is calculated based on an image in which a marker placed in the real space is imaged (image capturing unit position and orientation measurement). Further, as for the position and orientation of the virtual object, for example, an existing real object corresponding to the virtual object is placed in the real space beforehand. A marker is affixed to the existing real object, and a position and orientation of the real object is estimated based on the image in which the marker is imaged. The estimated position and orientation of the existing real object is used as the position and orientation of the virtual object corresponding to the real object. The experiencing person can change the position and orientation of the virtual object by moving the existing real object. However, in the present invention, the method for acquiring the viewpoint position and orientation of the experiencing person as well as the position and orientation of the virtual object is not limited to this example. For example, a three-dimensional sensor may be affixed to or built in the existing real object and used. Further, the position and orientation of the virtual object relative to the image capturing unit 107 can be measured using a commercial motion capture system. The virtual object may be displayed at a predetermined position and moved by using a device such as a controller, without providing the existing real object corresponding to the virtual object.
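As an illustration of the coordinate bookkeeping this paragraph implies, the sketch below assumes that a separate marker detector already provides camera-from-marker transforms as 4x4 matrices, and derives the viewpoint pose and the virtual object pose in a world frame defined by the room marker. The function names and frame conventions are assumptions, not the patent's implementation.
```python
import numpy as np


def invert_pose(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti


def estimate_poses(camera_from_room_marker: np.ndarray,
                   camera_from_object_marker: np.ndarray,
                   object_from_marker: np.ndarray):
    """Return (world_from_camera, world_from_object); world == room marker frame.

    camera_from_room_marker  : pose of the marker placed in the real space, seen by the camera
    camera_from_object_marker: pose of the marker affixed to the moving real object
    object_from_marker       : fixed offset between that marker and its virtual object
    """
    # Viewpoint (image capturing unit 107) pose in the world frame.
    world_from_camera = invert_pose(camera_from_room_marker)
    # Pose of the virtual object in the world frame, measured through the camera.
    world_from_object = (world_from_camera
                         @ camera_from_object_marker
                         @ invert_pose(object_from_marker))
    return world_from_camera, world_from_object
```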
The interference determination unit 103 determines presence or absence of interference between a plurality of virtual objects based on the position and orientation as well as the shape of each of the virtual objects. Determination of interference between two virtual objects is implemented in the following procedure. One triangular polygon is extracted from each of the virtual objects, and whether the two triangular polygons intersect each other is determined. Such determination is performed for every combination of triangular polygons. If even one intersection is found, the two virtual objects are determined to be interfering with each other. Further, this process is performed for every combination of virtual objects. However, the way of implementing the interference determination unit 103 in the present exemplary embodiment is not limited to this example.
The interference part calculation unit 105 calculates an interference part from the virtual objects determined to be interfering with each other by the interference determination unit 103. In the present exemplary embodiment, a set of intersecting triangle polygons and a set of lines of intersection are calculated as the interference part (an outline of an interfering part). However, in the present exemplary embodiment, the way of representing the interference part is not limited to this example.
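A minimal sketch consistent with the two preceding paragraphs is given below: it checks whether two triangular polygons intersect and, when they do, collects the end nodes of their line of intersection, which is the kind of outline the interference part calculation unit 105 records. The coplanar case is ignored for brevity, the tolerance value is arbitrary, and all function names are assumptions rather than the patent's code.
```python
import itertools
import numpy as np

EPS = 1e-9  # arbitrary tolerance


def _point_in_triangle(p, tri):
    """Barycentric inside test for a point already lying in the triangle's plane."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    if abs(denom) < EPS:
        return False  # degenerate triangle
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= -EPS and w >= -EPS and v + w <= 1.0 + EPS


def _edge_hits(edge_tri, target_tri):
    """Points where the edges of edge_tri cross the interior of target_tri."""
    a, b, c = target_tri
    n = np.cross(b - a, c - a)
    hits = []
    for p0, p1 in itertools.combinations(edge_tri, 2):
        d0, d1 = n @ (p0 - a), n @ (p1 - a)
        if abs(d0) < EPS and abs(d1) < EPS:
            continue                     # coplanar edge: not handled in this sketch
        if d0 * d1 > 0:
            continue                     # both endpoints on the same side of the plane
        x = p0 + (d0 / (d0 - d1)) * (p1 - p0)
        if _point_in_triangle(x, target_tri):
            hits.append(x)
    return hits


def lines_of_intersection(mesh_a, mesh_b):
    """All intersection segments between two meshes given as (M, 3, 3) arrays.

    A non-empty result also answers the interference determination of the
    interference determination unit 103: the two meshes interfere with each other.
    """
    segments = []
    for ta, tb in itertools.product(mesh_a, mesh_b):
        pts = _edge_hits(ta, tb) + _edge_hits(tb, ta)
        if len(pts) >= 2:                # generically two end nodes per triangle pair
            segments.append((pts[0], pts[-1]))
    return segments
```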
The interference information holding unit 111 holds interference information calculated by the interference part calculation unit 105 (interference information recording). FIG. 8 illustrates an example of the interference information.
An item “frame number” represents a number that is incremented by one each time an image captured by the image capturing unit 107 is updated.
An item “interference object” represents identification information of a model that identifies each interfering virtual object. In the present exemplary embodiment, a name of each virtual object is registered. An item “time” represents the time when occurrence of interference is actually determined.
An item “number of vertexes” is the number of end nodes of the lines of intersection of the polygons in the virtual objects 201 and 202 interfering with each other.
An item “vertex position” represents the three-dimensional position of every end node of the lines of intersection. When the vertex positions are rendered as line segments, the circular line in FIG. 2B (an interference part 203 of the virtual objects 201 and 202) is obtained.
Items “color” and “line width” each represent a rendering attribute to be used when the interference information is rendered as an image of a virtual object. The color and the line width that are the rendering attributes are not limited to being generated in the event of interference, and parameters may be set beforehand.
The table illustrated in FIG. 8 represents a data set for each frame number, and a history of data sets is held. The present invention is not limited to segmenting the interference information into the data sets based on the frame number. The data set may be created using the time as a key. Further, the data set may be created for each combination of two virtual objects indicated as the interference object.
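To make the layout of FIG. 8 concrete, one data set of the interference information could be held as a small record like the following; every literal value here is invented purely for illustration.
```python
# One data set (one row of FIG. 8); all values below are made up for illustration.
interference_data_set = {
    "frame_number": 1,
    "interference_objects": ("201", "202"),
    "time": "10:15:30.120",
    "number_of_vertexes": 2,
    "vertex_positions": [(0.10, 0.02, 0.35), (0.12, 0.02, 0.33)],
    "color": (255, 0, 0),
    "line_width": 2.0,
}

# The held history: one such data set appended per captured-image update while interfering.
interference_history = [interference_data_set]
```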
The virtual object rendering unit 104 generates an image in which the interference information as well as the virtual objects 201 and 202 are rendered. To be more specific, the virtual object rendering unit 104 generates an image based on the model data stored in the virtual object database 101 and the interference information stored in the interference information holding unit 111.
The virtual object rendering unit 104 acquires, from the position and orientation acquisition unit 108, a position and orientation of a virtual viewpoint for generating an image of a virtual space. As the position and orientation of the virtual viewpoint, for example, position and orientation information of the image capturing unit 107 in a world coordinate system used as a reference may be input. Further, three-dimensional vertex positions of the virtual objects 201 and 202 as well as the interference information are updated based on the position and orientation acquired by the position and orientation acquisition unit 108. The updated three-dimensional vertexes of the virtual objects and the interference information are projected onto a projection plane that is set based on the virtual viewpoint. A polygon and line segment rendering process after the projection is similar to an ordinary rendering process for generating a three-dimensional (3-D) image and thus will not be described in detail. FIG. 2C illustrates an image of the virtual space generated by the virtual object rendering unit 104 in the present exemplary embodiment. The virtual objects 201 and 202 as well as an interference part 206 based on the interference information accumulated so far are rendered (interference information image generation). In the present exemplary embodiment, the outline of the interference part 206 is rendered. Further, for example, a specific color may be superimposed on the inside of the outline, thereby making it easy to recognize the interference part 206.
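A bare-bones sketch of the projection step described here, assuming a pinhole camera model for the virtual viewpoint, is shown below. The intrinsics and function name are assumptions, and an actual renderer would additionally rasterize and depth-test the projected polygons.
```python
import numpy as np


def project_points(points_world, world_from_camera, fx, fy, cx, cy):
    """Project world-space vertices onto the projection plane of the virtual viewpoint.

    points_world     : (N, 3) vertices of the virtual objects or of the interference outline
    world_from_camera: 4x4 pose of the virtual viewpoint (image capturing unit 107)
    fx, fy, cx, cy   : pinhole intrinsics of the capturing camera (assumed known)
    """
    camera_from_world = np.linalg.inv(world_from_camera)
    ones = np.ones((points_world.shape[0], 1))
    pc = (camera_from_world @ np.hstack([points_world, ones]).T).T[:, :3]
    u = fx * pc[:, 0] / pc[:, 2] + cx      # assumes points lie in front of the camera
    v = fy * pc[:, 1] / pc[:, 2] + cy
    return np.stack([u, v], axis=1), pc[:, 2]   # pixel coordinates and depth
```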
The composition unit 109 combines the image of the real space acquired from the image capturing unit 107 with the image of the virtual space rendered by the virtual object rendering unit 104. This composition process is performed by rendering the image of the virtual space on the image of the real space acquired from the image capturing unit 107, thereby obtaining a composite image. In the composite image, the pixels of the real-space image are displayed in every area except the areas occupied by the virtual objects 201 and 202. The composition unit 109 outputs the composite image to a display unit 110 in the HMD 301.
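A minimal sketch of this composition step, assuming the rendered virtual-space image comes with a mask marking where virtual fragments were drawn (an illustrative assumption, not a detail stated in the patent):

import numpy as np

def compose(real_rgb, virtual_rgb, virtual_mask):
    """real_rgb, virtual_rgb: HxWx3 images; virtual_mask: HxW boolean array that
    is True where a virtual object or the interference part was rendered."""
    composite = real_rgb.copy()                          # real-space pixels show through by default
    composite[virtual_mask] = virtual_rgb[virtual_mask]  # virtual pixels overwrite them
    return composite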
The interference information notification unit 112 notifies another notification unit of the current interference state based on the information of the interference information holding unit 111. In the present exemplary embodiment, whether the interference exists is explicitly presented to the experiencing person by visually displaying the image generated by the virtual object rendering unit 104 based on the virtual objects 201 and 202 as well as the interference information. However, if the place where the interference occurs is outside the visual field of the image capturing unit 107, the experiencing person cannot perceive the interference from the visual display alone. The experiencing person can perceive interference occurring outside the visual field if its presence is also presented to the auditory sense and the tactile sense.
FIGS. 2A, 2B, and 2C are schematic diagrams each illustrating an example of the image of the virtual space in the present exemplary embodiment. Further, FIG. 4 is a flowchart illustrating details of a process to be performed in the present exemplary embodiment.
Using FIGS. 2A to 2C and FIG. 4, the process of the present exemplary embodiment will be described in detail, by taking, as an example, the state where the virtual objects 201 and 202 are arranged.
In step S801, the virtual object rendering unit 104 reads data of the virtual objects 201 and 202 from the virtual object database 101.
In step S802, the position and orientation acquisition unit 108 acquires an image of the real space captured by the image capturing unit 107. Further, in step S803, the position and orientation acquisition unit 108 calculates the position and orientation information of each of the virtual objects 201 and 202 as well as the image capturing unit 107 (virtual viewpoint), based on the acquired image. The processes in and after step S802 are suspended until the image of the real space is updated by the image capturing unit 107, and when this update occurs, the processes in the subsequent stages are executed. However, the present exemplary embodiment is not limited to performing the processes each time the image is updated by the image capturing unit 107. For example, a method for executing the processes when a new position and orientation is acquired by the position and orientation acquisition unit 108 may be employed.
In addition, in step S803, the position and orientation acquisition unit 108 sends the position and orientation information to the interference determination unit 103 and the virtual object rendering unit 104.
In step S806, the interference determination unit 103 determines whether intersection exists for each of the polygons forming the virtual objects 201 and 202, by referring to the virtual objects 201 and 202 stored in the virtual object database 101.
FIG. 2B illustrates a state where the virtual objects 201 and 202 interfere with each other, after the marker corresponding to the virtual object 202 is moved by the experiencing person. In the state illustrated in FIG. 2A, i.e., if no intersection exists (No in step S806), the processing proceeds to step S811. In the state illustrated in FIG. 2B, i.e., if intersection exists (Yes in step S806), the processing proceeds to step S809.
In step S809, the interference part calculation unit 105 calculates interference part information for the virtual objects 201 and 202.
For example, in the present exemplary embodiment, as illustrated in FIG. 2B, assume that the virtual object 202 having the column shape is moved to interfere with the virtual object 201 having the rectangular parallelepiped shape. Then, the interference part 203, which is a set of lines of intersection of polygons, is generated as the interference part information.
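A simplified, non-robust Python sketch of the kind of per-polygon test that steps S806 and S809 rely on: each edge of one triangle is intersected with the plane of the other, and crossing points lying inside the other triangle become end nodes of a line of intersection. Degenerate and coplanar cases are ignored; this is an illustration under those assumptions, not the patent's algorithm.

import numpy as np

def edge_plane_hits(tri, plane_point, plane_normal):
    """Points where the edges of tri (3x3 array of vertexes) cross the given plane."""
    hits = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        dp = np.dot(p - plane_point, plane_normal)
        dq = np.dot(q - plane_point, plane_normal)
        if dp * dq < 0:                            # edge endpoints on opposite sides of the plane
            s = dp / (dp - dq)
            hits.append(p + s * (q - p))
    return hits

def point_in_triangle(pt, tri):
    """Barycentric inside test for a point lying in the triangle's plane."""
    a, b, c = tri
    v0, v1, v2 = c - a, b - a, pt - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    if abs(denom) < 1e-12:
        return False
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    return u >= 0 and v >= 0 and (u + v) <= 1

def triangle_intersection_segment(tri_a, tri_b):
    """End nodes of the line of intersection of two triangles, or [] if none."""
    points = []
    for src, dst in ((tri_a, tri_b), (tri_b, tri_a)):
        normal = np.cross(dst[1] - dst[0], dst[2] - dst[0])
        for hit in edge_plane_hits(src, dst[0], normal):
            if point_in_triangle(hit, dst):
                points.append(hit)
    return points  # typically two points, i.e. one segment of the interference part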
In step S810, the interference information holding unit 111 records the data set of the interference information for each combination of the virtual objects 201 and 202. In the present exemplary embodiment, the interference information illustrated in FIG. 8 is recorded. This data set recording is executed each time the image captured by the image capturing unit 107 is updated. In other words, the interference information illustrated in FIG. 8 represents, in addition to the data set for the combination of the virtual objects 201 and 202, an accumulation of data sets, one added each time the image captured by the image capturing unit 107 is updated. The process in step S810 will be described in detail below.
In step S811, the virtual object rendering unit 104 renders the virtual objects as well as the interference part, as the image of the virtual space. In the present exemplary embodiment, the virtual objects 201 and 202 are rendered based on the information of the triangular polygons. Further, the interference part 206 is rendered as a line segment having two vertexes based on the interference information. Furthermore, as for the color and the line width of the line segment at the time of rendering, the rendering is performed with reference to the interference information.
FIG. 2C illustrates a result of rendering the virtual objects 201 and 202 as well as the interference part 206. The interference part 206 includes the interference part information accumulated in the past. Therefore, how the virtual object 202 having the column shape was moved can be confirmed after the task.
In step S812, the composition unit 109 combines the image of the real space captured by the image capturing unit 107 with the image of the virtual space, thereby generating a composite image.
In step S813, first, the display unit 110 displays the composite image generated by the composition unit 109, and presents the composite image to the experiencing person wearing the HMD 301. Further, the vibration presentation unit 113 and the sound presentation unit 115 present vibration and sound, respectively, to the experiencing person, according to the information of the interference information notification unit 112.
In step S815, it is determined whether a termination command is transmitted from the experiencing person via an input unit such as a keyboard. If no termination command is transmitted (No in step S815), the processing returns to step S802. If the termination command is transmitted (Yes in step S815), the processing ends.
FIG. 5 is a flowchart illustrating the details of the process for recording the interference information in step S810.
When the past interference parts are superimposed and displayed as illustrated in FIG. 2C, rendering performance may decrease due to an accumulation of highlights of the interference parts, or it may be difficult to view the interference part due to unnecessary highlight display. In the present exemplary embodiment, such an issue is addressed by adopting a method for deleting (erasing) the data sets of the interference part, starting from the oldest one, if the number of the data sets of the interference part is equal to or more than a threshold set beforehand by the experiencing person. This will be described in detail with reference to the flowchart of FIG. 5.
In step S901, the interference information holding unit 111 counts the data sets by referring to the past interference information, and determines whether the total number of the data sets (the past data sets) is equal to or more than the threshold set beforehand by the experiencing person. If the total number of the data sets is equal to or more than the threshold (Yes in step S901), the processing proceeds to step S905. If the total number of the data sets is less than the threshold (No in step S901), the processing proceeds to step S910.
In step S905, the interference information holding unit 111 retrieves the oldest data set from the past interference information, and deletes the retrieved oldest data set. The present invention is not limited to the deletion of the oldest data set. The data sets at and before a predetermined time may be all deleted, or the data set having a large number of vertexes may be deleted first.
In step S910, the interference information holding unit 111 adds the current interference information calculated by the interference part calculation unit 105, as a new data set.
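A minimal sketch of this recording flow (steps S901, S905, and S910 of FIG. 5); the function name and the use of a plain list are illustrative assumptions:

def record_interference(history, new_data_set, threshold):
    # S901: count the past data sets and compare with the preset threshold.
    if len(history) >= threshold:
        # S905: delete the oldest data set. Alternatives described above are to
        # delete all data sets at and before a predetermined time, or to delete
        # the data set having the largest number of vertexes first.
        history.pop(0)
    # S910: add the current interference information as a new data set.
    history.append(new_data_set)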
Accordingly, by highlighting the result of the interference between the virtual objects 201 and 202 on the image of the virtual space from the current viewpoint as past interference part information, the state of the interference can be reconfirmed after the task is completed. Further, providing the threshold that keeps the number of data sets below a certain number reduces the display load and makes the interference part easier to view.
The first exemplary embodiment is described above using the example in which the threshold is provided to prevent the number of data sets from being equal to or more than the certain number, and if the number of data sets is equal to or more than the certain number, the data sets are deleted from the memory. However, the data set may be excluded from the displayed image without being deleted from the memory.
The first exemplary embodiment employs the method for representing the interference part 206 by linking the lines of intersection of the polygons, as the method for representing the interference part 206 of the virtual objects 201 and 202. However, the present invention is not limited to representing the interference part by linking the lines of intersection of the polygons interfering with each other, and another method may be adopted to represent the interference part.
In a present first modification, another method is employed to represent the interference part 206. This method keeps the shape itself of the virtual object at the location where the interference occurred and displays that shape, as illustrated in FIG. 3. FIG. 3 schematically illustrates work for performing a task of bringing a tool 305, which is a virtual object, close to a door 310, which is another virtual object. The task is performed by the experiencing person wearing the HMD 301.
A configuration for implementing the present modification may be provided by changing the process of the interference information holding unit 111 in the first exemplary embodiment.
In the first exemplary embodiment, the interference information holding unit 111 records the lines of intersection of the polygons interfering with each other as the vertex positions of the data set of the interference information. In the present modification, however, polygon information indicating the shape of the tool (virtual object) 305 moved by hand by the experiencing person wearing the HMD 301 is recorded as the vertex positions. However, if a data set is recorded each time the image captured by the image capturing unit 107 is updated and all the interference parts 206 are displayed on the door 310, the number of the interference parts 206 becomes too large, which complicates the display. In the present modification, a method for restricting the display based on a barycentric position in the data set is employed to prevent the display from becoming complicated. This method will be described using the flowchart of FIG. 9.
The present invention is not limited to the triangular polygon with regard to the shape for representing the virtual object. For example, a method for parametrically representing the model data may be employed.
The flowchart of FIG. 9 illustrates details of a process, which corresponds to the process in step S810 in FIG. 4, in the present modification. Only a part different from the first exemplary embodiment will be described below.
In step S925, the interference information holding unit 111 calculates the average of the three-dimensional vertex positions in the interference information calculated by the interference part calculation unit 105, and uses this average as a barycentric position. The barycentric position is associated with the data set and stored.
In step S930, the interference information holding unit 111 performs the following process. The interference information holding unit 111 refers to a barycentric position group associated with the past interference information, and compares the barycentric position group with the barycentric position calculated by the interference information holding unit 111. Based on this comparison, the interference information holding unit 111 determines whether a distance between the barycentric position calculated by the interference information holding unit 111 and at least one of the barycentric positions in the past interference information is equal to or less than a predetermined threshold. If the distance is equal to or less than the threshold (Yes in step S930), the processing ends without adding the data set as the interference information. In other words, when the tool 305 at the current position and orientation interferes with the door 310, the data set is not added if a close barycentric position is present in the past interference information. On the other hand, if the distance is not equal to or less than the threshold (No in step S930), the data set is added as the interference information.
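A minimal sketch of this barycenter-based restriction (steps S925 and S930 of FIG. 9); the dictionary-style data set and the function names are illustrative assumptions:

import numpy as np

def maybe_record(history, vertex_positions, data_set, distance_threshold):
    # S925: the barycentric position is the average of the three-dimensional vertex positions.
    barycenter = np.asarray(vertex_positions).mean(axis=0)
    # S930: if any past barycenter is within the threshold distance, do not add the data set.
    for past in history:
        if np.linalg.norm(barycenter - past["barycenter"]) <= distance_threshold:
            return False
    data_set["barycenter"] = barycenter
    history.append(data_set)
    return True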
As described above, in the interference display method in the present modification, the confirmation of the interference state after the task can be complemented by replacing the representation of the interference part with an easier-to-view representation.
In the above-described first modification, the current interference part information is not added to the data sets if the position where the interference has occurred is close. However, the current interference part information may be added to the data sets, without being included in the image to be displayed.
In the above first modification, if the barycentric position of the virtual object at the time of the occurrence of the interference is close to the interference occurring position in the past interference information, the current interference part information is not added to the data sets. However, the present invention is not limited to determining whether to record the interference information based on the position of the interference part in the space.
In a present second modification, whether to record the interference information is determined according to reliability of the position and orientation output from the position and orientation acquisition unit 108. A process for this determination will be described.
When the position and orientation of the virtual object is determined based on the marker reflected in the captured image, reliability of the position and orientation measuring result decreases in a case where:
  • (1) the marker appears small on the screen,
  • (2) a shielding object is present between the image capturing unit 107 and the marker to hide the marker, or
  • (3) the marker is outside the visual field of the image capturing unit 107, thereby not appearing.
    Further, even when the position and orientation of the virtual object is measured by a position and orientation sensor, situations where the reliability decreases can occur. Examples of such situations include a situation where environmental noise is generated and a situation where the position and orientation falls outside the measuring range.
When the interference information is added in the above-described state where the reliability of the position and orientation measurement is reduced, the past interference state cannot be correctly determined at the confirmation after the task. Therefore, a method for not recording the interference information in such a state is used. This method will be described.
A configuration for implementing the present modification may be provided as follows. Information indicating the reliability of the position and orientation is input into the interference information holding unit 111, from the position and orientation acquisition unit 108 of the above first modification. The interference information holding unit 111 then performs a process for determining whether to record the interference information according to the reliability.
The position and orientation acquisition unit 108 may store the reliability of the current position and orientation measuring result in the process for calculating the position and orientation in step S803 in FIG. 4, for example. The reliability is defined, for example, as 1 if the marker is recognized, 0 if the marker is not recognized, and 0.5 if the ratio of a region occupied by the marker to the captured image is below 10%.
FIG. 6 illustrates details of the process performed by the interference information holding unit 111. The same processes as those in the first modification will not be described.
In step S915, the interference information holding unit 111 acquires (derives) positioning reliability, and determines whether the positioning reliability is equal to or less than a threshold. If the positioning reliability is equal to or less than the threshold (Yes in step S915), the processing proceeds to step S920. If the positioning reliability is not equal to or less than the threshold (No in step S915), the processing proceeds to step S910.
In step S920, the interference information holding unit 111 instructs the virtual object rendering unit 104 to display a message for notifying that the interference information is not recorded because the current positioning reliability is low.
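A minimal sketch combining the reliability heuristic described above with the decision of steps S915 and S920; the function names and the threshold value are illustrative assumptions:

def positioning_reliability(marker_recognized, marker_area_ratio):
    # 0 if the marker is not recognized, 0.5 if it occupies less than 10% of the
    # captured image, and 1 otherwise, following the example definition above.
    if not marker_recognized:
        return 0.0
    return 0.5 if marker_area_ratio < 0.10 else 1.0

def record_if_reliable(history, data_set, reliability, threshold=0.5):
    # S915: if the reliability is equal to or less than the threshold, the
    # interference information is not recorded and a message is displayed (S920).
    if reliability <= threshold:
        return False
    history.append(data_set)
    return True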
The virtual object rendering unit 104 may display a display message 250 in front of the virtual object image, in the virtual object rendering process in step S811 in FIG. 4. FIG. 10 illustrates a schematic diagram of an image including the display message 250 and displayed by the virtual object rendering unit 104.
As described above, in the interference display method in the present modification, incorrect interference information is avoided by not recording the interference part when the reliability of the position measurement is low. The confirmation of the interference state after the task is complemented by excluding such information.
In the first exemplary embodiment, the virtual objects 201 and 202 are represented by the rectangular parallelepiped shape and the column shape, respectively. Further, the position and orientation of the virtual object is allocated to the position and orientation of the marker, and the experiencing person intentionally moves the virtual object by hand.
However, the present invention is not limited to moving the virtual object by hand, and another method may be employed. The method may measure the position and orientation of the body part of the experiencing person, and allocate this position and orientation to the virtual object.
For example, assume that, in a situation where work is performed using the tool 305 on the door 310 of the first modification, a body part such as an elbow or a knee of the experiencing person is measured by a position and orientation sensor. In this case, interference of not only the tool 305 but also the body part of the experiencing person may be detected during the work and presented.
In a configuration for implementing such a modification, the position and orientation acquisition unit 108 may obtain a position and orientation of a body part such as a knee of the experiencing person, and output the position and orientation to the interference determination unit 103 and the virtual object rendering unit 104. Alternatively, model data corresponding to the shape of the body part of the experiencing person may be recorded beforehand in the virtual object database 101, and output to the interference determination unit 103 and the virtual object rendering unit 104.
As described above, in the present modification, it is possible to confirm not only the interference state of the virtual object intentionally moved by the experiencing person by hand, but also the interference state of the body part unintentionally moved to cause interference during the work.
In the first exemplary embodiment, the vertexes of the line segment representing the past interference part are held as the vertex positions as illustrated in FIG. 8. However, the present invention is not limited to holding the past interference part as the vertex positions. The past interference part may be determined by calculation at the occurrence of the interference.
To implement such a modification, the polygon information as well as the position and orientation of each of the virtual objects 201 and 202 may be added to the interference information in FIG. 8. Further, the end nodes of the lines of intersection of the polygons may be calculated from the data set of the past interference information, each time the image of the virtual space is generated.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2015-132197, filed Jun. 30, 2015, which is hereby incorporated by reference herein in its entirety.

Claims (15)

What is claimed is:
1. An information processing apparatus comprising:
one or more processors; and
at least one memory device that stores a program executed by the one or more processors in order for the information processing apparatus to function as:
a storage unit configured to store model data of a plurality of virtual objects;
a position and orientation acquisition unit configured to acquire a position and orientation of each of the plurality of virtual objects;
an interference determination unit configured to determine whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and the model data of the plurality of virtual objects;
an interference area recording unit configured to, in response to it being determined that the interference is present between the virtual objects, determine a set of lines of intersection between the interfering virtual objects as a contour of an interference area on a surface of an interfered virtual object and record the contour of the interference area;
a repeating unit configured to record contours of the interference area by repeating execution of the interference area recording unit at a predetermined timing;
a generation unit configured to generate a composite image including a captured image, an image of the plurality of virtual objects, and an image representing the recorded contours of the interference area, after an end of the interference; and
an output unit configured to output the generated composite image.
2. The information processing apparatus according to claim 1, further comprising a position and orientation measurement unit configured to measure a position and orientation of an image capturing unit producing the captured image,
wherein the generation unit is configured to generate the image of the virtual objects and the image representing the outline of the interference part, based on the position and orientation of the image capturing unit.
3. The information processing apparatus according to claim 1, wherein the interference part is a line of intersection of the model data when the plurality of virtual objects interfere with each other.
4. The information processing apparatus according to claim 1, further comprising an interference information image generation unit configured to generate an image representing the outline of the interference part.
5. The information processing apparatus according to claim 4, wherein the interference information image generation unit generates an image representing a history of interference in images captured before the captured image, by referring to the interference information recording unit.
6. The information processing apparatus according to claim 5, wherein the interference information image generation unit includes a deletion unit configured to delete past data starting from oldest past data, in a case where a number of pieces of data included in the history of the interference is equal to or more than a threshold.
7. The information processing apparatus according to claim 5, wherein the interference information image generation unit incorporates data into the history of the interference, a distance between a position of the virtual objects in the captured image and a position of the virtual objects in an image captured before the captured image is greater than a threshold.
8. The information processing apparatus according to claim 7, wherein the position is a barycentric position determined by calculating an average of all three-dimensional vertex positions of the model data of the virtual objects.
9. The information processing apparatus according to claim 1, further comprising a derivation unit configured to derive reliability of the position and orientation acquired by the position and orientation acquisition unit,
wherein the interference information recording unit does not record the interference information, in a case where the reliability is equal to or less than a threshold.
10. The information processing apparatus according to claim 1, wherein the interference determination unit further determines whether interference is present between any of the virtual objects and a predetermined part of a human body.
11. The information processing apparatus according to claim 1, wherein the interference information recording unit records, as the interference information, at least one of a frame number of an image, a time, identification information of the interfering virtual objects, a shape of the interference part, and a three-dimensional position of the interference part, at the time of occurrence of the interference.
12. The information processing apparatus according to claim 1, wherein the output unit outputs the composite image to a head mounted display.
13. A system comprising a head mounted display and an information processing apparatus,
wherein the head mounted display comprises an image capturing unit configured to capture an image of a real space, and
wherein the information processing apparatus comprises,
one or more processors; and
at least one memory that stores a program executed by the one or more processors in order for the information processing apparatus to function as:
a storage unit configured to store model data of a plurality of virtual objects;
a position and orientation acquisition unit configured to acquire a position and orientation of each of the plurality of virtual objects;
an interference determination unit configured to determine whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and the model data of the plurality of virtual objects;
an interference area recording unit configured to, in response to it being determined that the interference is present between the virtual objects, determine a set of lines of intersection between the interfering virtual objects as a contour of an interference area on a surface of an interfered virtual object and record the contour of the interference area;
a repeating unit to record contours of the interference area by repeating execution of the interference area recording unit at a predetermined timing;
a generation unit configured to generate a composite image including the captured image captured by the image capturing unit, an image of the plurality of virtual objects, and an image representing the contours of the recorded interference area, after an end of the interference; and
an output unit configured to output the generated composite image to the head mounted display.
14. An information processing method comprising:
acquiring a captured image of a real space;
acquiring a position and orientation of each of a plurality of virtual objects;
determining whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and model data of the plurality of virtual objects;
recording a contour of an interference area by, in response to it being determined that the interference is present between the virtual objects, determining a set of lines of intersection between the interfering virtual objects as a contour of an interference area on a surface of an interfered virtual object, at a predetermined timing;
recording contours of the interference area by repeating the recording a contour of an interference area at a predetermined timing;
generating a composite image including the captured image, an image of the plurality of virtual objects, and an image representing the recorded contours of the interference area, after an end of the interference; and
outputting the generated composite image.
15. A non-transitory computer-readable storage medium storing a program for causing a computer to perform a method for controlling an information processing apparatus, the information processing method comprising:
acquiring a captured image of a real space;
acquiring a position and orientation of each of a plurality of virtual objects;
determining whether interference is present between the virtual objects based on the position and orientation of each of the plurality of virtual objects and model data of the plurality of virtual objects;
recording a contour of an interference area by, in response to it being determined that the interference is present between the virtual objects, determining a set of lines of intersection between the interfering virtual objects as a contour of an interference area on a surface of an interfered virtual object;
recording contours of the interference area by repeating the recording a contour of an interference area at a predetermined timing;
generating a composite image including the captured image, an image of the plurality of virtual objects, and an image representing the recorded contours of the interference area, after an end of the interference; and
outputting the generated composite image.
US15/195,118 2015-06-30 2016-06-28 Information processing apparatus, information processing method, and storage medium Active US10410420B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015132197A JP6676294B2 (en) 2015-06-30 2015-06-30 Information processing apparatus, information processing method, and program
JP2015-132197 2015-06-30

Publications (2)

Publication Number Publication Date
US20170004653A1 US20170004653A1 (en) 2017-01-05
US10410420B2 true US10410420B2 (en) 2019-09-10

Family

ID=56235656

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/195,118 Active US10410420B2 (en) 2015-06-30 2016-06-28 Information processing apparatus, information processing method, and storage medium

Country Status (5)

Country Link
US (1) US10410420B2 (en)
EP (1) EP3113117A1 (en)
JP (1) JP6676294B2 (en)
KR (1) KR102059834B1 (en)
CN (1) CN106326523B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7039216B2 (en) * 2017-08-30 2022-03-22 キヤノン株式会社 Information processing equipment and its methods, programs
DE102017216953A1 (en) * 2017-09-25 2019-03-28 Siemens Healthcare Gmbh Display of a virtual object associated with a real object in an augmented reality display device
JP7133926B2 (en) * 2018-01-15 2022-09-09 キヤノン株式会社 Information processing device, system, information processing method
JP2019185475A (en) * 2018-04-12 2019-10-24 富士通株式会社 Specification program, specification method, and information processing device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10201947A (en) 1996-11-22 1998-08-04 Sega Enterp Ltd Game device, image data forming method and medium
US6552722B1 (en) * 1998-07-17 2003-04-22 Sensable Technologies, Inc. Systems and methods for sculpting virtual objects in a haptic virtual reality environment
WO2000038117A1 (en) 1998-12-23 2000-06-29 Washington State University Research Foundation Method and system for a virtual assembly design environment
JP2006215750A (en) 2005-02-02 2006-08-17 Canon Inc Image processing method, image processor
JP2006302035A (en) 2005-04-21 2006-11-02 Canon Inc Image processing method and image processing apparatus
US20060256110A1 (en) 2005-05-11 2006-11-16 Yasuhiro Okuno Virtual reality presentation apparatus, virtual reality presentation method, program, image processing method, image processing apparatus, information processing method, and information processing apparatus
JP2007004713A (en) 2005-06-27 2007-01-11 Canon Inc Image processing method and image processor
US20080112646A1 (en) * 2005-07-19 2008-05-15 Asako Kitaura Image determining method, image determining apparatus, and recording medium having recorded therein program for causing computer to execute image determining method
US20080218515A1 (en) 2007-03-07 2008-09-11 Rieko Fukushima Three-dimensional-image display system and displaying method
US20110077088A1 (en) * 2009-09-29 2011-03-31 Nintendo Co., Ltd. Computer-readable storage medium having stored information processing program thereon, and information processing apparatus
US20120118947A1 (en) * 2010-11-15 2012-05-17 Bally Gaming, Inc. System and Method for Assisted Maintenance in a Gaming Machine Using a Mobile Device
US20130113802A1 (en) * 2011-04-29 2013-05-09 University Health Network Methods and systems for visualization of 3d parametric data during 2d imaging
KR20140028096A (en) 2011-12-15 2014-03-07 한양대학교 산학협력단 Apparatus and method for providing haptic on vitrual video
US20130182012A1 (en) 2012-01-12 2013-07-18 Samsung Electronics Co., Ltd. Method of providing augmented reality and terminal supporting the same
CN104641399A (en) 2012-02-23 2015-05-20 查尔斯·D·休斯顿 System and method for creating an environment and for sharing location-based experiences in an environment
CN103472909A (en) 2012-04-10 2013-12-25 微软公司 Realistic occlusion for a head mounted augmented reality display
US20140104274A1 (en) * 2012-10-17 2014-04-17 Microsoft Corporation Grasping virtual objects in augmented reality
US20150282116A1 (en) * 2014-03-28 2015-10-01 Electronics And Telecommunications Research Institute Method for determining moving direction of terminal and correcting position thereof, and positioning apparatus using the method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Barnhill, et al., "A marching method for parametric surface/surface intersection", Computer Aided Geometric Design, 1990, pp. 257-280, vol. 7, No. 1-04, XP008023488.
Jayaram, et al., "A virtual assembly design environment", Proceedings IEEE Virtual Reality, Mar. 13, 1999, pp. 172-179,, XP055315372.

Also Published As

Publication number Publication date
CN106326523B (en) 2020-06-30
JP6676294B2 (en) 2020-04-08
KR20170003435A (en) 2017-01-09
KR102059834B1 (en) 2019-12-27
JP2017016376A (en) 2017-01-19
EP3113117A1 (en) 2017-01-04
US20170004653A1 (en) 2017-01-05
CN106326523A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
US10453235B2 (en) Image processing apparatus displaying image of virtual object and method of displaying the same
US10366539B2 (en) Information processing apparatus, information processing method, and storage medium for reporting based on elapse time and positional relationships between 3-D objects
US11490062B2 (en) Information processing apparatus, information processing method, and storage medium
US9607439B2 (en) Information processing apparatus and information processing method
US10410420B2 (en) Information processing apparatus, information processing method, and storage medium
US10614590B2 (en) Apparatus for determination of interference between virtual objects, control method of the apparatus, and storage medium
US11508150B2 (en) Image processing apparatus and method of controlling the same
US20170249822A1 (en) Apparatus configured to issue warning to wearer of display, and method therefor
US11527020B2 (en) Information processing apparatus, information processing method, and storage medium
US10573073B2 (en) Information processing apparatus, information processing method, and storage medium
JP2015079433A5 (en)
US20170365084A1 (en) Image generating apparatus and image generating method
US11423622B2 (en) Apparatus for generating feature positions in a virtual world, information processing method, and storage medium
US10672196B2 (en) Information processing apparatus and method of controlling an information processing apparatus that generate, for display, a composed image in which a computer graphics object is overland on a portion of the image
JP2007004713A (en) Image processing method and image processor
JP2017184136A (en) Information processing device, information processing method, information processing system, and program
JP7039216B2 (en) Information processing equipment and its methods, programs
JP6983939B2 (en) Information processing equipment, information processing methods, programs
US10068375B2 (en) Information processing apparatus, information processing method, and recording medium
US20200258193A1 (en) Information processing apparatus, information processing method, and storage medium
JP6572406B1 (en) Warning device, detection device, warning method, program, and non-transitory computer-readable information recording medium
JP2017084319A (en) Information processor, information processing method, and program
WO2016148142A1 (en) Display data processing device, display device, and program
JP2019168252A (en) Processing device, three-dimensional model division method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TAKEMOTO, KAZUKI;REEL/FRAME:039680/0612

Effective date: 20160603

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4