
US10089909B2 - Display control method, display control device, and display system

Display control method, display control device, and display system

Info

Publication number
US10089909B2
US10089909B2 (application US15/700,783 / US201715700783A)
Authority
US
United States
Prior art keywords
targets
user
image
picture
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/700,783
Other versions
US20170372649A1 (en)
Inventor
Atsushi Itoh
Aiko OHTSUKA
Tetsuya Sakayori
Hidekazu Suzuki
Takanobu Tanaka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd
Priority to US15/700,783
Publication of US20170372649A1
Application granted
Publication of US10089909B2
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G 3/02 - G09G 3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G 3/002 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G 3/02 - G09G 3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background, to project the image of a two-dimensional display, such as an array of light emitting or modulating elements or a CRT
    • G09G 3/003 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G 3/02 - G09G 3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background, to produce spatial visual effects
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/003 Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, display composed of modules, e.g. video walls
    • G09G 2300/00 Aspects of the constitution of display devices
    • G09G 2300/02 Composition of display devices
    • G09G 2300/026 Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/10 Special adaptations of display systems for operation with variable images
    • G09G 2320/106 Determination of movement vectors or equivalent parameters within the image

Definitions

  • the present invention relates to a display control method, a display control device, and a display system.
  • 3DCG: three-dimensional computer graphics
  • Japanese Patent No. 4161325 discloses a technique for displaying, in an image using 3DCG, an image hand-drawn by an animator while adding motion to it and combining it with a background image prepared in advance.
  • a display control method includes: inputting user's image including a drawing portion made by hand drawing and being a display target image; and performing image control including causing the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moving the user's image that has emerged.
  • a display control device includes: an input unit that inputs user's image including a drawing portion made by hand drawing and being a display target image; and an image control unit that causes the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moves the user's image that has emerged.
  • a display system includes: an image input apparatus; an image processing apparatus; and at least one display apparatus.
  • the image input apparatus includes an image input unit, through which an image including a drawing portion made by hand drawing and being a display target image is input.
  • the display apparatus includes a display unit that displays an image.
  • the image processing apparatus includes an input unit that inputs user's image including the drawing portion made by hand drawing and being a display target image, originating from the image input through the image input apparatus, and an image control unit that causes the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moves the user's image that has emerged.
  • FIG. 1 is a block diagram illustrating an example configuration of a display system according to a first embodiment.
  • FIG. 2 is a diagram schematically illustrating image-data space with a three-dimensional coordinate system according to the embodiment.
  • FIG. 3 is a block diagram illustrating an example configuration of a PC applicable to the embodiment.
  • FIG. 4 is an example functional block diagram for describing functions of the PC according to the embodiment.
  • FIG. 5 is an example flowchart of a general procedure of a display control process according to the embodiment.
  • FIG. 6 is a diagram illustrating an example of a sheet on which a drawing is to be hand-drawn according to the embodiment.
  • FIG. 7 is a diagram for describing an aspect ratio R of user's image data according to the embodiment.
  • FIG. 8 is a flowchart of a procedure of a deformation-mode determining process to be performed by a mode determining unit according to the embodiment.
  • FIG. 9 is a diagram for describing motion of a first mode according to the embodiment.
  • FIG. 10 is a diagram for describing motion of a second mode according to the embodiment.
  • FIG. 11 is a diagram for describing motion of a third mode according to the embodiment.
  • FIG. 12 is a flowchart of an example procedure of a parameter-generating-and-determining process to be performed by a parameter generating unit according to the embodiment.
  • FIG. 13 is a diagram illustrating an example of a method for determining parameters p using feature values c and maximum values m according to the embodiment.
  • FIGS. 14A and 14B are diagrams for describing projection of image-data space according to the embodiment.
  • FIG. 15 is a flowchart of an example procedure of controlling motion of target user's object by an image control unit according to the embodiment.
  • FIG. 16 is a diagram for describing a first modification of the embodiment.
  • FIG. 17 is a diagram for describing a second modification of the embodiment.
  • FIG. 18 is a diagram for describing a third modification of the embodiment.
  • FIG. 1 is a block diagram illustrating an example configuration of a display system according to a first embodiment.
  • a display system 1 includes a personal computer (PC) 10 , one or more projector apparatuses (PJs) 11 1 , 11 2 , and 11 3 , and a scanner apparatus 20 .
  • the PC 10 generates display image data by performing predetermined image processing on image data obtained by the scanner apparatus 20 by scanning an original document (hereinafter, “document”) 21 and supplies the display image data to the PJs 11 1 , 11 2 , and 11 3 .
  • the PJs 11 1 , 11 2 , and 11 3 project respective images 13 1 , 13 2 , and 13 3 onto a screen 12 according to the display image data fed from the PC 10 .
  • a camera 14 captures the images 13 1 , 13 2 , and 13 3 projected on the screen 12 .
  • the PC 10 adjusts images of the overlapping portions by controlling the images 13 1 , 13 2 , and 13 3 or the PJs 11 1 , 11 2 , and 11 3 based on captured image data.
  • an image of the document 21 on which a picture 22 is hand-drawn by a user 23 , for example, is read by the scanner apparatus 20 .
  • the scanner apparatus 20 obtains document image data by scanning the image of the document 21 and supplies the document image data to the PC 10 .
  • the PC 10 extracts data representing the picture 22 from the document image data fed from the scanner apparatus 20 , and holds the extracted image data as user's image data.
  • the PC 10 generates image-data space with a three-dimensional coordinate system.
  • the PC 10 assigns coordinates in the image-data space to user's image data, thereby incorporating the user's image data as data in the image-data space.
  • the user's image data in the three-dimensional image-data space is referred to as “user's object”.
  • the PC 10 generates image data by projecting the three-dimensional image-data space including the user's object onto a two-dimensional image-data plane.
  • the PC 10 then divides the generated image data into a number of pieces as many as the number (in this embodiment, three) of the PJs 11 1 , 11 2 , and 11 3 , and supplies the divided image data to the PJs 11 1 , 11 2 , and 11 3 .
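  • As an illustrative sketch (not part of the patent disclosure), dividing a rendered frame among the PJs 11 1, 11 2, and 11 3 can be done by slicing the two-dimensional image into vertical strips; the Python code below assumes equal-width strips and a hypothetical frame size, and omits the blending of overlapping portions.

```python
import numpy as np

def split_for_projectors(frame: np.ndarray, num_projectors: int = 3) -> list:
    """Split a rendered frame (H x W x 3) into equal-width vertical strips,
    one per projector. Overlap/edge-blending handling is omitted."""
    height, width, _ = frame.shape
    strip_width = width // num_projectors
    strips = []
    for i in range(num_projectors):
        left = i * strip_width
        # The last strip absorbs any remainder pixels.
        right = width if i == num_projectors - 1 else left + strip_width
        strips.append(frame[:, left:right])
    return strips

# Example: a 1080 x 5760 frame divided for three side-by-side projectors.
frame = np.zeros((1080, 5760, 3), dtype=np.uint8)
print([s.shape for s in split_for_projectors(frame)])
# [(1080, 1920, 3), (1080, 1920, 3), (1080, 1920, 3)]
```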
  • the PC 10 is capable of adding motion in the image-data space to user's object in the following manner, for example.
  • the PC 10 obtains feature values of user's image data, from which the user's object derives, and generates motion-related parameters including a deformation mode of the user's object.
  • the PC 10 adds motion to the user's object in the image-data space by applying the parameters to the user's image data.
  • the PC 10 can include a plurality of user's objects in the same image-data space. Accordingly, when the operation described above is repeatedly performed, the user 23 can view each of images derived from a plurality of the pictures 22 , which may be different pictures, for example, as moving in the three-dimensional image-data space.
  • FIG. 2 schematically illustrates image-data space, which is generated by the PC 10 , with a three-dimensional coordinate system according to the embodiment.
  • the Cartesian coordinate system having three mutually-orthogonal coordinate axes (the x-axis, the y-axis, and the z-axis) is used as the three-dimensional coordinate system.
  • the PC 10 generates image-data space 30 with a three-dimensional coordinate system in which the x-axis, the y-axis, and the z-axis are the height axis, the width axis, and the depth axis, respectively.
  • the image-data space 30 includes user's objects 40 1 , 40 2 , 40 3 , 40 4 , and 40 5 derived from user's image data.
  • the PC 10 defines a defined region 31 , which is a space defined by a height H, a width W, and a depth D having respective values determined in advance, in the image-data space 30 .
  • Motion of the user's objects 40 1 , 40 2 , . . . derived from the user's image data is confined within the defined region 31 .
  • the pictures 22 , from which the user's objects 40 1 , 40 2 , . . . derive, represent creatures living in water such as fish, squids, octopuses, and jellyfish.
  • the defined region 31 can be considered as a virtual aquarium.
  • the defined region 31 is referred to as the virtual aquarium 31 .
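  • A minimal sketch of the defined region (the virtual aquarium 31) as a data structure follows; the coordinate convention (x = height, y = width, z = depth) comes from FIG. 2, while the class name, the dimensions, and the clamping helper are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DefinedRegion:
    """The 'virtual aquarium': an axis-aligned box of height H, width W,
    and depth D inside the three-dimensional image-data space."""
    H: float
    W: float
    D: float

    def contains(self, p: np.ndarray) -> bool:
        upper = np.array([self.H, self.W, self.D])
        return bool(np.all(p >= 0.0) and np.all(p <= upper))

    def clamp(self, p: np.ndarray) -> np.ndarray:
        """Confine a user's object position to the region."""
        return np.clip(p, 0.0, np.array([self.H, self.W, self.D]))

aquarium = DefinedRegion(H=10.0, W=30.0, D=15.0)      # illustrative dimensions
print(aquarium.contains(np.array([5.0, 5.0, 5.0])))   # True
print(aquarium.clamp(np.array([12.0, 5.0, -1.0])))    # [10.  5.  0.]
```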
  • FIG. 3 illustrates an example configuration of the PC 10 that is applicable to the embodiment.
  • a central processing unit (CPU) 110, a read only memory (ROM) 111, a random access memory (RAM) 112, and a display control unit 113 are connected to a bus 100. A storage 114, a data interface (I/F) 115, and a communication I/F 116 are also connected to the bus 100.
  • the CPU 110 controls the entire PC 10 according to programs stored in advance in the ROM 111 and the storage 114, using the RAM 112 as a working memory.
  • the display control unit 113, to which a monitor 120 is connected, converts a display control signal generated by the CPU 110 into a signal usable by the monitor 120 for display, and outputs the converted signal to the monitor 120.
  • the display control unit 113 is also capable of converting the display control signal into signals usable by the projector apparatuses 11 1 , 11 2 , and 11 3 for displaying, and outputting the signals to the PJs 11 1 , 11 2 , and 11 3 .
  • the storage 114 is a storage medium capable of nonvolatile storage of data and can be, for instance, a hard disk drive. Alternatively, a nonvolatile semiconductor memory, such as a flash memory, may be used as the storage 114 .
  • the storage 114 stores the program to be executed by the CPU 110 and various types of data.
  • the data I/F 115 controls data inputs and outputs from and to external equipment.
  • the data I/F 115 is used as an interface between the PC 10 and the scanner apparatus 20 .
  • the data I/F 115 also receives signal input from a pointing device, such as a mouse, and a keyboard (KBD) (not shown).
  • the display control signal generated by the CPU 110 may be output via the data I/F 115 to the projector apparatuses 11 1 , 11 2 , and 11 3 , for example.
  • A Universal Serial Bus (USB) interface, a Bluetooth (registered trademark) interface, or the like can be used as the data I/F 115.
  • the communication I/F 116 controls communication over a network such as the Internet or a local area network (LAN).
  • FIG. 4 is an example functional block diagram for describing functions of the PC 10 according to the embodiment.
  • the PC 10 includes an input unit 130 , an image obtaining unit 131 , a 3D-space generating unit 132 , a region defining unit 133 , a mode determining unit 134 , a parameter generating unit 135 , an image control unit 136 , and a storage unit 137 .
  • the input unit 130 inputs an image including a drawing portion hand-drawn by the user 23 .
  • the input unit 130 may have a function of the data I/F 115 and input document image data obtained by the scanner apparatus 20 by scanning an image of the document 21 including the picture 22 , which is the drawing portion hand-drawn by the user 23 .
  • the input unit 130 extracts, as user's image data, image data representing the picture 22 from the document image data.
  • the 3D-space generating unit 132 generates the image-data space 30 having the three-dimensional coordinate axes, which are the three axes: the height axis, the width axis, and the depth axis, described above with reference to FIG. 2 .
  • the 3D-space generating unit 132 generates the image-data space 30 as an address space in the RAM 112 , for example.
  • the region defining unit 133 defines the defined region 31 (the virtual aquarium 31 ) with the height H, the width W, and the depth D, which are the predetermined values, in the image-data space 30 generated by the 3D-space generating unit 132 .
  • the image obtaining unit 131 sets a predetermined eye point with respect to the image-data space 30 , projects the image-data space 30 onto a two-dimensional image-data plane from the set eye point, and obtains image data to be projected by the projector apparatuses 11 1 , 11 2 , and 11 3 .
  • based on the user's image data obtained by the input unit 130, the mode determining unit 134 determines a mode (hereinafter, "deformation mode") that is to be assigned to the user's object derived from the user's image data when the user's image data is incorporated into the image-data space 30 and based on which the user's object is to be deformed.
  • the parameter generating unit 135 generates motion-related parameters that determine motion-related properties of the user's object, based on the user's image data obtained by the input unit 130 .
  • the image control unit 136 controls motion of the user's object in the image-data space 30 according to the deformation mode determined by the mode determining unit 134 and the parameters generated by the parameter generating unit 135 . In other words, the image control unit 136 performs control of assigning coordinates in the virtual aquarium 31 in the image-data space 30 to the user's object and continuously changing the coordinates with lapse of time.
  • the storage unit 137 corresponds to the RAM 112 and stores user's image data, from which user's object derives, and the like.
  • the storage unit 137 is not limited to the RAM 112 , and the storage 114 may be used as the storage unit 137 .
  • determination of the deformation mode by the mode determining unit 134 and generation of the parameters by the parameter generating unit 135 are performed by using the user's image data stored in the storage unit 137 .
  • the image control unit 136 is capable of incorporating the user's image data into the image-data space 30 as user's object by assigning coordinates in the virtual aquarium 31 to the user's image data stored in the storage unit 137 .
  • the image control unit 136 deforms and moves the user's object according to the deformation mode and the parameters.
  • the input unit 130 , the image obtaining unit 131 , the 3D-space generating unit 132 , the region defining unit 133 , the mode determining unit 134 , the parameter generating unit 135 , and the image control unit 136 included in the PC 10 may be implemented by a display control program that is stored in the storage 114 in advance and executed on the CPU 110 , for instance.
  • the display control program may be provided by being recorded as a file in an installable format or in an executable format in a computer-readable recording medium such as a compact disc (CD), a flexible disk (FD), or a digital versatile disk (DVD).
  • the display control program to be executed by the PC 10 of the embodiment may be configured to be provided by being stored in a computer connected to a network such as the Internet and downloaded over the network.
  • the display control program to be executed by the PC 10 of the embodiment may be configured to be provided or distributed over a network such as the Internet.
  • the display control program of the embodiment may be configured to be provided by being installed in the ROM 111 or the like in advance.
  • the display control program to be executed by the PC 10 of the embodiment has a module structure including the units (the input unit 130 , the image obtaining unit 131 , the 3D-space generating unit 132 , the region defining unit 133 , the mode determining unit 134 , the parameter generating unit 135 , and the image control unit 136 ) described above.
  • the CPU 110 reads out the display control program from a storage medium, e.g., the storage 114 or the ROM 111 , and executes the program to load the units on a main storage device, e.g., the RAM 112 , thereby generating the input unit 130 , the image obtaining unit 131 , the 3D-space generating unit 132 , the region defining unit 133 , the mode determining unit 134 , the parameter generating unit 135 , and the image control unit 136 on the main storage device.
  • FIG. 5 is an example flowchart of a general procedure of the display control process according to the embodiment. Prior to execution of the process of this flowchart, a hand-drawn picture is made by a user. It is assumed in this example that a user hand-draws a picture on a sheet of a predetermined format.
  • FIG. 6 illustrates an example of a sheet on which a picture is to be hand-drawn according to the embodiment.
  • a drawing region 210 and a title text region 211 are arranged on a sheet 200 illustrated on the left of FIG. 6 .
  • a picture 212 from which an object (user's object) to be incorporated in the image-data space 30 is to be derived, is to be hand-drawn in the drawing region 210 .
  • a title of the picture drawn in the drawing region 210 is to be written in the title text region 211 .
  • Marks 220 1 , 220 2 , and 220 3 are arranged at three of the four corners of the sheet 200. The orientation and size of the sheet 200 can be obtained by detecting the marks 220 1 , 220 2 , and 220 3 from a document image, which is obtained by the scanner apparatus 20 by scanning an image of the sheet 200.
  • In Step S 10 of the flowchart illustrated in FIG. 5, the scanner apparatus 20 scans an image of the sheet 200, on which the picture 212 is hand-drawn by a user, and supplies the document image data originating from the scanned image to the PC 10, where it is input to the input unit 130.
  • the input unit 130 of the PC 10 extracts user's image data from the input document image data. More specifically, the input unit 130 detects the marks 220 1 , 220 2 , and 220 3 from the document image data. The input unit 130 determines orientation and size of the document image data based on locations of the marks 220 1 , 220 2 , and 220 3 on the document image data.
  • the input unit 130 may find a markerless corner, which is one of the four corners of the document image data and at which none of the marks 220 1 , 220 2 , and 220 3 is detected, and determines the orientation of the document image data based on the location of the markerless corner. In the example illustrated in FIG. 6 , the input unit 130 determines that the markerless corner, at which none of the marks 220 1 , 220 2 , and 220 3 is detected, is the bottom right corner of the document image data. Based on this, the locations of the marks 220 1 , 220 2 , and 220 3 on the document image data may be determined.
  • the input unit 130 measures distances between the respective marks 220 1 , 220 2 , and 220 3 , and compares the distances against known corresponding distances stored in the storage 114 or the like in advance. Warping in vertical and horizontal size of the document image data can be corrected based on a result of this comparison.
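  • A sketch of this size-warp correction is given below, assuming the detected centers of the marks 220 1, 220 2, and 220 3 are available as pixel coordinates; the known distances, numeric values, and function name are hypothetical, not values from the patent.

```python
import numpy as np

# Known distances (in pixels at the nominal scan resolution) between the corner
# marks; these constants are illustrative only.
KNOWN_HORIZONTAL = 2300.0   # mark 220_1 -> 220_2 (along the top edge)
KNOWN_VERTICAL = 3200.0     # mark 220_1 -> 220_3 (along the left edge)

def correction_factors(m1: np.ndarray, m2: np.ndarray, m3: np.ndarray):
    """Compare measured mark-to-mark distances against the known distances and
    return (horizontal_scale, vertical_scale) to undo warping of the scan."""
    measured_h = np.linalg.norm(m2 - m1)
    measured_v = np.linalg.norm(m3 - m1)
    return KNOWN_HORIZONTAL / measured_h, KNOWN_VERTICAL / measured_v

# Example with slightly warped mark positions; the markerless (bottom-right)
# corner would already have been used to fix the page orientation.
sx, sy = correction_factors(np.array([100.0, 120.0]),
                            np.array([2380.0, 118.0]),
                            np.array([102.0, 3310.0]))
print(round(sx, 3), round(sy, 3))
```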
  • the input unit 130 extracts the drawing region 210 and the title text region 211 from the document image data based on the orientation and the size of the document image data obtained as described above.
  • An example of an image represented by the image data of the drawing region 210 and that of the title text region 211 extracted from the document image data are illustrated on the right of FIG. 6 .
  • the input unit 130 further extracts the picture 212 from the image data representing the drawing region 210 .
  • the input unit 130 may extract the picture 212 by determining whether each pixel constituting the image data representing the drawing region 210 is white (which is the background color of the sheet 200 ) or other-than-white.
  • the input unit 130 determines image data in a minimum rectangular region 213 including the picture 212 and having a lower side parallel to the lower side of the drawing region 210 as user's image data.
  • the input unit 130 stores the user's image data in the storage unit 137 .
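  • The extraction of the rectangular region 213 can be sketched as follows, assuming an RGB array for the drawing region 210 and a simple brightness threshold to separate the white background from drawn pixels (the threshold value and helper name are assumptions).

```python
import numpy as np

def extract_user_image(drawing_region: np.ndarray, white_threshold: int = 240) -> np.ndarray:
    """Cut the minimum axis-aligned rectangle containing all non-white pixels
    of the drawing region; this rectangle corresponds to region 213."""
    non_white = np.any(drawing_region < white_threshold, axis=2)
    ys, xs = np.nonzero(non_white)
    if ys.size == 0:
        raise ValueError("drawing region appears to be empty")
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    return drawing_region[top:bottom + 1, left:right + 1]

# Example: a white page with a dark picture drawn in the middle.
region = np.full((400, 600, 3), 255, dtype=np.uint8)
region[150:250, 200:350] = 30
print(extract_user_image(region).shape)  # (100, 150, 3)
```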
  • In Step S 12, the mode determining unit 134 of the PC 10 determines a deformation mode to be assigned to the user's image data extracted in Step S 11.
  • A deformation mode is assigned to each of the user's objects in the virtual aquarium 31. Motion of each user's object in the virtual aquarium 31 is controlled based on the deformation mode.
  • three deformation modes are defined.
  • the three modes are: a first mode in which the user's object is deformed such that a trailing one (with respect to the moving direction in which the user's object moves) of two parts, into which the user's object is vertically divided, is moved; a second mode in which the user's object is deformed such that the user's object is vertically extended and contracted; and a third mode in which the user's object is deformed such that a lower one of two parts, into which the user's object is horizontally divided, is moved.
  • an aspect ratio R of the user's image data is obtained. Which one of the first, second, and third modes is to be assigned to the user's object originating from the user's image data is determined according to the aspect ratio R.
  • the aspect ratio R of user's image data is described below with reference to FIG. 7 .
  • the rectangular region 213 illustrated in FIG. 7 is a minimum region including a portion of the picture 212 (which is, for example, a portion where pixel values are other-than-white) in the drawing region 210 and having the lower side parallel to the lower side of the drawing region 210 as described above.
  • the aspect ratio R is expressed by Equation (1) below using a height h and a width w of the rectangular region 213 . Equation (1) indicates that the larger the aspect ratio R is, the more vertically elongated the rectangular region 213 is.
  • R = h/w (1)
  • the direction extending through the marks 220 1 and 220 3 is the vertical direction of the rectangular region 213 ; the direction extending through the marks 220 1 and 220 2 is the horizontal direction of the rectangular region 213 .
  • FIG. 8 is a flowchart of a procedure of a deformation-mode determining process to be performed by the mode determining unit 134 in Step S 12 .
  • the mode determining unit 134 obtains the aspect ratio R of the user's image data in Step S 20 as described above.
  • the mode determining unit 134 makes a determination about the value of the aspect ratio R in Step S 21. More specifically, in the embodiment, the mode determining unit 134 determines to which of the following three ranges the aspect ratio R belongs: smaller than 1; equal to or larger than 1 and smaller than a predetermined value s; or equal to or larger than the predetermined value s.
  • the deformation mode is assigned to the user's object originating from the user's image data according to the aspect ratio R.
  • If the mode determining unit 134 determines that the aspect ratio R is smaller than 1 in Step S 21, the process proceeds to Step S 22, in which the mode determining unit 134 assigns the first mode to the object originating from the user's image data.
  • Motion of the first mode according to the embodiment is described below with reference to FIG. 9.
  • In FIG. 9, and in FIGS. 10 and 11 which will be referenced later, the rectangular region 213 is depicted as larger relative to the picture 212 for clarity.
  • the first mode is a mode in which an image of the rectangular region 213 represented by the user's image data is deformed in the following manner.
  • the image is vertically divided at a dividing line 221 which may be, for example, at a center of the image.
  • One (hereinafter, referred to as “moving part”) of two parts, into which the image is divided, is continuously moved or, more specifically, moved continuously with lapse of time.
  • the first mode is meant for creating a typical motion of fish or the like.
  • the moving part is defined so as to swing in a front and back direction on the dividing line 221 within a predetermined angular range. That is, when the rectangular region 213 is in the image-data space 30 , the moving part is moved such that the moving part pivots about the dividing line 221 within the predetermined angular range in a plane, which is orthogonal to a plane where the rectangular region 213 initially resides and to the dividing line 221 .
  • the moving part is preferably a trailing part with respect to the moving direction of the user's object, so that appearance of motion similar to a swimming motion of fish can be created.
  • a scheme of specifying the orientation of the picture 212 hand-drawn on the drawing region 210 of the sheet 200 in advance can be employed, for example.
  • an applicable scheme is not limited thereto.
  • a mark indicating the moving direction may be embedded in the picture 212 .
  • the eye is drawn in black, and black is not used in portions other than the eye. Accordingly, the orientation of the picture 212 can be determined by detecting a black portion from the picture 212.
  • If the mode determining unit 134 determines that the aspect ratio R is equal to or larger than 1 and smaller than the predetermined value s in Step S 21, the process proceeds to Step S 23, in which the mode determining unit 134 assigns the second mode to the object originating from the user's image data.
  • the second mode according to the embodiment is described below with reference to FIG. 10 .
  • the image of the rectangular region 213 represented by the user's image data is deformed such that the image is continuously vertically extended and contracted.
  • the image may be extended and contracted at predetermined intervals, for example.
  • the second mode is meant for creating motion of a creature, such as jellyfish, orientation of which is obscure and which is less active.
  • a value which will cause an extended image and a contracted image to appear natural is to be employed as the predetermined value s, which is a criterion for determining the second mode and the third mode described later.
  • a value which will not cause a vertically extended image to appear to be excessively elongated and will not cause a vertically contracted image to appear to be excessively flattened may preferably be experimentally obtained as the predetermined value s, for example.
  • If the mode determining unit 134 determines that the aspect ratio R is equal to or larger than the predetermined value s in Step S 21, the process proceeds to Step S 24, in which the mode determining unit 134 assigns the third mode to the object originating from the user's image data.
  • the third mode is a mode in which the image of the rectangular region 213 represented by the user's image data is deformed in the following manner.
  • the image is horizontally divided at a dividing line 224 which may be, for example, at a center of the image.
  • One of two parts, into which the image is divided, is moved as a moving part.
  • the third mode is meant for creating motion of a creature, such as a squid or an octopus, which is usually represented in a vertically elongated form.
  • the height of the moving part is kept constant, the upper side of the moving part is made to be the dividing line 224, and the lower side of the moving part is continuously moved in a left and right direction. More specifically, in the third mode, the moving part moves on the plane of the rectangular region 213 while keeping its parallelogram shape, which is constant in height and the upper side of which is the dividing line 224.
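  • The deformation-mode selection of Steps S 21 to S 24 can be summarized by the sketch below; the threshold value s is illustrative (the description only says it should be determined experimentally), and the helper name is hypothetical.

```python
FIRST_MODE, SECOND_MODE, THIRD_MODE = 1, 2, 3

def determine_deformation_mode(h: float, w: float, s: float = 1.6) -> int:
    """Assign a deformation mode from the aspect ratio R = h / w of the
    rectangular region 213 (Equation (1))."""
    r = h / w
    if r < 1.0:
        return FIRST_MODE    # wide picture: swing a trailing part (fish-like)
    if r < s:
        return SECOND_MODE   # in-between: extend/contract vertically (jellyfish-like)
    return THIRD_MODE        # tall picture: move the lower part sideways (squid/octopus-like)

print(determine_deformation_mode(h=120, w=300))  # 1
print(determine_deformation_mode(h=200, w=180))  # 2
print(determine_deformation_mode(h=400, w=150))  # 3
```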
  • When one of the deformation modes is assigned in any one of Steps S 22 to S 24, the process proceeds to Step S 13 of FIG. 5.
  • In Step S 13, the parameter generating unit 135 of the PC 10 generates and determines, for the user's object originating from the user's image data, parameters p that determine properties (hereinafter, "motion properties") related to motion of the user's object in the image-data space 30.
  • the motion of the user's object in the image-data space 30 is controlled according to the parameters p generated and determined here.
  • the parameter generating unit 135 determines parameters p 0 to p 8 , which are the following nine types of motion properties, on a per-user's-object basis.
  • the parameter p 0 (the maximum moving-direction velocity v max) and the parameter p 1 (the moving-direction acceleration a) are parameters for controlling the moving-direction velocity of the user's object in the image-data space 30.
  • the parameter p 2 (the maximum angular acceleration ωh max in the horizontal plane) and the parameter p 3 (the maximum angular acceleration ωv max in the vertical plane) are parameters for rotating the user's object in the horizontal plane and the vertical plane, respectively.
  • the parameter p 4 is a parameter for providing the maximum value of the changing rate when the moving velocity of the user's object in the vertical direction is to be changed. More specifically, in the embodiment, vertical motion of the user's object is confined within the virtual aquarium 31 by reversing a vertical component of the moving direction when the user's object contacts a top end surface (ceiling) or a bottom end surface (floor) of the virtual aquarium 31.
  • the parameter p 4 provides the maximum value of the angular reversal velocity at which the vertical component of the moving direction is to be reversed.
  • the parameters p 5 and p 6 are parameters for setting a bound to angular acceleration of a rotating motion of the user's object in the vertical direction or, in other words, upward or downward to the moving direction.
  • Angular acceleration given by the parameters p 5 and p 6 is limited by the parameter p 3, the maximum angular acceleration ωv max in the vertical plane.
  • the parameters p 7 and p 8 are parameters for setting a bound to angular acceleration of a rotating motion of the user's object in the horizontal direction or, more specifically, rightward or leftward to the moving direction.
  • FIG. 12 is a flowchart of an example procedure of a parameter-generating-and-determining process to be performed by the parameter generating unit 135 in Step S 13 .
  • the parameter generating unit 135 extracts feature values c 0 to c 8 of the user's image data corresponding to the respective parameters p 0 to p 8 in Step S 30 .
  • the parameter generating unit 135 obtains maximum values m 1 to m 3 of each of the parameters p 0 to p 8 for each of the deformation modes or, more specifically, for each of the first, second, and third modes in Step S 31 .
  • In Step S 32, the parameter generating unit 135 determines the parameters p 0 to p 8 of the user's object originating from the user's image data by using the feature values c 0 to c 8 extracted in Step S 30 and the maximum values m 1 to m 3 for the respective parameters p 0 to p 8 obtained in Step S 31.
  • Feature values of an image can be obtained based on color distribution, edge level, or the like of user's image data.
  • feature values may be obtained based on bit information of each pixel constituting user's image data.
  • the feature values are obtained using the method described in Japanese Laid-open Patent Application No. 2009-101122.
  • the parameter generating unit 135 repeatedly and sequentially extracts n-bit (n is a predetermined positive integer) data, i.e., predetermined-bit strings, from user's image data stored in the storage unit 137 .
  • the parameter generating unit 135 repeatedly and sequentially extracts 8-bit data from the user's image data until the terminal end of the user's image data.
  • the n-bit strings extracted from the user's image data are referred to as extracted data.
  • the parameter generating unit 135 classifies the extracted data into the parameters p 0 to p 8 described above in accordance with a preset classification rule.
  • the classification rule provides, for example, data ranges associated with the respective parameters p 0 to p 8 for classifying the extracted data into the parameters p 0 to p 8 .
  • a range of data “h00 to h6F” is classified into the “parameter p 0 ”; a range of data “h70 to h9F” is classified into the “parameter p 1 ”; a range of data “hA0 to hFF” is classified into the “parameter p 2 ”.
  • the parameter generating unit 135 classifies the extracted data extracted from the user's image data in accordance with the ranges of data, which is preset as the classification rule. For example, when the extracted data is “b01101100” (the prefix letter “b” indicates that digits following “b” are in the binary notation), this eight-digit binary number is converted to “h6C” in the two-digit hexadecimal notation. Accordingly, when the extracted data is “b01101100”, the extracted data is classified into the parameter p 0 to which the range of data “h00 to h6F” is allocated. When the extracted data is “b00110100”, which is “h34” in the two-digit hexadecimal notation, this value “b00110100” is also classified into the parameter p 0 .
  • the parameter generating unit 135 calculates, for each of the parameters p 0 to p 8 , an extraction count (extraction frequency) of how many times extracted data is classified into the parameter. For example, in the example described above, extracted data is classified into the parameter p 0 two times (“b01101100” and “b00110100”); accordingly, the extraction count is “2”. This extraction count is used as the feature value c 0 corresponding to the parameter p 0 .
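  • A sketch of this feature-value extraction is shown below, using only the documented 8-bit classification ranges for p 0 to p 2 (the ranges for p 3 to p 8 are not given in the text, so they are omitted here).

```python
import numpy as np

# Documented classification rule: 0x00-0x6F -> p0, 0x70-0x9F -> p1, 0xA0-0xFF -> p2.
RANGES = {0: (0x00, 0x6F), 1: (0x70, 0x9F), 2: (0xA0, 0xFF)}

def feature_values(user_image: np.ndarray) -> dict:
    """Extract 8-bit strings from the user's image data and count how many fall
    into each parameter's range; the counts are the feature values c_i."""
    counts = {i: 0 for i in RANGES}
    for byte in user_image.astype(np.uint8).tobytes():
        for i, (lo, hi) in RANGES.items():
            if lo <= byte <= hi:
                counts[i] += 1
                break
    return counts

data = np.array([0b01101100, 0b00110100, 0xA5, 0x71], dtype=np.uint8)
print(feature_values(data))  # {0: 2, 1: 1, 2: 1}
```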
  • In Step S 31 of FIG. 12, the parameter generating unit 135 obtains the maximum values m, which have been set in advance for each of the parameters p 0 to p 8 on a per-deformation-mode basis.
  • the maximum values m, which have been set on the per-deformation-mode basis for each of the respective parameters p 0 to p 8, are stored in a nonvolatile storage medium such as the storage 114 in advance.
  • the parameter generating unit 135 obtains the maximum values m by reading them from the nonvolatile storage medium in Step S 31 .
  • In Step S 32 of FIG. 12, the parameter generating unit 135 determines the parameters p 0 to p 8 by using the feature values c 0 to c 8 obtained in Step S 30 and the maximum values m obtained in Step S 31.
  • FIG. 13 illustrates an example of a method for determining the parameters p 0 to p 8 using the feature values c 0 to c 8 and the maximum values m according to the embodiment. Note that only the parameters p 0 to p 2 of the parameters p 0 to p 8 are illustrated in FIG. 13 for clarity.
  • the suffix letter “x” added to the maximum value m xy indicates the parameters p 0 to p 8 ; the suffix letter “y” indicates the first to third mode. More specifically, maximum values m 01 , m 02 , and m 03 for the first, second, and third modes, respectively, are set for the parameter p 0 in advance. Similarly, maximum values m 11 , m 12 , and m 13 for the first, second, and third modes, respectively, are set for the parameter p 1 in advance. In this example, each of the maximum values m xy is a value larger than 0 and equal to or smaller than 1.
  • the parameter p 2 can be determined in a similar manner.
  • the parameter generating unit 135 stores the parameters p 0 to p 8 determined in this manner in the storage unit 137 .
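  • FIG. 13 itself is not reproduced here; the sketch below shows one plausible mapping (an assumption, not the patented formula) in which each feature value c x is normalized by the largest feature value and scaled by the per-mode maximum m xy, so every parameter falls in (0, m xy].

```python
def determine_parameters(features: dict, max_values: dict) -> dict:
    """Assumed mapping: p_x = m_xy * c_x / max(c), so each parameter is bounded
    by its per-mode maximum m_xy."""
    largest = max(features.values()) or 1
    return {x: max_values[x] * features[x] / largest for x in features}

# Per-mode maximums m_x1 for the first mode (illustrative values, 0 < m <= 1).
max_values_first_mode = {0: 1.0, 1: 0.5, 2: 0.8}
print(determine_parameters({0: 2, 1: 1, 2: 1}, max_values_first_mode))
# {0: 1.0, 1: 0.25, 2: 0.4}
```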
  • In Step S 14, the image control unit 136 defines a partially-transparent region for the user's image data.
  • the image control unit 136 detects an inner region and an outer region of the part of the picture 212 extracted by the input unit 130 from the rectangular region 213 including the user's image data as described above with reference to FIG. 6 .
  • the image control unit 136 sets white pixels in the detected inner region to be partially-transparent.
  • the image control unit 136 sets the detected outer region to be transparent.
  • Image data can be set to be partially-transparent or transparent by using a known alpha-blending technique.
  • Defining a partially-transparent region on the part of the picture 212 represented by the user's image data in this manner makes the user's object originating from the user's image data appear more water-creature-like in the virtual aquarium 31 .
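  • The transparency assignment of Step S 14 can be sketched with a simple alpha channel, assuming an RGB array, a boolean mask marking the inside of the drawn outline, and a 50% alpha value for inner white pixels (the alpha value and mask computation are assumptions).

```python
import numpy as np

def add_alpha(user_image: np.ndarray, inside_mask: np.ndarray,
              partial_alpha: int = 128) -> np.ndarray:
    """Attach an alpha channel: the outer region becomes fully transparent,
    white pixels inside the outline become partially transparent, and the
    drawn strokes stay opaque."""
    h, w, _ = user_image.shape
    alpha = np.full((h, w), 255, dtype=np.uint8)
    white = np.all(user_image > 240, axis=2)
    alpha[~inside_mask] = 0                       # outer region: transparent
    alpha[inside_mask & white] = partial_alpha    # inner white: partially transparent
    return np.dstack([user_image, alpha])

# Tiny all-white example with a 2 x 2 "inside" region.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(add_alpha(img, mask)[:, :, 3])  # 0 outside, 128 inside
```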
  • In the next Step S 15, the image control unit 136 assigns, to the user's object, initial coordinates at which the user's object originating from the user's image data is to be displayed in the virtual aquarium 31 for the first time.
  • Projection of the image-data space 30 by the image obtaining unit 131 according to the embodiment is described below with reference to FIGS. 14A and 14B .
  • the virtual aquarium 31 is projected because motion of the user's objects is confined within the virtual aquarium 31 in the image-data space 30 .
  • the depth and the width in the image-data space 30 are depicted as the vertical direction and the horizontal direction, respectively.
  • the image obtaining unit 131 projects the virtual aquarium 31 onto a plane containing the height H and the width W along the depth direction of the virtual aquarium 31 from an eye point 231, which is located outside the virtual aquarium 31, with an angular field of view θ as illustrated in FIG. 14A.
  • projection is performed by perspective projection.
  • the position of the eye point 231 and the angular field of view θ are set so as to form at least one of a left dead zone 240 1 and a right dead zone 240 2, each of which is a dead angle zone that is in a dead angle with respect to the eye point 231, in the virtual aquarium 31.
  • a region of width W v, which is obtained by excluding the left dead zone 240 1 and the right dead zone 240 2 from the plane with the height H and the width W of the virtual aquarium 31, is projected as illustrated in FIG. 14B.
  • Even if image data representing the portions of the left dead zone 240 1 and the right dead zone 240 2 in the virtual aquarium 31 is provided, the portions are not displayed on a projection image (displayed area) projected onto the two-dimensional plane. In other words, images of the left dead zone 240 1 and the right dead zone 240 2 are not visible in the image of the virtual aquarium 31 projected onto the two-dimensional plane.
  • the image control unit 136 sets the initial coordinates when the user's object emerges into the virtual aquarium 31 , to within the dead zone 240 1 or 240 2 .
  • the user's object, to which the initial coordinates in the dead zone 240 1 or 240 2 are assigned, is controlled by the image control unit 136 to move so as to enter the region of the width W v from the dead zone 240 1 or 240 2. Accordingly, when a user's object is newly created, the new user's object appears to move such that it emerges from an end (the left end or the right end) of the projection image. Consequently, an unnatural impression, which would otherwise be given if the new user's object emerged at a portion of the projected image where no object was displayed at all, can be avoided.
  • When projection is performed by perspective projection, the user's object is projected such that the size of the user's object in the virtual aquarium 31 is maximized when the user's object is on the near side or, more specifically, at a position closest to the eye point 231, and gradually decreases when the user's object moves in the depth direction. Because the dead zones 240 1 and 240 2 are on the near side in the depth direction, the newly-created user's image is projected as a large image. Accordingly, the newly-created user's image is more likely to be noticed.
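  • The dead-zone geometry is not given numerically in the description; the sketch below assumes a centered eye point at a fixed distance in front of the near face and a horizontal angular field of view θ, and places a new object's initial coordinates inside the left or right dead zone. All names and values are illustrative.

```python
import math
import random

def dead_zone_width(W: float, eye_distance: float, fov_deg: float) -> float:
    """Width of each dead zone (left/right) on the near face of the aquarium,
    under the assumed centered-eye-point geometry."""
    visible = 2.0 * eye_distance * math.tan(math.radians(fov_deg) / 2.0)
    return max(0.0, (W - visible) / 2.0)

def initial_coordinates(W: float, dz: float, depth_margin: float = 0.5):
    """Place a newly created user's object inside the left or right dead zone,
    near the front face, so it later appears to swim in from a screen edge."""
    if random.random() < 0.5:
        y = random.uniform(0.0, dz)        # left dead zone
    else:
        y = random.uniform(W - dz, W)      # right dead zone
    return y, depth_margin                 # (width coordinate, depth coordinate)

dz = dead_zone_width(W=30.0, eye_distance=12.0, fov_deg=90.0)
print(round(dz, 2), initial_coordinates(30.0, dz))  # 3.0 and a point near an edge
```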
  • In Step S 16, the image control unit 136 adds motion to the user's object to which the initial coordinates are assigned in Step S 15 and causes the user's object to start moving.
  • In the next Step S 17, the PC 10 determines whether or not a next document image is to be read. If the PC 10 determines that a next document image is to be read, the process returns to Step S 10. On the other hand, if the PC 10 determines that a next document image is not to be read, the PC 10 completes the sequence of processing of the flowchart illustrated in FIG. 5. Determination that a document image is not to be read can be made based on, for example, whether or not the scanner apparatus 20 is disconnected from the PC 10 or whether or not a predetermined period of time has elapsed since reading was last performed.
  • FIG. 15 is a flowchart of an example procedure of controlling motion of one user's object (target user's object) by the image control unit 136 according to the embodiment.
  • the image control unit 136 performs motion control through Steps S 40 to S 52 described below, in which the target user's object is deformed according to one of the first to third deformation modes assigned to the target user's object.
  • In Step S 40, the image control unit 136 determines, based on the parameter p 0, whether or not the moving-direction velocity v of the target user's object has exceeded the maximum moving-direction velocity v max. If the image control unit 136 determines that the velocity v has exceeded the maximum velocity v max, the process proceeds to Step S 42. If the image control unit 136 determines that the velocity v has not exceeded the maximum velocity v max, the process proceeds to Step S 41, in which the velocity v of the target user's object is increased by the moving-direction acceleration a. Thereafter, the process proceeds to Step S 42.
  • In Step S 42, the image control unit 136 determines, based on the parameter p 2, whether or not the angular acceleration ωh in the horizontal plane has exceeded the maximum value ωh max. If the image control unit 136 determines that the angular acceleration ωh has exceeded the maximum value ωh max, the process proceeds to Step S 44.
  • In Step S 43, the image control unit 136 increases or decreases the angular acceleration ωh of the target user's object in the horizontal plane to thereby change the changing rate of orientation in the horizontal plane.
  • the image control unit 136 increases or decreases the angular acceleration ωh by an amount that depends on a random number, which is generated to fall within a range between the parameter p 7, which is the maximum random number for rightward angle dRr max, and the parameter p 8, which is the maximum random number for leftward angle dRl max.
  • the image control unit 136 increases or decreases the changing rate of orientation of the target user's object in the horizontal plane according to the amount by which the angular acceleration ωh in the horizontal plane is increased or decreased.
  • the process proceeds to Step S 44 .
  • In Step S 44, the image control unit 136 determines, based on the parameter p 3, whether or not the angular acceleration ωv in the vertical plane has exceeded the maximum value ωv max. If the image control unit 136 determines that the angular acceleration ωv has exceeded the maximum value ωv max, the process proceeds to Step S 46.
  • In Step S 45, the image control unit 136 increases or decreases the angular acceleration ωv of the target user's object in the vertical plane to thereby change the changing rate of orientation in the vertical plane.
  • the image control unit 136 increases or decreases the angular acceleration ωv by an amount that depends on a random number, which is generated to fall within a range between the parameter p 5, which is the maximum random number for upward angle dRu max, and the parameter p 6, which is the maximum random number for downward angle dRd max.
  • the image control unit 136 increases or decreases the changing rate of orientation of the target user's object in the vertical plane according to the amount by which the angular acceleration ωv in the vertical plane is increased or decreased.
  • the process proceeds to Step S 46 .
  • In Step S 46, the image control unit 136 determines whether or not the target user's object has contacted the top or bottom wall (the ceiling or the floor) of the virtual aquarium 31. If the image control unit 136 determines that the target user's object has not contacted the top or bottom wall of the virtual aquarium 31, the process proceeds to Step S 48.
  • In Step S 47, the image control unit 136 reverses the changing rate of orientation of the target user's object in the vertical plane and, simultaneously, decreases the moving-direction velocity v. Reversing the changing rate of orientation in the vertical plane is performed by reversing the direction of the vertical component of the moving direction while maintaining the horizontal component of the same. Thereafter, the process proceeds to Step S 48.
  • In Step S 48, the image control unit 136 determines whether or not the target user's object has contacted a front, back, right, or left wall of the virtual aquarium 31. If the image control unit 136 determines that the target user's object has not contacted the front, back, right, or left wall of the virtual aquarium 31, the process proceeds to Step S 50.
  • In Step S 49, the image control unit 136 increases the changing rate of orientation of the target user's object in the horizontal plane and, simultaneously, decreases the moving-direction velocity v. In this example, the image control unit 136 increases the changing rate of orientation of the moving direction of the target user's object by ten times. Thereafter, the process proceeds to Step S 50.
  • In Step S 50, the image control unit 136 changes the orientation of the target user's object in the vertical plane according to the changing rate of orientation of the target user's object in the vertical plane.
  • In Step S 51, the image control unit 136 changes the orientation of the target user's object in the horizontal plane according to the changing rate of orientation of the target user's object in the horizontal plane.
  • In Step S 52, the image control unit 136 changes the position (coordinates) of the target user's object according to the moving-direction velocity v.
  • the image control unit 136 projects the virtual aquarium 31 onto the two-dimensional image-data plane, thereby updating the image on the two-dimensional image-data plane.
  • When the image control unit 136 completes the processing of Step S 52, the process returns to Step S 40.
  • the image control unit 136 repeats processing from Steps S 40 to S 52 described above at predetermined intervals, e.g., at every frame period.
  • the image control unit 136 can add vertical and horizontal fluctuation to the motion of the target user's object. Furthermore, the image control unit 136 causes the target user's object to keep moving seamlessly after the user's object contacts a wall surface of the virtual aquarium 31. Consequently, the embodiment can move the user's object with a water-creature-like motion. Furthermore, the user's object stays within the displayed region corresponding to the width W v most of the time.
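  • The per-frame control loop of Steps S 40 to S 52 can be sketched as below; the parameter names follow the description (v max, a, ωh max, ωv max, dRr max, dRl max, dRu max, dRd max), while the state representation, the numeric values, and the position-update formula are simplifying assumptions.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class UserObject:
    """Minimal per-object state for the motion loop of FIG. 15 (a sketch)."""
    pos: list                 # [height, width, depth] in the virtual aquarium
    yaw: float = 0.0          # orientation in the horizontal plane (radians)
    pitch: float = 0.0        # orientation in the vertical plane (radians)
    v: float = 0.0            # moving-direction velocity
    d_yaw: float = 0.0        # changing rate of orientation, horizontal plane
    d_pitch: float = 0.0      # changing rate of orientation, vertical plane

def update(obj: UserObject, p: dict, aquarium=(10.0, 30.0, 15.0)) -> None:
    """One frame of Steps S40-S52 (a sketch, not the patented implementation)."""
    H, W, D = aquarium
    # S40-S41: accelerate along the moving direction until v exceeds v_max (p0, p1).
    if obj.v <= p["v_max"]:
        obj.v += p["a"]
    # S42-S43: random change of the horizontal turning rate, bounded by p7/p8,
    # applied only while it has not exceeded omega_h_max (p2).
    if abs(obj.d_yaw) <= p["omega_h_max"]:
        obj.d_yaw += random.uniform(-p["dRl_max"], p["dRr_max"])
    # S44-S45: the same for the vertical turning rate, bounded by p5/p6 and p3.
    if abs(obj.d_pitch) <= p["omega_v_max"]:
        obj.d_pitch += random.uniform(-p["dRd_max"], p["dRu_max"])
    # S46-S47: at the ceiling or floor, reverse the vertical turning rate and slow down.
    if obj.pos[0] <= 0.0 or obj.pos[0] >= H:
        obj.d_pitch = -obj.d_pitch
        obj.v *= 0.5
    # S48-S49: at a front/back/left/right wall, turn faster horizontally and slow down.
    if obj.pos[1] <= 0.0 or obj.pos[1] >= W or obj.pos[2] <= 0.0 or obj.pos[2] >= D:
        obj.d_yaw *= 10.0
        obj.v *= 0.5
    # S50-S52: apply the orientation changes and advance the position.
    obj.pitch += obj.d_pitch
    obj.yaw += obj.d_yaw
    obj.pos[0] += obj.v * math.sin(obj.pitch)                       # height
    obj.pos[1] += obj.v * math.cos(obj.pitch) * math.sin(obj.yaw)   # width
    obj.pos[2] += obj.v * math.cos(obj.pitch) * math.cos(obj.yaw)   # depth

params = {"v_max": 0.4, "a": 0.02, "omega_h_max": 0.3, "omega_v_max": 0.2,
          "dRr_max": 0.05, "dRl_max": 0.05, "dRu_max": 0.03, "dRd_max": 0.03}
fish = UserObject(pos=[5.0, 15.0, 7.0])
for _ in range(60):      # roughly one second at a 60-frames-per-second frame period
    update(fish, params)
print([round(x, 2) for x in fish.pos])
```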
  • the display system 1 projects the virtual aquarium 31 onto the two-dimensional plane by causing the plurality of projector devices 11 1 , 11 2 , and 11 3 to project the images 13 1 , 13 2 , and 13 3 , into which a to-be-projected image is divided, onto the screen 12 .
  • the applicable projection scheme is not limited thereto.
  • a single projector apparatus 11 may project the single, undivided to-be-projected image onto the screen 12.
  • a scene in which the user's object moves within the three-dimensional image-data space is projected onto (displayed on) the two-dimensional image-data plane.
  • application of the embodiment is not necessarily limited to a three-dimensional image-data space region.
  • the embodiment may be applied to an image, in which user's object moves on a two-dimensional image-data plane.
  • a three-dimensional motion can give a more realistic appearance than a two-dimensional motion does. Accordingly, utilizing a three-dimensional image-data space region is more suitable.
  • the user's object is based on an image representing a creature, such as fish, living in water.
  • the user's object is not limited thereto.
  • the user's object may be based on an image of an insect, a plant, or the like.
  • the picture is made by a user by hand-drawing on a sheet with a marker pen or the like.
  • a user may make a picture by another method.
  • a picture hand-drawn by a user may be made using a stylus pen and a tablet terminal, a finger (i.e., by finger touch operation) and a tablet terminal, or the like, rather than using a marker pen and a sheet.
  • a hand-drawn picture may be made by any method that allows a user to make a picture in a manner or by a method similar to that by which the user makes a picture using paper and a pen, and the type of the medium to be employed is not particularly limited.
  • the projector apparatus 11 projects the two-dimensional projection image of the virtual aquarium 31 onto the screen 12 , which is plain and flat.
  • a screen onto which the projection image is to be projected is not limited to the screen 12 .
  • the projector apparatus 11 may project a two-dimensional projection image of the virtual aquarium 31 onto a semi-transparent screen 301 placed over the whole area of an aquarium 300 where real fish and/or the like are swimming.
  • An image of user's object projected onto the screen 301 is combined with the real fish in the aquarium 300 that can be seen through the semi-transparent screen 301. Consequently, a user can enjoy a sight with a more realistic appearance.
  • a projection surface of the screen 12 is not necessarily flat, and may have a three-dimensional profile.
  • a screen 310 having a semi-spherical shape as illustrated in FIG. 17 may be employed.
  • By causing the projector apparatus 11 to project a two-dimensional projection image of the virtual aquarium 31 onto the semi-spherical portion of the screen 310, a more three-dimensional appearance can be given to the image of the inside of the virtual aquarium 31.
  • a user can enjoy an experience as if the user looks into water through a window provided in water.
  • the user can view a scene in which fish or the like represented by the picture 22 drawn by the user himself/herself appears to be swimming in water.
  • the shape of the screen 12 is not necessarily fixed.
  • a screen 320 which is a hanging cloth as illustrated in FIG. 18 , may be employed.
  • the screen 320 is preferably not fixed at its bottom end so as to be easily swayed by wind or vibrations.
  • the swaying screen 320 can give the image a more realistic appearance of an underwater scene.
  • a silhouette and user's object are displayed together. This can further increase entertainment value.
  • a background image is added to the virtual aquarium 31 .
  • an object (hereinafter, "background object") having distance information (coordinate information) independent of user's object may be arranged in the virtual aquarium 31.
  • a background object may be arranged parallel to the plane containing the height and the width at a position of the depth D in the virtual aquarium 31 .
  • the background object may be fixed or, alternatively, changed according to predetermined schedule.
  • the background object may be changed on a per-time-period basis (e.g., morning hours, day hours, and night hours) per day. Schedule according to which the background object is to be changed is not limited thereto.
  • the background object may be changed at longer time intervals; e.g., on a per-season basis (spring, summer, fall, and winter) or for annual events. When scheduled as such, it is preferable that the background object is changed to an object suitable for the time period, season, annual event, or the like each time. Changing the background object in this manner allows a user to enjoy various scenes which vary according to time period, season, annual event, or the like.
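  • a per-time-period schedule of this kind can be expressed very simply. The sketch below is illustrative only; the background names and the hour boundaries for the morning, day, and night periods are made-up placeholders, not values from the embodiment.

```python
from datetime import datetime

# Hypothetical background objects keyed by time period; real content would be image data.
BACKGROUNDS = {"morning": "bg_morning", "day": "bg_day", "night": "bg_night"}

def select_background(now=None):
    """Pick the background object for the current time period (example boundaries)."""
    hour = (now or datetime.now()).hour
    if 5 <= hour < 10:
        return BACKGROUNDS["morning"]
    if 10 <= hour < 18:
        return BACKGROUNDS["day"]
    return BACKGROUNDS["night"]
```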
  • the fifth modification of the embodiment is configured to allow a user to take an action on user's object in the virtual aquarium 31 .
  • the user's object is a creature living in water
  • possible actions taken by a user include giving food to the creature. More specifically, the user draws the picture 22 representing food on the document 21 .
  • the user adds information indicating that the picture 22 represents food to the document 21 . This information may be added by, for example, embedding information indicating that the picture 22 represents food in at least one of the marks 220 1 to 220 3 .
  • the PC 10 generates an object (hereinafter, “food object”) from the picture 22 of food based on a document image obtained by the scanner apparatus 20 by scanning an image of the document 21 , on which the picture 22 of the food is drawn, and adds the food object into the virtual aquarium 31 .
  • When adding the food object, the PC 10 assigns coordinates on the top wall (ceiling) or, more specifically, on the plane containing the width W and the depth D at the height H, to the food object as initial coordinates.
  • the initial coordinates are not limited thereto, and coordinates of a position higher than the ceiling of the virtual aquarium 31 in the image-data space 30 may alternatively be assigned to the food object.
  • the PC 10 defines, as the motion of the food object, motion of moving (falling) from the initial coordinates of the food object in the image-data space 30 . Motion, through which the moving food object is swayed due to resistance of water, may be further added to the food object.
  • Such motion control of the food may be performed by: making determination as to whether an image representing a hand-drawn picture, which is obtained by scanning a document image from a sheet, is a food object or user's object; and setting the parameters p 0 to p 8 differently depending on whether the image represents a food object or user's object. More specifically, the parameters p 0 to p 8 may be set as follows. If the image represents a food object, predetermined falling-motion values are set as the parameters p 0 to p 8 . On the other hand, if the image represents user's object, the parameters p 0 to p 8 are set by utilizing the mode determination and the feature values described above.
  • the PC 10 adds motion to each of user's objects according to the food object in the virtual aquarium 31 .
  • the PC 10 may add motion of approaching the food object to user's object according to a distance between the user's object and the food object. More specifically, the user's object is moved such that the closer the user's object, the faster the user's object approaches the food object.
  • the PC 10 may delete the food object from the virtual aquarium 31 when one of the user's objects contacts the food object. By deleting the food object in this manner, a scene in which, for example, fish represented by the user's object has eaten the food can be created.
  • Such motion control may be implemented as follows, for example.
  • the PC 10 manages all the objects by associating each object with identification information indicating whether the object is a food object or user's object.
  • the PC 10 calculates the distance between the coordinates of every food object and every user's object.
  • the PC 10 determines whether the calculated distance is equal to or smaller than a threshold value for each of the distances. If the PC 10 determines that a calculated distance is equal to or smaller than the threshold value, the PC 10 increases the parameter(s) (e.g., the parameters p 0 and p 1 ) related to the moving-direction velocity of the corresponding user's object.
  • the PC 10 may further control the user's object, for which the parameter(s) related to the velocity has been increased according to a result of the determination, as follows.
  • the PC 10 sets a flag indicating that the parameter(s) has been increased for the user's object and monitors the flagged user's object.
  • when the distance between the flagged user's object and the food object becomes equal to or larger than the threshold value, or becomes zero, the PC 10 decreases the parameter(s) (e.g., the parameters p 0 and p 1) related to the moving-direction velocity. Meanwhile, in a case where the food object is deleted, the distance between the user's object and the food object is set to zero.
  • the control method described above causes the moving-direction velocity of the user's object to decrease when the distance between the flagged user's object and the food object has become equal to or larger than the threshold value, for example. As a result, a scene in which the creature represented by the user's object moving away from the food object appears to become less active can be created. Furthermore, the control method described above causes a moving-direction velocity of the creature represented by the user's object to decrease when the distance between the flagged user's object and the food object has become zero, for example. As a result, a scene in which the creature appears to give up eating the food object because the food has disappeared can be created.
  • the PC 10 can cause motion of user's object to vary between before and after the user's object contacts a food object. For example, the velocity of the user's object may be increased after the user's object contacts the food object. The moving direction of the user's object may be changed after the user's object contacts the food object. For another example, the user's object may be increased in size after the user's object contacts the food object.
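  • the distance-based control described above can be summarized in a short sketch. The fragment below is a hedged illustration, not the actual processing: the object fields, the contact radius, and the threshold value are hypothetical placeholders, and the velocity adjustment merely stands in for changes to the parameters p 0 and p 1.

```python
import math

THRESHOLD = 50.0       # assumed distance threshold in image-data-space units
CONTACT_RADIUS = 1.0   # assumed distance below which the food counts as eaten

def update_food_interaction(user_objects, food_objects):
    """Speed up user's objects near food, slow flagged ones that moved away, delete eaten food."""
    for food in list(food_objects):
        for user in user_objects:
            d = math.dist(user["pos"], food["pos"])  # Euclidean distance between 3D coordinates
            if d <= CONTACT_RADIUS:
                food_objects.remove(food)            # contact: the food has been "eaten"
                break
            if d <= THRESHOLD:
                user["v_max"] *= 1.5                 # close to food: raise the velocity bound, flag
                user["flagged"] = True
            elif user.get("flagged"):
                user["v_max"] /= 1.5                 # flagged object that moved away: slow down again
                user["flagged"] = False
```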
  • the fifth modification of the embodiment is configured to change the motion of each of user's objects in the virtual aquarium 31 according to an action taken by a user. As a result, the entertainment value can be further increased.
  • a decorative object which is based on the picture 22 drawn by a user on the document 21 , is arranged in the virtual aquarium 31 . More specifically, the user draws the picture 22 representing a decorative object, such as an ornament rock or seaweed, on the document 21 . The user adds information indicating that the picture 22 represents the decorative object to the document 21 .
  • the PC 10 generates an object (hereinafter, “decorative object”) from the picture 22 representing the decorative object based on a document image obtained by the scanner apparatus 20 by scanning an image of the document 21 , on which the picture 22 of the decorative object is drawn, and adds the decorative object into the virtual aquarium 31 .
  • When adding the decorative object, the PC 10 assigns coordinates on the bottom wall (floor) or, more specifically, on the plane containing the width W and the depth D at the height zero, to the decorative object as initial coordinates.
  • the PC 10 does not define motion according to which the decorative object is displaced. In other words, the decorative object is positionally fixed in the virtual aquarium 31 . Meanwhile, a vertically extending and contracting motion may be added to the decorative object as in the case of the second deformation mode described above.
  • image processing technique can be improved to make it possible to display user's image, which is made by a general user by hand-drawing, such that the user's image is moved in a more lifelike manner.
  • the present invention can be implemented in any convenient form, for example using dedicated hardware, or a mixture of dedicated hardware and software.
  • the present invention may be implemented as computer software implemented by one or more network processing apparatus.
  • the network can comprise any conventional terrestrial or wireless communications network, such as the Internet.
  • the processing apparatus can comprise any suitably programmed apparatuses such as a general purpose computer, personal digital assistant, mobile telephone (such as a WAP or 3G-compliant phone) and so on. Since the present invention can be implemented as software, each and every aspect of the present invention thus encompasses computer software implementable on a programmable device.
  • the computer software can be provided to the programmable device using any storage medium for storing processor readable code such as a floppy disk, hard disk, CD ROM, magnetic tape device or solid state memory device.
  • the hardware platform includes any desired kind of hardware resources including, for example, a central processing unit (CPU), a random access memory (RAM), and a hard disk drive (HDD).
  • the CPU may be implemented by any desired kind and any desired number of processors.
  • the RAM may be implemented by any desired kind of volatile or non-volatile memory.
  • the HDD may be implemented by any desired kind of non-volatile memory capable of storing a large amount of data.
  • the hardware resources may additionally include an input device, an output device, or a network device, depending on the type of the apparatus. Alternatively, the HDD may be provided outside of the apparatus as long as the HDD is accessible.
  • the CPU, such as a cache memory of the CPU, and the RAM may function as a physical memory or a primary memory of the apparatus, while the HDD may function as a secondary memory of the apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Toys (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A display control method includes: inputting user's image including a drawing portion made by hand drawing and being a display target image; and performing image control including causing the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moving the user's image that has emerged.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. application Ser. No. 15/297,772, filed Oct. 19, 2016, which is a continuation of U.S. application Ser. No. 14/269,375, filed on May 5, 2014, which claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2013-099750, filed on May 9, 2013, in the Japan Patent Office, the entire contents of each of which are incorporated by reference herein.
BACKGROUND OF THE INVENTION 1. Field of the Invention
The present invention relates to a display control method, a display control device, and a display system.
2. Description of the Related Art
Increases in computer performance have made it possible in recent years to display computer graphics images using three-dimensional coordinates (hereinafter, "3DCG") with ease. In the field of 3DCG, it is also widely practiced to display a motion picture by setting regular or random motion for each of the objects arranged in a three-dimensional coordinate space. Such a motion picture can realize display as if the objects are independently moving in the three-dimensional coordinate space.
Japanese Patent No. 4161325 discloses a technique of displaying an image made by an animator by hand-drawing while adding motion thereto and combining therewith a background image prepared in advance, in an image using 3DCG.
There is a need for a technique that makes it possible to display user's image, which is made by a general user by hand-drawing, such that the user's image is moved in a more lifelike manner in 3DCG images with ease. According to the technique disclosed in Japanese Patent No. 4161325, what motion is to be added to an image is determined by an animator, and some experience is required to create lifelike motion. Accordingly, such a conventional technique is not sufficient to make it possible to display user's image, which is made by a general user by hand-drawing, such that the user's image is moved in a more lifelike manner.
Under the circumstances, there is a need to improve image processing technique to make it possible to display user's image, which is made by a general user by hand-drawing, such that the user's image is moved in a more lifelike manner.
SUMMARY OF THE INVENTION
It is an object of the present invention to at least partially solve the problems in the conventional technology.
A display control method includes: inputting user's image including a drawing portion made by hand drawing and being a display target image; and performing image control including causing the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moving the user's image that has emerged.
A display control device includes: an input unit that inputs user's image including a drawing portion made by hand drawing and being a display target image; and an image control unit that causes the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moves the user's image that has emerged.
A display system includes: an image input apparatus; an image processing apparatus; and at least one display apparatus. The image input apparatus includes an image input unit, through which an image including a drawing portion made by hand drawing and being a display target image is input. The display apparatus includes a display unit that displays an image. The image processing apparatus includes an input unit that inputs user's image including the drawing portion made by hand drawing and being a display target image, originating from the image input through the image input apparatus, and an image control unit that causes the input user's image to emerge from any one of a left end and a right end of a predetermined display region, on which the user's image is to be displayed, and moves the user's image that has emerged.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating an example configuration of a display system according to a first embodiment;
FIG. 2 is a diagram schematically illustrating image-data space with a three-dimensional coordinate system according to the embodiment;
FIG. 3 is a block diagram illustrating an example configuration of a PC applicable to the embodiment;
FIG. 4 is an example functional block diagram for describing functions of the PC according to the embodiment;
FIG. 5 is an example flowchart of a general procedure of a display control process according to the embodiment;
FIG. 6 is a diagram illustrating an example of a sheet on which a drawing is to be hand-drawn according to the embodiment;
FIG. 7 is a diagram for describing an aspect ratio R of user's image data according to the embodiment;
FIG. 8 is a flowchart of a procedure of a deformation-mode determining process to be performed by a mode determining unit according to the embodiment;
FIG. 9 is a diagram for describing motion of a first mode according to the embodiment;
FIG. 10 is a diagram for describing motion of a second mode according to the embodiment;
FIG. 11 is a diagram for describing motion of a third mode according to the embodiment;
FIG. 12 is a flowchart of an example procedure of a parameter-generating-and-determining process to be performed by a parameter generating unit according to the embodiment;
FIG. 13 is a diagram illustrating an example of a method for determining parameters p using feature values c and maximum values m according to the embodiment;
FIGS. 14A and 14B are diagrams for describing projection of image-data space according to the embodiment;
FIG. 15 is a flowchart of an example procedure of controlling motion of target user's object by an image control unit according to the embodiment;
FIG. 16 is a diagram for describing a first modification of the embodiment;
FIG. 17 is a diagram for describing a second modification of the embodiment; and
FIG. 18 is a diagram for describing a third modification of the embodiment.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Embodiments are described in detail below with reference to the accompanying drawings.
First Embodiment
FIG. 1 is a block diagram illustrating an example configuration of a display system according to a first embodiment. Referring to FIG. 1, a display system 1 includes a personal computer (PC) 10, one or more projector apparatuses (PJs) 11 1, 11 2, and 11 3, and a scanner apparatus 20. The PC 10 generates display image data by performing predetermined image processing on image data obtained by the scanner apparatus 20 by scanning an original document (hereinafter, “document”) 21 and supplies the display image data to the PJs 11 1, 11 2, and 11 3. The PJs 11 1, 11 2, and 11 3 project respective images 13 1, 13 2, and 13 3 onto a screen 12 according to the display image data fed from the PC 10.
Meanwhile, in a situation where the plurality of PJs 11 1, 11 2, and 11 3 project the images 13 1, 13 2, and 13 3 onto the screen 12 as illustrated in FIG. 1, it is preferable that adjacent portions of the images 13 1, 13 2, and 13 3 overlap each other. In the example illustrated in FIG. 1, a camera 14 captures the images 13 1, 13 2, and 13 3 projected on the screen 12. The PC 10 adjusts images of the overlapping portions by controlling the images 13 1, 13 2, and 13 3 or the PJs 11 1, 11 2, and 11 3 based on captured image data.
In the display system 1 configured as described above, an image of the document 21, on which a picture 22 is hand-drawn by a user 23, for example, is read by the scanner apparatus 20. The scanner apparatus 20 obtains document image data by scanning the image of the document 21 and supplies the document image data to the PC 10. The PC 10 extracts data representing the picture 22 from the document image data fed from the scanner apparatus 20, and holds the extracted image data as user's image data.
Meanwhile, the PC 10 generates image-data space with a three-dimensional coordinate system. The PC 10 assigns coordinates in the image-data space to user's image data, thereby incorporating the user's image data as data in the image-data space. Hereinafter, the user's image data in the three-dimensional image-data space is referred to as “user's object”. The PC 10 generates image data by projecting the three-dimensional image-data space including the user's object onto a two-dimensional image-data plane. The PC 10 then divides the generated image data into a number of pieces as many as the number (in this embodiment, three) of the PJs 11 1, 11 2, and 11 3, and supplies the divided image data to the PJs 11 1, 11 2, and 11 3.
Note that the PC 10 is capable of adding motion in the image-data space to user's object in the following manner, for example. The PC 10 obtains feature values of user's image data, from which the user's object derives, and generates motion-related parameters including a deformation mode of the user's object. The PC 10 adds motion to the user's object in the image-data space by applying the parameters to the user's image data.
Adding motion in this manner allows the user 23 to view an image derived from the picture 22 hand-drawn by the user 23 himself/herself as moving in the three-dimensional image-data space. Furthermore, the PC 10 can include a plurality of user's objects in the same image-data space. Accordingly, when the operation described above is repeatedly performed, the user 23 can view each of images derived from a plurality of the pictures 22, which may be different pictures, for example, as moving in the three-dimensional image-data space.
FIG. 2 schematically illustrates image-data space, which is generated by the PC 10, with a three-dimensional coordinate system according to the embodiment. In the embodiment, the Cartesian coordinate system having three mutually-orthogonal coordinate axes (the x-axis, the y-axis, and the z-axis) is used as the three-dimensional coordinate system. Hereinafter, as illustrated in FIG. 2, it is assumed that the PC 10 generates image-data space 30 with a three-dimensional coordinate system in which the x-axis, the y-axis, and the z-axis are the height axis, the width axis, and the depth axis, respectively. In the example illustrated in FIG. 2, the image-data space 30 includes user's objects 40 1, 40 2, 40 3, 40 4, and 40 5 derived from user's image data.
The PC 10 defines a defined region 31, which is a space defined by a height H, a width W, and a depth D having respective values determined in advance, in the image-data space 30. Motion of the user's objects 40 1, 40 2 . . . derived from the user's image data is confined within the defined region 31. In the embodiment, it is assumed that the pictures 22, from which the user's objects 40 1, 40 2 . . . derive, represent creatures living in water such as fish, squids, octopuses, and jellyfish. The defined region 31 can be considered as a virtual aquarium. Hereinafter, unless otherwise specified, the defined region 31 is referred to as the virtual aquarium 31.
FIG. 3 illustrates an example configuration of the PC 10 that is applicable to the embodiment. In the PC 10 illustrated in FIG. 3, a central processing unit (CPU) 110, a read only memory (ROM) 111, a random access memory (RAM) 112, and a display control unit 113 are connected to a bus 100. A storage 114, a data I/F 115, and a communication I/F 116 are also connected to the bus 100 in the PC 10.
The CPU 110 controls the entire PC 10 according to program stored in advance in the ROM 111 and the storage 114 by using the RAM 112 as a working memory. The display control unit 113, to which a monitor 120 is connected, converts a display control signal generated by the CPU 110 to a signal usable by the monitor 120 for displaying, and outputs the converted signal to the monitor 120. The display control unit 113 is also capable of converting the display control signal into signals usable by the projector apparatuses 11 1, 11 2, and 11 3 for displaying, and outputting the signals to the PJs 11 1, 11 2, and 11 3.
The storage 114 is a storage medium capable of nonvolatile storage of data and can be, for instance, a hard disk drive. Alternatively, a nonvolatile semiconductor memory, such as a flash memory, may be used as the storage 114. The storage 114 stores the program to be executed by the CPU 110 and various types of data.
The data I/F 115 controls data inputs and outputs from and to external equipment. For instance, the data I/F 115 is used as an interface between the PC 10 and the scanner apparatus 20. The data I/F 115 also receives signal input from a pointing device, such as a mouse, and a keyboard (KBD) (not shown). The display control signal generated by the CPU 110 may be output via the data I/F 115 to the projector apparatuses 11 1, 11 2, and 11 3, for example. Universal serial bus (USB) interface, Bluetooth (registered trademark) interface, or the like interface can be used as the data I/F 115.
The communication I/F 116 controls communication over a network such as the Internet or a local area network (LAN).
FIG. 4 is an example functional block diagram for describing functions of the PC 10 according to the embodiment. The PC 10 includes an input unit 130, an image obtaining unit 131, a 3D-space generating unit 132, a region defining unit 133, a mode determining unit 134, a parameter generating unit 135, an image control unit 136, and a storage unit 137.
The input unit 130 inputs an image including a drawing portion hand-drawn by the user 23. For instance, the input unit 130 may have a function of the data I/F 115 and input document image data obtained by the scanner apparatus 20 by scanning an image of the document 21 including the picture 22, which is the drawing portion hand-drawn by the user 23. The input unit 130 extracts, as user's image data, image data representing the picture 22 from the document image data.
The 3D-space generating unit 132 generates the image-data space 30 having the three-dimensional coordinate axes, which are the three axes: the height axis, the width axis, and the depth axis, described above with reference to FIG. 2. The 3D-space generating unit 132 generates the image-data space 30 as an address space in the RAM 112, for example. The region defining unit 133 defines the defined region 31 (the virtual aquarium 31) with the height H, the width W, and the depth D, which are the predetermined values, in the image-data space 30 generated by the 3D-space generating unit 132.
The image obtaining unit 131 sets a predetermined eye point with respect to the image-data space 30, projects the image-data space 30 onto a two-dimensional image-data plane from the set eye point, and obtains image data to be projected by the projector apparatuses 11 1, 11 2, and 11 3.
The mode determining unit 134 determines a mode, which is to be assigned to the user's object derived from the user's image data when the user's image data is included in the image-data space 30, and based on which user's object is to be deformed (hereinafter, “deformation mode”), based on the user's image data obtained by the input unit 130. The parameter generating unit 135 generates motion-related parameters that determine motion-related properties of the user's object, based on the user's image data obtained by the input unit 130.
The image control unit 136 controls motion of the user's object in the image-data space 30 according to the deformation mode determined by the mode determining unit 134 and the parameters generated by the parameter generating unit 135. In other words, the image control unit 136 performs control of assigning coordinates in the virtual aquarium 31 in the image-data space 30 to the user's object and continuously changing the coordinates with lapse of time.
The storage unit 137 corresponds to the RAM 112 and stores user's image data, from which user's object derives, and the like. However, the storage unit 137 is not limited to the RAM 112, and the storage 114 may be used as the storage unit 137. For example, determination of the deformation mode by the mode determining unit 134 and generation of the parameters by the parameter generating unit 135 are performed by using the user's image data stored in the storage unit 137. The image control unit 136 is capable of incorporating the user's image data into the image-data space 30 as user's object by assigning coordinates in the virtual aquarium 31 to the user's image data stored in the storage unit 137. Furthermore, the image control unit 136 deforms and moves the user's object according to the deformation mode and the parameters.
The input unit 130, the image obtaining unit 131, the 3D-space generating unit 132, the region defining unit 133, the mode determining unit 134, the parameter generating unit 135, and the image control unit 136 included in the PC 10 may be implemented by a display control program that is stored in the storage 114 in advance and executed on the CPU 110, for instance. The display control program may be provided by being recorded as a file in an installable format or in an executable format in a computer-readable recording medium such as a compact disc (CD), a flexible disk (FD), or a digital versatile disk (DVD).
The display control program to be executed by the PC 10 of the embodiment may be configured to be provided by being stored in a computer connected to a network such as the Internet and downloaded over the network. The display control program to be executed by the PC 10 of the embodiment may be configured to be provided or distributed over a network such as the Internet. The display control program of the embodiment may be configured to be provided by being installed in the ROM 111 or the like in advance.
The display control program to be executed by the PC 10 of the embodiment has a module structure including the units (the input unit 130, the image obtaining unit 131, the 3D-space generating unit 132, the region defining unit 133, the mode determining unit 134, the parameter generating unit 135, and the image control unit 136) described above. From viewpoint of actual hardware, the CPU 110 reads out the display control program from a storage medium, e.g., the storage 114 or the ROM 111, and executes the program to load the units on a main storage device, e.g., the RAM 112, thereby generating the input unit 130, the image obtaining unit 131, the 3D-space generating unit 132, the region defining unit 133, the mode determining unit 134, the parameter generating unit 135, and the image control unit 136 on the main storage device.
A display control process according to the embodiment is described in more detail below. FIG. 5 is an example flowchart of a general procedure of the display control process according to the embodiment. Prior to execution of the process of this flowchart, a hand-drawn picture is made by a user. It is assumed in this example that a user hand-draws a picture on a sheet of a predetermined format.
FIG. 6 illustrates an example of a sheet on which a picture is to be hand-drawn according to the embodiment. A drawing region 210 and a title text region 211 are arranged on a sheet 200 illustrated on the left of FIG. 6. A picture 212, from which an object (user's object) to be incorporated in the image-data space 30 is to be derived, is to be hand-drawn in the drawing region 210. A title of the picture drawn in the drawing region 210 is to be written in the title text region 211.
Marks 220 1, 220 2, and 220 3 are arranged on three corners of the four corners of the sheet 200. Orientation and size of the sheet 200 can be obtained by detecting the marks 220 1, 220 2, and 220 3 from a document image, which is obtained by the scanner apparatus 20 by scanning an image of the sheet 200.
In Step S10 of the flowchart illustrated in FIG. 5, the scanner apparatus 20 scans an image of the sheet 200, on which the picture 212 is hand-drawn by a user, and supplies the document image data originating from the scanned image to the PC 10, in which the document image data is input to the input unit 130.
In next Step S11, the input unit 130 of the PC 10 extracts user's image data from the input document image data. More specifically, the input unit 130 detects the marks 220 1, 220 2, and 220 3 from the document image data. The input unit 130 determines orientation and size of the document image data based on locations of the marks 220 1, 220 2, and 220 3 on the document image data.
For example, the input unit 130 may find a markerless corner, which is one of the four corners of the document image data and at which none of the marks 220 1, 220 2, and 220 3 is detected, and determines the orientation of the document image data based on the location of the markerless corner. In the example illustrated in FIG. 6, the input unit 130 determines that the markerless corner, at which none of the marks 220 1, 220 2, and 220 3 is detected, is the bottom right corner of the document image data. Based on this, the locations of the marks 220 1, 220 2, and 220 3 on the document image data may be determined. Furthermore, the input unit 130 measures distances between the respective marks 220 1, 220 2, and 220 3, and compares the distances against known corresponding distances stored in the storage 114 or the like in advance. Warping in vertical and horizontal size of the document image data can be corrected based on a result of this comparison.
The input unit 130 extracts the drawing region 210 and the title text region 211 from the document image data based on the orientation and the size of the document image data obtained as described above. An example of an image represented by the image data of the drawing region 210 and that of the title text region 211 extracted from the document image data are illustrated on the right of FIG. 6. The input unit 130 further extracts the picture 212 from the image data representing the drawing region 210. For example, the input unit 130 may extract the picture 212 by determining whether each pixel constituting the image data representing the drawing region 210 is white (which is the background color of the sheet 200) or other-than-white. The input unit 130 determines image data in a minimum rectangular region 213 including the picture 212 and having a lower side parallel to the lower side of the drawing region 210 as user's image data. The input unit 130 stores the user's image data in the storage unit 137.
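The extraction of the rectangular region 213 in Step S11 can be illustrated with a short sketch. The following Python fragment is an assumption-laden illustration rather than the actual implementation: it presumes the drawing region 210 is already available as an RGB array and treats any pixel close to white as background.

```python
import numpy as np

def extract_user_image(drawing_region_rgb, white_threshold=240):
    """Return the minimum rectangle (and its crop) enclosing the non-white pixels.

    drawing_region_rgb: H x W x 3 uint8 array of the drawing region 210.
    """
    # A pixel counts as background if all of its channels are close to white.
    non_white = np.any(drawing_region_rgb < white_threshold, axis=2)
    ys, xs = np.nonzero(non_white)
    if len(xs) == 0:
        return None  # nothing drawn
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    crop = drawing_region_rgb[top:bottom + 1, left:right + 1]
    return (top, bottom, left, right), crop
```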
In next Step S12, the mode determining unit 134 of the PC 10 determines a deformation mode to be assigned to the user's image data extracted in Step S11. In the embodiment, a deformation mode is assigned to each of user's objects in the virtual aquarium 31. Motion of the each user's object in the virtual aquarium 31 is controlled based on the deformation mode.
In the embodiment, three deformation modes are defined. The three modes are: a first mode in which user's object is deformed such that a trailing one (with respect to a moving direction in which the user's object moves) of two parts, into which the user's object is vertically divided, is moved; a second mode in which the user's object is deformed such that the user's object is vertically extended and contracted; and a third mode in which the user's object is deformed such that a lower one of two parts, into which the user's object is horizontally divided, is moved. In the embodiment, an aspect ratio R of the user's image data is obtained. Which one of the first, second, and third modes is to be assigned to the user's object originating from the user's image data is determined according to the aspect ratio R.
The aspect ratio R of user's image data according to the embodiment is described below with reference to FIG. 7. The rectangular region 213 illustrated in FIG. 7 is a minimum region including a portion of the picture 212 (which is, for example, a portion where pixel values are other-than-white) in the drawing region 210 and having the lower side parallel to the lower side of the drawing region 210 as described above. The aspect ratio R is expressed by Equation (1) below using a height h and a width w of the rectangular region 213. Equation (1) indicates that the larger the aspect ratio R is, the more vertically elongated the rectangular region 213 is.
R=h/w  (1)
In this example, the direction extending through the marks 220 1 and 220 3 is the vertical direction of the rectangular region 213; the direction extending through the marks 220 1 and 220 2 is the horizontal direction of the rectangular region 213.
FIG. 8 is a flowchart of a procedure of a deformation-mode determining process to be performed by the mode determining unit 134 in Step S12. The mode determining unit 134 obtains the aspect ratio R of the user's image data in Step S20 as described above. The mode determining unit 134 makes determination about the value of the aspect ratio R in Step S21. More specifically, in the embodiment, the mode determining unit 134 determines to which one of a range smaller than 1, a range from equal to or larger than 1 to smaller than a predetermined value s, and a range equal to or larger than the predetermined value s the aspect ratio R belongs. In the embodiment, the deformation mode is assigned to the user's object originating from the user's image data according to the aspect ratio R.
If the mode determining unit 134 determines that the aspect ratio R is smaller than 1 in Step S21, the process proceeds to Step S22, in which the mode determining unit 134 assigns the first mode to the object originating from the user's image data.
Motion of the first mode according to the embodiment is described below with reference to FIG. 9. In FIG. 9, and FIGS. 10 and 11, which will be referenced later, the rectangular region 213 is described to be larger relative to the picture 212 for clarity.
As illustrated in FIG. 9, the first mode is a mode in which an image of the rectangular region 213 represented by the user's image data is deformed in the following manner. The image is vertically divided at a dividing line 221 which may be, for example, at a center of the image. One (hereinafter, referred to as “moving part”) of two parts, into which the image is divided, is continuously moved or, more specifically, moved continuously with lapse of time. The first mode is meant for creating a typical motion of fish or the like.
In the first mode, the moving part is defined so as to swing in a front and back direction on the dividing line 221 within a predetermined angular range. That is, when the rectangular region 213 is in the image-data space 30, the moving part is moved such that the moving part pivots about the dividing line 221 within the predetermined angular range in a plane, which is orthogonal to a plane where the rectangular region 213 initially resides and to the dividing line 221.
In the first mode, the moving part is preferably a trailing part with respect to the moving direction of the user's object, so that appearance of motion similar to a swimming motion of fish can be created. To set a trailing part with respect to the moving direction as the moving part, a scheme of specifying the orientation of the picture 212 hand-drawn on the drawing region 210 of the sheet 200 in advance can be employed, for example. However, an applicable scheme is not limited thereto. Alternatively, for example, a mark indicating the moving direction may be embedded in the picture 212. In the example illustrated in FIG. 6, the eye is drawn in black, and black is not used in other portions than the eye. Accordingly, the orientation of the picture 212 can be determined by detecting a black portion from the picture 212.
If the mode determining unit 134 determines that the aspect ratio R is equal to or larger than 1 and smaller than the predetermined value s in Step S21, the process proceeds to Step S23, in which the mode determining unit 134 assigns the second mode to the object originating from the user's image data. The second mode according to the embodiment is described below with reference to FIG. 10. As illustrated in FIG. 10, in the second mode, the image of the rectangular region 213 represented by the user's image data is deformed such that the image is continuously vertically extended and contracted. The image may be extended and contracted at predetermined intervals, for example. The second mode is meant for creating motion of a creature, such as jellyfish, orientation of which is obscure and which is less active.
A value which will cause an extended image and a contracted image to appear natural is to be employed as the predetermined value s, which is a criterion for determining the second mode and the third mode described later. A value which will not cause a vertically extended image to appear to be excessively elongated and will not cause a vertically contracted image to appear to be excessively flattened may preferably be experimentally obtained as the predetermined value s, for example. As such a value, a value equal to or larger than 1 and smaller than 2, e.g., 1.5, is conceivable.
If the mode determining unit 134 determines that the aspect ratio R is equal to or larger than the predetermined value s in Step S21, the process proceeds to Step S24, in which the mode determining unit 134 assigns the third mode to the object originating from the user's image data.
The third mode according to the embodiment is described below with reference to FIG. 11. As illustrated in FIG. 11, the third mode is a mode in which the image of the rectangular region 213 represented by the user's image data is deformed in the following manner. The image is horizontally divided at a dividing line 224 which may be, for example, at a center of the image. One of two parts, into which the image is divided, is moved as a moving part. The third mode is meant for creating motion of a creature, such as a squid or an octopus, which is usually represented in a vertically elongated form. In the third mode, the height of the moving part is kept constant, an upper side of the moving part is made to be the dividing line 224, and a lower side of the moving part is continuously moved in a left and right direction. More specifically, in the third mode, the moving part moves on the plane of the rectangular region 213 while keeping its parallelogram shape, which is constant in height and the upper side of which is the dividing line 224.
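The branch on the aspect ratio R in Steps S21 to S24 can be written down compactly, as in the following sketch. It is illustrative only; the value 1.5 for the predetermined value s is just the example mentioned above.

```python
FIRST_MODE, SECOND_MODE, THIRD_MODE = 1, 2, 3
S = 1.5  # example value of the predetermined value s (see above)

def determine_deformation_mode(height, width, s=S):
    """Assign a deformation mode from the aspect ratio R = h / w of the rectangular region."""
    r = height / width
    if r < 1.0:
        return FIRST_MODE   # wide picture: swing the trailing part (fish-like motion)
    if r < s:
        return SECOND_MODE  # roughly square: extend and contract vertically (jellyfish-like)
    return THIRD_MODE       # tall picture: move the lower part left and right (squid-like)
```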
When one of the deformation modes is assigned in any one of Steps S22 to S24, the process proceeds to Step S13 of FIG. 5.
Referring back to FIG. 5, the parameter generating unit 135 of the PC 10 generates and determines parameters p that determine properties (hereinafter, “motion properties”) related to motion of user's object originating from the user's image data in the image-data space 30 for the user's object in Step S13. The motion of the user's object in the image-data space 30 is controlled according to the parameters p generated and determined here.
The parameters p to be determined in the embodiment are described below. In the embodiment, the parameter generating unit 135 determines parameters p0 to p8, which are the following nine types of motion properties, on a per-user's-object basis.
  • (1) p0: maximum moving-direction velocity vmax
  • (2) p1: moving-direction acceleration a
  • (3) p2: maximum angular acceleration αhmax in horizontal plane
  • (4) p3: maximum angular acceleration αvmax in vertical plane
  • (5) p4: maximum value (hereinafter, “maximum changing rate”) vdmax of changing rate of orientation in vertical plane
  • (6) p5: maximum value of random number (hereinafter, “maximum random number”) dRumax for upward angle
  • (7) p6: maximum random number dRdmax for downward angle
  • (8) p7: maximum random number dRrmax for rightward angle
  • (9) p8: maximum random number dRlmax for leftward angle
Of these parameters, the parameter p0, the maximum moving-direction velocity vmax, and p1, the moving-direction acceleration a, are parameters for controlling the moving-direction velocity of the user's object in the image-data space 30. The parameter p2, the maximum angular acceleration αhmax in the horizontal plane, and p3, the maximum angular acceleration αvmax in the vertical plane, are parameters for rotating the user's object in the horizontal plane and the vertical plane, respectively.
The parameter p4, the maximum changing rate vdmax of orientation in the vertical plane, is a parameter for providing the maximum value of the changing rate when the moving velocity of the user's object in the vertical direction is to be changed. More specifically, in the embodiment, vertical motion of the user's object is confined within the virtual aquarium 31 by reversing a vertical component of the moving direction when the user's object contacts a top end surface (ceiling) or a bottom end surface (floor) of the virtual aquarium 31. The parameter p4 provides the maximum value of the angular reversal velocity at which the vertical component of the moving direction is to be reversed.
The parameters p5 and p6, the maximum random number dRumax for upward direction and the maximum random number dRdmax for downward direction, are parameters for setting a bound to angular acceleration of a rotating motion of the user's object in the vertical direction or, in other words, upward or downward to the moving direction. Angular acceleration given by the parameters p5 and p6 is limited by the parameter p3, the maximum angular acceleration αvmax in the vertical plane.
The parameters p7 and p8, the maximum random number dRrmax for rightward direction and the maximum random number dRlmax for leftward direction, are parameters for setting a bound to angular acceleration of a rotating motion of the user's object in the horizontal direction or, more specifically, rightward or leftward to the moving direction.
FIG. 12 is a flowchart of an example procedure of a parameter-generating-and-determining process to be performed by the parameter generating unit 135 in Step S13. The parameter generating unit 135 extracts feature values c0 to c8 of the user's image data corresponding to the respective parameters p0 to p8 in Step S30. The parameter generating unit 135 obtains maximum values m1 to m3 of each of the parameters p0 to p8 for each of the deformation modes or, more specifically, for each of the first, second, and third modes in Step S31. In next Step S32, the parameter generating unit 135 determines the parameters p0 to p8 of the user's object originating from the user's image data by using the feature values c0 to c8 extracted in Step S30 and the maximum values m1 to m3 for the respective parameters p0 to p8 obtained in Step S31.
A method for obtaining the feature values c0 to c8 from the user's image data to be performed in Step S30 of FIG. 12 is described below. Feature values of an image can be obtained based on color distribution, edge level, or the like of user's image data. As described in Japanese Laid-open Patent Application No. 2009-101122, feature values may be obtained based on bit information of each pixel constituting user's image data. In the embodiment, the feature values are obtained using the method described in Japanese Laid-open Patent Application No. 2009-101122.
The feature value obtaining method described in Japanese Laid-open Patent Application No. 2009-101122 is briefly described below. The parameter generating unit 135 repeatedly and sequentially extracts n-bit (n is a predetermined positive integer) data, i.e., predetermined-bit strings, from user's image data stored in the storage unit 137. For example, if n is 8, the parameter generating unit 135 repeatedly and sequentially extracts 8-bit data from the user's image data until the terminal end of the user's image data. Hereinafter, the n-bit strings extracted from the user's image data are referred to as extracted data.
The parameter generating unit 135 classifies the extracted data into the parameters p0 to p8 described above in accordance with a preset classification rule. The classification rule provides, for example, data ranges associated with the respective parameters p0 to p8 for classifying the extracted data into the parameters p0 to p8.
An example of generating the feature values c0 to c2 corresponding to the respective three types of parameters p0 to p2 is described below. In this example, because data is extracted in units of 8 bits, a classification rule for classifying data in a range of data “h00 to hFF” in the two-digit hexadecimal number notation (the prefix letter “h” indicates that digits following “h” are in the hexadecimal notation) into the following three ranges of data is stored in the storage unit 137. According to this rule, a range of data “h00 to h6F” is classified into the “parameter p0”; a range of data “h70 to h9F” is classified into the “parameter p1”; a range of data “hA0 to hFF” is classified into the “parameter p2”.
The parameter generating unit 135 classifies the extracted data extracted from the user's image data in accordance with the ranges of data, which is preset as the classification rule. For example, when the extracted data is “b01101100” (the prefix letter “b” indicates that digits following “b” are in the binary notation), this eight-digit binary number is converted to “h6C” in the two-digit hexadecimal notation. Accordingly, when the extracted data is “b01101100”, the extracted data is classified into the parameter p0 to which the range of data “h00 to h6F” is allocated. When the extracted data is “b00110100”, which is “h34” in the two-digit hexadecimal notation, this value “b00110100” is also classified into the parameter p0.
Furthermore, when classifying the extracted data, the parameter generating unit 135 calculates, for each of the parameters p0 to p8, an extraction count (extraction frequency) of how many times extracted data is classified into the parameter. For example, in the example described above, extracted data is classified into the parameter p0 two times (“b01101100” and “b00110100”); accordingly, the extraction count is “2”. This extraction count is used as the feature value c0 corresponding to the parameter p0.
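This counting scheme can be illustrated with a short sketch covering only the example rule for the parameters p0 to p2; the assumption that the user's image data is available as a raw byte string is introduced here and is not part of the embodiment.

```python
def extract_feature_counts(user_image_bytes):
    """Count how many 8-bit values fall into each parameter's data range (example rule)."""
    # Example classification rule from the text: h00-h6F -> p0, h70-h9F -> p1, hA0-hFF -> p2.
    counts = {"p0": 0, "p1": 0, "p2": 0}
    for value in user_image_bytes:          # iterating bytes yields integers 0..255
        if value <= 0x6F:
            counts["p0"] += 1
        elif value <= 0x9F:
            counts["p1"] += 1
        else:
            counts["p2"] += 1
    return counts  # extraction counts used as the feature values c0, c1, c2

# Example: the two bytes from the text, b01101100 (h6C) and b00110100 (h34), both land in p0.
assert extract_feature_counts(bytes([0b01101100, 0b00110100])) == {"p0": 2, "p1": 0, "p2": 0}
```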
In Step S31 of FIG. 12, the parameter generating unit 135 obtains the maximum values m, which have been set in advance for each of the parameters p0 to p8 on a per-deformation-mode basis. The maximum values m, which have been set on the per-deformation-mode basis for each of the respective parameters p0 to p8, are stored in a nonvolatile storage medium such as the storage 114 in advance. The parameter generating unit 135 obtains the maximum values m by reading them from the nonvolatile storage medium in Step S31.
In Step S32 of FIG. 12, the parameter generating unit 135 determines the parameters p0 to p8 by using the feature values c0 to c8 obtained in Step S30 and the maximum values m obtained in Step S31. FIG. 13 illustrates an example of a method for determining the parameters p0 to p8 using the feature values c0 to c8 and the maximum values m according to the embodiment. Note that only the parameters p0 to p2 of the parameters p0 to p8 are illustrated in FIG. 13 for clarity.
In FIG. 13, the suffix letter “x” added to the maximum value mxy indicates the parameters p0 to p8; the suffix letter “y” indicates the first to third mode. More specifically, maximum values m01, m02, and m03 for the first, second, and third modes, respectively, are set for the parameter p0 in advance. Similarly, maximum values m11, m12, and m13 for the first, second, and third modes, respectively, are set for the parameter p1 in advance. In this example, each of the maximum values mxy is a value larger than 0 and equal to or smaller than 1.
As illustrated in FIG. 13, in the embodiment, the parameters p0 to p8 are determined by multiplying the feature values c0 to c8 by the maximum values mxy, which vary on the per-deformation-mode basis of the first to third modes and on the per-parameter basis of the parameters p0 to p8. More specifically, the parameter p0 that provides the maximum moving-direction velocity of the first mode is determined as p0=m01×c0; that of the second mode is determined as p0=m02×c0; and that of the third mode is determined as p0=m03×c0. Similarly, the parameter p1 that provides the moving-direction acceleration of the first mode is determined as p1=m11×c1; that of the second mode is determined as p1=m12×c1; and that of the third mode is determined as p1=m13×c1. The parameter p2 can be determined in a similar manner.
The parameter generating unit 135 stores the parameters p0 to p8 determined in this manner in the storage unit 137.
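The determination in Step S32 thus reduces to one multiplication per parameter and per mode, as in the sketch below. The table of maximum values mxy shown here is a made-up placeholder, not the values actually stored in the storage 114.

```python
# Hypothetical maximum values m_xy (keys: parameters p0..p2, sub-keys: modes 1..3),
# each larger than 0 and at most 1 as described above.
MAX_VALUES = {
    "p0": {1: 0.8, 2: 0.3, 3: 0.5},
    "p1": {1: 0.6, 2: 0.2, 3: 0.4},
    "p2": {1: 0.7, 2: 0.5, 3: 0.6},
}

def determine_parameters(feature_counts, mode):
    """Determine each parameter as p = m_xy * c for the assigned deformation mode."""
    return {name: MAX_VALUES[name][mode] * c for name, c in feature_counts.items()}

# Usage: feature values from the byte-wise counting above, for an object in the first mode.
params = determine_parameters({"p0": 2, "p1": 0, "p2": 5}, mode=1)
```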
Referring back to the flowchart of FIG. 5, when the parameter generating unit 135 has determined the parameters p0 to p8 in Step S13, the process proceeds to Step S14. In Step S14, the image control unit 136 defines a partially-transparent region for the user's image data. For example, the image control unit 136 detects an inner region and an outer region of the part of the picture 212 extracted by the input unit 130 from the rectangular region 213 including the user's image data as described above with reference to FIG. 6. The image control unit 136 sets white pixels in the detected inner region to be partially-transparent. The image control unit 136 sets the detected outer region to be transparent. Image data can be set to be partially-transparent or transparent by using a known alpha-blending technique.
Defining a partially-transparent region on the part of the picture 212 represented by the user's image data in this manner makes the user's object originating from the user's image data appear more water-creature-like in the virtual aquarium 31.
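As a rough sketch of Step S14, the fragment below builds an RGBA image in which the outer region is fully transparent and white pixels inside the outline are partially transparent. Detecting the inner region itself is omitted; it is assumed here to be supplied as a boolean mask.

```python
import numpy as np

def apply_partial_transparency(rgb, inner_mask, white_threshold=240, inner_alpha=128):
    """Return an RGBA image: outer region transparent, white pixels inside partially transparent.

    rgb:        H x W x 3 uint8 user's image data.
    inner_mask: H x W boolean array, True inside the outline of the picture (assumed given).
    """
    alpha = np.zeros(rgb.shape[:2], dtype=np.uint8)        # outer region: fully transparent
    white_inside = inner_mask & np.all(rgb >= white_threshold, axis=2)
    alpha[inner_mask] = 255                                 # inner region: opaque by default
    alpha[white_inside] = inner_alpha                       # white pixels inside: partially transparent
    return np.dstack([rgb, alpha])
```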
In next Step S15, the image control unit 136 assigns, to the user's object, initial coordinates, at which the user's object originating from the user's image data is to be displayed in the virtual aquarium 31 for the first time. Hereinafter, unless otherwise specified, displaying user's object in the virtual aquarium 31 for the first time is described as “the user's object emerges”.
Projection of the image-data space 30 by the image obtaining unit 131 according to the embodiment is described below with reference to FIGS. 14A and 14B. To display the image-data space 30 having the three-dimensional coordinate axes on a display or a screen, it is necessary to project the image-data space 30 from a predetermined eye point to a two-dimensional plane. In the embodiment, the virtual aquarium 31 is projected because motion of the user's objects is confined within the virtual aquarium 31 in the image-data space 30.
In FIG. 14A, the depth and the width in the image-data space 30 are depicted as the vertical direction and the horizontal direction, respectively. In the embodiment, the image obtaining unit 131 projects the virtual aquarium 31 onto a plane containing the height H and the width W along the depth direction of the virtual aquarium 31 from an eye point 231, which is located outside the virtual aquarium 31, with an angular field of view θ as illustrated in FIG. 14A. In the embodiment, projection is performed by perspective projection. The position of the eye point 231 and the angular field of view θ are set so as to form at least one of a left dead zone 240 1 and a right dead zone 240 2, each of which is a dead angle zone that is in a dead angle with respect to the eye point 231, in the virtual aquarium 31.
For example, in a case where the left dead zone 240 1 and the right dead zone 240 2 are formed on the left and the right with respect to the eye point 231, a region of width Wv, which is obtained by excluding the left dead zone 240 1 and the right dead zone 240 2 from the plane with the height H and the width W of the virtual aquarium 31, is projected as illustrated in FIG. 14B. More specifically, although image data representing the portions of the left dead zone 240 1 and the right dead zone 240 2 in the virtual aquarium 31 is provided, the portions are not displayed on a projection image (displayed area) projected onto the two-dimensional plane. In other words, images of the left dead zone 240 1 and the right dead zone 240 2 are not visible in the image of the virtual aquarium 31 projected onto the two-dimensional plane.
In the embodiment, the image control unit 136 sets the initial coordinates at which the user's object emerges into the virtual aquarium 31 to within the dead zone 240₁ or 240₂. The user's object, to which the initial coordinates in the dead zone 240₁ or 240₂ are assigned, is controlled by the image control unit 136 to move from the dead zone 240₁ or 240₂ into the region of the width Wv. Accordingly, when a user's object is newly created, the new user's object appears to emerge from an end (the left end or the right end) of the projection image. Consequently, an unnatural impression, which would otherwise be given if the new user's object emerged at a portion of the projection image where no object had been displayed at all, can be avoided.
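A possible way to assign such emergence coordinates is sketched below, assuming dead-zone widths computed as above; the choice of the front face as the emergence depth and the unit drift vector are illustrative assumptions.

```python
import random

def spawn_in_dead_zone(W, H, left_dz, right_dz, front_depth=0.0):
    """Pick initial coordinates inside the left or right dead zone and a
    horizontal drift that carries the object into the visible region Wv."""
    if random.random() < 0.5:
        x = random.uniform(0.0, left_dz)        # inside the left dead zone
        drift = (+1.0, 0.0, 0.0)                # move rightwards, into view
    else:
        x = random.uniform(W - right_dz, W)     # inside the right dead zone
        drift = (-1.0, 0.0, 0.0)                # move leftwards, into view
    y = random.uniform(0.0, H)
    return (x, y, front_depth), drift

coords, drift = spawn_in_dead_zone(W=16.0, H=9.0, left_dz=4.5, right_dz=4.5)
print(coords, drift)
```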
When projection is performed by perspective projection, a user's object is projected such that its size is maximized when the user's object is on the near side or, more specifically, at the position closest to the eye point 231, and gradually decreases as the user's object moves in the depth direction. Because the dead zones 240₁ and 240₂ are on the near side in the depth direction, the newly-created user's object is projected as a large image. Accordingly, the newly-created user's object is more likely to be noticed.
In next Step S16, the image control unit 136 adds motion to the user's object to which the initial coordinates are assigned in Step S15 and causes the user's object to start moving.
In next Step S17, the PC 10 determines whether or not a next document image is to be read. If the PC 10 determines that a next document image is to be read, the process returns to Step S10. On the other hand, if the PC 10 determines that a next document image is not to be read, the PC 10 completes the sequence of processing of the flowchart illustrated in FIG. 5. The determination that a document image is not to be read can be made based on, for example, whether or not the scanner apparatus 20 has been disconnected from the PC 10 or whether or not a predetermined period of time has elapsed since reading was last performed.
How the image control unit 136 controls motion of the user's object in Step S16 is described below. FIG. 15 is a flowchart of an example procedure of controlling motion of one user's object (target user's object) by the image control unit 136 according to the embodiment. The image control unit 136 performs motion control through Steps S40 to S52 described below, in which the target user's object is deformed according to one of the first to third deformation modes assigned to the target user's object.
In Step S40, the image control unit 136 determines, based on the parameter p0, whether or not the moving-direction velocity v of the target user's object has exceeded the maximum moving-direction velocity vmax. If the image control unit 136 determines that the velocity v has exceeded the maximum velocity vmax, the process proceeds to Step S42. If the image control unit 136 determines that the velocity v has not exceeded the maximum velocity vmax, the process proceeds to Step S41, in which the velocity v of the target user's object is increased by the moving-direction acceleration α. Thereafter, the process proceeds to Step S42.
In Step S42, the image control unit 136 determines, based on the parameter p2, whether or not the angular acceleration αh in the horizontal plane has exceeded the maximum value αhmax. If the image control unit 136 determines that the angular acceleration αh has exceeded the maximum value αhmax, the process proceeds to Step S44.
If the image control unit 136 determines that the angular acceleration αh has not exceeded the maximum value αhmax, the process proceeds to Step S43. In Step S43, the image control unit 136 increases or decreases the angular acceleration αh of the target user's object in the horizontal plane to thereby change the changing rate of orientation in the horizontal plane.
More specifically, the image control unit 136 increases or decreases the angular acceleration αh by an amount that depends on a random number, which is generated to fall within a range between the parameter p7, which is the maximum random number for rightward angle dRrmax, and the parameter p8, which is the maximum random number for leftward angle dRlmax. The image control unit 136 increases or decreases the changing rate of orientation of the target user's object in the horizontal plane according to the amount, by which the angular acceleration αh in the horizontal plane is increased or decreased. When the changing rate of orientation in the horizontal plane has been changed, the process proceeds to Step S44.
In Step S44, the image control unit 136 determines, based on the parameter p3, whether or not the angular acceleration αv in the vertical plane has exceeded the maximum value αvmax. If the image control unit 136 determines that the angular acceleration αv has exceeded the maximum value αvmax, the process proceeds to Step S46.
If the image control unit 136 determines that the angular acceleration αv has not exceeded the maximum value αvmax, the process proceeds to Step S45. In Step S45, the image control unit 136 increases or decreases the angular acceleration αv of the target user's object in the vertical plane to thereby change the changing rate of orientation in the vertical plane.
More specifically, the image control unit 136 increases or decreases the angular acceleration αv by an amount that depends on a random number, which is generated to fall within a range between the parameter p5, which is the maximum random number for upward angle dRumax, and the parameter p6, which is the maximum random number for downward angle dRdmax. The image control unit 136 increases or decreases the changing rate of orientation of the target user's object in the vertical plane according to the amount by which the angular acceleration αv in the vertical plane is increased or decreased. When the changing rate of orientation in the vertical plane has been changed, the process proceeds to Step S46.
In Step S46, the image control unit 136 determines whether or not the target user's object has contacted the top or bottom wall (the ceiling or the floor) of the virtual aquarium 31. If the image control unit 136 determines that the target user's object has not contacted the top or bottom wall of the virtual aquarium 31, the process proceeds to Step S48.
If the image control unit 136 determines that the target user's object has contacted the top or bottom wall of the virtual aquarium 31, the process proceeds to Step S47. In Step S47, the image control unit 136 reverses the changing rate of orientation of the target user's object in the vertical plane and, simultaneously, decreases the moving-direction velocity v. Reversing the changing rate of orientation in the vertical plane is performed by reversing the direction of a vertical component of the moving direction while maintaining a horizontal component of the same. Thereafter, the process proceeds to Step S48.
In Step S48, the image control unit 136 determines whether or not the target user's object has contacted a front, back, right, or left wall of the virtual aquarium 31. If the image control unit 136 determines that the target user's object has not contacted the front, back, right, or left wall of the virtual aquarium 31, the process proceeds to Step S50.
If the image control unit 136 determines that the target user's object has contacted the front, back, right, or left wall of the virtual aquarium 31, the process proceeds to Step S49. In Step S49, the image control unit 136 increases the changing rate of orientation of the target user's object in the horizontal plane and, simultaneously, decreases the moving-direction velocity v. In this example, the image control unit 136 increases the changing rate of orientation of the moving direction of the target user's object tenfold. Thereafter, the process proceeds to Step S50.
In Step S50, the image control unit 136 changes the orientation of the target user's object in the vertical plane according to the changing rate of orientation of the target user's object in the vertical plane. In next Step S51, the image control unit 136 changes orientation of the target user's object in the horizontal plane according to the changing rate of orientation of the target user's object in the horizontal plane. In next Step S52, the image control unit 136 changes the position (coordinates) of the target user's object according to the moving-direction velocity v. The image control unit 136 projects the virtual aquarium 31 onto the two-dimensional image-data plane, thereby updating the image on the two-dimensional image-data plane.
When the image control unit 136 completes processing of Step S52, the process is returned to Step S40. The image control unit 136 repeats processing from Steps S40 to S52 described above at predetermined intervals, e.g., at every frame period.
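The following is a condensed sketch of one pass through Steps S40 to S52 for a single user's object, using yaw/pitch angles for the horizontal and vertical orientation. The Fish container, the mapping of p0 to p8 onto the named constants, and all numeric values are assumptions made for illustration; they are not the embodiment's actual parameter values.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Fish:
    pos: list                  # [x, y, z] coordinates in the virtual aquarium
    yaw: float = 0.0           # orientation in the horizontal plane (rad)
    pitch: float = 0.0         # orientation in the vertical plane (rad)
    v: float = 0.0             # moving-direction velocity
    yaw_rate: float = 0.0      # changing rate of horizontal orientation
    pitch_rate: float = 0.0    # changing rate of vertical orientation

P = dict(v_max=0.20, accel=0.01,                  # roughly p0, p1
         yaw_rate_max=0.05, pitch_rate_max=0.03,  # roughly p2, p3
         dRu_max=0.004, dRd_max=0.004,            # roughly p5, p6
         dRr_max=0.006, dRl_max=0.006)            # roughly p7, p8

def step(f, W, H, D):
    # S40/S41: accelerate until the maximum moving-direction velocity.
    if f.v < P["v_max"]:
        f.v += P["accel"]
    # S42/S43: random perturbation of the horizontal turning rate.
    if abs(f.yaw_rate) < P["yaw_rate_max"]:
        f.yaw_rate += random.uniform(-P["dRl_max"], P["dRr_max"])
    # S44/S45: random perturbation of the vertical turning rate.
    if abs(f.pitch_rate) < P["pitch_rate_max"]:
        f.pitch_rate += random.uniform(-P["dRd_max"], P["dRu_max"])
    # S46/S47: ceiling or floor contact -> reverse the vertical turn, slow down.
    if f.pos[1] <= 0.0 or f.pos[1] >= H:
        f.pitch_rate = -f.pitch_rate
        f.pitch = -f.pitch          # reverse the vertical heading component
        f.v *= 0.5
    # S48/S49: front/back/left/right wall contact -> turn harder, slow down.
    if not (0.0 <= f.pos[0] <= W) or not (0.0 <= f.pos[2] <= D):
        f.yaw_rate *= 10.0
        f.v *= 0.5
    # S50/S51: apply the changes of orientation.
    f.pitch += f.pitch_rate
    f.yaw += f.yaw_rate
    # S52: move along the current heading and keep within the aquarium.
    f.pos[0] += f.v * math.cos(f.pitch) * math.cos(f.yaw)
    f.pos[2] += f.v * math.cos(f.pitch) * math.sin(f.yaw)
    f.pos[1] += f.v * math.sin(f.pitch)
    f.pos[0] = min(max(f.pos[0], 0.0), W)
    f.pos[1] = min(max(f.pos[1], 0.0), H)
    f.pos[2] = min(max(f.pos[2], 0.0), D)

fish = Fish(pos=[8.0, 4.5, 6.0])
for _ in range(3):                 # e.g. one call per frame period
    step(fish, W=16.0, H=9.0, D=12.0)
print(fish.pos)
```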
By performing the processes described above, the image control unit 136 can add vertical and horizontal fluctuation to the motion of the target user's object. Furthermore, the image control unit 136 causes the target user's object to continue moving seamlessly even after it contacts a wall surface of the virtual aquarium 31. Consequently, the embodiment can move the user's object with a water-creature-like motion. Furthermore, the user's object is displayed within the screen region corresponding to the width Wv most of the time.
In the example described above, the display system 1 according to the embodiment projects the virtual aquarium 31 onto the two-dimensional plane by causing the plurality of projector devices 11₁, 11₂, and 11₃ to project the images 13₁, 13₂, and 13₃, into which a to-be-projected image is divided, onto the screen 12. However, the applicable projection scheme is not limited thereto. For example, a single projector apparatus 11 may alternatively project the to-be-projected image, without dividing it, onto the screen 12.
In the example described above, a scene in which the user's object moves within the three-dimensional image-data space is projected onto (displayed on) the two-dimensional image-data plane. However, application of the embodiment is not necessarily limited to a three-dimensional image-data space region. For example, the embodiment may be applied to an image in which a user's object moves on a two-dimensional image-data plane. However, to create motion that appears as a creature swimming in a fish tank, three-dimensional motion gives a more realistic appearance than two-dimensional motion does. Accordingly, utilizing a three-dimensional image-data space region is more suitable.
In the example described above, the user's object is based on an image representing a creature, such as fish, living in water. However, the user's object is not limited thereto. For example, the user's object may be based on an image of an insect, a plant, or the like.
In the example described above, the picture is made by a user by hand-drawing on a sheet with a marker pen or the like. However, a user may make a picture by another method. For example, a picture hand-drawn by a user may be made using a stylus pen and a tablet terminal, a finger (i.e., by finger touch operation) and a tablet terminal, or the like, rather than using a marker pen and a sheet. Put another way, a hand-drawn picture may be made by any method that allows a user to make a picture in a manner or by a method similar to that by which the user makes a picture using paper and a pen, and the type of the medium to be employed is not particularly limited.
First Modification of Embodiment
A first modification of the embodiment is described below. In the embodiment described above, the projector apparatus 11 projects the two-dimensional projection image of the virtual aquarium 31 onto the screen 12, which is plain and flat. However, the screen onto which the projection image is to be projected is not limited to the screen 12. For example, the projector apparatus 11 may alternatively project a two-dimensional projection image of the virtual aquarium 31 onto a semi-transparent screen 301 placed over the whole area of an aquarium 300 in which real fish and/or the like are swimming. An image of a user's object projected onto the screen 301 is combined with the real fish in the aquarium 300, which can be seen through the semi-transparent screen 301. Consequently, a user can enjoy a scene with a more realistic appearance.
Second Modification of Embodiment
A second modification of the embodiment is described below. The projection surface of the screen 12 is not necessarily flat, and may have a three-dimensional profile. For example, a screen 310 having a semi-spherical shape as illustrated in FIG. 17 may be employed. By causing the projector apparatus 11 to project a two-dimensional projection image of the virtual aquarium 31 onto the semi-spherical portion of the screen 310, a more three-dimensional appearance can be given to the image of the inside of the virtual aquarium 31. In this case, a user can enjoy an experience as if the user were looking into the water through a window. Furthermore, the user can view a scene in which the fish or the like represented by the picture 22 drawn by the user himself/herself appears to be swimming in water.
Third Modification of Embodiment
A third modification of the embodiment is described below. The shape of the screen 12 is not necessarily fixed. For example, a screen 320, which is a hanging cloth as illustrated in FIG. 18, may be employed. In this case, the screen 320 is preferably not fixed at its bottom end so that it is easily swayed by wind or vibrations. When the projector apparatus 11 projects a projection image of the virtual aquarium 31 onto the screen 320, the swaying screen 320 can give the image a more realistic appearance of an underwater scene. In a situation where the projector apparatus 11 projects a projection image from the backside of the screen 320 and a viewing person is positioned between the screen 320 and the projector apparatus 11, the viewer's silhouette and the user's objects are displayed together. This can further increase entertainment value.
Fourth Modification of Embodiment
A fourth modification of the embodiment is described below. In the fourth modification of the embodiment, a background image is added to the virtual aquarium 31. For example, an object (hereinafter, "background object") having distance information (coordinate information) independent of the user's objects may be arranged in the virtual aquarium 31. Alternatively, a background object may be arranged parallel to the plane containing the height and the width at the position of the depth D in the virtual aquarium 31. The background object may be fixed or, alternatively, changed according to a predetermined schedule. For example, the background object may be changed on a per-time-period basis (e.g., morning hours, day hours, and night hours) each day. The schedule according to which the background object is to be changed is not limited thereto. The background object may be changed at longer time intervals, e.g., on a per-season basis (spring, summer, fall, and winter) or for annual events. When scheduled as such, it is preferable that the background object be changed each time to an object suitable for the time period, season, annual event, or the like. Changing the background object in this manner allows a user to enjoy various scenes that vary according to the time period, season, annual event, or the like.
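A background-object schedule of this kind could be keyed, for example, to the time of day and the season, as in the sketch below; the asset names, the hour boundaries, and the season mapping are placeholders, not values prescribed by the embodiment.

```python
from datetime import datetime

def pick_background(now=None):
    """Select a background asset based on the current season and time period."""
    now = now or datetime.now()
    season = {12: "winter", 1: "winter", 2: "winter",
              3: "spring", 4: "spring", 5: "spring",
              6: "summer", 7: "summer", 8: "summer"}.get(now.month, "fall")
    if 6 <= now.hour < 10:
        period = "morning"
    elif 10 <= now.hour < 18:
        period = "day"
    else:
        period = "night"
    return f"background_{season}_{period}.png"

print(pick_background())
print(pick_background(datetime(2018, 12, 24, 21, 0)))  # -> background_winter_night.png
```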
Fifth Modification of Embodiment
A fifth modification of the embodiment is described below. The fifth modification of the embodiment is configured to allow a user to take an action on user's object in the virtual aquarium 31. In the embodiment, because it is assumed that the user's object is a creature living in water, possible actions taken by a user include giving food to the creature. More specifically, the user draws the picture 22 representing food on the document 21. The user adds information indicating that the picture 22 represents food to the document 21. This information may be added by, for example, embedding information indicating that the picture 22 represents food in at least one of the marks 220₁ to 220₃. The PC 10 generates an object (hereinafter, "food object") from the picture 22 of food based on a document image obtained by the scanner apparatus 20 by scanning an image of the document 21, on which the picture 22 of the food is drawn, and adds the food object into the virtual aquarium 31.
When adding the food object, the PC 10 assigns coordinates on the top wall (ceiling) or, more specifically, on the plane containing the width W and the depth D at the height H, to the food object as initial coordinates. However, the initial coordinates are not limited thereto, and coordinates of a position higher than the ceiling of the virtual aquarium 31 in the image-data space 30 may alternatively be assigned to the food object. The PC 10 defines, as the motion of the food object, motion of moving (falling) from the initial coordinates of the food object in the image-data space 30. Motion, through which the moving food object is swayed due to resistance of water, may be further added to the food object.
Such motion control of the food may be performed by determining whether an image representing a hand-drawn picture, which is obtained by scanning a document image from a sheet, is a food object or a user's object, and by setting the parameters p0 to p8 differently depending on whether the image represents a food object or a user's object. More specifically, the parameters p0 to p8 may be set as follows. If the image represents a food object, predetermined falling-motion values are set as the parameters p0 to p8. On the other hand, if the image represents a user's object, the parameters p0 to p8 are set by utilizing the mode determination and the feature values described above.
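A minimal sketch of this branching is shown below. The concrete numbers, the FALLING_PARAMS table, and params_from_features are placeholders standing in for the predetermined falling-motion values and for the mode- and feature-based parameter generation described earlier.

```python
# Placeholder falling-motion values for food objects: slow, mostly downward drift.
FALLING_PARAMS = {f"p{i}": 0.0 for i in range(9)}
FALLING_PARAMS.update(p0=0.05, p5=0.001, p6=0.001)

def params_from_features(aspect_ratio, deformation_mode):
    # Placeholder for the mode- and feature-value-based setup of p0 to p8.
    return {"p0": 0.2, "p1": 0.01, "p2": 0.05, "p3": 0.03, "p4": 0.0,
            "p5": 0.004, "p6": 0.004, "p7": 0.006, "p8": 0.006}

def build_params(is_food, aspect_ratio=1.0, deformation_mode=1):
    if is_food:
        return dict(FALLING_PARAMS)
    return params_from_features(aspect_ratio, deformation_mode)

print(build_params(is_food=True)["p0"])   # falling-motion value
print(build_params(is_food=False)["p0"])  # feature-based value
```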
Furthermore, the PC 10 adds motion to each of the user's objects according to the food object in the virtual aquarium 31. For example, the PC 10 may add, to a user's object, motion of approaching the food object according to the distance between the user's object and the food object. More specifically, the user's object is moved such that the closer the user's object is to the food object, the faster the user's object approaches it. Furthermore, the PC 10 may delete the food object from the virtual aquarium 31 when one of the user's objects contacts the food object. By deleting the food object in this manner, a scene in which, for example, the fish represented by the user's object has eaten the food can be created.
Such motion control may be implemented as follows, for example. The PC 10 manages all the objects by associating each object with identification information indicating whether the object is a food object or a user's object. The PC 10 calculates the distance between the coordinates of every food object and every user's object. For each of the calculated distances, the PC 10 determines whether the distance is equal to or smaller than a threshold value. If the PC 10 determines that a calculated distance is equal to or smaller than the threshold value, the PC 10 increases the parameter(s) (e.g., the parameters p0 and p1) related to the moving-direction velocity of the corresponding user's object.
The PC 10 may further control the user's object, for which the parameter(s) related to the velocity has been increased according to the result of the determination, as follows. The PC 10 sets a flag indicating that the parameter(s) has been increased for the user's object and monitors the flagged user's object. When it is determined that the distance between the flagged user's object and the food object has become equal to or larger than the threshold value, or has become zero, the PC 10 decreases the parameter(s) (e.g., the parameters p0 and p1) related to the moving-direction velocity. Meanwhile, in a case where the food object is deleted, the distance between the user's object and the food object is set to zero.
The control method described above causes the moving-direction velocity of the user's object to decrease when the distance between the flagged user's object and the food object has become equal to or larger than the threshold value, for example. As a result, a scene in which the creature represented by the user's object moving away from the food object appears to become less active can be created. Furthermore, the control method described above causes a moving-direction velocity of the creature represented by the user's object to decrease when the distance between the flagged user's object and the food object has become zero, for example. As a result, a scene in which the creature appears to give up eating the food object because the food has disappeared can be created.
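The distance-based control described above can be sketched as follows. The object records, the threshold, the boost factor, and the restriction of the boost to p0 are simplifying assumptions for illustration; the embodiment may adjust any of the velocity-related parameters.

```python
import math

def update_feeding(objects, threshold=3.0, boost=2.0):
    """Raise the velocity parameter of user's objects near food, flag them,
    and restore the parameter once the food is far away or gone."""
    fish = [o for o in objects if o["kind"] == "user"]
    food = [o for o in objects if o["kind"] == "food"]
    for f in fish:
        dists = [math.dist(f["pos"], fd["pos"]) for fd in food]
        near = [d for d in dists if 0.0 < d <= threshold]
        if near:
            if not f.get("boosted"):
                f["p0"] *= boost      # increase moving-direction velocity param
                f["boosted"] = True   # flag: parameter has been increased
        elif f.get("boosted"):
            f["p0"] /= boost          # distance >= threshold or zero: restore
            f["boosted"] = False

aquarium = [
    {"kind": "user", "pos": (1.0, 2.0, 3.0), "p0": 0.2},
    {"kind": "food", "pos": (1.5, 2.0, 3.0)},
]
update_feeding(aquarium)
print(aquarium[0]["p0"], aquarium[0]["boosted"])  # 0.4 True
```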
The PC 10 can cause motion of user's object to vary between before and after the user's object contacts a food object. For example, the velocity of the user's object may be increased after the user's object contacts the food object. The moving direction of the user's object may be changed after the user's object contacts the food object. For another example, the user's object may be increased in size after the user's object contacts the food object.
As described above, the fifth modification of the embodiment is configured to change the motion of each of user's objects in the virtual aquarium 31 according to an action taken by a user. As a result, the entertainment value can be further increased.
Sixth Modification of Embodiment
A sixth modification of the embodiment is described below. In the sixth modification of the embodiment, a decorative object, which is based on the picture 22 drawn by a user on the document 21, is arranged in the virtual aquarium 31. More specifically, the user draws the picture 22 representing a decorative object, such as an ornament rock or seaweed, on the document 21. The user adds information indicating that the picture 22 represents the decorative object to the document 21. The PC 10 generates an object (hereinafter, “decorative object”) from the picture 22 representing the decorative object based on a document image obtained by the scanner apparatus 20 by scanning an image of the document 21, on which the picture 22 of the decorative object is drawn, and adds the decorative object into the virtual aquarium 31.
When adding the decorative object, the PC 10 assigns coordinates on the bottom wall (floor) or, more specifically, on the plane containing the width W and the depth D at the height zero, to the decorative object as initial coordinates. The PC 10 does not define motion according to which the decorative object is displaced. In other words, the decorative object is positionally fixed in the virtual aquarium 31. Meanwhile, a vertically extending and contracting motion may be added to the decorative object as in the case of the second deformation mode described above.
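A decorative object of this kind could be represented as in the sketch below: the coordinates are fixed on the floor and only an optional vertical extension/contraction, in the spirit of the second deformation mode, is animated. The class name, amplitude, and period are illustrative assumptions.

```python
import math

class DecorativeObject:
    def __init__(self, x, z, base_height=1.0):
        self.pos = (x, 0.0, z)          # anchored on the floor (height zero)
        self.base_height = base_height  # no displacement motion is defined

    def height_at(self, t, amplitude=0.1, period=4.0):
        """Optional vertical extend-and-contract animation; position is fixed."""
        return self.base_height * (1.0 + amplitude * math.sin(2.0 * math.pi * t / period))

rock = DecorativeObject(x=5.0, z=3.0)
print(rock.pos, round(rock.height_at(1.0), 3))
```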
According to an embodiment, the image processing technique can be improved so that a user's image, which is made by a general user by hand-drawing, can be displayed such that the user's image moves in a more lifelike manner.
The present invention can be implemented in any convenient form, for example using dedicated hardware, or a mixture of dedicated hardware and software. The present invention may be implemented as computer software implemented by one or more network processing apparatuses. The network can comprise any conventional terrestrial or wireless communications network, such as the Internet. The processing apparatus can comprise any suitably programmed apparatus such as a general purpose computer, personal digital assistant, mobile telephone (such as a WAP or 3G-compliant phone) and so on. Since the present invention can be implemented as software, each and every aspect of the present invention thus encompasses computer software implementable on a programmable device. The computer software can be provided to the programmable device using any storage medium for storing processor readable code such as a floppy disk, hard disk, CD ROM, magnetic tape device or solid state memory device.
The hardware platform includes any desired kind of hardware resources including, for example, a central processing unit (CPU), a random access memory (RAM), and a hard disk drive (HDD). The CPU may be implemented by any desired number of processors of any desired kind. The RAM may be implemented by any desired kind of volatile or non-volatile memory. The HDD may be implemented by any desired kind of non-volatile memory capable of storing a large amount of data. The hardware resources may additionally include an input device, an output device, or a network device, depending on the type of the apparatus. Alternatively, the HDD may be provided outside of the apparatus as long as the HDD is accessible. In this example, the CPU and its memory, such as a cache memory of the CPU, and the RAM may function as a physical memory or a primary memory of the apparatus, while the HDD may function as a secondary memory of the apparatus.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.

Claims (12)

What is claimed is:
1. A system comprising:
a display apparatus configured to display a screen; and
an information processing apparatus including at least a processor connected to a memory, the processor configured to,
detect identification information from image data of a sheet, the image data including a drawing region that is set for allowing a user to draw a picture, the identification information identifying whether targets drawn on the sheet in the drawing region are a first object representing a creature or a second object representing an object eatable by the creature,
extract the targets drawn in the drawing region of the sheet from the image data of the sheet,
display the picture including the targets on the screen such that the targets are displayed differently based on whether the targets are the first object or the second object,
determine whether a respective one of the targets that are the first object approaches a respective one of the targets that are the second object, and
animate the picture displayed on the screen by adding motion to the picture such that the respective one of the targets that are the first object appears to eat the respective one of the targets that are the second object, when the processor determines that the respective one of the targets that are the first object approaches the respective one of the targets that are the second object.
2. The system of claim 1, wherein the processor is configured to animate the picture by changing coordinates in a three dimensional space assigned to the respective one of the targets that are the first object and the respective one of the targets that are the second object.
3. The system of claim 2, wherein the processor is configured to animate the picture such that the respective one of the targets that are the second object is deleted from the three dimensional space.
4. The system of claim 2, wherein the processor is configured to change the coordinates assigned to the respective one of the targets that are the first object to change a shape of the respective one of the targets that are the first object based on an aspect ratio of the respective one of the targets that are the first object.
5. An information processing apparatus connected to a display apparatus, the display apparatus configured to display a screen, the information processing apparatus comprising:
a processor connected to a memory, the processor configured to,
detect identification information from image data of a sheet, the image data including a drawing region that is set for allowing a user to draw a picture, the identification information identifying whether targets drawn on the sheet in the drawing region are a first object representing a creature or a second object representing an object eatable by the creature,
extract the targets drawn in the drawing region of the sheet from the image data of the sheet,
display the picture including the targets on the screen such that the targets are displayed differently based on whether the targets are the first object or the second object,
determine whether a respective one of the targets that are the first object approaches a respective one of the targets that are the second object, and
animate the picture displayed on the screen by adding motion to the picture such that the respective one of the targets that are the first object appears to eat the respective one of the targets that are the second object, when the processor determines that the respective one of the targets that are the first object approaches the respective one of the targets that are the second object.
6. The information processing apparatus of claim 5, wherein the processor is configured to animate the picture by changing coordinates in a three dimensional space assigned to the respective one of the targets that are the first object and the respective one of the targets that are the second object.
7. The information processing apparatus of claim 6, wherein the processor is configured to animate the picture such that the respective one of the targets that are the second object is deleted from the three dimensional space.
8. The information processing apparatus of claim 6, wherein the processor is configured to change the coordinates assigned to the respective one of the targets that are the first object to change a shape of the respective one of the targets that are the first object based on an aspect ratio of the respective one of the targets that are the first object.
9. An information processing apparatus comprising:
a processor connected to a memory, the processor configured to execute a display control program stored on the memory, to cause the information processing apparatus to,
detect identification information from image data of a sheet, the image data including a drawing region that is set for allowing a user to draw a picture, the identification information identifying whether targets drawn on the sheet in the drawing region are a first object representing a creature or a second object representing an object eatable by the creature,
extract the targets drawn in the drawing region of the sheet from the image data of the sheet,
display the picture including the targets on a display screen such that the targets are displayed differently based on whether the targets are the first object or the second object,
determine whether a respective one of the targets that are the first object approaches a respective one of the targets that are the second object, and
animate the picture displayed on the display screen by adding motion to the picture such that the respective one of the targets that are the first object appears to eat the respective one of the targets that are the second object, when the processor determines that the respective one of the targets that are the first object approaches the respective one of the targets that are the second object.
10. The information processing apparatus of claim 9, wherein the processor is configured to animate the picture by changing coordinates in a three dimensional space assigned to the respective one of the targets that are the first object and the respective one of the targets that are the second object.
11. The information processing apparatus of claim 10, wherein the processor is configured to animate the picture such that the respective one of the targets that are the second object is deleted from the three dimensional space.
12. The information processing apparatus of claim 10, wherein the processor is configured to change the coordinates assigned to the respective one of the targets that are the first object to change a shape of the respective one of the targets that are the first object based on an aspect ratio of the respective one of the targets that are the first object.
US15/700,783 2013-05-09 2017-09-11 Display control method, display control device, and display system Active US10089909B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/700,783 US10089909B2 (en) 2013-05-09 2017-09-11 Display control method, display control device, and display system

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2013099750 2013-05-09
JP2013-099750 2013-05-09
US14/269,375 US9501857B2 (en) 2013-05-09 2014-05-05 Display control method configured to cause an input image to emerge and move on a display region, display control device, and display system configured to perform same
US15/297,772 US9786208B2 (en) 2013-05-09 2016-10-19 Display control method, display control device, and display system
US15/700,783 US10089909B2 (en) 2013-05-09 2017-09-11 Display control method, display control device, and display system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/297,772 Continuation US9786208B2 (en) 2013-05-09 2016-10-19 Display control method, display control device, and display system

Publications (2)

Publication Number Publication Date
US20170372649A1 US20170372649A1 (en) 2017-12-28
US10089909B2 true US10089909B2 (en) 2018-10-02

Family

ID=51864443

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/269,375 Active 2035-01-17 US9501857B2 (en) 2013-05-09 2014-05-05 Display control method configured to cause an input image to emerge and move on a display region, display control device, and display system configured to perform same
US15/297,772 Active US9786208B2 (en) 2013-05-09 2016-10-19 Display control method, display control device, and display system
US15/700,783 Active US10089909B2 (en) 2013-05-09 2017-09-11 Display control method, display control device, and display system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/269,375 Active 2035-01-17 US9501857B2 (en) 2013-05-09 2014-05-05 Display control method configured to cause an input image to emerge and move on a display region, display control device, and display system configured to perform same
US15/297,772 Active US9786208B2 (en) 2013-05-09 2016-10-19 Display control method, display control device, and display system

Country Status (2)

Country Link
US (3) US9501857B2 (en)
JP (4) JP6361146B2 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2976681B1 (en) * 2011-06-17 2013-07-12 Inst Nat Rech Inf Automat SYSTEM FOR COLOCATING A TOUCH SCREEN AND A VIRTUAL OBJECT AND DEVICE FOR HANDLING VIRTUAL OBJECTS USING SUCH A SYSTEM
US9606992B2 (en) * 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
US9934613B2 (en) * 2014-04-29 2018-04-03 The Florida International University Board Of Trustees Systems for controlling a movable object
JP6390163B2 (en) 2014-05-16 2018-09-19 株式会社リコー Information processing apparatus, information processing method, and program
JP5989151B2 (en) * 2015-01-27 2016-09-07 株式会社タカラトミーアーツ Display system
JP6558006B2 (en) 2015-03-20 2019-08-14 株式会社リコー Image management apparatus, image management method, image management program, and display system
JP6521055B2 (en) * 2015-03-20 2019-05-29 株式会社リコー Display device, display control method, display control program and display system
JP6752007B2 (en) * 2015-10-06 2020-09-09 チームラボ株式会社 Drawing image display system
US10223605B2 (en) * 2016-03-18 2019-03-05 Colorvision International, Inc. Interactive virtual aquarium simulation system and associated methods
US9971556B2 (en) 2016-07-05 2018-05-15 Ricoh Company, Ltd. Image processing apparatus, image processing method, and recording medium
WO2018037645A1 (en) * 2016-08-24 2018-03-01 ソニー株式会社 Information processing device, information processing method, and program
JP6903886B2 (en) * 2016-09-16 2021-07-14 株式会社リコー Display control device, program, display system and display control method
US10674137B2 (en) * 2017-05-19 2020-06-02 Ricoh Company, Ltd. Display control, apparatus, display system, display control method, and non-transitory recording medium
JP6863072B2 (en) * 2017-05-19 2021-04-21 株式会社リコー Display control device, display control program, display system and display control method
JP6965822B2 (en) * 2017-05-19 2021-11-10 株式会社リコー Display control device, display system, display control method and program
CN107688450A (en) * 2017-09-12 2018-02-13 威创集团股份有限公司 A kind of combination control method and device
EP3729373A4 (en) * 2018-02-08 2021-02-24 Samsung Electronics Co., Ltd. Method and electronic device for rendering background in image
US11017578B2 (en) 2018-09-28 2021-05-25 Ricoh Company, Ltd. Display control system to control a display based on detecting wind
CN113454507B (en) 2018-12-21 2024-05-07 奇跃公司 Cavitation structure for promoting total internal reflection within a waveguide
WO2021021670A1 (en) * 2019-07-26 2021-02-04 Magic Leap, Inc. Systems and methods for augmented reality
JP7490943B2 (en) * 2019-10-11 2024-05-28 株式会社リコー DISPLAY CONTROL DEVICE, DISPLAY CONTROL SYSTEM, PROGRAM, AND DISPLAY CONTROL METHOD
JP7472681B2 (en) 2019-11-25 2024-04-23 株式会社リコー Information processing device, program, and information processing method
JP7396326B2 (en) * 2021-04-21 2023-12-12 株式会社リコー Information processing system, information processing device, information processing method and program
US20230091386A1 (en) * 2021-09-17 2023-03-23 Multics Co., Ltd. Augmented reality hologram virtual aquarium system
US11710430B1 (en) * 2022-03-23 2023-07-25 Reginald Wray Virtual fish tank assembly

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0779662A (en) 1993-09-20 1995-03-28 Tec Corp Admiration of aquatic organism and device therefor
JPH0851886A (en) 1994-08-09 1996-02-27 Jiyareko:Kk Aquarium device
JPH09298977A (en) 1996-05-17 1997-11-25 Dainippon Printing Co Ltd Aquarium for ornamentation
US5854634A (en) 1995-12-26 1998-12-29 Imax Corporation Computer-assisted animation construction system using source poses within a pose transformation space
JPH1186024A (en) 1997-07-18 1999-03-30 Internatl Business Mach Corp <Ibm> Method and system for defining motion path of multimedia object
JP3327127B2 (en) 1996-07-09 2002-09-24 松下電器産業株式会社 Image presentation device
US20080242429A1 (en) 2007-03-26 2008-10-02 Ricoh Company, Ltd., Information processing apparatus, information processing method, information processing program, and storage medium storing information processing program
US20090005162A1 (en) 2007-06-19 2009-01-01 Ricoh Company, Ltd. Information Processing Devices, Information Processing Method, and Information Processing Means
JP2009101122A (en) 2007-03-26 2009-05-14 Ricoh Co Ltd Information processing apparatus, information processing method, information processing program, and storage medium storing information processing program
JP2009131596A (en) 2007-11-09 2009-06-18 Ricoh Co Ltd Information processor, image processor, information processing method, information processing program, and recording medium
JP2009273859A (en) 2008-04-16 2009-11-26 Ricoh Co Ltd Information processing device, image processing device, information processing method, information processing program, and recording medium
JP2010012036A (en) 2008-07-03 2010-01-21 Ricoh Co Ltd Information processor, game system, image processor, information processing method, information processing program, and recording medium
US20100257555A1 (en) 2009-04-02 2010-10-07 Ted Dunn TV widget animation with audio
US20110181606A1 (en) * 2010-01-19 2011-07-28 Disney Enterprises, Inc. Automatic and semi-automatic generation of image features suggestive of motion for computer-generated images and video
US20110306417A1 (en) * 2010-06-14 2011-12-15 Nintendo Co., Ltd. 2d imposters for simplifying processing of plural animation objects in computer graphics generation
US20120308159A1 (en) 2011-05-30 2012-12-06 Aiko Otsuka Image processing apparatus, image processing method, and computer-readable recording medium
US8581905B2 (en) 2010-04-08 2013-11-12 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
JP2014149633A (en) 2013-01-31 2014-08-21 Ricoh Co Ltd Information processing system, information processing apparatus, and information processing method, and program
US20140267423A1 (en) 2013-03-15 2014-09-18 Ricoh Company, Limited Information processing system, information processing apparatus, and information processing method
US20140320507A1 (en) 2013-04-26 2014-10-30 Samsung Electronics Co., Ltd. User terminal device for providing animation effect and display method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04161325A (en) 1990-10-25 1992-06-04 Osaka Gas Co Ltd Reverse lining method for pipe
JP4223112B2 (en) 1998-12-04 2009-02-12 株式会社バンダイナムコゲームス GAME DEVICE AND INFORMATION STORAGE MEDIUM
JP5604674B2 (en) 2009-09-18 2014-10-15 株式会社アズライト Game device control program
JP3163590U (en) * 2010-08-10 2010-10-21 株式会社 トライエックス Foreign language study calendar

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0779662A (en) 1993-09-20 1995-03-28 Tec Corp Admiration of aquatic organism and device therefor
JPH0851886A (en) 1994-08-09 1996-02-27 Jiyareko:Kk Aquarium device
US6373492B1 (en) 1995-12-26 2002-04-16 Imax Corporation Computer-assisted animation construction system and method and user interface
US5854634A (en) 1995-12-26 1998-12-29 Imax Corporation Computer-assisted animation construction system using source poses within a pose transformation space
US6577315B1 (en) 1995-12-26 2003-06-10 Imax Corporation Computer-assisted animation construction system and method and user interface
JP4161325B2 (en) 1995-12-26 2008-10-08 イマックス コーポレーション Animation production system and method using computer and user interface
JPH09298977A (en) 1996-05-17 1997-11-25 Dainippon Printing Co Ltd Aquarium for ornamentation
JP3327127B2 (en) 1996-07-09 2002-09-24 松下電器産業株式会社 Image presentation device
JPH1186024A (en) 1997-07-18 1999-03-30 Internatl Business Mach Corp <Ibm> Method and system for defining motion path of multimedia object
US6108010A (en) 1997-07-18 2000-08-22 International Business Machines Corp. Method and system for a true-scale motion path editor
JP2013230374A (en) 2007-03-26 2013-11-14 Ricoh Co Ltd Information processing apparatus, information processing method, information processing program, and storage medium storing information processing program
US20080242429A1 (en) 2007-03-26 2008-10-02 Ricoh Company, Ltd., Information processing apparatus, information processing method, information processing program, and storage medium storing information processing program
JP2009101122A (en) 2007-03-26 2009-05-14 Ricoh Co Ltd Information processing apparatus, information processing method, information processing program, and storage medium storing information processing program
US20090005162A1 (en) 2007-06-19 2009-01-01 Ricoh Company, Ltd. Information Processing Devices, Information Processing Method, and Information Processing Means
JP2009000174A (en) 2007-06-19 2009-01-08 Ricoh Co Ltd Information processing equipment, information processing method, information processing program, and recording medium recording information processing program
JP2009131596A (en) 2007-11-09 2009-06-18 Ricoh Co Ltd Information processor, image processor, information processing method, information processing program, and recording medium
JP2009273859A (en) 2008-04-16 2009-11-26 Ricoh Co Ltd Information processing device, image processing device, information processing method, information processing program, and recording medium
JP2010012036A (en) 2008-07-03 2010-01-21 Ricoh Co Ltd Information processor, game system, image processor, information processing method, information processing program, and recording medium
US20100257555A1 (en) 2009-04-02 2010-10-07 Ted Dunn TV widget animation with audio
US20110181606A1 (en) * 2010-01-19 2011-07-28 Disney Enterprises, Inc. Automatic and semi-automatic generation of image features suggestive of motion for computer-generated images and video
US8581905B2 (en) 2010-04-08 2013-11-12 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
US20110306417A1 (en) * 2010-06-14 2011-12-15 Nintendo Co., Ltd. 2d imposters for simplifying processing of plural animation objects in computer graphics generation
US20120308159A1 (en) 2011-05-30 2012-12-06 Aiko Otsuka Image processing apparatus, image processing method, and computer-readable recording medium
JP2012249172A (en) 2011-05-30 2012-12-13 Ricoh Co Ltd Image processing apparatus, image processing program and memory medium
JP2014149633A (en) 2013-01-31 2014-08-21 Ricoh Co Ltd Information processing system, information processing apparatus, and information processing method, and program
US20140267423A1 (en) 2013-03-15 2014-09-18 Ricoh Company, Limited Information processing system, information processing apparatus, and information processing method
JP2014182406A (en) 2013-03-15 2014-09-29 Ricoh Co Ltd Information processing system, information processing device and information processing method, and program
US20140320507A1 (en) 2013-04-26 2014-10-30 Samsung Electronics Co., Ltd. User terminal device for providing animation effect and display method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Japanese Office Action dated Jan. 16, 2018 for Japanese Application No. 2014-012816.

Also Published As

Publication number Publication date
JP2014238811A (en) 2014-12-18
JP2020140737A (en) 2020-09-03
US9501857B2 (en) 2016-11-22
JP6501017B2 (en) 2019-04-17
JP2019071111A (en) 2019-05-09
US20140333612A1 (en) 2014-11-13
JP2018120610A (en) 2018-08-02
US20170372649A1 (en) 2017-12-28
JP6361146B2 (en) 2018-07-25
US20170039909A1 (en) 2017-02-09
US9786208B2 (en) 2017-10-10
JP7020511B2 (en) 2022-02-16

Similar Documents

Publication Publication Date Title
US10089909B2 (en) Display control method, display control device, and display system
US8395614B2 (en) Generating an asset for interactive entertainment using digital image capture
KR101319805B1 (en) Photographing big things
JP6423435B2 (en) Method and apparatus for representing a physical scene
JP5985619B2 (en) Controlling objects in a virtual environment
US20150002419A1 (en) Recognizing interactions with hot zones
US20120313960A1 (en) Image processing device, image data generation device, image processing method, image data generation method, and data structure of image file
EP2161691A1 (en) Storage medium having stored thereon image processing program and image processing apparatus
US20180082618A1 (en) Display control device, display system, and display control method
US20120139827A1 (en) Method and apparatus for interacting with projected displays using shadows
CN110858409A (en) Animation generation method and device
JP2006072577A (en) Image processor, image processing method, and image processing program
JP2023171298A (en) Adaptation of space and content for augmented reality and composite reality
CN115516517A (en) Method and device for constructing three-dimensional geometric figure
JP7490943B2 (en) DISPLAY CONTROL DEVICE, DISPLAY CONTROL SYSTEM, PROGRAM, AND DISPLAY CONTROL METHOD
CN111639611A (en) Historical relic display control method and device
JP6859763B2 (en) Program, information processing device
US11941499B2 (en) Training using rendered images
US20240320916A1 (en) Information processing device, information processing method, and program
US20240346743A1 (en) Image generation method and computer-readable medium

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4