WO2010038337A1 - Image processing apparatus and image processing method - Google Patents
- Publication number
- WO2010038337A1 (PCT/JP2009/003043)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- area
- unit
- display
- data
- Prior art date
Classifications
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/346—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling for systems having a bit-mapped display memory
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
- G09G5/39—Control of the bit-mapped memory
- G09G5/393—Arrangements for updating the contents of the bit-mapped memory
- G09G5/395—Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
- G09G5/399—Control of the bit-mapped memory using two or more bit-mapped memories, the operations of which are switched in time, e.g. ping-pong buffers
- G09G2340/02—Handling of images in compressed format, e.g. JPEG, MPEG
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2360/121—Frame memory handling using a cache memory
- G09G2360/122—Frame memory handling: tiling
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/41—Bandwidth or redundancy reduction
Definitions
- the present invention relates to an image processing technique for enlarging / reducing an image displayed on a display, or moving the image vertically and horizontally.
- a technique for enlarging / reducing a display image and moving in the vertical and horizontal directions using tile images having a plurality of resolutions generated from digital images such as high-definition photographs has been proposed.
- an original image size is reduced in a plurality of stages to generate images with different resolutions, and images in each layer are divided into one or a plurality of tile images to represent the original image in a hierarchical structure.
- the image with the lowest resolution is composed of one tile image
- the original image with the highest resolution is composed of the largest number of tile images.
- the image processing apparatus quickly performs enlarged display or reduced display by switching the tile image being used to a tile image of a different hierarchy at the time of enlargement processing or reduction processing of the display image.
- When the user requests movement of the display area or enlargement/reduction of the image (hereinafter collectively referred to as an "image change"), processing such as reading and decoding of the tile image data is performed, so a non-negligible amount of time may pass before the new image is output.
- In addition, a large-capacity memory is required, especially for high-definition, high-resolution images, and in some cases the images that can be handled are limited by their data size.
- the present invention has been made in view of such problems, and an object thereof is to provide an image processing technique excellent in responsiveness to an image change request from a user.
- This image processing apparatus displays at least a part of an image on a display and includes: a storage device that holds a plurality of image blocks obtained by dividing compressed data of an image to be processed according to a predetermined rule; a load unit that loads, from the storage device into a memory, image blocks including data of a necessary area determined according to a predetermined rule in accordance with the area of the image being displayed; and a display image processing unit that, when the user requests any of movement, enlargement, and reduction of the display area, reads out and decodes at least a part of the image blocks loaded by the load unit from the memory and generates a new display image.
- This image processing method displays at least a part of an image on a display and includes the steps of: generating a plurality of image blocks by dividing compressed data of an image to be processed according to a predetermined rule and storing them in a storage device; loading, from the storage device into a memory, image blocks including data of a necessary area determined in accordance with the area of the image being displayed; and, when the user requests any of movement, enlargement, and reduction of the display area, reading out and decoding at least a part of the loaded image blocks from the memory and generating a new display image.
- Still another embodiment of the present invention relates to an image processing apparatus.
- This image processing apparatus is an image processing apparatus that displays an area in an image on a display in accordance with a user's request.
- This image processing apparatus includes: a decoding unit that reads out and decodes compressed image data of a necessary area from a memory based on the request and stores the result in a buffer memory; and a display image processing unit that reads at least a part of the image stored in the buffer memory and draws the display area. When the decoding unit stores a new image in the buffer memory, it includes: an overlapping area acquisition unit that identifies an overlapping area between the previously stored image and the new image; a partial area decoding unit that decodes compressed image data of an area including the partial area of the new image excluding the overlapping area; and a decoded image storage unit that stores, in the buffer memory, the overlapping area of the previously stored image together with the partial area decoded by the partial area decoding unit.
- Still another aspect of the present invention relates to an image processing method.
- This image processing method is an image processing method for displaying an area in an image on a display in accordance with a user's request.
- When a new image is stored in the buffer memory, the method includes the steps of: identifying an overlapping area between the previously stored image and the new image; reading out and decoding compressed image data of an area including the partial area of the new image excluding the overlapping area; and storing, in the buffer memory, the overlapping area of the previously stored image together with the decoded partial area.
- FIG. 1 shows a use environment of an image processing system 1 according to an embodiment of the present invention.
- the image processing system 1 includes an image processing device 10 that executes image processing software, and a display device 12 that outputs a processing result by the image processing device 10.
- the display device 12 may be a television having a display that outputs an image and a speaker that outputs sound.
- the display device 12 may be connected to the image processing device 10 by a wired cable, or may be wirelessly connected by a wireless LAN (Local Area Network) or the like.
- the image processing apparatus 10 may be connected to an external network such as the Internet via the cable 14 to download and acquire hierarchical compressed image data.
- the image processing apparatus 10 may be connected to an external network by wireless communication.
- the image processing device 10 may be a game device, for example, and may implement an image processing function by loading an application program for image processing.
- the image processing apparatus 10 may be a personal computer, and may implement an image processing function by loading an image processing application program.
- the image processing apparatus 10 performs a process of changing a display image, such as an enlargement / reduction process of an image displayed on the display of the display apparatus 12 or a movement process in the up / down / left / right directions, in response to a request from the user.
- When the user operates the input device while viewing the image displayed on the display, the input device transmits a display image change request signal to the image processing device 10.
- FIG. 2 shows an external configuration of the input device 20.
- the input device 20 includes a cross key 21, analog sticks 27a and 27b, and four types of operation buttons 26 as operation means that can be operated by the user.
- The four types of operation buttons 26 include a circle button 22, a cross (×) button 23, a square button 24, and a triangle button 25.
- a function for inputting a display image enlargement / reduction request and a vertical / left / right scroll request is assigned to the operation unit of the input device 20.
- the input function of the display image enlargement / reduction request is assigned to the right analog stick 27b.
- The user can input a display image reduction request by tilting the analog stick 27b toward the user, and can input a display image enlargement request by tilting it away from the user.
- the input function of the display area movement request is assigned to the cross key 21.
- the user can input a movement request in the direction in which the cross key 21 is pressed by pressing the cross key 21.
- the image change request input function may be assigned to another operation means, for example, the scroll request input function may be assigned to the analog stick 27a.
- the input device 20 has a function of transmitting the input image change request signal to the image processing device 10, and is configured to be capable of wireless communication with the image processing device 10 in the present embodiment.
- the input device 20 and the image processing device 10 may establish a wireless connection using a Bluetooth (registered trademark) protocol, an IEEE802.11 protocol, or the like.
- the input device 20 may be connected to the image processing apparatus 10 via a cable and transmit an image change request signal to the image processing apparatus 10.
- FIG. 3 shows a hierarchical structure of image data used in the present embodiment.
- the image data has a hierarchical structure including a 0th hierarchy 30, a first hierarchy 32, a second hierarchy 34, and a third hierarchy 36 in the depth (Z-axis) direction. Although only four layers are shown in the figure, the number of layers is not limited to this.
- image data having such a hierarchical structure is referred to as “hierarchical data”.
- The hierarchical data shown in Fig. 3 has a quadtree hierarchical structure, and each hierarchy is composed of one or more tile images 38. All the tile images 38 are formed in the same size with the same number of pixels, for example 256 × 256 pixels.
- The image data of each layer expresses one image at different resolutions; the original image of the third layer 36, which has the highest resolution, is reduced in a plurality of stages to generate the image data of the second layer 34, the first layer 32, and the 0th layer 30.
- the resolution of the Nth layer (N is an integer greater than or equal to 0) may be 1/2 of the resolution of the (N + 1)th layer in both the left and right (X axis) direction and the up and down (Y axis) direction.
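- The hierarchical structure described above can be illustrated with the following minimal sketch, which is not part of the patent: it builds the tile pyramid with Pillow, halving the resolution at each stage and cutting each layer into 256 × 256 tile images. The helper name build_hierarchy and the default resampling filter are assumptions.

```python
from PIL import Image

TILE = 256  # tile size in pixels, as in the embodiment

def build_hierarchy(original: Image.Image):
    """Return a list of layers (layer 0 = lowest resolution, a single tile);
    each layer maps (tile_x, tile_y) -> a 256x256 tile image."""
    layers = [original]
    img = original
    # reduce the original in stages of 1/2 until it fits in one tile
    while img.width > TILE or img.height > TILE:
        img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
        layers.append(img)
    layers.reverse()
    tiled = []
    for img in layers:
        tiles = {}
        for ty in range(0, img.height, TILE):
            for tx in range(0, img.width, TILE):
                # crop pads the right/bottom edges with blank pixels if needed
                tiles[(tx // TILE, ty // TILE)] = img.crop((tx, ty, tx + TILE, ty + TILE))
        tiled.append(tiles)
    return tiled
```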
- the hierarchical data is held in the storage device in a compressed state in a predetermined compression format, and is read from the storage device and decoded before being displayed on the display.
- the image processing apparatus 10 has a decoding function corresponding to a plurality of types of compression formats, and can decode, for example, compressed data in the S3TC format, JPEG format, and JPEG2000 format.
- the compression processing may be performed in units of tile images, or may be performed in units of a plurality of tile images included in the same layer or a plurality of layers.
- the hierarchical structure of the hierarchical data is set with the horizontal direction as the X axis, the vertical direction as the Y axis, and the depth direction as the Z axis, thereby constructing a virtual three-dimensional space.
- When the image processing device 10 derives the change amount of the display image from the image change request signal supplied from the input device 20, it uses that change amount to derive the coordinates (frame coordinates) of the four corners of the frame in the virtual space.
- the frame coordinates in the virtual space are used for loading to a main memory and display image generation processing described later.
- the image processing apparatus 10 may derive information for specifying the hierarchy and texture coordinates (UV coordinates) in the hierarchy.
- the combination of the hierarchy specifying information and the texture coordinates is also referred to as frame coordinates.
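- As a minimal sketch, under assumptions not stated in the patent (the display frame is tracked as a normalized UV rectangle plus a scale value standing in for the Z position, and zoom is applied multiplicatively about the center), an image change request could be converted into new frame coordinates and a target hierarchy as follows:

```python
import math
from dataclasses import dataclass

@dataclass
class Frame:
    scale: float  # 1.0 ~ layer 0 resolution, 2.0 ~ layer 1, 4.0 ~ layer 2, ...
    u0: float     # upper-left corner of the display area (normalized)
    v0: float
    u1: float     # lower-right corner
    v1: float

def apply_change(frame: Frame, dx: float, dy: float, zoom: float) -> Frame:
    """dx, dy: scroll amounts in normalized units; zoom > 1 requests enlargement."""
    w, h = frame.u1 - frame.u0, frame.v1 - frame.v0
    cu, cv = frame.u0 + w / 2 + dx, frame.v0 + h / 2 + dy  # move the center
    w, h = w / zoom, h / zoom                              # shrink the window to zoom in
    return Frame(frame.scale * zoom, cu - w / 2, cv - h / 2, cu + w / 2, cv + h / 2)

def layer_for(frame: Frame, num_layers: int) -> int:
    """Pick the hierarchy whose resolution best matches the requested scale."""
    return min(num_layers - 1, max(0, round(math.log2(max(frame.scale, 1.0)))))
```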
- FIG. 4 shows the configuration of the image processing apparatus 10.
- the image processing apparatus 10 includes a wireless interface 40, a switch 42, a display processing unit 44, a hard disk drive 50, a recording medium mounting unit 52, a disk drive 54, a main memory 60, a buffer memory 70, and a control unit 100.
- the display processing unit 44 has a frame memory that buffers data to be displayed on the display of the display device 12.
- the switch 42 is an Ethernet switch (Ethernet is a registered trademark), and is a device that transmits and receives data by connecting to an external device in a wired or wireless manner.
- the switch 42 may be connected to an external network via the cable 14 and receive the compressed image data layered from the image server.
- the switch 42 is connected to the wireless interface 40, and the wireless interface 40 is connected to the input device 20 using a predetermined wireless communication protocol.
- An image change request signal input from the user in the input device 20 is supplied to the control unit 100 via the wireless interface 40 and the switch 42.
- the hard disk drive 50 functions as a storage device that stores data.
- the compressed image data received via the switch 42 is stored in the hard disk drive 50.
- the recording medium mounting unit 52 reads data from the removable recording medium.
- the disk drive 54 drives and recognizes the ROM disk to read data.
- the ROM disk may be an optical disk or a magneto-optical disk.
- the compressed image data may be stored in these recording media.
- the control unit 100 includes a multi-core CPU, and has one general-purpose processor core and a plurality of simple processor cores in one CPU.
- the general-purpose processor core is called PPU (Power Processing Unit), and the remaining processor cores are called SPU (Synergistic-Processing Unit).
- the control unit 100 includes a memory controller connected to the main memory 60 and the buffer memory 70.
- the PPU has a register, has a main processor as an operation execution subject, and efficiently assigns a task as a basic processing unit in an application to be executed to each SPU. Note that the PPU itself may execute the task.
- the SPU has a register, and includes a sub-processor as an operation execution subject and a local memory as a local storage area. The local memory may be used as the buffer memory 70.
- the main memory 60 and the buffer memory 70 are storage devices and are configured as a RAM (Random Access Memory).
- The SPU has a dedicated DMA (Direct Memory Access) controller as a control unit, and can transfer data at high speed between the main memory 60 and the buffer memory 70, and between the buffer memory 70 and the frame memory in the display processing unit 44.
- the control unit 100 according to the present embodiment realizes a high-speed image processing function by operating a plurality of SPUs in parallel.
- the display processing unit 44 is connected to the display device 12 and outputs an image processing result according to a request from the user.
- In order to change the display image smoothly when performing display image enlargement/reduction processing or display area movement processing, the image processing apparatus 10 loads a part of the compressed image data, determined according to a rule described later, from the hard disk drive 50 into the main memory 60. Further, a part of the compressed image data loaded in the main memory 60 is decoded and stored in the buffer memory 70. This makes it possible to instantaneously switch the image used for generating the display image at a later timing when it becomes necessary.
- FIG. 5 schematically shows the flow of image data in the present embodiment.
- the hierarchical data is stored in the hard disk drive 50.
- Alternatively, the hierarchical data may be held on a recording medium mounted on the recording medium mounting unit 52 or the disk drive 54, or may be downloaded from an image server to which the image processing apparatus 10 is connected via a network.
- the hierarchical data here is subjected to fixed length compression in the S3TC format or the like, or variable length compression in the JPEG format or the like.
- a part of the image data is loaded into the main memory 60 in a compressed state (S10).
- The area to be loaded here is determined by a predetermined rule, for example the vicinity of the current display image in the virtual space, other areas of the image, or areas for which display requests are predicted to be made frequently based on the user's browsing history, and so on.
- the loading is performed not only when an image change request is made, but also at any given time interval, for example. This prevents the load process from being concentrated at one time.
- The compressed image data is loaded in block units of substantially constant size. For this purpose, the hierarchical data held by the hard disk drive 50 is divided into blocks according to a predetermined rule, so that data management in the main memory 60 can be performed efficiently. That is, even if the compressed image data is variable-length compressed, the data size to be loaded is almost equal in units of blocks (hereinafter referred to as "image blocks"), so a new load can be completed simply by overwriting one of the stored blocks. As a result, fragmentation is unlikely to occur, the memory can be used efficiently, and address management is facilitated.
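- A minimal sketch of this kind of fixed-size block management is shown below; the class name BlockCache and the round-robin replacement policy are illustrative assumptions, since the embodiment only requires that a newly loaded block can overwrite one of the stored blocks:

```python
class BlockCache:
    """Main-memory area for loaded image blocks, managed as equal-sized slots."""
    def __init__(self, num_slots: int, slot_size: int):
        self.slot_size = slot_size
        self.slots = [None] * num_slots   # each slot holds (block_id, bytes) or None
        self.index = {}                   # block_id -> slot number
        self.next_victim = 0              # simple round-robin replacement

    def contains(self, block_id) -> bool:
        return block_id in self.index

    def store(self, block_id, data: bytes):
        assert len(data) <= self.slot_size, "image block exceeds the basic block size"
        try:
            slot = self.slots.index(None)             # reuse an empty slot if any
        except ValueError:
            slot = self.next_victim                   # otherwise overwrite in place
            self.next_victim = (self.next_victim + 1) % len(self.slots)
            del self.index[self.slots[slot][0]]
        self.slots[slot] = (block_id, data)
        self.index[block_id] = slot
```

- Because every slot has the same size, replacing a block never fragments the memory area, and the address of a block is simply the slot number multiplied by the slot size.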
- the buffer memory 70 includes at least two buffer areas 72 and 74.
- the size of each buffer area 72, 74 is set larger than the size of the frame memory 90.
- One of the buffer areas 72 and 74 is used for holding an image used for generating a display image, and the other is used for preparing an image predicted to be necessary thereafter.
- the former is referred to as “display buffer”, and the latter is referred to as “decoding buffer”.
- In the example of FIG. 5, the buffer area 72 is the display buffer, the buffer area 74 is the decoding buffer, and the display area 68 is the area currently being displayed.
- the image stored in the decoding buffer by the prefetching process may be an image in the same hierarchy as the image stored in the display buffer, or may be an image in a different hierarchy having a different scale.
- the image in the display area 68 among the images stored in the buffer area 72, which is a display buffer, is drawn in the frame memory 90 (S14).
- the image in the new area is decoded as necessary and stored in the buffer area 74.
- the display buffer and the decoding buffer are switched according to the timing of completion of storage, the change amount of the display area 68, and the like (S16). Thereby, a display image can be smoothly switched with respect to a movement of a display area, a change of a scale ratio, or the like.
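- The roles of the two buffer areas can be sketched as follows; the swap condition and the covers() helper are assumptions, since the embodiment only states that the buffers are switched according to the timing of storage completion, the change amount of the display area 68, and so on:

```python
class DoubleBuffer:
    """One decoded image is used for drawing ("display buffer"), the other is
    filled ahead of time by prefetch decoding ("decoding buffer")."""
    def __init__(self):
        self.display = None    # decoded image currently used for drawing
        self.decoding = None   # decoded image prepared in advance

    def store_prefetched(self, decoded_image):
        self.decoding = decoded_image

    def swap_if_needed(self, required_region, covers):
        """covers(image, region) is an assumed helper that tests whether a
        decoded image contains the region needed for the next display image."""
        if self.display is None or (
            not covers(self.display, required_region)
            and self.decoding is not None
            and covers(self.decoding, required_region)
        ):
            self.display, self.decoding = self.decoding, self.display
```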
- FIG. 6 shows the configuration of the control unit 100 in detail.
- The control unit 100 includes an input information acquisition unit 102 that acquires information input by the user from the input device 20, a compressed data division unit 104 that divides hierarchical data into image blocks, a load block determination unit 106 that determines an image block to be newly loaded, and a load unit 108 that loads the necessary image blocks from the hard disk drive 50.
- the control unit 100 further includes a display image processing unit 114 that draws a display image, a prefetch processing unit 110 that performs prefetch processing, and a decoding unit 112 that decodes compressed image data.
- In terms of hardware, each element described here as a functional block that performs various processes can be configured by a CPU (Central Processing Unit), memory, and other LSIs; in terms of software, it is realized by a loaded program.
- the control unit 100 includes one PPU and a plurality of SPUs, and each functional block can be configured by the PPU and the SPU individually or in cooperation. Therefore, it is understood by those skilled in the art that these functional blocks can be realized in various forms by hardware only, software only, or a combination thereof, and is not limited to any one.
- the input information acquisition unit 102 acquires instruction contents such as start / end of image display, movement of the display area, enlargement / reduction of the display image, etc. input by the user to the input device 20.
- the compressed data dividing unit 104 reads the hierarchical data from the hard disk drive 50, etc., generates an image block by dividing it according to a predetermined rule described later, and stores it in the hard disk drive 50. For example, when the user selects any one of the hierarchical data stored in the hard disk drive 50 with respect to the input device 20, the information is acquired from the input information acquisition unit 102, and the division process is started.
- the compressed data dividing unit 104 may not be in the same apparatus as the other functions of the control unit 100, and may be divided at the stage of generating hierarchical data. Although a specific division method will be described in detail later, the block division method performed by the compressed data division unit 104 may be different depending on the hardware performance such as the speed of the hard disk drive 50 and the capacity of the main memory 60. Therefore, the compressed data dividing unit 104 is set in advance so as to perform optimal block division according to the hardware performance of the image processing apparatus 10.
- the load block determination unit 106 confirms whether there is an image block to be newly loaded from the hard disk drive 50 to the main memory 60, determines the next image block to be loaded, and issues a load request to the load unit 108.
- The load block determination unit 106 performs the above confirmation and determination processing at appropriate times, for example at predetermined time intervals while the load unit 108 is not performing a load process, or when the user makes an image change request.
- the load unit 108 performs actual load processing in accordance with a request from the load block determination unit 106.
- If the image block including the image area after the change is not stored in the main memory 60 when an image change request is made, it becomes necessary to load the image block from the hard disk drive 50, decode the necessary area, and draw the display image all at once. In this case, the load process can become a bottleneck and the responsiveness to the user's request may be impaired.
- Therefore, loading is performed under the policies that (1) image blocks are loaded so as to cover regions that are likely to be displayed in the future, and (2) loading is performed a little at a time, as needed, so that the load process is not concentrated at one point in time. As a result, the load process is less likely to interfere with the display image changing process.
- the procedure for determining the image block to be loaded will be described in detail later.
- The prefetch processing unit 110 predicts an image area that will be required for drawing the display image in the future from the frame coordinates of the current display image and the display image change request information input by the user, and supplies that information to the decoding unit 112. However, immediately after the start of image display, or when the image after the change cannot be drawn from the images already stored in the buffer memory 70, information on a predetermined area including the image necessary for drawing the display image at that time is supplied to the decoding unit 112 without prediction. Based on the image area information acquired from the prefetch processing unit 110, the decoding unit 112 reads out and decodes a part of the compressed image data from the main memory 60, and stores the decoded data in the decoding buffer or the display buffer.
- The display image processing unit 114 determines the frame coordinates of the new display image according to the display image change request input by the user, reads the corresponding image data from the display buffer of the buffer memory 70, and draws it in the frame memory 90 of the display processing unit 44.
- FIG. 7 shows the configuration of the compressed data dividing unit 104 in detail.
- the compressed data dividing unit 104 includes an identification number assigning unit 120 and an image block generating unit 122.
- the identification number assigning unit 120 assigns identification numbers in order from 0 to the tile images of each layer constituting the hierarchical data in a predetermined order.
- the image block generation unit 122 collects tile images in order of identification numbers until immediately before the total data size exceeds a predetermined size, and generates an image block.
- FIG. 8 schematically shows an image of each layer in the layer data.
- the hierarchical data is composed of images of the 0th hierarchy 30a, the first hierarchy 32a, the second hierarchy 34a, and the third hierarchy 36a.
- a tile image is one of the sections separated by a solid line in the images of each layer.
- the identification number assigning unit 120 assigns an identification number to each tile image as shown in FIG.
- the image of the 0th hierarchy 30a is composed of one tile image, and the identification number is “0”.
- The tile images of the first hierarchy 32a, the second hierarchy 34a, and the third hierarchy 36a are assigned identification numbers "1" and "2", "3" to "8", and "9" to "44", respectively.
- the order of assigning the identification numbers is shown as a raster order, but other orders may be used as will be described later.
- FIG. 9 schematically shows how the image block generator 122 collects the hierarchical data of FIG. 8 into image blocks.
- The image block generation unit 122 groups the tile images in ascending order of identification number so that each image block is composed of the maximum number of tile images whose total does not exceed a predetermined data size. The "predetermined data size" here is shown as the "basic block size" indicated by the range of the arrows. In the example shown, tile images with identification numbers "0" to "5" are grouped into image block 2, tile images "6" to "8" into image block 4, and so on, with the tile images "41" to "44" forming the final image block 6.
- Each image block is identified by the identification number of the leading tile image and the number of tile images included. Therefore, the image block 2 has identification information “(0, 6)”, the image block 4 has identification information “(6, 3)”, and the image block 6 has “(41, 4)”.
- When the identification information is defined in this way, it can easily be determined whether or not a certain tile image is included in a certain image block; that is, regardless of the block division method, the tile images contained in an image block can be specified simply by checking the range of identification numbers.
- The identification information of each image block is stored in the hard disk drive 50 in association with information on the storage area of the corresponding compressed image data in the hard disk drive 50. If the compressed image data is divided into image blocks of approximately the same size in this way, then, as described above, even when loaded image blocks are stored in a continuous area of the main memory 60, one image block can later be overwritten by a newly loaded image block; the occurrence of fragmentation is suppressed and the main memory 60 can be used efficiently.
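- A minimal sketch of this grouping and of the membership test by identification-number range is shown below; the helper names are hypothetical and not taken from the patent:

```python
def build_image_blocks(compressed_tile_sizes, basic_block_size):
    """compressed_tile_sizes[i] is the compressed byte size of the tile whose
    identification number is i. Returns a list of (first_id, count) pairs, each
    holding the largest run of tiles that stays within the basic block size."""
    blocks, first_id, total = [], 0, 0
    for tile_id, size in enumerate(compressed_tile_sizes):
        if total > 0 and total + size > basic_block_size:
            blocks.append((first_id, tile_id - first_id))  # close the current block
            first_id, total = tile_id, 0
        total += size
    blocks.append((first_id, len(compressed_tile_sizes) - first_id))
    return blocks

def block_contains(block, tile_id):
    """A tile belongs to a block iff its id lies in the block's id range."""
    first_id, count = block
    return first_id <= tile_id < first_id + count
```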
- FIG. 10 is an example of an original image for explaining the block division method in the present embodiment. Although FIGS. 10 to 14 show a gray scale, it may actually be full color.
- The original image 200 has, for example, 92323 × 34630 pixels and is divided into 361 × 136 tile images.
- FIGS. 11 to 14 show how the original image 200 in FIG. 10 is divided when identification numbers are assigned in various orders, with the basic block size set to 1 Mbyte. In each figure, the image blocks are shaded with different gray levels so that their boundaries can be seen easily.
- FIG. 11 shows a state of division when identification numbers are assigned to tile images in raster order as shown in FIG.
- the raster order divided image 202 is divided into image blocks in a form in which tile images are grouped in the horizontal direction.
- the width 206 shown in the enlarged image 204 is the length of the image block in the vertical direction.
- the length in the horizontal direction varies depending on the data size after compression of the tile image included in the image block, and one image block may span a plurality of tile images.
- FIG. 12 shows a state of division when identification numbers are assigned to tile images in the order of “Z”.
- The "Z-shaped" order is an order in which, as shown by the scanning order 214 in the identification number assignment example 212 in the figure, the scan proceeds alternately in the horizontal (X) direction and the vertical (Y) direction of the image. Identification numbers are assigned as indicated by the numbers in the figure, and when tile images are grouped into image blocks in this order, the image is divided as in the Z-shaped order divided image 208.
- one image block has a shape like an image block 216.
- the image block 216 in this case has a shape close to a square.
- the detailed shape and size depend on the data size after compression of the tile image included in each image block.
- Compared with the case of FIG. 11, the information contained in one image block has better spatial locality, so fewer image blocks are needed for generating the display image and the images in its vicinity. Furthermore, useless area information is less likely to be loaded, which improves the use efficiency of the main memory 60.
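- Assuming that the "Z-shaped" order corresponds to the Morton (Z-order) curve over the tile grid, which is consistent with the recursive 2 × 2 grouping noted for FIG. 13 below, the identification number of a tile can be computed by interleaving the bits of its coordinates:

```python
def z_order_id(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of the tile coordinates; tiles that are close in the
    image receive close identification numbers, giving good spatial locality."""
    n = 0
    for i in range(bits):
        n |= ((x >> i) & 1) << (2 * i)
        n |= ((y >> i) & 1) << (2 * i + 1)
    return n

# Example: the 2x2 group of tiles at (0,0), (1,0), (0,1), (1,1) receives the
# consecutive ids 0, 1, 2, 3 before the scan moves on to the next group.
assert [z_order_id(x, y) for y in (0, 1) for x in (0, 1)] == [0, 1, 2, 3]
```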
- FIG. 13 shows a state of division when identification numbers are assigned to tile images in the order of square macro tiles.
- the “macro tile” is a rectangle composed of a plurality of tile images, and the number of tile images included vertically and horizontally is set in advance.
- the example of FIG. 12 can be considered to recursively form macro tiles composed of 2 × 2 tile images.
- a square composed of 8 × 8 tile images is defined as one macro tile.
- identification numbers are assigned within the macro tile 222 in the direction of the arrow, that is, in raster order.
- an identification number is assigned to each tile image like the numbers described in the figure.
- When the identification numbers have been assigned to one macro tile, the same assignment is performed on all macro tiles in raster order. When tile images are grouped into image blocks in this order, the image is divided as in the square macro tile order divided image 218; that is, each image block in this case has a shape in which macro tiles are grouped in the horizontal direction. The vertical length of an image block is the length of one side of the macro tile 222 or an integral multiple thereof.
- the length in the horizontal direction varies depending on the data size after compression of the tile image included in each image block.
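- A minimal sketch of identification-number assignment in macro-tile order is given below; the function name is hypothetical, and the strip-shaped macro tile of FIG. 14 described next corresponds to setting the macro-tile height to the full height of the image:

```python
def macro_tile_ids(tiles_x: int, tiles_y: int, macro_w: int, macro_h: int):
    """Return a dict mapping (tile_x, tile_y) -> identification number.
    Tiles are numbered in raster order inside each macro tile, and the macro
    tiles themselves are visited in raster order."""
    ids, next_id = {}, 0
    for my in range(0, tiles_y, macro_h):            # macro-tile rows
        for mx in range(0, tiles_x, macro_w):        # macro tiles in raster order
            for y in range(my, min(my + macro_h, tiles_y)):
                for x in range(mx, min(mx + macro_w, tiles_x)):
                    ids[(x, y)] = next_id
                    next_id += 1
    return ids

# FIG. 13 corresponds to 8x8 macro tiles; the strip-shaped macro tiles of
# FIG. 14 would be macro_tile_ids(tiles_x, tiles_y, 16, tiles_y).
```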
- FIG. 14 shows a state of division when identification numbers are assigned to tile images in the order of strip-like macro tiles.
- the strip-shaped macro tile is a macro tile in which only the number of tile images included in the horizontal direction is set and the vertical direction is unlimited.
- the horizontal direction is composed of 16 tile images.
- identification numbers are assigned within the macro tile 230 in the direction of the arrow, that is, in raster order. Thereby, an identification number is given to each tile image like the number described in FIG.
- When the identification numbers have been assigned down to the lower end of the original image, the same assignment is performed in raster order on all the macro tiles, continuing with the macro tile located to the right. When tile images are grouped into image blocks in this order, the image is divided as in the strip-shaped macro tile order divided image 224.
- the image block in this case has a shape in which a vertical row of macro tiles is divided in the middle according to the compressed data size of the tile image. In some cases, it may span multiple rows of macro tiles.
- the width 228 of the macro tile or an integral multiple thereof is the width in the horizontal direction of the image block.
- the detailed shape of the boundary line at the image block delimiter also changes in accordance with the data size of the tile image after compression.
- As described above, the shape, size, and information contained in an image block vary greatly depending on the order in which the identification numbers are assigned and on the basic block size. Therefore, conditions that allow the most efficient data loading may be determined in advance according to the content or genre of the image, for example whether it is a landscape photograph or a text image such as a newspaper, and then selected according to the actual image. Further, as described above, the optimal method may be selected depending on the hardware configuration.
- The area covered by each image block configured as described above depends on the compressed data size of the tile images it contains. In the original image 200, the upper half of the image is sky with relatively uniform color, so there are few high-frequency components and the compression ratio is high.
- the color change is large near the center of the original image 200 because there are buildings, and the compression rate is low because there are many high frequency components. Accordingly, the compressed data size of the upper half tile image of the original image 200 tends to be smaller than the data size near the center.
- the area of the upper half image block tends to be larger than the image block near the center.
- the area of the image is not simply divided evenly, but is divided according to the data size.
- As described above, assigning identification numbers in the Z-shaped order makes the image blocks closer to squares and improves their spatial locality. When the identification numbers are assigned from starting points arranged in a matrix, the boundary lines of the image blocks form a lattice in which the horizontal and vertical lines are approximately orthogonal to each other.
- FIG. 15 schematically shows the relationship between the image block and the display area of the display image when the boundary lines of the image block are configured in a grid pattern.
- the image block 134a is in a matrix form of three horizontal and two vertical, and the boundary line is a grid.
- When the drawing area 136a is located so as to include an intersection of the lattice as shown in the figure, four image blocks are required for drawing.
- FIG. 16 schematically shows the relationship between the image block and the display area of the display image when the boundary of the image block is a T-junction shape.
- The image blocks 134b are arranged so that the vertical boundary lines are shifted from row to row, and as a result the boundaries form T-junctions. In this case, the maximum number of image blocks required for drawing is three, so the load efficiency is improved compared with the case of FIG. 15. To achieve this, the starting points for assigning the identification numbers may be shifted row by row rather than arranged in a matrix.
- The tile images of a plurality of layers may simply be compressed in the same manner and combined into one image block; alternatively, by exploiting the redundancy between images of the same content at different resolutions, one image may be restored using the other so that the compression ratio can be increased. Because the information included in one image block is always loaded as a set, such a method is possible. For example, a difference image between the actual high-resolution image and an image obtained by enlarging the low-resolution image to the magnification of the high-resolution image is compressed and included in the image block together with the compressed data of the low-resolution image. In this case, data is grouped into image blocks in the order of low-resolution image and difference image, and when the basic block size would be exceeded, the next image block is started.
- FIG. 17 is a diagram for explaining a technique for compressing a high-resolution image as a difference image from an enlarged image of a low-resolution image.
- This process is performed by the compressed data dividing unit 104, but may be performed separately from other processes related to image display at the stage of generating hierarchical data.
- the original images of the second hierarchy 34 and the third hierarchy 36 in the hierarchy data 139 are read out (S20, S22).
- the third layer image 36b is an image obtained by doubling the second layer image 34b vertically and horizontally.
- some of the tile images of the second layer image 34b are compressed as usual (S24).
- The tile images to be compressed from the second layer image 34b are the tile images of an area obtained by dividing the original image so as to have approximately the same data size, as in the division examples illustrated above.
- a predetermined number of tile images may be used, and the data size may be adjusted to be uniform at the stage of forming the image block as described above.
- compressed data 140 of the second layer image 34b and compressed data 142 of the difference image are generated.
- the compressed data is also represented by an image for easy understanding, and the compressed data 142 of the difference image is a shaded image to indicate that it is a difference.
- These compressed data are included in one image block 144.
- images of three or more layers may be included in the same manner. That is, the lowest resolution image is compressed as it is, and the high resolution image is expressed by a difference image from the image in the upper layer.
- a plurality of sets of compressed data having such dependency relationships may be included in the image block 144.
- The image block 144 generated in this way is stored in the hard disk drive 50, and the load unit 108 loads it into the main memory 60 as necessary. Thereafter, the data is decoded by the decoding unit 112 according to the determination of the prefetch processing unit 110 or the like. At this time, the compressed data 140 of the second layer image 34b is decoded by normal processing to become the second layer image 34b (S28). On the other hand, the compressed data 142 of the difference image is first decoded as usual (S30), and then the decoded second layer image 34b, enlarged by a factor of 2 vertically and horizontally, is added to it (S32, S34), so that the third layer image 36b is obtained.
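- The decode path of S28 to S34 can be sketched as follows; the use of 8-bit grayscale NumPy arrays, nearest-neighbour enlargement, and a signed difference image are assumptions, since the patent does not fix the enlargement filter or the pixel format:

```python
import numpy as np

def upscale2x(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x enlargement in both directions."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def reconstruct_high_res(low_res: np.ndarray, diff: np.ndarray) -> np.ndarray:
    """low_res: decoded lower-layer image (S28); diff: decoded difference image
    (S30) with the same shape as the high-resolution image (values may be
    negative). Returns the reconstructed high-resolution layer (S32, S34)."""
    enlarged = upscale2x(low_res).astype(np.int16)
    return np.clip(enlarged + diff.astype(np.int16), 0, 255).astype(np.uint8)

# The corresponding encode step stores diff = high_res - upscale2x(low_res)
# in the same image block as the compressed low-resolution tiles.
```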
- In the above description, the difference image between the enlarged image of the low-resolution image and the high-resolution image data is compressed; conversely, the low-resolution image may be created using the high-resolution image. For example, the high-resolution image may be wavelet-compressed, stored in the hard disk drive 50, and loaded into the main memory 60 as necessary, and the decoding unit 112 may then generate a low-pass image from the compressed high-resolution image and use it as the low-resolution image. Similarly, the high-resolution image may be JPEG-compressed and its high-frequency components cut to form a low-resolution image. In these cases, the difference image between the low-resolution image thus generated and the original low-resolution image may be compressed and included in the same image block, and the low-resolution image may be restored by adding the difference image in the same manner as described above. Alternatively, one pixel may be obtained from a 2 × 2 pixel group using a pyramid filter.
- FIG. 18 is a flowchart showing a processing procedure for determining an image block to be loaded and performing loading. This process is performed as needed at predetermined time intervals, for example.
- the load block determination unit 106 checks whether or not the load unit 108 is currently loading (S40). If it is being loaded, the process is terminated (Y in S40).
- the determination target region may be one tile image or may include a plurality of tile images. Such an area is called a “necessary area”.
- If there is a necessary area whose data is not stored in the main memory 60, the image block including that area is identified from the identification numbers of the tile images included in the area and the identification information of the image blocks, and is determined as a load target (S46). If it is necessary to load a plurality of image blocks at this time, an image block having a high priority is determined as the load target according to a predetermined rule; that is, many image blocks are not loaded at once in a single load process. In this way, the load block determination unit 106 determines the image blocks to be loaded as needed, and limits the number of image blocks loaded at one time to one or a predetermined number. The load unit 108 reads the image block to be loaded from the hard disk drive 50, based on a table in which the identification information of the image blocks is associated with their storage areas, and stores it in the main memory 60 (S48). If all the necessary areas are already stored in the main memory 60 in S44, the process is terminated (N in S44).
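- A minimal sketch of this periodic load decision is shown below; the helper objects loader, cache, derive_necessary_tiles, and priority are hypothetical stand-ins for the load unit 108, the contents of the main memory 60, the necessary-area derivation, and the priority rule:

```python
def load_step(loader, cache, blocks, derive_necessary_tiles, priority):
    """loader: object with .busy and .load(block); cache: loaded-block lookup
    with .contains(block); blocks: list of (first_tile_id, tile_count);
    priority: function ranking candidate blocks (higher value = load first)."""
    if loader.busy:                         # S40: a load is already in progress
        return
    needed = derive_necessary_tiles()       # S42: tiles around the display area
    missing = []
    for tile_id in needed:                  # S44: is each tile's block in memory?
        for block in blocks:
            first_id, count = block
            if first_id <= tile_id < first_id + count and not cache.contains(block):
                missing.append(block)
    if not missing:                         # N in S44: nothing needs loading
        return
    target = max(missing, key=priority)     # S46: pick only one high-priority block
    loader.load(target)                     # S48: read from the HDD into main memory
```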
- FIG. 19 is a conceptual diagram of determining the necessary area, which is the target of the check in S42 of whether its data is stored in the main memory 60. Basically, it is desirable that compressed data of the image around the currently displayed area be loaded in the main memory 60.
- the “periphery” may include the periphery in the vertical and horizontal (X-axis, Y-axis) directions and the depth (Z-axis) direction of the hierarchical structure in an image of the same hierarchy.
- the periphery in the Z-axis direction means a display image and an enlarged image and a reduced image including the vicinity thereof.
- FIG. 19 shows a part of images of the (n ⁇ 1) th layer, the nth layer, and the (n + 1) th layer from the top, and a tile image is shown in each image.
- the image shown in each layer displays the same part on the center line 154. If the region 150 near the center of the n-th layer image is the currently displayed region, the necessary region is, for example, a region including black circles 152a, 152b, and 152c.
- For an image in the same hierarchy, tile images that include the center of the display image region 150 (the point on the center line 154) and its four corner points, as well as the points on the sides and the four corners of a rectangle obtained by extending the region 150 vertically and horizontally, are set as the necessary area. For the images of the other layers, tile images that include the point on the center line 154 and the four corners of a rectangle of a predetermined size centered on the center line 154 are set as the necessary area. The rectangle in this case may have a size corresponding to the size of the rectangle of the display image. If image blocks including such points are always loaded into the main memory 60 even as the display area moves, decoding and drawing can be performed smoothly, and the responsiveness to an image change request by the user is improved.
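- The sample points described above can be generated as in the following sketch; the expansion margin and the mapping from points to tiles are assumptions consistent with the description:

```python
def necessary_tiles(x0, y0, x1, y1, expand, tile_size):
    """(x0, y0)-(x1, y1): display rectangle in layer pixels; expand: margin
    added on every side. Returns the set of (tile_x, tile_y) indices whose
    tiles contain the sample points (the necessary area)."""
    def rect_points(a0, b0, a1, b1):
        cx, cy = (a0 + a1) / 2, (b0 + b1) / 2
        return [(cx, cy), (a0, b0), (a1, b0), (a0, b1), (a1, b1)]
    points = rect_points(x0, y0, x1, y1)                      # center + corners
    points += rect_points(x0 - expand, y0 - expand,           # extended rectangle
                          x1 + expand, y1 + expand)
    return {(int(px // tile_size), int(py // tile_size)) for px, py in points}
```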
- the points shown in FIG. 19 are merely examples, and the number may be increased or decreased or the arrangement shape may be changed depending on the content of the image, the processing performance of the image processing apparatus 10, the capacity of the main memory 60, and the like.
- Areas that are important in the content of the image, areas that have been displayed with high frequency in the past, and areas that appear to match the user's preferences based on that user's display history may be loaded preferentially, regardless of changes in the displayed image. An "important area" is, for example, the vicinity of a face in an image of a person, or a featured product in an electronic flyer. An area matching the user's preferences is, for example, a column that the user often reads if the image is a newspaper page. Such areas are loaded with a high priority even when a plurality of image blocks need to be loaded in S46 of FIG. 18, or they may be loaded when image display is started.
- the following rules may be prepared.
- For example, priority is given to image blocks in other layers, that is, to the enlarged and reduced images, over image blocks in the same layer as the display image. This is because movement of the display area within the same hierarchy is likely to be covered by the decoded images stored in the two buffer areas 72 and 74 of the buffer memory 70, whereas movement between hierarchies is likely to require updating all the decoded images in the buffer memory 70. However, the images in the buffer memory 70 may be sufficient when simply moving back and forth between two layers.
- FIG. 20 is a diagram for explaining prefetch processing performed by the prefetch processing unit 110.
- FIG. 20 shows the structure of hierarchical data, and each hierarchy is expressed as L0 (0th hierarchy), L1 (1st hierarchy), L2 (2nd hierarchy), and L3 (3rd hierarchy).
- An arrow 80 indicates that an image change request from the user requests reduction of the display image and straddles L2.
- In the depth direction, the positions of L1 and L2, where original image data exists, are set as prefetch boundaries, and the prefetch process is started when the image change request from the input device 20 crosses a prefetch boundary.
- The display image is created using the image of L2 (second layer); that is, the L2 image is reduced or enlarged to the requested scale to generate the display image. Therefore, when image reduction processing is requested as indicated by the arrow 80, the display switches from an enlarged L2 image to a reduced L2 image.
- the image processing apparatus 10 identifies a future necessary image predicted from the image change request, reads out the image from the main memory 60, and decodes it. In the example of FIG. 20, when the requested scale ratio due to the image change request crosses L2, the image of L1 in the reduction direction is pre-read from the main memory 60, decoded, and written to the buffer memory 70.
- a prefetch boundary is set for the image data stored in the buffer memory 70, and the prefetch process is started when the display position by the image change request crosses the prefetch boundary.
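- A minimal sketch of the depth-direction prefetch decision is shown below, under the assumption that the position in the depth direction is tracked as a continuous layer index and that each layer holding original image data acts as a prefetch boundary:

```python
def depth_prefetch_target(current_depth: float, requested_depth: float,
                          layer_depths=(0, 1, 2, 3)):
    """Return the layer to prefetch into the buffer memory, or None if no
    prefetch boundary is crossed by the requested change."""
    for boundary in layer_depths:
        crossed_down = current_depth > boundary >= requested_depth  # reduction
        crossed_up = current_depth < boundary <= requested_depth    # enlargement
        if crossed_down or crossed_up:
            # prefetch the layer on the far side of the boundary in the direction
            # of movement (e.g. L1 when a reduction request crosses L2)
            if crossed_down:
                return max(0, boundary - 1)
            return min(len(layer_depths) - 1, boundary + 1)
    return None
```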
- FIG. 21 is a flowchart showing a processing procedure when the prefetch processing unit 110 and the decoding unit 112 decode an image.
- First, the required change amount of the display image is derived from the image change request signal (S50).
- the change amount of the display image is the amount of movement in the vertical and horizontal directions and the amount of movement in the depth direction.
- the frame coordinates of the four corners of the display area of the movement destination are determined from the frame coordinates of the four corners of the display area in the hierarchical structure based on the derived change amount (S52).
- the prefetch processing unit 110 instructs the decoding unit 112 to decode a necessary image area.
- the decoding unit 112 acquires the data of the designated image area from the main memory 60, decodes it, and stores it in the buffer area 72 or the buffer area 74 (S60). Accordingly, a necessary image area can be developed in the buffer memory 70 before the display image processing unit 114 generates the display image.
- The image to be prepared next in the decoding buffer is, for example, when the display area crosses the prefetch boundary while moving within the same hierarchy, an image that contains the display area at its edge on the starting-point side with respect to the direction of movement.
- the images stored in the buffer area 72 and the buffer area 74 have at least an overlap area corresponding to the size of the display image, and the overlap area further increases depending on the setting position of the prefetch boundary.
- The range of the image to be newly decoded when a prefetch boundary is exceeded is set in advance according to the processing speed or the like, or it may be changed according to the content of the image.
- When the decoding buffer is used as described above, a newly stored image and a part of the image previously stored in the decoding buffer may overlap. Utilizing this property, in the present embodiment the area to be newly decoded is reduced by the following process, thereby reducing the load of the decoding process.
- FIG. 22 shows a functional block of the decoding unit 112 when the image to be newly stored in the decoding buffer of the buffer memory 70 has an overlapping portion with the previously stored image.
- In this case, the decoding unit 112 includes an overlapping region acquisition unit 170 that identifies the overlapping region, a partial region decoding unit 172 that newly decodes the non-overlapping region and overwrites a part of the previously stored image with it, a repeat image generation unit 174 that generates an image in which the overwritten image is repeatedly arranged, and a decoded image storage unit 176 that extracts a part of the arranged image and stores it in the decoding buffer as the final decoded image.
- FIG. 23 schematically shows a procedure in which the decoding unit 112 in FIG. 22 stores an image in the decoding buffer.
- the image already stored in the decoding buffer at this time is the image 160 in the upper left of the figure.
- the image to be newly stored is the image 162 in the upper right of the figure.
- The figure shows a situation in which the display area has moved to the left, so that the "star" figure on the left, rather than only the original "circle" and "triangle" figures, is expected to be displayed.
- the image 160 and the image 162 have overlapping areas.
- the region from x1 to x2 overlaps with the image 160 in the image 162 composed of regions from x0 to x2 in the horizontal direction (X axis).
- the overlapping area from x1 to x2 is used as it is as a part of the image to be newly stored.
- The partial region decoding unit 172 decodes only the region from x0 to x1 of the new image 162 that does not overlap with the image 160, and overwrites with it the region from x2 to x3 of the stored image 160 that is no longer necessary (S70).
- For this purpose, the buffer areas 72 and 74 of the buffer memory 70 are each provided with an area for storing the lower-left coordinates of the currently stored image, and the overlapping region acquisition unit 170 identifies the overlapping region by comparing those coordinates with the lower-left coordinates of the region to be newly stored. If identifying the overlapping region before decoding would impose a load above a predetermined level, the entire new image 162 may instead be decoded first and only its non-overlapping portion used for the overwrite.
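A minimal sketch of how such an overlap could be identified from the lower-left coordinates kept alongside each buffer area. The function name, the rectangle representation, and the equal-size assumption are illustrative; the embodiment only specifies that the stored coordinates are compared.

```python
def find_overlap(stored_origin, new_origin, size):
    """stored_origin / new_origin: lower-left (x, y) of the stored and the new image,
    both assumed to have identical size (w, h). Returns the overlap rectangle
    as (x, y, w, h), or None if the images do not overlap."""
    (sx, sy), (nx, ny), (w, h) = stored_origin, new_origin, size
    left, bottom = max(sx, nx), max(sy, ny)
    right, top = min(sx + w, nx + w), min(sy + h, ny + h)
    if right <= left or top <= bottom:
        return None          # no overlap: the whole new image must be decoded
    return (left, bottom, right - left, top - bottom)

# Example corresponding to FIG. 23: the stored image starts at x1 = 8,
# the new image starts at x0 = 2, and both are 10 pixels wide.
print(find_overlap((8, 0), (2, 0), (10, 6)))   # -> (8, 0, 4, 6), i.e. x1 to x2
```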
- The repeat image generation unit 174 then temporarily creates a repeat image 166 by repeatedly arranging the intermediate image 164 formed in this way in the horizontal direction (S72).
- An image in which a unit image is repeated in the vertical and horizontal directions can be generated by techniques commonly used in image processing.
- Taking the boundary of the intermediate image in the repeat image 166 as coordinate 0, the decoded image storage unit 176 extracts the area from −(x1−x0) to x2−x1 and stores it in the decoding buffer as the new content (S74).
- As a result, the image 162 to be newly stored is held in the decoding buffer.
- In the figure, the repeat image 166 is obtained by repeating the intermediate image 164 twice each in the vertical and horizontal directions.
- Alternatively, the image stored in the decoding buffer may be the intermediate image 164 itself.
- In that case, the display image processing unit 114 reads the intermediate image 164 from the decoding buffer in response to an instruction input by the user, performs the processing of S72 and S74 described above, and draws the image 162 to be newly stored into the frame memory 90 of the display processing unit 44.
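The sketch below illustrates, under assumed names and for the one-dimensional horizontal case only, why steps S70 to S74 recover the new image: after the stale strip is overwritten, reading the intermediate image with wrap-around (equivalently, extracting the window from −(x1−x0) to x2−x1 out of the repeat image) yields the image 162 in the correct order. The use of NumPy and `np.roll` is purely illustrative.

```python
import numpy as np

def update_buffer(stored, new_strip):
    """stored: buffer of width W holding the columns x1..x3 of the old image.
    new_strip: newly decoded, non-overlapping columns x0..x1 (width d = x1 - x0).
    Returns the buffer contents rearranged as the new image x0..x2."""
    d = new_strip.shape[1]
    intermediate = stored.copy()
    intermediate[:, -d:] = new_strip      # S70: overwrite the stale region x2..x3
    # S72 + S74: tiling the intermediate image and cutting out the window from
    # -(x1 - x0) to x2 - x1 is the same as a circular shift by d columns.
    return np.roll(intermediate, d, axis=1)

# Example with a 1-pixel-high, 6-column buffer.
old = np.array([[10, 11, 12, 13, 14, 15]])   # old columns x1..x3; 14, 15 are stale
strip = np.array([[8, 9]])                   # newly decoded columns x0..x1
print(update_buffer(old, strip))             # -> [[ 8  9 10 11 12 13]]
```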
- The above is the decoding method used in the prefetch process when the display area moves within the same layer.
- Next, consider the case where a request for enlargement or reduction is made without changing the displayed center point. If the change continues in one direction, either enlargement or reduction, a new image is stored in the decoding buffer each time a prefetch boundary is crossed; if, however, a request to return to the original scale is made before the two prefetch boundaries are crossed, it is not necessary to store a new image in the decoding buffer, and the stored image can be used as it is.
- To support this, the buffer areas 72 and 74 of the buffer memory 70 may further be provided with areas for storing the layer number of the currently stored image.
- When the prefetch boundary set in the depth direction of the hierarchy is crossed, if the layer to be stored in the decoding buffer is the same as the layer already stored, the stored image is left as it is and no decoding is performed. As a result, even when the display image is enlarged or reduced, the number of decoding operations can be kept to a minimum, reducing the processing load and latency.
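A minimal sketch of this layer check, with an assumed buffer record layout; the embodiment only specifies that the stored layer number is compared with the layer to be prefetched.

```python
def prefetch_on_scale_change(decode_buffer, target_layer, decode_fn):
    """decode_buffer: dict holding 'layer' and 'image' for what is currently stored.
    decode_fn: callable that decodes the requested layer; used only on a mismatch."""
    if decode_buffer.get("layer") == target_layer:
        return decode_buffer                           # same layer: reuse as it is
    decode_buffer["image"] = decode_fn(target_layer)   # layer changed: decode anew
    decode_buffer["layer"] = target_layer
    return decode_buffer
```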
- In the present embodiment, part of the compressed data is transferred in advance from the hard disk drive, which stores the compressed data, to the main memory.
- By decoding and displaying the data already loaded in the main memory, the time needed to read the necessary data from the hard disk drive when the user requests a change of the display image can be saved, improving responsiveness. Furthermore, by loading only part of the data, even an image larger than the capacity of the main memory can be made a display target, which relaxes restrictions on the images that can be handled.
- The image data is divided into blocks of approximately equal data size and stored in the hard disk drive, and loading into the main memory is performed in units of these blocks.
- As a result, the main memory can be used efficiently and address management becomes easy.
- In addition, the information held by each block is given spatial locality.
- Tile images are added to a block so that the covered area expands from the starting tile image equally in the horizontal and vertical directions, and the block is closed immediately before its data size would exceed the predetermined size, so that the block covers a region close to a square.
- Alternatively, tile images may be added in raster order so that the block covers a region close to a rectangle of a predetermined width. Either way, the number of blocks needed for display can be kept small, the number of load operations can be reduced, and the data needed for decoding can be read easily from the main memory.
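The sketch below shows, under assumed names, the size-limited grouping described above for the raster-order case: tiles are accumulated in order and a block is closed just before adding the next tile would exceed the limit, so that all blocks have approximately the same data size.

```python
def build_blocks(tile_sizes, max_block_bytes):
    """tile_sizes: compressed sizes of the tile images, already ordered
    (e.g. in raster order). Returns a list of blocks, each a list of tile indices."""
    blocks, current, current_bytes = [], [], 0
    for idx, size in enumerate(tile_sizes):
        if current and current_bytes + size > max_block_bytes:
            blocks.append(current)        # close the block just before overflowing
            current, current_bytes = [], 0
        current.append(idx)
        current_bytes += size
    if current:
        blocks.append(current)
    return blocks

# Example: 1024-byte limit, six tiles of varying compressed size.
print(build_blocks([300, 400, 500, 200, 250, 900], 1024))   # -> [[0, 1], [2, 3, 4], [5]]
```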
- The blocks may also be divided so that the boundaries between blocks form T-junctions.
- Loading of blocks into the main memory is performed as needed, at predetermined time intervals, even at times other than when the display image is changed.
- Specifically, points in regions peripheral to the current display area in terms of position and layer are determined according to a predetermined rule, and any unloaded block containing such a point is loaded as needed.
- In addition, areas that are important in the image content, and areas whose display probability, predicted for each user from the display history, is higher than a predetermined threshold, are loaded preferentially. This reduces the likelihood that data must be loaded from the hard disk drive, or downloaded over the network, immediately after the user requests a change of the display image, or that a large number of blocks must be loaded at once, and so the occurrence of latency due to the load process can be suppressed.
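A minimal sketch of such background loading, with an assumed scoring scheme: each candidate block carries a score combining proximity to the display area, preset importance, and the per-user display probability, and a few of the highest-scoring unloaded blocks are loaded per tick.

```python
import heapq

def background_load_tick(candidates, loaded, load_fn, budget=2, threshold=0.5):
    """candidates: iterable of (block_id, score); score is an assumed combination of
    proximity, preset importance, and predicted display probability.
    loaded: set of block ids already in main memory. load_fn: performs one load."""
    queue = [(-score, block_id) for block_id, score in candidates
             if block_id not in loaded and score >= threshold]
    heapq.heapify(queue)
    for _ in range(min(budget, len(queue))):
        _, block_id = heapq.heappop(queue)   # highest score first
        load_fn(block_id)
        loaded.add(block_id)
```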
- Furthermore, the portions of images in different layers that represent the same area may be included in one block.
- In that case, of the information needed to restore one image, information already held by the other image is not stored twice in the block. For example, if one block holds, in compressed form, a low-resolution image together with the difference image between the high-resolution image and an image obtained by enlarging the low-resolution image, the high-resolution image can be restored from them.
- By exploiting the redundancy between images in this way, the data compression ratio is improved and the main memory can be used effectively.
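The sketch below illustrates this layered storage; the 2x resolution step, the nearest-neighbour enlargement, and the helper names are assumptions made for the example.

```python
import numpy as np

def make_block(low, high):
    """Store the low-resolution image plus its difference to the enlarged version,
    instead of storing the high-resolution data twice."""
    upscaled = np.kron(low, np.ones((2, 2), dtype=low.dtype))     # enlarge 2x
    return {"low": low, "diff": high.astype(np.int16) - upscaled}

def restore_high(block):
    """high = enlarge(low) + diff."""
    upscaled = np.kron(block["low"], np.ones((2, 2), dtype=block["low"].dtype))
    return (upscaled.astype(np.int16) + block["diff"]).astype(np.uint8)

low = np.array([[100, 120], [140, 160]], dtype=np.uint8)
high = (np.arange(16, dtype=np.uint8).reshape(4, 4) + 100)
assert np.array_equal(restore_high(make_block(low, high)), high)
```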
- In addition, an area that is predicted to be displayed in the future is decoded in advance and stored in the decoding buffer.
- At that time, the area overlapping the previously stored image is used as it is.
- An intermediate image is generated by overwriting the non-overlapping area of the stored image with the newly decoded area, and the content to be stored is obtained by extracting the necessary portion from a repeat image formed by repeating the intermediate image.
- In this way, the new image can be stored with a minimum of decoding.
- The present invention is applicable to information processing apparatuses such as image processing apparatuses, image display apparatuses, computers, and game machines.
- 1 image processing system, 10 image processing device, 12 display device, 20 input device, 30 0th layer, 32 1st layer, 34 2nd layer, 36 3rd layer, 38 tile image, 44 display processing unit, 50 hard disk drive, 60 main memory, 70 buffer memory, 72 buffer area, 74 buffer area, 90 frame memory, 100 control unit, 102 input information acquisition unit, 104 compressed data division unit, 106 load block determination unit, 108 load unit, 110 prefetch processing unit, 112 decoding unit, 114 display image processing unit, 120 identification number assignment unit, 122 image block generation unit, 170 overlapping region acquisition unit, 172 partial region decoding unit, 174 repeat image generation unit, 176 decoded image storage unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Controls And Circuits For Display Device (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Processing Or Creating Images (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Abstract
Description
... system has been proposed. In this home-use entertainment system, a GPU generates three-dimensional images using polygons (see, for example, Patent Document 1).
Claims (27)
- An image processing device that displays at least part of an image on a display, comprising: a storage device that holds a plurality of image blocks obtained by dividing compressed data of an image to be processed according to a predetermined rule; a load unit that loads, from the storage device into a memory, an image block containing data of a required area determined by a predetermined rule according to the area of the image being displayed; and a display image processing unit that, in response to a user request to move, enlarge, or reduce the display area, reads at least part of the image block loaded by the load unit from the memory, decodes it, and generates a new display image.
- The image processing device according to claim 1, wherein the storage device holds image blocks obtained by dividing the compressed data of each of a plurality of images of different resolutions of the image to be processed.
- The image processing device according to claim 1 or 2, further comprising a compressed data division unit that divides the compressed data of the image to be processed to generate the image blocks and stores them in the storage device, wherein the compressed data division unit collects compressed data of tile images, each being the minimum unit of image compression, into one image block so as to reach the maximum data size that does not exceed a predetermined data size.
- The image processing device according to claim 3, wherein the compressed data division unit comprises: an identification number assignment unit that assigns identification numbers to the tile images constituting an image in raster order; and an image block generation unit that collects the compressed data of the tile images into one image block in the order of the identification numbers assigned by the identification number assignment unit.
- The image processing device according to claim 3, wherein the compressed data division unit comprises: an identification number assignment unit that assigns identification numbers to the tile images constituting an image so that the numbers are incremented alternately in the horizontal and vertical directions; and an image block generation unit that collects the compressed data of the tile images into one image block in the order of the identification numbers assigned by the identification number assignment unit.
- The image processing device according to claim 3, wherein the compressed data division unit comprises: an identification number assignment unit that assigns identification numbers in raster order to macrotiles obtained by partitioning the tile images constituting an image at predetermined intervals, and in raster order to the tile images constituting each macrotile; and an image block generation unit that collects the compressed data of the tile images into one image block in the order of the identification numbers assigned by the identification number assignment unit.
- The image processing device according to claim 3, wherein the compressed data division unit generates the image blocks so that at least part of the boundary lines between the image blocks on the image forms a T shape.
- The image processing device according to claim 2, wherein the load unit sets, as the required area, at least one of an area within a predetermined range from the area being displayed and a predetermined area, containing the area being displayed, within an image obtained by enlarging or reducing the image being displayed at a predetermined scale factor.
- The image processing device according to claim 1 or 2, further comprising a load block determination unit that determines, at predetermined time intervals, whether all image blocks containing data of the required area are stored in the memory, and requests the load unit to load any image block not yet stored, wherein the load unit loads the image blocks in accordance with the request from the load block determination unit.
- The image processing device according to claim 1 or 2, wherein the load unit further loads, in addition to the required area, image blocks containing data of a preset area within the image and of an area whose display probability, estimated from the user's display history, is higher than a predetermined threshold.
- The image processing device according to claim 1 or 2, wherein the storage device holds, within one image block, compressed data of images of a plurality of resolutions representing a common area of the image to be processed.
- The image processing device according to claim 1, wherein the storage device holds, within one image block, compressed data of the low-resolution image of two images of different resolutions representing a common area, and of a difference image between the high-resolution image and an image obtained by enlarging the low-resolution image, and the display image processing unit decodes the high-resolution image by decoding the low-resolution image and the difference image and adding them together.
- The image processing device according to claim 1, wherein the display image processing unit comprises: a decoding unit that reads at least part of the image block from the memory and decodes it; a buffer memory that stores the image decoded by the decoding unit; and a drawing unit that reads at least part of the image stored in the buffer memory and renders the area to be displayed, and the decoding unit comprises: an overlapping region acquisition unit that, when a new image is stored in the buffer memory, identifies the overlapping region between the previously stored image and the new image; a partial region decoding unit that decodes data of a region of the new image including the partial region excluding the overlapping region; and a decoded image storage unit that joins the overlapping region of the previously stored image with the partial region decoded by the partial region decoding unit and stores the result in the buffer memory.
- An image processing method for displaying at least part of an image on a display, comprising the steps of: generating a plurality of image blocks by dividing compressed data of an image to be processed according to a predetermined rule, and storing them in a storage device; loading, from the storage device into a memory, an image block containing data of a required area determined by a predetermined rule according to the area of the image being displayed; and, in response to a user request to move, enlarge, or reduce the display area, reading at least part of the loaded image block from the memory, decoding it, and generating a new display image.
- The image processing method according to claim 14, wherein the storing step collects compressed data of tile images, each being the minimum unit of image compression, into one image block so as to reach the maximum data size that does not exceed a predetermined data size.
- The image processing method according to claim 14 or 15, wherein the storing step includes, in one image block, compressed data of images of different resolutions representing the same area.
- A computer program for causing a computer to realize a function of displaying at least part of an image on a display, the program causing the computer to realize: a function of generating a plurality of image blocks by dividing compressed data of an image to be processed according to a predetermined rule, and storing them in a storage device; a function of loading, from the storage device into a memory, an image block containing data of a required area determined by a predetermined rule according to the area of the image being displayed; and a function of, in response to a user request to move, enlarge, or reduce the display area, reading at least part of the loaded image block from the memory, decoding it, and generating a new display image.
- A data structure of an image read from a storage device in order to display at least part of the image on a display, the data structure associating data of image blocks, each obtained by collecting compressed data of tile images, each tile image being the minimum unit of image compression, so as to reach the maximum data size that does not exceed a predetermined data size, with identification information of each image block.
- The image data structure according to claim 18, wherein identification numbers are assigned in raster order to the tile images constituting the image, and the compressed data of the tile images are collected into one image block in the order of the identification numbers.
- The image data structure according to claim 18, wherein identification numbers are assigned to the tile images constituting the image so that the numbers are incremented alternately in the horizontal and vertical directions, and the compressed data of the tile images are collected into one image block in the order of the identification numbers.
- The image data structure according to claim 18, wherein identification numbers are assigned in raster order to macrotiles obtained by partitioning the tile images constituting the image at predetermined intervals, and in raster order to the tile images constituting each macrotile, and the compressed data of the tile images are collected into one image block in the order of the identification numbers.
- An image processing device that displays an area within an image on a display in accordance with a user request, comprising: a decoding unit that reads compressed image data of a required area from a memory based on the request and decodes it; a buffer memory that stores the image decoded by the decoding unit; and a display image processing unit that reads at least part of the image stored in the buffer memory and renders the area to be displayed, wherein the decoding unit comprises: an overlapping region acquisition unit that, when a new image is stored in the buffer memory, identifies the overlapping region between the previously stored image and the new image; a partial region decoding unit that decodes compressed image data of a region of the new image including the partial region excluding the overlapping region; and a decoded image storage unit that joins the overlapping region of the previously stored image with the partial region decoded by the partial region decoding unit and stores the result in the buffer memory.
- The image processing device according to claim 22, wherein the partial region decoding unit further generates an intermediate image in which the region, other than the overlapping region, of the image previously stored in the buffer memory is overwritten with the newly decoded partial region, the image processing device further comprises a repeat image generation unit that generates a repeat image in which the intermediate image generated by the partial region decoding unit is repeatedly arranged, and the decoded image storage unit extracts, from the repeat image generated by the repeat image generation unit, the area consisting of the overlapping region and the partial region and stores it in the buffer memory.
- The image processing device according to claim 22 or 23, wherein the buffer memory includes a display buffer area that stores the image used to render the currently displayed area, and a decoding buffer area for newly decoding and storing the image that, as predicted based on the request, will be needed after the image stored in the display buffer area, and the decoding unit joins the overlapping region of the image stored in the decoding buffer area with the newly decoded partial region and stores the result in the decoding buffer area.
- An image processing method for displaying an area within an image on a display in accordance with a user request, comprising the steps of: when compressed image data of a required area is newly decoded based on the request and stored in a buffer memory, identifying the overlapping region between the previously stored image and the new image; reading compressed image data of a region of the new image including the partial region excluding the overlapping region from a main memory and decoding it; joining the overlapping region of the previously stored image with the newly decoded partial region and storing the result in the buffer memory; and reading at least part of the image stored in the buffer memory and rendering the area to be displayed.
- The image processing method according to claim 25, further comprising the step of generating an intermediate image in which the region, other than the overlapping region, of the image previously stored in the buffer memory is overwritten with the newly decoded partial region, wherein the step of storing in the buffer memory includes the steps of: generating a repeat image in which the intermediate image is repeatedly arranged; and extracting, from the repeat image, the area consisting of the overlapping region and the partial region and storing it in the buffer memory.
- A computer program for causing a computer to realize a function of displaying an area within an image on a display in accordance with a user request, the program causing the computer to realize: a function of, when compressed image data of a required area is newly decoded based on the request and stored in a buffer memory, identifying the overlapping region between the previously stored image and the new image; a function of reading compressed image data of a region of the new image including the partial region excluding the overlapping region from a main memory and decoding it; a function of joining the overlapping region of the previously stored image with the newly decoded partial region and storing the result in the buffer memory; and a function of reading at least part of the image stored in the buffer memory and rendering the area to be displayed.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020117009478A KR101401336B1 (ko) | 2008-09-30 | 2009-06-30 | 화상처리장치 및 화상처리방법 |
EP09817381.8A EP2330587B1 (en) | 2008-09-30 | 2009-06-30 | Image processing device and image processing method |
CN200980138067.9A CN102165515B (zh) | 2008-09-30 | 2009-06-30 | 图像处理装置以及图像处理方法 |
US13/120,785 US8878869B2 (en) | 2008-09-30 | 2009-06-30 | Image processing device and image processing method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-255655 | 2008-09-30 | ||
JP2008-255563 | 2008-09-30 | ||
JP2008255655A JP5331432B2 (ja) | 2008-09-30 | 2008-09-30 | 画像処理装置および画像処理方法 |
JP2008255563A JP4809412B2 (ja) | 2008-09-30 | 2008-09-30 | 画像処理装置および画像処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010038337A1 true WO2010038337A1 (ja) | 2010-04-08 |
Family
ID=42073126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/003043 WO2010038337A1 (ja) | 2008-09-30 | 2009-06-30 | 画像処理装置および画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8878869B2 (ja) |
EP (1) | EP2330587B1 (ja) |
KR (1) | KR101401336B1 (ja) |
CN (1) | CN102165515B (ja) |
WO (1) | WO2010038337A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102484674A (zh) * | 2010-07-09 | 2012-05-30 | 索尼公司 | 图像处理装置和方法 |
WO2013099076A1 (ja) * | 2011-12-27 | 2013-07-04 | 株式会社ソニー・コンピュータエンタテインメント | 動画圧縮装置、画像処理装置、動画圧縮方法、画像処理方法、および動画圧縮ファイルのデータ構造 |
WO2013121735A1 (en) * | 2012-02-16 | 2013-08-22 | Canon Kabushiki Kaisha | Image generating apparatus and method for controlling the same |
CN103703785A (zh) * | 2011-08-01 | 2014-04-02 | 索尼电脑娱乐公司 | 视频数据生成单元、图像显示设备、视频数据生成方法、视频图像显示方法、以及视频图像文件数据结构 |
CN104331213A (zh) * | 2014-08-04 | 2015-02-04 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5548671B2 (ja) * | 2011-12-27 | 2014-07-16 | 株式会社ソニー・コンピュータエンタテインメント | 画像処理システム、画像提供サーバ、情報処理装置、および画像処理方法 |
US9646564B2 (en) * | 2012-01-20 | 2017-05-09 | Canon Kabushiki Kaisha | Information processing apparatus that controls display of display sections of contents, method of controlling the same, and storage medium |
JP2014186196A (ja) | 2013-03-25 | 2014-10-02 | Toshiba Corp | 映像処理装置および映像表示システム |
US9971844B2 (en) * | 2014-01-30 | 2018-05-15 | Apple Inc. | Adaptive image loading |
US9563927B2 (en) * | 2014-03-25 | 2017-02-07 | Digimarc Corporation | Screen watermarking methods and arrangements |
US9813654B2 (en) | 2014-08-19 | 2017-11-07 | Sony Corporation | Method and system for transmitting data |
KR102155479B1 (ko) * | 2014-09-01 | 2020-09-14 | 삼성전자 주식회사 | 반도체 장치 |
EP3001385B1 (en) * | 2014-09-29 | 2019-05-01 | Agfa Healthcare | A system and method for rendering a video stream |
US10410398B2 (en) * | 2015-02-20 | 2019-09-10 | Qualcomm Incorporated | Systems and methods for reducing memory bandwidth using low quality tiles |
CN105578194B (zh) * | 2016-01-06 | 2018-12-25 | 珠海全志科技股份有限公司 | Jpeg图像解码方法和解码器 |
CN109886861B (zh) * | 2019-01-08 | 2023-04-11 | 北京城市网邻信息技术有限公司 | 一种高效率图档格式heif图像加载方法及装置 |
CN110519607B (zh) * | 2019-09-27 | 2022-05-20 | 腾讯科技(深圳)有限公司 | 视频解码方法及装置,视频编码方法及装置 |
CN111538867B (zh) * | 2020-04-15 | 2021-06-15 | 深圳计算科学研究院 | 一种有界增量图划分方法和系统 |
WO2022061723A1 (zh) * | 2020-09-25 | 2022-03-31 | 深圳市大疆创新科技有限公司 | 一种图像处理方法、设备、终端及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09114431A (ja) * | 1995-10-18 | 1997-05-02 | Sapiensu:Kk | 静止画像再生表示方法および静止画像再生表示装置 |
JPH1188866A (ja) * | 1997-07-18 | 1999-03-30 | Pfu Ltd | 高精細画像表示装置及びそのプログラム記憶媒体 |
US6563999B1 (en) | 1997-03-27 | 2003-05-13 | Sony Computer Entertainment, Inc. | Method and apparatus for information processing in which image data is displayed during loading of program data, and a computer readable medium and authoring system therefor |
JP2005092007A (ja) * | 2003-09-19 | 2005-04-07 | Ricoh Co Ltd | 画像処理システム、画像処理方法、プログラム及び情報記録媒体 |
JP2005181853A (ja) * | 2003-12-22 | 2005-07-07 | Sanyo Electric Co Ltd | 画像供給装置 |
JP2005202327A (ja) * | 2004-01-19 | 2005-07-28 | Canon Inc | 画像表示装置および画像表示方法 |
JP2006113801A (ja) * | 2004-10-14 | 2006-04-27 | Canon Inc | 画像処理結果表示装置、画像処理結果表示方法およびプログラム |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01188866A (ja) | 1988-01-25 | 1989-07-28 | Canon Inc | 画像形成装置 |
JPH0314431A (ja) | 1989-06-08 | 1991-01-23 | Nippon Matai Co Ltd | 化粧包装方法 |
JP4420415B2 (ja) * | 1998-07-03 | 2010-02-24 | キヤノン株式会社 | 符号化方法及び符号化装置 |
US6956667B2 (en) * | 1999-12-24 | 2005-10-18 | Agfa Gevaert N. V. | Page composing method using stored page elements and apparatus for using the same |
US7012576B2 (en) * | 1999-12-29 | 2006-03-14 | Intel Corporation | Intelligent display interface |
US6873343B2 (en) * | 2000-05-11 | 2005-03-29 | Zoran Corporation | Scalable graphics image drawings on multiresolution image with/without image data re-usage |
FR2816138B1 (fr) * | 2000-10-27 | 2003-01-17 | Canon Kk | Decodage de donnees numeriques |
FR2826227B1 (fr) * | 2001-06-13 | 2003-11-28 | Canon Kk | Procede et dispositif de traitement d'un signal numerique code |
JP3937841B2 (ja) * | 2002-01-10 | 2007-06-27 | キヤノン株式会社 | 情報処理装置及びその制御方法 |
JP2004214983A (ja) | 2002-12-27 | 2004-07-29 | Canon Inc | 画像処理方法 |
JP4148462B2 (ja) * | 2003-01-20 | 2008-09-10 | 株式会社リコー | 画像処理装置、電子カメラ装置及び画像処理方法 |
JP4059802B2 (ja) * | 2003-04-17 | 2008-03-12 | 株式会社サピエンス | 画像表示方法 |
JP4382000B2 (ja) * | 2005-03-11 | 2009-12-09 | 株式会社リコー | 印刷制御システム及び印刷制御方法 |
US7768520B2 (en) * | 2006-05-03 | 2010-08-03 | Ittiam Systems (P) Ltd. | Hierarchical tiling of data for efficient data access in high performance video applications |
US7768516B1 (en) * | 2006-10-16 | 2010-08-03 | Adobe Systems Incorporated | Image splitting to use multiple execution channels of a graphics processor to perform an operation on single-channel input |
JP4958831B2 (ja) * | 2008-04-02 | 2012-06-20 | キヤノン株式会社 | 画像符号化装置及びその制御方法 |
-
2009
- 2009-06-30 CN CN200980138067.9A patent/CN102165515B/zh active Active
- 2009-06-30 KR KR1020117009478A patent/KR101401336B1/ko active IP Right Grant
- 2009-06-30 EP EP09817381.8A patent/EP2330587B1/en active Active
- 2009-06-30 WO PCT/JP2009/003043 patent/WO2010038337A1/ja active Application Filing
- 2009-06-30 US US13/120,785 patent/US8878869B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09114431A (ja) * | 1995-10-18 | 1997-05-02 | Sapiensu:Kk | 静止画像再生表示方法および静止画像再生表示装置 |
US6563999B1 (en) | 1997-03-27 | 2003-05-13 | Sony Computer Entertainment, Inc. | Method and apparatus for information processing in which image data is displayed during loading of program data, and a computer readable medium and authoring system therefor |
JPH1188866A (ja) * | 1997-07-18 | 1999-03-30 | Pfu Ltd | 高精細画像表示装置及びそのプログラム記憶媒体 |
JP2005092007A (ja) * | 2003-09-19 | 2005-04-07 | Ricoh Co Ltd | 画像処理システム、画像処理方法、プログラム及び情報記録媒体 |
JP2005181853A (ja) * | 2003-12-22 | 2005-07-07 | Sanyo Electric Co Ltd | 画像供給装置 |
JP2005202327A (ja) * | 2004-01-19 | 2005-07-28 | Canon Inc | 画像表示装置および画像表示方法 |
JP2006113801A (ja) * | 2004-10-14 | 2006-04-27 | Canon Inc | 画像処理結果表示装置、画像処理結果表示方法およびプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP2330587A4 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102484674A (zh) * | 2010-07-09 | 2012-05-30 | 索尼公司 | 图像处理装置和方法 |
US8953898B2 (en) | 2010-07-09 | 2015-02-10 | Sony Corporation | Image processing apparatus and method |
CN103703785A (zh) * | 2011-08-01 | 2014-04-02 | 索尼电脑娱乐公司 | 视频数据生成单元、图像显示设备、视频数据生成方法、视频图像显示方法、以及视频图像文件数据结构 |
CN103703785B (zh) * | 2011-08-01 | 2016-11-23 | 索尼电脑娱乐公司 | 视频数据生成单元、图像显示设备、视频数据生成方法、视频图像显示方法、以及视频图像文件数据结构 |
US9516310B2 (en) | 2011-08-01 | 2016-12-06 | Sony Corporation | Moving image data generation device, moving image display device, moving image data generation method, moving image displaying method, and data structure of moving image file |
WO2013099076A1 (ja) * | 2011-12-27 | 2013-07-04 | 株式会社ソニー・コンピュータエンタテインメント | 動画圧縮装置、画像処理装置、動画圧縮方法、画像処理方法、および動画圧縮ファイルのデータ構造 |
JP2013135463A (ja) * | 2011-12-27 | 2013-07-08 | Sony Computer Entertainment Inc | 動画圧縮装置、画像処理装置、動画圧縮方法、画像処理方法、および動画圧縮ファイルのデータ構造 |
US9693072B2 (en) | 2011-12-27 | 2017-06-27 | Sony Corporation | Moving picture compression apparatus, image processing apparatus, moving picture compression method, image processing method, and data structure of moving picture compression file |
WO2013121735A1 (en) * | 2012-02-16 | 2013-08-22 | Canon Kabushiki Kaisha | Image generating apparatus and method for controlling the same |
JP2013167798A (ja) * | 2012-02-16 | 2013-08-29 | Canon Inc | 画像生成装置及びその制御方法 |
CN104331213A (zh) * | 2014-08-04 | 2015-02-04 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
US20110221780A1 (en) | 2011-09-15 |
EP2330587A1 (en) | 2011-06-08 |
US8878869B2 (en) | 2014-11-04 |
CN102165515A (zh) | 2011-08-24 |
KR20110074884A (ko) | 2011-07-04 |
KR101401336B1 (ko) | 2014-05-29 |
EP2330587A4 (en) | 2012-02-01 |
CN102165515B (zh) | 2014-05-28 |
EP2330587B1 (en) | 2019-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010038337A1 (ja) | 画像処理装置および画像処理方法 | |
EP2464093B1 (en) | Image file generation device, image processing device, image file generation method, and image processing method | |
CN103119543B (zh) | 图像处理装置、内容制作装置、图像处理方法及内容文件的数据结构 | |
JP5792607B2 (ja) | 画像処理装置および画像処理方法 | |
AU2010313045B2 (en) | Image file generation device, image processing device, image file generation method, image processing method, and data structure for image files | |
JP5474887B2 (ja) | 動画データ生成装置、動画像表示装置、動画データ生成方法、動画像表示方法、および動画像ファイルのデータ構造 | |
JP5368254B2 (ja) | 画像ファイル生成装置、画像処理装置、画像ファイル生成方法、画像処理方法、および画像ファイルのデータ構造 | |
JP5419822B2 (ja) | 画像処理装置、画像表示装置、画像処理方法、および画像ファイルのデータ構造 | |
JP5826730B2 (ja) | 動画圧縮装置、画像処理装置、動画圧縮方法、画像処理方法、および動画圧縮ファイルのデータ構造 | |
JP5296656B2 (ja) | 画像処理装置および画像処理方法 | |
US9047680B2 (en) | Information processing apparatus, information processing method, and data structure of content files | |
JP5331432B2 (ja) | 画像処理装置および画像処理方法 | |
JP4809412B2 (ja) | 画像処理装置および画像処理方法 | |
JP5265306B2 (ja) | 画像処理装置 | |
JP5467083B2 (ja) | 画像処理装置、画像処理方法、および画像のデータ構造 | |
US8972877B2 (en) | Information processing device for displaying control panel image and information image on a display | |
JP5520890B2 (ja) | 画像処理装置、画像データ生成装置、画像処理方法、画像データ生成方法、および画像ファイルのデータ構造 | |
JP5731816B2 (ja) | 画像処理装置、画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980138067.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09817381 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009817381 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20117009478 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13120785 Country of ref document: US |