WO2008077126A2 - Method for categorizing portions of a text - Google Patents
Method for categorizing portions of a text
- Publication number
- WO2008077126A2 (PCT/US2007/088207)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- image
- database
- criteria
- computer
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5846—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
Definitions
- the present application relates to categorizing portions of text concerning an associated image.
- Electronic representations of text may be segmented using a variety of factors. For example, some portions of text may be associated with other portions of text by quantifying, then comparing, word distributions on two sides of a potential segment boundary. In such a case, a large difference would be a positive indicator for a segment boundary, while a small difference would be a negative indicator for a segment boundary. This is but one example among many of methods for segmenting text. However, all these methods share the basic assumption that there are measurable differences in patterns of word usage between segments. Thus, there is a need for a method to categorize text with regard to features of that text that cannot be discovered simply through analysis of word distribution.
- a method for categorizing portions of text concerning an associated image includes clustering the text into clusters based on a first set of one or more criteria, associating the clustered text with one or more images, and assigning one or more labels to the associated text based on a second set of one or more criteria.
- the method further comprises analyzing the text to identify a second set of one or more criteria relevant to the associated text.
- the portions of text are electronically scanned into a database.
- the scanned text is converted into a document encoding format.
- the method further comprises searching a database for an electronic version of said text.
- the method may also further comprise accessing the database through the Internet. Additionally, the method may even further comprise searching the same or another database for an electronic version of the one or more images associated with the text. The method may further comprise accessing that database through the Internet.
- the categorizing portions of text concerning an associated image further comprises determining a text representation for use by a machine learning algorithm.
- the assigning one or more labels to the associated text comprises using a machine learning algorithm.
- the machine learning algorithm comprises a Naive Bayes algorithm.
- the machine learning algorithm comprises a support vector machine algorithm.
- a computer program product for categorizing portions of text concerning an associated image is embodied in a computer readable medium and comprises computer instructions for clustering the text into clusters based on a first set of one or more criteria, associating the clustered text with one or more images, and assigning one or more labels to the associated text based on a second set of one or more criteria.
- the computer readable medium comprises instructions for analyzing the text to identify the second set of one or more criteria relevant to the text.
- the computer readable medium comprises instructions for determining a text representation for use by a machine learning algorithm.
- the system further comprises a database comprising one or more images associated with the text.
- the computer readable medium comprises a stored machine learning algorithm.
- the computer readable medium comprises instructions for using a machine learning algorithm, e.g., a Naive Bayes algorithm or a support vector machine algorithm.
- a computer database product for categorizing portions of text concerning an associated image is embodied in a computer readable medium and is created by clustering the text into clusters based on a first set of one or more criteria, associating the clustered text with one or more images, and assigning one or more labels to the associated text based on a second set of one or more criteria.
- the computer database product is created by analyzing the text to identify the second set of one or more criteria relevant to the text.
- the computer database product is created by determining a text representation for use by a machine learning algorithm.
- the machine learning algorithm is a Naive Bayes algorithm or a support vector machine algorithm.
- FIG. 1 illustrates a flow chart of the method in accordance with an embodiment of the present invention.
- FIG. 2 illustrates a schematic of the system in accordance with an embodiment of the present invention.
- Figure 1 is an exemplary embodiment of a method 100 for categorizing portions of text concerning an associated image. Certain steps may be combined, and certain steps may occur in a different sequence or simultaneously.
- text that is to be categorized is explanatory text having an association with certain images, such as artwork.
- the text may include descriptions of the artwork, such as information on the image, the techniques or artistic style of rendering the image, the subject depicted in the image, or the historical background of the image. Accessing the text is necessary to the categorization process.
- scanning (110) comprises scanning the text (112) into an electronic format using a scanning apparatus (not shown) and performing an optical character recognition algorithm or similar technique to obtain an electronic version of the text.
- the optical character recognition scanning process may scan an entire text automatically and uninterrupted.
- the optical character recognition scanning process may scan the text in a plurality of portions, and between the scanning of one or more portions a user may adjust the parameters of the optical character recognition algorithm. For example, a user may recognize that the optical character recognition algorithm consistently scans the character "b" incorrectly; the user may then adjust the algorithm to compensate, thus improving the accuracy of the scanning of subsequent portions of the text.
- the image to be associated with the scanned text may be similarly scanned into an electronic format (114).
- the image may already exist in a pre-loaded database, in which case the pre-loaded database may be accessed (116) to load the image.
- the image and text are scanned in the same step (118).
- the scanned image is stored (115) in a database for future use.
- the image data is provided in electronic format, e.g., as an image file.
- the text categorization may proceed independently of the image data.
- the text may already exist in an electronic format that may be accessed (116) by a user.
- This electronic format may be present on a database accessible through the Internet in a searchable form. Alternatively, the format may be on some other database accessible through a local network, or in any other suitable manner.
- a user may perform a search (117) of the database for the text and/or image for use in the present inventive method.
- the search (117) may also be conducted across the Internet as a whole or any portion thereof.
- the text found via a search (117) may already exist in a machine readable format, or alternatively, may be in an image-type format, such as a Portable Document Format (PDF) file (i.e., an electronic image of the text).
- the text may then be converted into a document encoding format, by use of the methods detailed herein.
- the text may be converted (119) into a machine readable format by use of an optical character recognition process, or by any other appropriate process for conversion of an electronic document into a machine readable format.
- the scanned text is converted (120) to a document encoding format, e.g., Text Encoding Initiative (TEI) Lite with Extensible Markup Language (XML) format.
- the conversion (120) may include marking up (122) the text for any or all structural indicators, e.g., sentences, paragraphs, headers, section changes, tables of contents, title pages, conclusions, image captions, and so on.
- the conversion (120) to a document encoding format may further include locating (124), in the electronic version of the text, the location of an image corresponding to its location in the original version of the text.
- the conversion (120) to a document encoding format may also include tagging (126) the image locations with a tag indicating that this was the location where the image appeared in the original text.
- the conversion (120) may be performed automatically, i.e., without user interaction. Alternatively, the conversion (120) may be performed in a user interactive manner. Additionally, in the same or another embodiment, the locating (124) the image locations may involve identifying gaps in the electronic version of the text that indicate an image was at that location. In the same or another embodiment, locating (124) the image locations may involve identifying caption text that corresponds to the image that appears in that location in the original version of the text. In the same or yet another embodiment the locating (124) the image locations may involve identifying image labels that correspond to the image that appears in that location in the original version of the text. Furthermore, the tagging (126) of the image locations may involve tagging the image locations with XML tags that are unique to each image.
- Those XML tags may correspond to the image's figure number, if it has one, or some other unique information about that image, e.g., a descriptive caption.
- the tagging (126) may also involve assigning to each image a unique image identifying number.
- a section of text appears as follows after the conversion (120) of the text into a TEI Lite format with XML mark-ups:
- the sculptor probably meant to personify the entire Roman army.
- attendants lead in Valerian, who kneels before Shapur and begs for mercy.
- a <hi rend="i">putto</hi>-like (cherub or childlike) figure borrowed from the repertory of Greco-Roman art hovers above the king and brings him a victory garland.
- the "<head>" tag indicates that the text following is the heading of this sub-section, while the "</head>" tag indicates the end of the heading text.
- the "<p>" tag indicates the beginning of a paragraph, while the "</p>" tag indicates the closing of a paragraph.
- the "<gap>" tag indicates the location of a gap in the text and the reason for that gap.
- the text is thus converted (120) to a TEI Lite format with XML markups for further processing as detailed below.
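The tag-location step described above can be sketched in a few lines. This is a hedged illustration, not the patent's actual implementation; the `<figure>` tag and its `id` attribute are assumed stand-ins for whatever image-location markup the TEI Lite conversion actually produces.

```python
# Sketch: locate image placeholder tags in a TEI-Lite-style document and
# return their unique identifiers in document order. The tag name "figure"
# and attribute "id" are illustrative assumptions.
import xml.etree.ElementTree as ET

TEI_SAMPLE = """<div>
  <head>The White Temple</head>
  <p>Paragraph before the image.</p>
  <figure id="fig-2-1"/>
  <p>Paragraph after the image.</p>
</div>"""

def find_image_locations(xml_text):
    """Return the ids of image placeholder tags, in document order."""
    root = ET.fromstring(xml_text)
    return [fig.get("id") for fig in root.iter("figure")]

print(find_image_locations(TEI_SAMPLE))  # ['fig-2-1']
```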
- the scanned text is clustered (130) into groupings that relate to one or more images based on a first set of one or more criteria.
- Clustering (130) is accomplished by using a software program to determine what portions of text are related to the associated one or more images, as will be described below.
- the software program useful for clustering text is a program capable of reading text in a TEI Lite format with XML mark-ups.
- clustering the scanned text (130) may include choosing one or more paragraphs based on a set of criteria (132). For example, in one embodiment, the paragraph which includes the XML tag for the image is always included in the cluster.
- subsequent paragraphs may be included based on one or more criteria.
- a criterion might consist of including each subsequent paragraph until a certain condition is met.
- a condition might be that a subsequent paragraph is tagged with a XML tag indicating a new division, e.g., a tag for a new subsection. In such a case, all paragraphs up to but not including that paragraph would be included in the cluster.
- a second condition might be that a subsequent paragraph contains the XML tag for a different image. In such a case, all paragraphs up to but not including that paragraph would be included in the cluster.
- These two conditions are examples of any number of possible conditions that might operate to terminate the inclusion of subsequent paragraphs. Furthermore, these two conditions may be operated independently, such that satisfaction of either, or both, will operate to terminate the inclusion of subsequent paragraphs in the cluster.
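The clustering rule just described (include the tagged paragraph, then each subsequent paragraph until either terminating condition fires) can be sketched as follows. The paragraph data model is an assumption for illustration; the patent operates on TEI Lite XML rather than Python dicts.

```python
# Sketch of the paragraph-clustering rule. Each paragraph is modeled as a
# dict with 'text', an optional 'image' (id of an image tagged inside the
# paragraph), and an optional 'new_division' flag -- an illustrative model.
def cluster_for_image(paragraphs, image_id):
    """Collect the tagged paragraph plus subsequent paragraphs until a
    new division or a different image's tag terminates the cluster."""
    cluster, started = [], False
    for p in paragraphs:
        if not started:
            if p.get("image") == image_id:
                started = True
                cluster.append(p["text"])  # tagged paragraph always included
            continue
        # either terminating condition, independently, ends the cluster
        if p.get("new_division") or p.get("image") not in (None, image_id):
            break
        cluster.append(p["text"])
    return cluster

paras = [
    {"text": "Intro.", "new_division": True},
    {"text": "Mentions fig 2-1.", "image": "fig-2-1"},
    {"text": "More about the temple."},
    {"text": "A new subsection.", "new_division": True},
]
print(cluster_for_image(paras, "fig-2-1"))
# → ['Mentions fig 2-1.', 'More about the temple.']
```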
- associating (140) selected portions of text with a related image may then be performed. Associating (140) is accomplished by using a software program to determine what portions of text are related to the associated one or more images.
- the software program useful for clustering text is a program capable of reading text in a TEI Lite format with XML mark-ups.
- Associating selected portions of the text to a related image (140) includes identifying the location of a XML tag or the unique identifying number, e.g., the image plate number, marking the location of an image within a cluster of text associated with that image (142).
- the identifying (142) may be followed by creating an electronic association (144) between XML tag or the unique identifying number marking the location of the image within the cluster of text with an electronic version of the image.
- This electronic association (144) may allow for an electronic link, whereby an electronic version of the image is displayed along with the associated cluster of text on a displaying device.
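A minimal sketch of this electronic association (144): a mapping from each image's unique identifier to its electronic image and text cluster, which a display layer could use to show the two together. The data model and file path are illustrative assumptions, not details from the patent.

```python
# Sketch: associate each image id with its electronic image and its text
# cluster, so a display layer can render them together. Data model and
# the example file path are illustrative.
def build_associations(clusters, image_files):
    """clusters: {image_id: [paragraph, ...]};
    image_files: {image_id: path to the electronic image}."""
    return {
        image_id: {"image": image_files.get(image_id), "text": paragraphs}
        for image_id, paragraphs in clusters.items()
    }

assoc = build_associations(
    {"ID:0": ["Paragraph 19 ...", "Paragraph 20 ..."]},
    {"ID:0": "images/fig_2_1.jpg"},  # hypothetical path
)
print(assoc["ID:0"]["image"])  # images/fig_2_1.jpg
```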
- an example of text would appear as follows after the clustering (130) and associating (140) of the text by the custom program:
- the corners of the White Temple are oriented to the cardinal points of the compass.
- the building, probably dedicated to Anu, the sky god, is of modest proportions (sixty-one by sixteen feet). By design, it did not accommodate large throngs of worshipers but only a select few, the clergy and perhaps the leading community members.
- the temple has several chambers.
- the central hall, or cella, was set aside for the divinity and housed a stepped altar.
- the Sumerians referred to their temples as "waiting rooms," a reflection of their belief that the deity would descend from the heavens to appear before the clergy in the cella. How or if the Uruk temple was roofed is uncertain.
- the "ID” here is the unique identifying number assigned to figure 2-1, the "White Temple and ziggurat, Uruk (modern Warka), Iraq, ca. 3200-3000 B.C.” image, so labeled in the original text.
- Paragraphs 19 and 20 are paragraphs that have been clustered and associated with the image tagged "ID: 0".
- the associated cluster of text is thus formatted and in condition for labeling as detailed below.
- one or more associated clusters of text, or portions thereof are labeled with one or more labels based on a set of one or more criteria (150).
- the label assigned to any given cluster of text, or portion thereof may be chosen from among seven categories of labels described herein.
- Assigning one or more labels may include analyzing one or more associated clusters of text, or portions thereof, to identify the one or more associated clusters of text, or portions thereof, that could be appropriately labeled with one or more of the categories of labels described herein (152).
- the analyzed portions of text are then assigned the one or more appropriate labels (152).
- the assigning of the labels may involve placing XML tags on the TEI Lite formatted version of the text at appropriate locations.
- the assigning of labels may be accomplished by the use of a custom program capable of reading and marking text in a TEI Lite format with XML mark-ups.
- the process of categorizing portions of text concerning an associated image may involve determining a text representation (160) for use by a machine learning algorithm. Once determined, such representation may be input into a machine learning algorithm to further refine the ability to determine what portions of text should be labeled with a particular label.
- Label Categories
- Categories may be assigned to the portions of text.
- the categories may represent one of several aspects of the text.
- the text may refer to features of the image associated with the text, such as the content of the image or the techniques used to create the image.
- Other categories relate to historical data about the subject of the image or of the image itself.
- Each portion of text may be associated with one or more categories.
- the first exemplary category of labels is an "Image Content" label.
- the Image Content label might be assigned to one or more clusters of text, or portions thereof, where that text mentions the image and/or describes it in concrete terms.
- the Image Content label might be assigned where the text has an explicit mention of the unique image identifier, e.g., the image plate number or figure number.
- the Image Content label might be assigned where the text is primarily about one or more specific objects in the image.
- the Image Content label might not be applied where the text describes the general class of objects of which the object depicted is a member, rather than a description of the specific object depicted.
- the second exemplary category of labels is a "Historical Context" label.
- the Historical Context label might be assigned to one or more clusters of text, or portions thereof, where that text describes the historical context of the creation of the object depicted in the image.
- the Historical Context label might be assigned where the text describes when the object depicted was created or why it was created or under what circumstances it was created, e.g., whether it was commissioned or not.
- the Historical Context label might be assigned where the text mentions the broader art history facts about the period in which the object depicted was created.
- the third example category of labels is a "Biographical" label.
- the Biographical label might be assigned to one or more clusters of text, or portions thereof, where the text provides biographical information about the artist whose artwork is depicted in the image.
- the Biographical label might be assigned where the text provides biographical information about the patron of the object, e.g., the person for whom the object depicted was commissioned.
- the Biographical label might be assigned where the text provides biographical information about any other people or personages involved in creating the object depicted or having a direct link to the object depicted after it was created.
- the fourth exemplary category of labels is an "Implementation" label.
- the Implementation label might be assigned to one or more clusters of text, or portions thereof, where the text provides information about conventions, methods, use of particular materials or tools, or techniques that the artist implemented in the creation of the object depicted in the image.
- the Implementation label might be assigned where the text includes a discussion about any other requirements and/or limitations employed by the artist.
- the fifth exemplary category of labels is a "Comparison" label.
- the Comparison label might be assigned to one or more clusters of text, or portions thereof, where the text discusses the object depicted in the image in reference to one or more other works.
- the reference to one or more other works may involve a comparison of one or more features of the work depicted in the image and the work referenced, e.g., a comparison of the imagery of the two works or a comparison of the techniques employed to create the two works.
- the Comparison label may not be applied where there is a cross reference to another work without some kind of comparison of the depicted work with the referenced work.
- the sixth exemplary category of labels is an "Interpretation" label.
- the Interpretation label might be assigned to one or more clusters of text, or portions thereof, where the author of the text provides his or her opinions about the interpretation of the object depicted in the image.
- the Interpretation label may be applied to text identified by emotion-bearing vocabulary, sentiment terms, or opinion terms.
- the seventh exemplary category of labels is a "Significance” label.
- the Significance label might be assigned to one or more clusters of text, or portions thereof, where the text describes the significance of the object depicted in the image in art history terms.
- the Significance label may be applied to text identified by the use of superlative phrases, e.g., "this is the most . . .”, “this is quintessential . . .”, and so on.
- the Significance label may be applied to text identified by the use of superlative morphology, e.g., the use of "-est” words, such as "best", “finest”, and so on.
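The superlative cues for the Significance label lend themselves to a simple heuristic sketch. The phrase list and the "-est" pattern below are illustrative assumptions, and the morphology check would overmatch words like "rest" or "interest" in practice.

```python
# Sketch of the superlative cues for the "Significance" label: flag text
# containing superlative phrases or "-est" superlative morphology. The
# phrase list is illustrative; the regex overmatches (e.g. "interest").
import re

SUPERLATIVE_PHRASES = ("this is the most", "this is quintessential")

def looks_significant(text):
    lowered = text.lower()
    if any(phrase in lowered for phrase in SUPERLATIVE_PHRASES):
        return True
    # crude "-est" morphology check (matches "best", "finest", ...)
    return re.search(r"\b\w+est\b", lowered) is not None

print(looks_significant("This is the finest example of its kind"))  # True
```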
- FIG. 2 is an exemplary embodiment of a system 200 for categorizing portions of text concerning an associated image.
- a database 210 may contain one or more texts in electronic format. In the same or another embodiment, the database 210 may also contain one or more images in electronic format.
- a processor 220 may be operatively coupled to a computer readable storage medium, such as memory 230. In an exemplary embodiment, the processor 220 is a personal computer operating with memory, such as a standard desktop computer.
- the database 210 is stored on a hard drive.
- the memory 230 may be storing program instructions that when executed by the processor 220, cause the processor 220 to cluster the text into clusters based on a first set of one or more criteria.
- the stored program instructions may be a custom software program used to determine what portions of text are related to the associated one or more images.
- the custom software program useful for clustering text is a program capable of reading text in a TEI Lite format with XML mark-ups.
- the program instructions stored in the memory 230 may be capable of clustering the scanned text by choosing one or more paragraphs for inclusion in the cluster based on a set of one or more criteria, when executed by the processor 220.
- the paragraph which includes the XML tag for the image is always included.
- the immediately preceding paragraph is always included.
- subsequent paragraphs may be included based on one or more criteria. For example, a criterion might consist of including each subsequent paragraph until a certain condition is met. Such a condition might be that a subsequent paragraph is tagged with a XML tag indicating a new division, e.g., a tag for a new subsection. In such a case, all paragraphs up to but not including that paragraph would be included in the cluster.
- a second condition might be that a subsequent paragraph contains the XML tag for a different image. In such a case, all paragraphs up to but not including that paragraph would be included in the cluster.
- the memory 230 may be storing program instructions that when executed by the processor 220, cause the processor 220 to associate the clustered text with one or more images.
- the stored program instructions may be a custom software program used to determine what portions of text are related to the associated one or more images.
- the custom software program useful for associating text is a program capable of reading text in a TEI Lite format with XML mark-ups.
- the program instructions stored in the memory 230 may be capable of associating selected portions of text to a related image by identifying the location of the XML tag or the unique identifying number, e.g., the image plate number, marking the location of an image within a cluster of text associated with that image, when executed by the processor 220.
- the program instructions stored in the memory 230 may further be capable of creating an electronic association between the XML tag or the unique identifying number marking the location of the image within the cluster of text with an electronic version of the image, when executed by the processor 220. This electronic association may allow for an electronic link, whereby an electronic version of the image is displayed along with the associated cluster of text on a displaying device.
- the memory 230 may be storing program instructions that when executed by the processor 220, cause the processor 220 to assign one or more labels to the associated text based on a second set of one or more criteria.
- the stored program instructions may be a custom software program used to determine what portions of associated text are relevant to the one or more categories of label and then assign those portions with the appropriate label.
- the custom software program useful for labeling text may be a program capable of reading and marking text in a TEI Lite format with XML markups.
- the label assigned to any given cluster of text, or portion thereof may be chosen from among seven categories of labels described herein.
- the program instructions stored in the memory 230 may be capable of assigning one or more labels by analyzing one or more associated clusters of text, or portions thereof, to identify the one or more associated clusters of text, or portions thereof, that could be appropriately labeled with one or more of the categories of labels described herein, when executed by the processor 220.
- the program instructions stored in the memory 230 may further be capable of assigning the analyzed portions of text with the one or more appropriate labels, when executed by the processor 220.
- the program instructions stored in the memory 230 may be capable of analyzing the text and assigning the appropriate label to the text using a machine learning algorithm, when executed by the processor 220.
- the machine learning algorithm may be a Naive Bayes algorithm.
- the machine learning algorithm may also be a support vector machine algorithm.
- the program instructions stored in the memory 230 may be capable of determining a text representation for use by a machine learning algorithm, when executed by the processor 220. Once determined, the program instructions stored in the memory 230 may be capable of inputting such representation into a machine learning algorithm to further refine the ability to determine what portions of text should be labeled with a particular label, when executed by the processor 220.
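As a rough illustration of this labeling step, here is a minimal multinomial Naive Bayes classifier built only from the standard library. It is a sketch of the general technique the patent names, not the patent's implementation; a real system would likely use a mature library implementation (Naive Bayes or a support vector machine) and a richer text representation, and the training examples below are invented.

```python
# Minimal multinomial Naive Bayes sketch for assigning label categories
# to text clusters, stdlib only. Training data is invented for illustration.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            s = math.log(self.label_counts[label])  # class prior (unnormalized)
            for w in text.lower().split():
                # Laplace smoothing over the shared vocabulary
                s += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)

clf = NaiveBayes().fit(
    ["the painter used oil and canvas", "born in florence in 1475"],
    ["Implementation", "Biographical"],
)
print(clf.predict("the artist used tempera on canvas"))  # Implementation
```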
- the output of the algorithm includes a database created by the processes described herein, including, e.g., the text portions and their assigned categories. Such output may be stored in a text file, output to a monitor, printed out, etc.
- the output may be in a XML mark-up format, such as that shown above.
- the output may be in a format with the XML marking hidden, also as shown above.
Abstract
The present application relates to systems and methods for categorizing portions of a text concerning an associated image. In some embodiments, a method for categorizing portions of text concerning an associated image is described. The method comprises clustering the text into clusters based on a first set of one or more criteria; associating the clustered text with one or more images; and assigning one or more labels to the associated text based on a second set of one or more criteria.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US87072606P | 2006-12-19 | 2006-12-19 | |
US60/870,726 | 2006-12-19 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2008077126A2 true WO2008077126A2 (fr) | 2008-06-26 |
WO2008077126A3 WO2008077126A3 (fr) | 2008-09-04 |
Family
ID=39537077
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/088207 WO2008077126A2 (fr) | 2006-12-19 | 2007-12-19 | Method for categorizing portions of a text |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2008077126A2 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040193596A1 (en) * | 2003-02-21 | 2004-09-30 | Rudy Defelice | Multiparameter indexing and searching for documents |
US20060080314A1 (en) * | 2001-08-13 | 2006-04-13 | Xerox Corporation | System with user directed enrichment and import/export control |
US20060245641A1 (en) * | 2005-04-29 | 2006-11-02 | Microsoft Corporation | Extracting data from semi-structured information utilizing a discriminative context free grammar |
US20060251339A1 (en) * | 2005-05-09 | 2006-11-09 | Gokturk Salih B | System and method for enabling the use of captured images through recognition |
- 2007-12-19: WO PCT/US2007/088207 patent/WO2008077126A2/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2008077126A3 (fr) | 2008-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10741167B2 (en) | Document mode processing for portable reading machine enabling document navigation | |
CN110889883B (zh) | An adaptive intelligent banner advertisement image generation method and system | |
US8150107B2 (en) | Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine | |
US9626000B2 (en) | Image resizing for optical character recognition in portable reading machine | |
US8284999B2 (en) | Text stitching from multiple images | |
US8531494B2 (en) | Reducing processing latency in optical character recognition for portable reading machine | |
US8320708B2 (en) | Tilt adjustment for optical character recognition in portable reading machine | |
US7659915B2 (en) | Portable reading device with mode processing | |
US8626512B2 (en) | Cooperative processing for portable reading machine | |
US8186581B2 (en) | Device and method to assist user in conducting a transaction with a machine | |
US20100331043A1 (en) | Document and image processing | |
US20060017810A1 (en) | Mode processing in portable reading machine | |
US20150043822A1 (en) | Machine And Method To Assist User In Selecting Clothing | |
US20100199166A1 (en) | Image Component WEB/PC Repository | |
US20110125735A1 (en) | Architecture for responding to a visual query | |
US7325735B2 (en) | Directed reading mode for portable reading machine | |
US8249309B2 (en) | Image evaluation for reading mode in a reading machine | |
Chen | Intangible cultural heritage preservation: An exploratory study of digitization of the historical literature of Chinese Kunqu opera librettos | |
Ramel et al. | AGORA: the interactive document image analysis tool of the BVH project | |
Hayler | Research methods for creating and curating data in the digital humanities | |
WO2008077126A2 (fr) | Method for categorizing portions of a text |
Ishihara et al. | Analyzing visual layout for a non-visual presentation-document interface | |
CN100476809C (zh) | Network content adjustment processing and system | |
Joliveau et al. | Toward the spatial and temporal management of documents: The GéoTopia Platform | |
Bosma | Image retrieval supports multimedia authoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07869558 Country of ref document: EP Kind code of ref document: A2 |
NENP | Non-entry into the national phase in: |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 07869558 Country of ref document: EP Kind code of ref document: A2 |