US20040174434A1 - Systems and methods for suggesting meta-information to a camera user
- Publication number
- US20040174434A1 (Application US10/740,242)
- Authority
- US
- United States
- Prior art keywords
- camera
- user
- question
- image
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N1/32128 — Display, printing, storage or transmission of additional information (e.g., ID code, date and time or title) attached to the image data (e.g., file header, transmitted message header, information on the same page or in the same computer file as the image)
- H04N23/80 — Camera processing pipelines; components thereof
- G06F16/58 — Information retrieval of still image data, characterised by using metadata (e.g., metadata not derived from the content or metadata generated manually)
- H04N23/64 — Computer-aided capture of images (e.g., transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image)
- H04N23/661 — Transmitting camera control signals through networks (e.g., control via the Internet)
- H04N5/77 — Interface circuits between an apparatus for recording and a television camera
- H04N2201/3229 — Additional information (metadata) comprised in the file name (including path, e.g., directory or folder names at one or more higher hierarchical levels)
- H04N2201/3276 — Storage or retrieval of a customised additional information profile (e.g., a profile specific to a user ID)
- H04N5/765 — Interface circuits between an apparatus for recording and another apparatus
- H04N5/772 — Interface circuits where the recording apparatus and the television camera are placed in the same enclosure
- H04N5/781 — Television signal recording using magnetic recording on disks or drums
- H04N9/8047 — Recording of the colour picture signal components using pulse code modulation with data reduction by transform coding
- H04N9/8205 — Recording involving the multiplexing of an additional signal and the colour video signal
Description
- FIG. 1 shows a block diagram of a system that is consistent with at least one embodiment of the present invention.
- FIG. 2 shows a block diagram of a system that is consistent with at least one embodiment of the present invention.
- FIG. 3 shows a block diagram of a camera in communication with a computing device that is consistent with at least one embodiment of the present invention.
- FIG. 4 shows a block diagram of a computing device that is consistent with at least one embodiment of the present invention.
- FIG. 5 shows a block diagram of a camera that is consistent with at least one embodiment of the present invention.
- FIG. 6 shows a block diagram of a camera that is consistent with at least one embodiment of the present invention.
- FIG. 7 is a table illustrating an exemplary data structure of a settings database consistent with at least one embodiment of the present invention.
- FIG. 8 is a table illustrating an exemplary data structure of an image database consistent with at least one embodiment of the present invention.
- FIG. 9 is a table illustrating an exemplary data structure of a question database consistent with at least one embodiment of the present invention.
- FIG. 10 is a table illustrating an exemplary data structure of a determination condition database consistent with at least one embodiment of the present invention.
- FIG. 11 is a table illustrating an exemplary data structure of an output condition database consistent with at least one embodiment of the present invention.
- FIGS. 12A and 12B together form a table illustrating an exemplary data structure of a response database consistent with at least one embodiment of the present invention.
- FIG. 13A is a table illustrating an exemplary data structure of an event log corresponding to capturing images at a wedding, in accordance with at least one embodiment of the present invention.
- FIG. 13B is a table illustrating an exemplary data structure of an event log corresponding to capturing images on a sunny beach, in accordance with at least one embodiment of the present invention.
- FIG. 14 is a table illustrating an exemplary data structure of an expiring information database consistent with at least one embodiment of the present invention.
- FIG. 15 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- FIG. 16 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for performing an action based on a response.
- FIG. 17 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for performing an action based on a response.
- FIG. 18 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for suggesting meta-information.
- FIG. 19 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- FIG. 20 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- Applicants have recognized that, in accordance with some embodiments of the present invention, some types of users of cameras and other imaging devices may find it appealing to have a camera that is able to determine a variety of different types of information that may be useful in performing a variety of functions and/or assisting a user in the performance of various actions. Also, some types of users may find it appealing to use a camera having enhanced features to facilitate information gathering (e.g., via interaction with a user, by detection of environmental conditions, by communication with other devices). In accordance with some embodiments, such information may be used, for example, in managing images (e.g., suggesting a meta-tag for an image) and in improving the quality of images (e.g., by adjusting a camera setting).
- Applicants have also recognized that some types of users of cameras and other imaging devices may find it appealing to be able to receive a variety of different types of questions (e.g., open-ended questions) and/or suggestions (e.g., suggested meta-data to associate with an image) from a camera, as provided for in accordance with at least one embodiment of the present invention. Some types of users may also find it appealing to be able to provide responses to questions output by a camera.
- At least one embodiment of the invention includes a camera that may output questions to a user.
- The user may respond to these questions (e.g., providing information about a scene that he is interested in photographing), and one or more settings on the camera may be adjusted based on the user's response.
- For example, a camera may ask a user: “Are you at the beach?” If the user responds “Yes” to this question, then the camera may adjust one or more of its settings (e.g., aperture, shutter speed, white balance, automatic neutral density) based on the user's response.
- The camera may ask a user a plurality of questions, starting with “Are you indoors?” If the user responds that he is indoors, then the camera may ask the user a second question: “What type of lights does this room have?” In addition to outputting the question, the camera may output a list of potential answers to the question (e.g., “Fluorescent,” “Tungsten,” “Halogen,” “Skylight,” and “I don't know”). The user may respond to the question by selecting one of the potential answers from the list.
- Based on the user's response, the camera may adjust its settings to “Fluorescent Light” mode, in which the camera's white balance, aperture, shutter speed, image sensor sensitivity and other settings are adjusted for taking pictures in a room that is lit with fluorescent light bulbs.
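- As an illustration only, the question-and-answer flow above might be sketched as follows. The preset names, setting values, and helper functions are assumptions for this sketch, not details taken from the patent.

```python
# Sketch: adjust camera settings from a user's answer to an output question.
# All identifiers (PRESETS, ask, apply_settings) are illustrative.

PRESETS = {
    "Fluorescent": {"white_balance": "fluorescent", "iso": 400, "shutter_s": 1 / 60},
    "Tungsten":    {"white_balance": "tungsten",    "iso": 200, "shutter_s": 1 / 30},
    "Halogen":     {"white_balance": "halogen",     "iso": 200, "shutter_s": 1 / 60},
    "Skylight":    {"white_balance": "daylight",    "iso": 100, "shutter_s": 1 / 125},
}

def ask(question, choices):
    """Stand-in for the camera's output device (LCD prompt) and input device (buttons)."""
    print(question, "|".join(choices))
    return input("> ")

def apply_settings(settings):
    """Stand-in for the hardware calls that would actually reconfigure the camera."""
    print("camera now using:", settings)

def configure_for_room():
    if ask("Are you indoors?", ["Yes", "No"]) != "Yes":
        return
    answer = ask("What type of lights does this room have?",
                 list(PRESETS) + ["I don't know"])
    settings = PRESETS.get(answer)
    if settings:  # "I don't know" leaves the current settings unchanged
        apply_settings(settings)

if __name__ == "__main__":
    configure_for_room()
```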
- Embodiments of the present invention will first be introduced by means of block diagrams of exemplary systems and devices that may be utilized by an entity practicing the present invention. Exemplary data structures illustrating tables that may be used when practicing various embodiments of the present invention will then be described, along with corresponding flowcharts that illustrate exemplary processes with reference to the exemplary devices, systems, and tables.
- A block diagram of a system 100 includes one or more servers 110 (e.g., a personal computer, a Web server) in communication, via a communications network 120, with one or more cameras 130 (e.g., digital camera, video camera, wireless phone with integrated digital camera).
- Each of the servers 110 and cameras 130 may comprise one or more computing devices, such as those based on the Intel Pentium® processor, that are adapted to communicate with any number and type of devices (e.g., other cameras and/or servers) via the communications network 120 .
- Although only two cameras 130 and two servers 110 are depicted in FIG. 1, it will be understood that any number and type of cameras 130 may communicate with any number of servers 110 and/or other cameras 130 (and vice versa).
- A camera 130 may communicate with a server 110 in order to determine a question to output to a user.
- The camera 130 may transmit various information (e.g., images, GPS coordinates) to a computer server 110.
- The server 110 may then determine a question based on this information.
- The server 110 may then transmit the question to the camera 130, and the camera 130 may output the question to a user.
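- A rough sketch of that camera-to-server exchange appears below. The context fields, the serialization, and the server's selection rules are all invented for illustration; the patent does not fix a protocol or a question-determination algorithm.

```python
# Sketch: camera 130 packages context, "sends" it, and outputs the server's question.
import json

def server_determine_question(context: dict) -> str:
    """Server 110 side: pick a question from the uploaded information (toy rules)."""
    lat = context.get("gps", {}).get("lat")
    if lat is not None and abs(lat) < 30:      # crude low-latitude heuristic
        return "Are you at the beach?"
    if context.get("light_level", 1.0) < 0.2:  # dim scene
        return "Are you indoors?"
    return "Would you like to tag this image?"

def camera_request_question() -> str:
    """Camera 130 side: gather context and simulate the round trip over network 120."""
    context = {"gps": {"lat": 25.76, "lon": -80.19}, "light_level": 0.9}
    payload = json.dumps(context)              # what would travel over the network
    return server_determine_question(json.loads(payload))

print(camera_request_question())               # -> "Are you at the beach?"
```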
- Communication among the cameras 130 and the servers 110 may be direct or may be indirect, and may occur via a wired or wireless medium.
- Some, but not all, possible communication networks that may comprise network 120 include: a local area network (LAN), a wide area network (WAN), the Internet, a telephone line, a cable line, a radio channel, an optical communications line, and a satellite communications link.
- The devices of the system 100 may communicate with one another over RF, cable TV, satellite links and the like.
- Some possible communications protocols that may be part of system 100 include, without limitation: Ethernet (or IEEE 802.3), SAP, ATP, Bluetooth™, IEEE 802.11, CDMA, TDMA, ultra-wideband, universal serial bus (USB), and TCP/IP.
- Communication may be encrypted to ensure privacy and to prevent fraud in any of a variety of ways well known in the art.
- Any appropriate communications means or combination of communications means may be employed in the system 100 and in other exemplary systems described herein.
- Communication may take place over the Internet through a Web site maintained by a remote server 110, or over an on-line data network including commercial on-line service providers, bulletin board systems and the like.
- A user may upload an image captured using the integrated digital camera to his personal computer, or to a personal database of images on a Web server maintained by his telecommunications company.
- The user's personal computer may receive, via a cable modem, a series of vacation snapshots taken by the user, and may also transmit information about those snapshots and/or questions related to those snapshots back to the user's digital camera.
- A server 110 may comprise an external or internal module associated with one or more of the cameras 130 that is capable of communicating with one or more of the cameras 130 and of directing the one or more cameras 130 to perform one or more functions.
- A server 110 may be configured to execute a program for controlling one or more functions of a camera 130 remotely.
- A camera 130 may comprise a module associated with one or more servers 110 that is capable of directing one or more servers 110 to perform one or more functions.
- A camera 130 may be configured to direct a server 110 to execute a facial recognition program on a captured image and to return an indication of the best matches to the camera 130 via the communication network 120.
- A camera 130 may be operable to access one or more databases (e.g., of server 110) to provide suggestions and/or questions to a user of the camera 130 based on, for example, an image captured by the camera 130 or on information gathered by the camera 130 (e.g., information about lighting conditions).
- A camera 130 may also be operable to access a database (e.g., an image database) via the network 120 to determine what meta-information (e.g., information descriptive of an image) to associate with one or more images.
- A database of images and/or image templates may be stored for a user on a server 110.
- Various functions of a camera 130 and/or the server 110 may be performed based on images stored in a personalized database.
- An image recognition program running on the server 110 may use the user's personalized database of images for reference in identifying people, objects, and/or scenes in an image captured by the user. If, in accordance with a preferred embodiment, the user has identified the content of some of the images in the database himself (e.g., by associating a meta-tag with an image), a match determined by the image recognition software with reference to the customized database is likely to be acceptable to the user (e.g., the user is likely to agree to a suggestion to associate a meta-tag from a stored reference image with the new image also).
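- The sketch below shows one way such a personalized reference database could drive a suggestion, using a toy nearest-neighbor match in place of real image recognition software; the three-number "features" and all names are assumptions.

```python
# Sketch: suggest a meta-tag for a new image from the closest user-labeled image.
reference_db = [  # (toy feature vector, meta-tag the user chose earlier)
    ((0.8, 0.7, 0.5), "beach"),
    ((0.2, 0.2, 0.3), "indoors"),
    ((0.9, 0.9, 0.9), "snow"),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def suggest_meta_tag(new_feature):
    _, tag = min(reference_db, key=lambda rec: distance(rec[0], new_feature))
    return tag

# The camera would then ask, e.g., "Tag this image 'beach'?" and apply the
# tag only if the user agrees.
print(suggest_meta_tag((0.75, 0.7, 0.4)))  # -> "beach"
```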
- Information exchanged by the exemplary devices depicted in FIG. 1 may include, without limitation, images and indications of changes in settings or operation of a camera 130 (e.g., an indication that a user or the camera 130 has altered an exposure setting).
- Other exemplary types of information that may be determined by the camera 130 and/or the server 110 and communicated to one or more other devices are described herein.
- The server 110 may monitor operations of a camera 130 (and/or activity of a user) via the network 120. For instance, the server 110 may identify a subject a user is recording images of and, optionally, use that information to direct the camera 130 to ask if the user would like to e-mail or otherwise transmit a copy of the captured image to the subject.
- Devices in communication with each other need not be continually transmitting to each other. On the contrary, such devices need only transmit to each other as necessary, and may actually refrain from exchanging data most of the time. For example, a device in communication with another device via the Internet may not transmit data to the other device for weeks at a time.
- Various processes may be performed by the camera 130 in conjunction with the server 110.
- For example, some steps of a described process may be performed by the camera 130, while other steps are performed by the server 110.
- Data useful in providing some of the described functionality may be stored on one or both of the camera 130 and the server 110 (or other devices).
- In some embodiments, the servers 110 may not be necessary and/or may not be preferred.
- Accordingly, some embodiments of the present invention may be practiced using a camera 130 alone, as described herein.
- In such embodiments, one or more functions described as being performed by the server 110 may be performed by the camera 130, and some or all of the data described as being stored on a server 110 may be stored on the camera 130 or on another device in communication with the camera 130 (e.g., another camera, a personal digital assistant (PDA)).
- Similarly, in other embodiments, the cameras 130 may not be necessary and/or may not be preferred. Accordingly, one or more functions described herein as being performed by the camera 130 may be performed by the server 110, and some or all of the data described as being stored on the camera 130 may be stored on the server 110 or on another device in communication with the server 110 (e.g., a PDA, a personal computer).
- A server 110 may be embodied in a variety of different forms, including, without limitation, a mainframe computer (e.g., an SGI Origin™ server), a personal computer (e.g., a Dell Dimension™ computer), and a portable computer (e.g., an Apple iBook™ laptop, a Palm m515™ PDA, a Kyocera 7135™ cell phone).
- Turning to FIG. 2, a block diagram of a system 200 according to at least one embodiment of the present invention includes an imaging device 210 in communication (e.g., via a communications network or system bus) with a computing device 220.
- Various exemplary means by which devices may communicate are discussed above with respect to FIG. 1. Although only one imaging device 210 and one computing device 220 are depicted in FIG. 2, it will be understood that any number and type of imaging devices 210 may communicate with any number of computing devices 220 .
- The imaging device 210 preferably comprises at least one device or component for recording an image, such as, without limitation, an image sensor, a camera, or a handheld device having an integrated camera.
- A lens and an image sensor may each be referred to individually as an imaging device, or, alternatively, two or more such components may be referred to collectively as an imaging device (e.g., as embodied in a camera or PDA).
- Further, a device embodying any such components (e.g., a camera) may itself be referred to as an imaging device.
- The imaging device 210 may further comprise one or more types of computing devices, such as those based on the Intel Pentium® processor, adapted to communicate with the computing device 220.
- Many types of cameras include an imaging device (e.g., an image sensor for capturing images) and a computing device (e.g., a processor for executing camera functions).
- As shown in FIG. 3, a block diagram of a system 300 includes a camera 310 in communication (e.g., via a communications network) with a server 340.
- The camera 310 itself comprises an imaging device 320 (e.g., an image sensor and/or lens) and a computing device 330 (e.g., a camera processor) that is in communication (e.g., via a communication port of the computing device 330) with the server 340 (e.g., a Web server).
- A device such as the camera 310, comprising both an imaging device and a computing device, may itself be referred to, alternatively, as an imaging device or a computing device.
- A computer or computing device 220 may comprise one or more processors adapted to communicate with the imaging device 210 (or one or more computing devices of the imaging device 210). As discussed herein, a computer or computing device 220 preferably also comprises a memory (e.g., storing a program executable by the processor) and may optionally comprise a communication port (e.g., for communication with an imaging device 210). Some examples of a computer or computing device 220 include, without limitation: a camera processor, a camera, a server, a PDA, a personal computer, a computer server, a portable hard drive, a digital picture frame, or other electronic device. Thus, a computing device 220 may, but need not, include any devices for capturing images. Some exemplary components of a computing device are discussed in further detail below with respect to FIGS. 4-6.
- In one embodiment, the imaging device 210 comprises a camera (e.g., a camera 130 of FIG. 1) and the computing device 220 comprises a server (e.g., a server 110 of FIG. 1).
- In another embodiment, the system 200 depicts components of a camera or other device capable of recording images.
- In such an embodiment, the imaging device 210 may comprise an image sensor or lens in communication via a camera system bus with a computing device 220 such as a camera computer or integrated communication device (e.g., a mobile phone).
- An imaging device 210 or camera 310 may communicate with one or more other devices (e.g., computing device 220 , server 340 ) in accordance with one or more systems and methods of the invention.
- Examples of devices that an imaging device may communicate with include, without limitation:
- a personal digital assistant (PDA)
- a digital wallet (e.g., the iPod™ by Apple, the MindStor™ from Minds@Work, Nixvue's Digital Album™)
- a portable stereo (e.g., an MP3 music player, a Sony Discman™)
- a digital picture frame (e.g., Iomega's FotoShow™, NORDview's Portable Digital Photo Album™)
- a GPS device (e.g., such as those manufactured by Garmin)
- An imaging device 210 may transfer one or more images to a second device (e.g., computing device 220). Some examples are provided with reference to FIGS. 1-3.
- For example, an imaging device 210 may include a wireless communication port that allows the camera to transmit images to a second electronic device (e.g., a computer server, personal computer, portable hard drive, digital picture frame, or other electronic device). The second electronic device may then store copies of the images. After transferring the images to this second electronic device, the imaging device 210 may optionally delete the images, since the images are now stored securely on the second electronic device.
- In another example, the camera 310 may include a cellular telephone or be connected to a cellular telephone with wireless modem capabilities (e.g., a cellular telephone on a 2.5G or 3G wireless network). Using the cellular telephone, the camera may transmit one or more images to the computer server 340, which may store the images.
- An imaging device 210 may communicate with a portable hard drive such as an Apple iPod™. To free up memory on the imaging device 210, the imaging device 210 may transfer images to the portable hard drive.
- The camera 130 may have a wireless Internet connection (e.g., using the 802.11 wireless protocol) and use this connection to transmit images to a personal computer that is connected to the Internet.
- By transferring images to a second electronic device, the camera may effectively expand its available memory. That is, some or all of the memory on the second electronic device may be available to the camera for storing images.
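- A simple sketch of this offload-then-delete pattern follows; the transport (Bluetooth, 802.11, USB, and so on) is abstracted into a send call that is assumed for illustration.

```python
# Sketch: free camera memory by copying images to a second device, deleting
# local copies only after the transfer is acknowledged.

class SecondDevice:
    def __init__(self):
        self.stored = {}

    def send(self, name, data):
        self.stored[name] = data
        return True  # acknowledgement that the copy is safely stored

def offload(camera_images: dict, device: SecondDevice):
    for name in list(camera_images):
        if device.send(name, camera_images[name]):
            del camera_images[name]  # reclaim local memory

images = {"IMG_0001.jpg": b"...", "IMG_0002.jpg": b"..."}
offload(images, SecondDevice())
print(images)  # {} -- local memory reclaimed; copies live on the second device
```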
- A camera 310 or other imaging device 210 may communicate with an electronic device to output a question to a user.
- For example, a camera may transmit a question to a user's PDA.
- The question may then be displayed to the user by the PDA.
- A PDA or other device with a relatively large display may make it easier for a user to view a question (e.g., a question that includes a large amount of text or a question which is based on an image).
- A digital camera may queue up a plurality of questions and output these questions to a user's personal computer when the user uploads photos from the camera to the personal computer.
- The personal computer may run software that outputs the questions to the user and enables the user to respond to the questions. Viewing questions on a personal computer may be more convenient than viewing questions using the digital camera.
- Note, however, that a user's response to a question may be less useful to the camera (e.g., in adjusting settings on the camera) if this response is provided after the user has already finished capturing images.
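- Such a queue might be sketched as below, including the caveat that an answer arriving after the session may no longer be able to affect capture settings; the fields and the staleness rule are illustrative assumptions.

```python
# Sketch: queue questions on the camera, flush them at upload time, and flag
# answers that can no longer influence capture settings.
import time

question_queue = []

def queue_question(text, affects_settings, expires_at=None):
    question_queue.append({"text": text,
                           "affects_settings": affects_settings,
                           "expires_at": expires_at})

def flush_to_pc(now=None):
    now = time.time() if now is None else now
    for q in question_queue:
        stale = (q["affects_settings"] and q["expires_at"] is not None
                 and now > q["expires_at"])
        note = " (answer will only be used for tagging)" if stale else ""
        print(q["text"] + note)
    question_queue.clear()

queue_question("Were these photos taken at a wedding?", affects_settings=False)
queue_question("Are you at the beach?", affects_settings=True,
               expires_at=time.time() - 1)  # capture session already over
flush_to_pc()
```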
- A camera or other imaging device may communicate with an electronic device to receive an input from a user.
- A user may use a PDA to indicate a response to a question, and the PDA may then transmit an indication of this response to the camera using a Bluetooth communication link.
- A user may highlight a portion of an image, select a response from a list of responses, or write a free-form response using the stylus on his PDA.
- Providing an input to the camera using a PDA or other electronic device may be particularly convenient for a user because the PDA may include one or more input devices that are not present on the camera (e.g., a touch screen, a GPS device).
- A user may carry a GPS device that is separate from the camera but that communicates with the camera using a USB cable. In order to indicate his location, the user may transmit an indication of his latitude and longitude from the GPS device to the camera.
- All user control of a camera may be implemented through a user's cellular telephone. For example, the user may use his cellular telephone to remotely operate the camera, pressing the “1” and “2” keys to zoom in and zoom out, the “3” key to capture a picture, and the “4” and “5” keys to answer “Yes” and “No” to questions output by the camera.
- One advantage of having a second device implement the camera's controls is that the camera itself can have a very small form factor while still being operable by a large number of controls, because all of those controls reside on the second device.
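- The key mapping described above might look like this sketch; the command names and the wireless link are hypothetical stand-ins.

```python
# Sketch: route cellular-telephone keypresses to camera commands.
KEYMAP = {
    "1": "zoom_in",
    "2": "zoom_out",
    "3": "capture",
    "4": "answer_yes",
    "5": "answer_no",
}

def on_phone_key(key: str):
    command = KEYMAP.get(key)
    if command:
        print("sending to camera:", command)  # would travel over the wireless link

for key in "11334":  # zoom in twice, capture, then answer "Yes" to a question
    on_phone_key(key)
```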
- Referring to FIG. 4, illustrated therein is a block diagram of an embodiment 400 of computing device 220 (FIG. 2) or computing device 330 (FIG. 3).
- The computing device 400 may be implemented as a system controller, a dedicated hardware circuit, an appropriately programmed general-purpose computer, or any other equivalent electronic, mechanical or electromechanical device.
- The computing device 400 may comprise, for example, a server computer operable to communicate with one or more client devices, such as an imaging device 210.
- The computing device 400 may be operative to manage the system 100, the system 200, the system 300, and/or the camera 310 and to execute various methods of the present invention.
- The computing device 400 may function under the control of a user, remote operator, image storage service provider, or other entity that may also control use of an imaging device 210 and/or computing device 220.
- For example, the computing device 400 may be a Web server maintained by an Internet services provider, or may be a computer embodied in a camera 310 or camera 130.
- In some embodiments, the computing device 400 and an imaging device 210 may be different devices.
- In other embodiments, the computing device 400 and the imaging device 210 may be the same device.
- The computing device 400 may comprise more than one computer operating together.
- The computing device 400 comprises a processor 405, such as one or more Intel Pentium® processors.
- The processor 405 is in communication with a memory 410 and with a communication port 495 (e.g., for communicating with one or more other devices).
- The memory 410 may comprise an appropriate combination of magnetic, optical and/or semiconductor memory, and may include, for example, Random Access Memory (RAM), Read-Only Memory (ROM), a compact disc and/or a hard disk.
- The processor 405 and the memory 410 may each be, for example: (i) located entirely within a single computer or other device; or (ii) connected to each other by a remote communication medium, such as a serial port cable, telephone line or radio frequency transceiver.
- The computing device 400 may comprise one or more devices that are connected to a remote server computer for maintaining databases.
- The memory 410 stores a program 415 for controlling the processor 405.
- The processor 405 performs instructions of the program 415, and thereby operates in accordance with the present invention, and particularly in accordance with the methods described in detail herein.
- The program 415 may be stored in a compressed, uncompiled and/or encrypted format.
- The program 415 furthermore includes program elements that may be necessary, such as an operating system, a database management system and “device drivers” for allowing the processor 405 to interface with computer peripheral devices. Appropriate program elements are known to those skilled in the art, and need not be described in detail herein.
- Non-volatile media include, for example, optical or magnetic disks, such as memory 410 .
- Volatile media include dynamic random access memory (DRAM), which typically constitutes the main memory.
- Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor 405 . Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications.
- Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
- Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to processor 405 (or any other processor of a device described herein) for execution.
- The instructions may initially be borne on a magnetic disk of a remote computer.
- The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
- A modem local to a computing device 400 (or, e.g., a server 340) can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal.
- An infrared detector can receive the data carried in the infrared signal and place the data on a system bus for processor 405 .
- The system bus carries the data to main memory, from which processor 405 retrieves and executes the instructions.
- The instructions received by main memory may optionally be stored in memory 410 either before or after execution by processor 405.
- Instructions may be received via communication port 495 as electrical, electromagnetic or optical signals, which are exemplary forms of carrier waves that carry data streams representing various types of information.
- The computing device 400 may thus obtain instructions in the form of a carrier wave.
- The instructions of the program 415 may be read into a main memory from another computer-readable medium, such as from a ROM to RAM. Execution of sequences of the instructions in program 415 causes processor 405 to perform the process steps described herein.
- In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of the present invention.
- Thus, embodiments of the present invention are not limited to any specific combination of hardware and software.
- The memory 410 also preferably stores a plurality of databases, including a settings database 420, an image database 425, a question database 430, a determination condition database 435, an output condition database 440, a response database 445, an event log 450, and an expiring information database 455. Examples of each of these databases are described in detail below, and example structures are depicted with sample entries in the accompanying figures.
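- As a rough illustration only (the actual structures appear in FIGS. 7-14), rows in a few of these databases might look like the sketch below; every table and column name here is an assumption.

```python
# Sketch: toy rows for some of the databases named above.
settings_db = [
    {"setting": "white_balance", "value": "auto"},
    {"setting": "iso", "value": 100},
]

question_db = [
    {"question_id": "Q1", "text": "Are you at the beach?", "answers": ["Yes", "No"]},
]

response_db = [
    {"question_id": "Q1", "response": "Yes", "action": "apply_preset:beach"},
]

expiring_info_db = [
    {"fact": "user_is_at_beach", "expires": "2004-06-01T18:00"},
]

def action_for(question_id, response):
    """Look up what the camera should do with a given response."""
    for row in response_db:
        if row["question_id"] == question_id and row["response"] == response:
            return row["action"]
    return None

print(action_for("Q1", "Yes"))  # -> "apply_preset:beach"
```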
- Although these databases are described with respect to FIG. 4 as being stored in one computing device, in other embodiments of the present invention some or all of these databases may be partially or wholly stored in another device, such as one or more imaging devices 210, one or more of the cameras 130, one or more of the servers 110 or 340, another device, or any combination thereof. Further, some or all of the data described as being stored in the databases may be partially or wholly stored (in addition to or in lieu of being stored in the memory 410 of the computing device 400) in a memory of one or more other devices.
- Referring to FIG. 5, illustrated therein is a block diagram of an embodiment 530 of a camera (e.g., camera 130 of FIG. 1, camera 310 of FIG. 3) in communication (e.g., via a communications network) with a server 550.
- The camera 530 may be implemented as a system controller, a dedicated hardware circuit, an appropriately configured computer, or any other equivalent electronic, mechanical or electromechanical device.
- The camera 530 may comprise, for example, any of various types of cameras well known in the art, including, without limitation, a still camera, a digital camera, an underwater camera, and a video camera.
- A still camera, for example, typically includes functionality to capture images that may be displayed individually.
- A single lens reflex (SLR) camera is one example of a still camera.
- A video camera typically includes functionality to capture movies or video (i.e., one or more sequences of images typically displayed in succession).
- A still image, movie file or video file may or may not include or be associated with recorded audio. It will be understood by those skilled in the art that some types of cameras, such as the PowerShot A40™ by Canon U.S.A., Inc., may include functionality to capture movies and functionality to capture still images.
- The camera 530 may comprise any or all of the cameras 130 of system 100 (FIG. 1) or the imaging device 210 (FIG. 2).
- A user device such as a PDA or cell phone may be used in place of, or in addition to, some or all of the camera 530 components depicted in FIG. 5.
- A camera may comprise a computing device or other device operable to communicate with another computing device (e.g., a server 110).
- The camera 530 comprises a processor 505, such as one or more Intel Pentium® processors.
- The processor 505 is in communication with a memory 510 and a communication port 520 (e.g., for communicating with one or more other devices).
- The memory 510 may comprise an appropriate combination of magnetic, optical and/or semiconductor memory, and may include, for example, Random Access Memory (RAM), Read-Only Memory (ROM), a programmable read only memory (PROM), a compact disc and/or a hard disk.
- The memory 510 may comprise or include any type of computer-readable medium.
- The processor 505 and the memory 510 may each be, for example: (i) located entirely within a single computer or other device; or (ii) connected to each other by a remote communication medium, such as a serial port cable, telephone line or radio frequency transceiver.
- The camera 530 may comprise one or more devices that are connected to a remote server computer for maintaining databases.
- Memory 510 of camera 530 may comprise an image buffer (e.g., a high-speed buffer for transferring images from an image sensor) and/or a flash memory (e.g., a high-capacity, removable flash memory card for storing images).
- Memory may be volatile or non-volatile; may be electronic, capacitive, inductive, or magnetic in nature; and may be accessed sequentially or randomly.
- Memory may or may not be removable from a camera. Many types of cameras may use one or more forms of removable memory, such as chips, cards, and/or discs, to store and/or to transfer images and other data. Some examples of removable media include CompactFlash™ cards, SmartMedia™ cards, Sony Memory Sticks™, MultiMediaCards™ (MMC) memory cards, Secure Digital™ (SD) memory cards, IBM Microdrives™, CD-R and CD-RW recordable compact discs, and DataPlay™ optical media.
- The memory 510 stores a program 515 for controlling the processor 505.
- The program 515 may comprise instructions (e.g., Digita® imaging software, image recognition software) for capturing images and/or for one or more other functions.
- The processor 505 performs instructions of the program 515, and thereby operates in accordance with the present invention, and particularly in accordance with the methods described in detail herein.
- The program 515 may be stored in a compressed, uncompiled and/or encrypted format.
- The program 515 furthermore includes program elements that may be necessary, such as an operating system, a database management system and “device drivers” for allowing the processor 505 to interface with computer peripheral devices. Appropriate program elements are known to those skilled in the art, and need not be described in detail herein.
- The instructions of the program 515 may be read into a main memory from another computer-readable medium, such as from a ROM to RAM. Execution of sequences of the instructions in program 515 causes processor 505 to perform the process steps described herein. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of the present invention. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software. As discussed with respect to system 100 of FIG. 1, execution of sequences of the instructions in a program of a server 110 in communication with camera 530 may also cause processor 505 to perform some of the process steps described herein.
- The memory 510 optionally also stores one or more databases, such as the exemplary databases described with respect to FIG. 4.
- An example of a camera memory 510 storing various databases is discussed herein with respect to FIG. 6.
- The processor 505 is preferably also in communication with one or more imaging devices 535 (e.g., a lens, an image sensor) embodied in the camera 530.
- Various types of imaging devices are discussed herein and in particular with respect to FIG. 6.
- The processor 505 is preferably also in communication with one or more input devices 525 (e.g., a button, a touch screen) and output devices 540.
- Various types of input devices and output devices are described herein and in particular with respect to FIG. 6.
- Such one or more output devices 540 may comprise, for example, an audio speaker (e.g., for outputting a question to a user), an infra-red transmitter (e.g., for transmitting a suggested meta-tag to a user's PDA), a display device (e.g., a liquid crystal display (LCD)), a radio transmitter, and a printer (e.g., for printing an image).
- An input device 525 is capable of receiving an input (e.g., from a user or another device) and may be a component of camera 530 .
- An input device may communicate with or be part of another device (e.g., a server, a PDA).
- Common input devices include a button or dial.
- Some other examples of input devices include: a keypad, a button, a handle, a touch screen, a microphone, an infrared sensor, a voice recognition module, a motion detector, a network card, a universal serial bus (USB) port, a GPS receiver, a radio frequency identification (RFID) receiver, an RF receiver, a thermometer, a pressure sensor, and an infra-red port (e.g., for receiving communications from a second camera or another device such as a smart card or PDA of a user).
- Referring to FIG. 6, illustrated therein is a more detailed block diagram of an embodiment 600 of a camera (e.g., camera 130 of FIG. 1, camera 530 of FIG. 5).
- The camera 600 comprises a processor 605, such as one or more Intel Pentium® processors.
- The processor 605 is in communication with a memory 610 and a communication port 695 (e.g., for communicating with one or more other devices).
- The memory 610 may comprise or include any type of computer-readable medium, and stores a program 615 for controlling the processor 605.
- The processor 605 performs instructions of the program 615, and thereby operates in accordance with various processes of the present invention, and particularly in accordance with the methods described in detail herein.
- The memory 610 stores a plurality of databases, including a settings database 620, an image database 625, a question database 630, a determination condition database 635, an output condition database 640, a response database 645, an event log 650, and an expiring information database 655. Examples of each of these databases are described in detail below, and example structures are depicted with sample entries in the accompanying figures.
- The processor 605 is preferably also in communication with a lens 660 (e.g., made of glass), an image sensor 665, one or more controls 670 (e.g., an exposure control), one or more sensors 675 (e.g., a light detection device), one or more output devices 680 (e.g., a liquid crystal display (LCD)), and a power supply 685 (e.g., a battery, a fuel cell, a solar cell).
- A processor of a camera 600 may be capable of executing instructions (e.g., stored in memory 610) such as software (e.g., for wireless and/or digital imaging, such as Digita® software from Flashpoint Technology, Inc.).
- A camera may include one or more input devices capable of receiving data, signals, and indications from various sources.
- Lenses, sensors, communication ports and controls are well known types of input devices.
- Lenses that may be used with cameras are well known, including telephoto, wide-angle, macro, and zoom lenses.
- An image sensor may be an area that is responsive to light and may be used to capture an image.
- An image sensor may or may not be an electronic device.
- Some examples of image sensors include, without limitation: a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor) image sensor, such as the X3® PRO 10M™ CMOS image sensor by Foveon.
- An image sensor may comprise software or other means for image identification/recognition. “Image sensor” may be most often used to refer to an electronic image sensor, but those skilled in the art will recognize that various other technologies (e.g., a light sensitive film like that used in analog cameras) may also function as image sensors.
- A camera may include one or more output devices.
- Output devices include, without limitation: a display (e.g., a color or black-and-white liquid crystal display (LCD) screen), an audio speaker (e.g., for outputting questions), a printer (e.g., for printing images), a light emitting diode (LED) (e.g., for indicating that a self-timer is functioning, for indicating that a question for the user is pending), and a touch screen.
- A display may be useful, for example, for displaying images and/or for displaying camera settings.
- The camera may also include one or more communication ports for use in communicating with one or more other devices.
- A USB (universal serial bus) or FireWire® (IEEE-1394 standard) connection port may be used to exchange images and other types of data with a personal computer or digital wallet (e.g., an Apple iPod™).
- The camera may be in communication with a cellular telephone, personal digital assistant (PDA) or other wireless communications device. Images and other data may be transmitted to and from the camera using this wireless communications device.
- The SH251i™ cellular telephone by Sharp Corporation, for example, includes a 3.1 megapixel CCD camera and allows users to receive image files via e-mail.
- A camera may include a radio antenna for communicating with a radio beacon.
- A subject of a photo may carry a radio beacon that may communicate with the camera and provide information that is useful in determining settings for the camera (e.g., information about the light incident on the subject).
- A camera may include one or more controls or other input devices.
- Controls include, without limitation: a button (e.g., a shutter button), a switch (e.g., an on/off switch), a dial (e.g., a mode selection dial), a keypad, a touch screen, a microphone, a bar code reader (e.g., such as the one on the 1991 version of the Canon EOS Elan™), a remote control (e.g., such as the one on the Canon PowerShot G2™), a sensor, a trackball, a joystick, a slider bar, and a continuity sensor.
- Controls on a camera or other type of imaging device may be used to perform a variety of functions.
- A control may be used, without limitation, to adjust a setting or other parameter, provide a response to a question, or operate the camera.
- A user may press the shutter button on the camera to capture an image.
- Controls may be used to adjust one or more settings on the camera.
- A user may use “up” and “down” buttons on a camera to adjust the white balance on the camera.
- A user may use a mode dial on the camera to select a plurality of settings simultaneously.
- A user may use a control to indicate to the camera that he would like a question to be output as an audio recording, or to adjust any of various other types of parameters of how the camera is to operate and/or interact with the user.
- Controls may be used to provide an indication to the camera.
- A user may use a control to indicate that he would like to have a question output to him.
- A user may use a control to provide a response to a question or to provide other information, such as indicating that the user is in a room with fluorescent lights, at a beach, or capturing images of a football game.
- Examples of sensors that a camera may include are, without limitation: a light sensor (e.g., for determining the distance to a subject), a microphone (e.g., for recording audio that corresponds to a scene), a global positioning system (GPS) device, a camera orientation sensor (e.g., an electronic compass, a tilt sensor), an altitude sensor, a humidity sensor, a clock (e.g., indicating the time of day, day of the week, month, year), and a temperature/infrared sensor.
- A microphone may be useful for allowing a user to control the camera using voice commands.
- Voice recognition software (e.g., ViaVoice™ from IBM Voice Systems) may be used to interpret such voice commands.
- A setting for a camera may be a parameter that affects how the camera operates (e.g., how the camera captures at least one image).
- Examples of types of settings on a camera include, without limitation: exposure settings, lens settings, digitization settings, flash settings, multi-frame settings, power settings, output settings, function settings, and mode settings. Some more detailed examples of these types of settings are discussed further below.
- Exposure settings may affect the exposure of a captured image.
- Examples of exposure settings include, without limitation: shutter speed, aperture, image sensor sensitivity (e.g., measured as ISO or ASA), white balance, color hue, and color saturation.
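- As an illustration, exposure settings could be grouped into a structure such as the following, with a preset akin to the “Fluorescent Light” mode mentioned earlier; the field names and values are invented for this sketch.

```python
# Sketch: exposure settings as one adjustable unit.
from dataclasses import dataclass

@dataclass
class ExposureSettings:
    shutter_speed_s: float  # seconds
    aperture_f: float       # f-number
    iso: int                # image sensor sensitivity
    white_balance: str

FLUORESCENT_LIGHT = ExposureSettings(
    shutter_speed_s=1 / 60, aperture_f=2.8, iso=400, white_balance="fluorescent")

print(FLUORESCENT_LIGHT)
```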
- Lens settings may affect properties of a lens on the camera. Examples of lens settings include, without limitation: focus (e.g., near or far), optical zoom (e.g., telephoto, wide angle), optical filters (e.g., ultraviolet, prism), an indication of which lens to use (e.g., for a camera that has multiple lenses) or which portion of a lens, field of view, and image stabilization (e.g., active or passive image stabilization).
- Digitization settings may affect how the camera creates a digital representation of an image.
- Examples of digitization settings include, without limitation: resolution (e.g., 1600×1200 or 640×480), compression (e.g., for an image that is stored in JPG format), color depth/quantization, digital zoom, and cropping.
- a cropping setting may indicate how the camera should crop an acquired digital image when storing it to memory.
- Flash settings may affect how the flash on the camera operates.
- flash settings include, without limitation: flash brightness, red-eye reduction, and flash direction (e.g., for a bounce flash).
- Multi-frame settings may affect how the camera captures a plurality of related images. Examples of multi-frame settings include, without limitation: a burst mode (e.g., taking a plurality of pictures in response to one press of the shutter button), auto-bracketing (e.g., taking a plurality of pictures with different exposure settings), a movie mode (e.g., capturing a movie), and image combination (e.g., using Canon's PhotoStitch™ program to combine a plurality of images into a single image).
- Power settings may affect the supply of power to one or more of the camera's electronic components.
- Examples of power settings include, without limitation: on/off and “Power-Save” mode (e.g., various subsystems on a camera may be put into “Power-Save” mode to prolong battery life).
- Output settings may affect how the camera outputs information (e.g., to a user, to a server, to another device).
- Examples of output settings include, without limitation: language (e.g., what language is used to output prompts, questions, or other information to a user), viewfinder settings (e.g., whether a digital viewfinder on the camera is enabled, how a heads-up-display outputs information to a user), audio output settings (e.g., whether the camera beeps when it captures an image, whether questions may be output audibly), and display screen settings (e.g., how long the camera displays images on its display screen after capturing them).
- a camera may be operable to capture images and to perform one or more of a variety of other functions.
- a function setting may cause one or more functions to be performed (and/or prevent one or more functions from being performed). For example, if an auto-rotate setting on a camera is enabled, then the camera may automatically rotate a captured image so that it is stored and displayed right side up, even if the camera was held at an angle when the image was captured.
- Examples of functions that may be performed by a camera include, without limitation: modifying an image (e.g., cropping, filtering, editing, adding meta-tags), cropping an image (e.g., horizontal cropping, vertical cropping, aspect ratio), rotating an image (e.g., 90 degrees clockwise), filtering an image with a digital filter (e.g., emboss, remove red-eye, sharpen, add shadow, increase contrast), adding a meta-tag to an image, displaying an image (e.g., on an LCD screen of the camera), and transmitting an image to another device (e.g., a personal computer, a printer, a television).
- One way to adjust a setting on the camera is to change the camera's mode. For example, if the camera were to be set to “Fluorescent Light” mode, then the settings of the camera would be adjusted to the exemplary values listed in this column (i.e., the aperture would be set to automatic, the shutter speed would be set to 1/125 sec, the film speed would be set to 200 ASA, etc.).
- a mode refers to one or more parameters that may affect the operation of the camera.
- a setting may be one type of parameter. Indicating a mode to the camera may be a convenient way of adjusting a plurality of settings on the camera (e.g., as opposed to adjusting each setting individually).
- There are many types of modes. Some types, for example, may affect settings (e.g., how images are captured) and other modes may affect outputting questions. Some exemplary modes are discussed herein, without limitation, and other types of modes will be apparent to those skilled in the art in light of the present disclosure.
- a “Sports” mode for example, may describe settings appropriate for capturing images of sporting events (e.g., fast shutter speeds).
- a user may operate a control (e.g., a dial) to indicate that the camera should be in “Sports” mode, in which the shutter speed on the camera is faster than 1/250 sec and burst capturing of three images is enabled.
- An exemplary “Fluorescent Light” mode may establish settings appropriate for capturing images under fluorescent lights (e.g., white balance).
- a “Sunny Beach” mode may describe settings appropriate for capturing images on sunny beaches, and a “Sunset” mode may describe settings appropriate for capturing images of sunsets (e.g., neutral density filter).
- An exemplary “Portrait” mode may establish settings appropriate for capturing close-up images of people (e.g., adjusting for skin tones).
- an exemplary tabular representation 700 illustrates one embodiment of settings database 420 (or settings database 620 ) that may be stored in an imaging device 210 and/or computing device 220 .
- the tabular representation 700 of the settings database includes a number of example records or entries, each defining a setting that may be enabled or adjusted on an imaging device such as camera 130 or camera 600 .
- the settings database may include any number of entries.
- the tabular representation 700 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a setting 705 , (ii) a current value 710 that indicates the present value or state of the corresponding setting, (iii) a value 715 that indicates an appropriate value for when the camera is in a “Fluorescent Light” mode, (iv) a value 720 that indicates an appropriate value for when the camera is in a “Sunny Beach” mode, and (v) a value 725 that indicates an appropriate value for when the camera is in a “Sunset” mode.
- the settings database may be useful, for example, for determining the current value 710 of a given setting (e.g., “aperture”). Also, as depicted in tabular representation 700 , one or more values may be established for association with a given mode. For example, tabular representation 700 indicates that if the mode of the camera is “Fluorescent Light,” the “aperture” setting will be changed to “auto.”
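- By way of non-limiting illustration, the Python sketch below models a settings database along the lines of tabular representation 700, with a current value and per-mode values for each setting; the concrete settings, values, and dict layout are assumptions for illustration, not a required implementation:

    # Minimal sketch of a settings database (cf. tabular representation 700).
    # Each setting carries a current value plus values to apply in certain modes.
    SETTINGS_DB = {
        "aperture":      {"current": "f/5.6",    "Fluorescent Light": "auto"},
        "shutter speed": {"current": "1/60 sec", "Fluorescent Light": "1/125 sec"},
        "film speed":    {"current": "100 ASA",  "Fluorescent Light": "200 ASA"},
    }

    def apply_mode(db, mode):
        """Adjust a plurality of settings at once by indicating a mode."""
        for values in db.values():
            if mode in values:
                values["current"] = values[mode]

    apply_mode(SETTINGS_DB, "Fluorescent Light")
    print(SETTINGS_DB["aperture"]["current"])  # -> auto

- Keying mode-specific values by mode name makes adding a new mode (e.g., “Sunny Beach”) a matter of adding one entry per affected setting.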
- an exemplary tabular representation 800 illustrates one embodiment of image database 425 (or image database 625 ) that may be stored, for example, in a server 110 and/or camera 130 .
- the tabular representation 800 of the image database includes a number of example records or entries, each defining a captured image. Those skilled in the art will understand that the image database may include any number of entries.
- the tabular representation 800 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) an image identifier 805 that uniquely identifies an image, (ii) an image format 810 that indicates the file format of the image, (iii) an image size 815 , (iv) a file size 820 , (v) a time 825 that indicates when the image was captured, and (vi) meta-data 830 that indicates any of various types of supplemental information (e.g., keyword, category, subject, description, location, camera settings when the image was captured) associated with the image.
- meta-data 830 includes position (e.g., GPS), orientation, altitude, exposure settings (aperture/shutter speed), illumination (daylight/tungsten/fluorescent/IR/flash), lens setting (distance/zoom position/macro), scene data (blue sky/water/grass/faces), subject motion, image content (e.g., subjects), sound annotations, date and time, preferred cropping, and scale.
- a camera may automatically assign an identifier to an image, or a user may use a control (e.g., a keypad) on a camera to indicate an identifier for an image.
- the image database may be useful for various types of processes described herein.
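- As a non-limiting sketch, an image database record mirroring the fields of tabular representation 800 might be built as follows in Python; the helper name, default values, and dict layout are assumptions:

    import datetime

    def make_image_record(image_id, image_bytes, fmt="JPG", size=(1600, 1200)):
        """Build a record with the fields of tabular representation 800."""
        return {
            "image identifier": image_id,  # assigned automatically or via a keypad
            "image format": fmt,
            "image size": size,
            "file size": len(image_bytes),
            "time": datetime.datetime.now(),
            "meta-data": {},               # keyword, category, location, ...
        }

    record = make_image_record("WEDDING-01", image_bytes=b"\x00" * 1024)
    record["meta-data"]["location"] = "indoors"  # e.g., from a user's response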
- a camera and/or server may output various different types of questions to a user.
- a question may comprise a request for information from a user.
- a camera may output a question to a user in order to determine information useful in applying meta-information to an image or in capturing one or more images (e.g., information about lighting, information about subjects, information about a scene).
- Examples of different types of questions include, without limitation: questions about lighting, questions about people and subjects of images, questions about focus and depth of field, questions about meta-tagging and sorting, questions about events and locations, questions about the environment, questions about scenes, questions about future plans, questions about priorities, and questions about images.
- Some examples of questions about people and subjects of images include:
- Examples of questions about events and locations include, without limitation:
- Examples of questions about the environment include, without limitation:
- Examples of questions about future plans include:
- Different types of questions may elicit different types of responses from a user. For example, questions may be classified according to the types of responses they are designed to elicit. Some examples of questions classified in this manner include:
- ratings (e.g., a user may be asked to rate how much he likes an image)
- a question may be phrased in the first person. Some types of users may find this personification of the camera appealing. Various exemplary ways that a question may be output to a user are discussed herein.
- an exemplary tabular representation 900 illustrates one embodiment of the question database 430 (FIG. 4) that may be stored in the computing device 400 .
- the tabular representation 900 of the question database includes a number of example records or entries, each defining a question that may be output by a server 110 or by a camera 130 .
- the question database may include any number of entries.
- the tabular representation 900 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a question identifier 905 that uniquely identifies a particular question, (ii) a question to ask 910 that includes an indication (e.g., in text) of a question to output (e.g., to a user, to transmit to a camera 130 ), and (iii) a potential response 915 that indicates one or more potential responses to the corresponding question (e.g., multiple-choice answers, acceptable answers, answers to suggest).
- the question to ask 910 and potential responses 915 may be used, for example, in accordance with various embodiments described herein for outputting a question to a camera user.
- if a question is a multiple-choice question, a plurality of potential answers 915 may be presented to the user. The user may then answer the question by selecting one of the potential answers.
- question identifier 905 may be used (e.g., in conjunction with a response database) in ensuring that a camera does not repeat the same question.
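- The following Python sketch illustrates, under assumed names and layouts, how a question database record with potential responses might be used to output a multiple-choice question while ensuring the same question is not repeated:

    QUESTION_DB = {
        "QUES-123478-01": {
            "question to ask": "Are you indoors or outdoors?",
            "potential responses": ["Indoors", "Outdoors"],
        },
    }
    asked = set()  # question identifiers already output (cf. a response database)

    def output_question(qid):
        if qid in asked:
            return None              # do not repeat the same question
        asked.add(qid)
        entry = QUESTION_DB[qid]
        print(entry["question to ask"])
        for i, answer in enumerate(entry["potential responses"], start=1):
            print(f"  {i}. {answer}")  # the user selects one potential answer
        return entry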
- a condition may comprise a Boolean expression.
- Boolean expressions include, without limitation:
- a condition may be based on one or more factors. Examples of factors include, without limitation: factors affecting the occurrence of a condition, factors affecting whether a condition is true, factors causing a condition to occur, factors causing a condition to become true, and factors affecting the output of a message.
- Examples of factors include, without limitation: factors related to images, indications by a user, time-related factors, factors relating to a state of the camera, information from sensors, characteristics of a user, and information from a database.
- Examples of factors relating to indications by a user include, without limitation: usage of controls (e.g., a shutter button, an aperture setting, an on/off setting), voice commands (e.g., recorded by a microphone), movement of the camera (e.g., observed using an orientation sensor), ratings provided (e.g., a user may rate the quality of an image or how much he likes an image), and responses to previous questions. For example, the camera may use feedback to determine the next question in a series of questions to ask a user.
- time-related factors include, without limitation: the duration of a condition (e.g., for the last ten seconds, for a total of fifteen minutes), the current time of day, week, month, or year (e.g., 12:23 p.m. Sep. 6, 2002), a duration of time after a condition occurs (e.g., two seconds after a previous image is captured), and an estimated amount of time until a condition occurs (e.g., ten minutes until the camera's batteries run out, twenty minutes before the sun goes down).
- factors relating to the state of the camera include, without limitation: current and past settings (e.g., shutter speed, aperture, mode), parameters that affect the operation of the camera, current and past modes (e.g., “Sports” mode, “Manual” mode, “Macro” mode, “Twilight” mode, “Fluorescent Light” mode, “Silent” mode, “Portrait” mode, “Output Upon Request” mode, “Power-Save” mode), images stored in memory (e.g., total images stored, amount of memory remaining), current mode (e.g., “Sports” mode), and battery charge level.
- Examples of factors relating to information from sensors include, without limitation: location of the camera (e.g., determined with a GPS sensor), orientation of the camera (e.g., determined with an electronic compass), ambient light (e.g., determined with a light sensor), sounds and audio (e.g., determined with a microphone), the range to a subject (e.g., as determined using a range sensor), a lack of movement of the camera (e.g., indicating that the user is aiming the camera), signals from other devices (e.g., a radio beacon carried by a subject, a second camera), and temperature (e.g., determined by a temperature/infrared sensor).
- the camera may ask different questions to different types of users.
- factors relating to characteristics of a user include, without limitation: preferences for capturing images (e.g., likes high-contrast pictures, likes softening filters, saves images at best-quality JPG compression), habits when operating the camera (e.g., forgets to take off the lens cap, turns the camera on and off a lot), appearance (e.g., is the user of the camera in an image captured using a self-timer?), characteristics of users other than the current user of the camera (e.g., past users, family members), family and friends, and skill level.
- a skilled user may tend to capture images that are well-composed, correctly exposed, and not blurry. In contrast, a less-experienced user may tend to have trouble capturing high-quality images.
- various embodiments of the present invention provide for information to be stored in one or more databases.
- Some examples of factors relating to information stored in a database include, without limitation:
- the camera may store a determination condition database for storing information related to such conditions, such as the one shown in FIG. 10. For each determination condition stored in the condition database, a corresponding question may be output if the determination condition is true. For example, QUES-123478-02 (“What kind of light bulbs does this room have?”) may be output to a user if the camera is indoors and the flash is turned off.
- an exemplary tabular representation 1000 illustrates one embodiment of the determination condition database 435 (FIG. 4) that may be stored in the computing device 400 .
- the tabular representation 1000 of the determination condition database includes a number of example records or entries, each defining a determination condition that may be useful in determining one or more questions to ask a user of a camera.
- the determination condition database may include any number of entries.
- the tabular representation 1000 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a determination condition 1005 that defines a particular determination condition, and (ii) a question to ask 1010 that includes an identifier of a question corresponding to the determination condition.
- Question to ask 1010 preferably contains a unique reference to a question (e.g., as stored in a corresponding record of a question database).
- a determination condition database may include the actual text of a question and thus may not require a question identifier.
- the determination condition database preferably stores a condition for asking at least one question and at least one question to ask if the condition is true.
- a question may be output to a user if a condition is true. For example, “QUES-123478-02” of FIG. 9 (“What kind of light bulbs does this room have?”) may be output to a user if the camera is indoors and the flash is turned off.
- the question identifier listed in the exemplary question to ask field 1010 may correspond to a question identifier in the question database shown in FIG. 9.
- “QUES-123478-01” refers to the question “Are you indoors or outdoors?” in tabular representation 900 .
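- A determination condition database of this kind can be sketched in Python as a list of Boolean predicates over camera state, each paired with a question identifier; the state keys and the second record below are illustrative assumptions:

    DETERMINATION_CONDITIONS = [
        # (condition over camera state, question identifier to ask if true)
        (lambda s: s["indoors"] and not s["flash on"], "QUES-123478-02"),
        (lambda s: s["battery level"] < 0.15,          "QUES-123478-07"),
    ]

    def questions_to_ask(state):
        """Return identifiers of questions whose determination conditions are true."""
        return [qid for condition, qid in DETERMINATION_CONDITIONS
                if condition(state)]

    state = {"indoors": True, "flash on": False, "battery level": 0.80}
    print(questions_to_ask(state))  # -> ['QUES-123478-02']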
- an exemplary tabular representation 1100 illustrates one embodiment of the output condition database 440 (FIG. 4) that may be stored in the computing device 400 .
- the tabular representation 1100 of the output condition database includes a number of example records or entries, each defining an output condition that may be useful in determining when and/or how to output a question to a user.
- the output condition database may include any number of entries.
- the tabular representation 1100 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a camera mode 1105 , (ii) a question ready indication 1110 , (iii) an output condition 1115 that indicates a condition for outputting a question, (iv) a method 1120 that indicates a preferred method for outputting a question, and (v) an enabled field 1130 that indicates whether the corresponding camera mode (e.g., “Fully Automatic” mode) is presently enabled.
- the output condition database stores information that may be useful in determining when and/or how to output at least one question to a user, as discussed herein. For example, an audio recording of a question may be output when a user presses the camera's shutter button halfway down.
- a mode of a camera may include a collection of one or more parameters that affect when and/or how at least one question is output to a user.
- the camera may have an “Output Upon Request” mode in which one or more questions may be output to a user when the user presses an “Ask Me a Question” button on the camera.
- prior to outputting a question to a user, the camera may output an indication that a question is ready to be output.
- the question ready indication 1110 thus describes what indication (if any) may be output to a user to indicate that the camera has a question for the user. For example, as depicted in tabular representation 1100 of FIG. 11, if the camera is in “Sports” mode, then the camera may “beep” when it determines a question to be output to a user.
- the question itself may then be output at a later time (e.g., when an output condition occurs).
- a question may be output to a user upon the occurrence of one or more output conditions. For example, when the camera is in “Manual” mode, the camera may output a question when the viewfinder is in use (i.e., the user is looking through the viewfinder). In a second example, the camera may output a question after thirty seconds of inactivity if the camera is in “Silent” mode. The method of output field indicates how a question may be output. As described herein, a question may be output to a user in a variety of different ways. For example, a text representation of a question may be displayed on the camera's LCD screen, or an audio recording of a question may be output using an audio speaker.
- the currently enabled field 1130 indicates whether the associated mode (i.e., the mode indicated in the “camera mode” field) is currently enabled. If a mode is enabled, then a question or question ready indication may be output according to the output condition and/or method of output corresponding to that mode. If a mode is disabled, then a question may be output in a different manner (or may not be output at all). It is anticipated that a user may enable and disable modes based on his preferences. For example, if a user is capturing pictures of a musical, then the user may enable “Silent” mode on the camera, so as not to disturb audience members or actors.
- the exemplary embodiment of the output condition database shown in FIG. 11 describes one example of a camera communicating with an electronic device.
- the camera may output a question by transmitting the question to a user's PDA.
- the user may then respond to this question using the PDA (e.g., by selecting a response using the PDA's stylus) and the PDA may transmit the user's response back to the camera.
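- The output condition database lends itself to a similar Python sketch: one record per camera mode, carrying a ready indication, an output condition, a method of output, and an enabled flag (cf. tabular representation 1100). The condition and method values below are assumptions:

    OUTPUT_CONDITIONS = {
        "Sports": {"ready indication": "beep",
                   "output condition": lambda s: s["shutter half pressed"],
                   "method": "audio speaker", "enabled": True},
        "Silent": {"ready indication": None,
                   "output condition": lambda s: s["seconds inactive"] >= 30,
                   "method": "LCD text", "enabled": False},
    }

    def maybe_output(question, state):
        for mode, row in OUTPUT_CONDITIONS.items():
            # disabled modes are skipped; their conditions are never evaluated
            if row["enabled"] and row["output condition"](state):
                return f"output {question!r} via {row['method']} ({mode} mode)"
        return "hold question until an output condition occurs"

    print(maybe_output("QUES-123478-02", {"shutter half pressed": True}))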
- an exemplary tabular representation 1200 illustrates one embodiment of the response database 445 (FIG. 4) that may be stored in the computing device 400 .
- the tabular representation 1200 of the response database includes a number of example records or entries, each defining a response that may be useful in recording responses provided by a user (e.g., in response to a question).
- the response database may include any number of entries.
- the tabular representation 1200 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a question 1205 that indicates a question that was output (e.g., to a user), (ii) a time 1210 that indicates when the question was output, (iii) a response to the question 1215 that includes an indication of the response to the question (e.g., text, an audio file), and (iv) an action 1220 that indicates what (if any) actions were taken based on the corresponding response.
- the first record in the response database shown in FIG. 12A indicates that a user responded “Indoors” to “QUES-123478-01.”
- An indication of what question was output to a user may comprise a question identifier, for example, which may correspond to a question identifier in the question database (e.g., as represented in FIG. 9).
- “QUES-123478-01” refers to the question “Are you indoors or outdoors?” in the question database shown in FIG. 9.
- a question may be output multiple times (e.g., in different situations). For example, “QUES-123478-01” was output to a user at 1:34 p.m. on Aug. 3, 2002 and also at 11:36 p.m. on Aug. 3, 2002.
- the response database may indicate that a user has not responded to a question (e.g., for “QUES-123478-01” at 4:10 p.m. on Aug. 17, 2002).
- a user may have provided a response that is not an answer to the question (e.g., for “QUES-123478-08” at 7:21 p.m. on Aug. 11, 2002).
- the camera may determine to output the question again.
- the camera may perform one or more actions based on a user's response to a question.
- the camera may meta-tag at least one image or adjust one or more settings on the camera based on a user's response to a question.
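- A response database entry, and an action taken on its basis (here, meta-tagging an image), could be recorded along these lines in Python; the function and field names are assumptions, and the image record reuses the layout sketched earlier:

    import datetime

    RESPONSE_DB = []  # cf. tabular representation 1200

    def record_response(question_id, response, image_record=None):
        """Log a response and, optionally, meta-tag an image based on it."""
        action = None
        if response is not None and image_record is not None:
            image_record["meta-data"]["location"] = response.lower()
            action = f"meta-tagged {image_record['image identifier']}"
        RESPONSE_DB.append({
            "question": question_id,
            "time": datetime.datetime.now(),
            "response": response,   # None if the user has not responded
            "action": action,
        })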
- exemplary tabular representations 1300 and 1350 illustrate two embodiments of an event log database 450 (FIG. 4) that may be stored in the computing device 400 .
- An event log may store a list of events that occurred at one or more cameras and/or servers, for example.
- FIGS. 13A and 13B show two examples of event logs that may be stored by the camera and/or a server.
- FIG. 13A shows an exemplary event log for events that occurred on Aug. 3, 2002, generally relating to a user capturing images at a wedding.
- FIG. 13B shows an exemplary event log for events that occurred on Aug. 10, 2002, generally relating to a user capturing images at a beach.
- event logs preferably store an indication of a time when an event occurred and a description of the event.
- Each tabular representation of the event log includes a number of example records or entries, each defining an event.
- the event log may include any number of entries.
- the tabular representation 1300 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) a time of event 1305 that indicates a time that the corresponding event occurred, and (ii) a description of the event 1310 that includes (e.g., in text) a description of what the event was.
- Tabular representation 1350 defines similar fields: a time of event 1355 and a description of the event 1360.
- each of the exemplary events logged has been depicted as occurring at a different time. Of course, two or more events may be logged as occurring at the same time.
- the event times in the exemplary tables are examples only and do not necessarily represent delays that may be associated with processes on the camera.
- although FIG. 13A shows that the camera received a user's response to question “QUES-123478-01” at 1:35 p.m. on Aug. 3, 2002 and then output “QUES-123478-02” at 1:36 p.m. on Aug. 3, 2002, this does not necessarily mean that there is a one-minute delay between receiving a user's response to question “QUES-123478-01” and outputting “QUES-123478-02.”
- the event logs depicted in FIGS. 13A and 13B may not include one or more events that occur at the camera.
- the event log shown in FIG. 13A does not include an event of meta-tagging the image “WEDDING-01” as being captured indoors.
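- An event log of the kind shown in FIGS. 13A and 13B reduces to timestamped descriptions; a minimal Python sketch, with illustrative entries:

    import datetime

    EVENT_LOG = []  # cf. tabular representations 1300 and 1350

    def log_event(description):
        # two or more events may be logged as occurring at the same time
        EVENT_LOG.append({"time of event": datetime.datetime.now(),
                          "description": description})

    log_event('Output question "QUES-123478-01" on LCD screen')
    log_event('Received response "Indoors" from user')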
- FIG. 14 shows an example of an expiring information database that may be stored by a camera, server, or other computing device.
- This database preferably stores data about information that is useful to a camera, as well as an indication of one or more conditions under which this information expires (e.g., after which it should no longer be used by the camera).
- the expiring information database may also indicate one or more actions to perform in response to information expiring.
- an exemplary tabular representation 1400 illustrates one embodiment of the expiring information database 455 (FIG. 4) that may be stored in the computing device 400 .
- the tabular representation 1400 of the expiring information database includes a number of example records or entries, each defining expiring information that may be useful in determining when information (e.g., as collected by a camera and/or server) should expire.
- the expiring information database may include any number of entries.
- the tabular representation 1400 also defines fields for each of the entries or records.
- the exemplary fields specify: (i) the information 1405 whose expiration is being monitored, (ii) an expiration condition 1410 that indicates when or under what circumstances the piece of information should expire, and (iii) an action 1415 that indicates an action (if any) to be performed (e.g., by a camera, by a server) in response to the information expiring.
- the first record in the tabular representation 1400 of the expiring information database shown in FIG. 14 indicates that the camera will disregard the information that the camera is outdoors and output the question, “Are we outdoors?” if the camera is turned off for more than sixty minutes.
- the information field 1405 preferably includes the piece of information (e.g., determined based on a user's response to a question).
- the camera may store the information “Camera is Outdoors” based on a user responding “outdoors” to the question “Are you indoors or outdoors?”
- The expiration condition 1410 preferably indicates one or more conditions under which the information will expire. For example, the information that the weather outside is sunny may be set to expire if the sun goes down or if the camera is taken indoors.
- the camera may perform one or more actions.
- exemplary actions include outputting a question, adjusting a setting, or ceasing to perform an action (e.g., an action that was performed based on the information being current).
- the camera may cancel “Sunny Beach” mode (e.g., automatically or after prompting the user) in response to the expiration of the information that a user is on a beach.
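- Expiring information can be sketched in Python as records pairing a piece of information with an expiration condition and an action; the power-off condition below follows the first record of FIG. 14, while the names and state keys are assumptions:

    EXPIRING_INFO = [
        {"information": "Camera is outdoors",
         "expires if": lambda s: s["minutes powered off"] > 60,
         "on expire": 'output question "Are we outdoors?"'},
    ]

    def purge_expired(state):
        """Drop expired information and return the actions to perform."""
        actions = []
        for record in list(EXPIRING_INFO):
            if record["expires if"](state):
                EXPIRING_INFO.remove(record)    # disregard the stale information
                actions.append(record["on expire"])
        return actions

    print(purge_expired({"minutes powered off": 90}))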
- Methods consistent with one or more embodiments of the present invention may include one or more of the following steps, which are described in further detail herein:
- the process 1500 is a method for outputting a question to a user of a camera.
- the process 1500 may be performed by an imaging device (e.g., a camera), a computing device (e.g., a server) in communication with an imaging device, and/or a combination thereof.
- the process 1500, and all other processes described herein unless expressly specified otherwise, may include steps in addition to those expressly depicted in the Figures or described in the specification, without departing from the spirit and scope of the present invention.
- the steps of process 1500, and of any other process described herein unless expressly specified otherwise, may be performed in an order other than depicted in the Figures or described in the specification, as appropriate.
- an image is captured.
- Various ways of capturing an image, including by use of a camera, are well known to those skilled in the art, and some examples are provided herein.
- a question is determined based on the captured image (e.g., using a determination condition database of a camera).
- the determined question is output to a user (e.g., via an output device of a camera).
- referring to FIG. 16, a flowchart illustrates a process 1600 that is consistent with one or more embodiments of the present invention.
- the process 1600 is a method for performing an action based on a response from a user.
- the process 1600 is described as being performed by a camera 130 .
- the process 1600 may be performed by any type of imaging device 210 , or an imaging device 210 in conjunction with a computing device 220 .
- a camera 130 captures an image. For example, a user presses a shutter button to record an image of a scene. In another example, the camera 130 automatically captures an image of a scene (e.g., in order to make suggestions that the user adjust one or more settings).
- the camera determines a question based on the image. For example, the camera 130 may determine that the image is underexposed and may determine that it is appropriate to ask the user if the user intended the image to be underexposed. In another example, the camera 130 may transfer the image or information about the image to a server 110 for determination of a question. Determining the question may thus include receiving an indication of a question from the server 110 .
- in step 1615, the camera 130 outputs the question to a user (e.g., via an LCD device).
- in step 1620, the camera 130 receives a response from the user.
- the user may provide a response by any of a variety of means, including by making a selection on a displayed menu of possible responses to the question.
- in step 1625, the camera 130 performs one or more actions based on the received response. For example, based on a response that the user intended the image to be underexposed, the camera 130 may store an indication (e.g., in a response database) that questions about exposure should not be output for images of this scene.
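- Steps 1605 through 1625 can be summarized as a single control loop. In the Python sketch below, the camera object and its methods are hypothetical placeholders for whatever hardware and software performs each step:

    def process_1600(camera):
        """Capture, question, and act (cf. steps 1605-1625 of FIG. 16)."""
        image = camera.capture_image()                 # step 1605
        question = camera.determine_question(image)    # step 1610 (camera or server)
        if question is None:
            return                                     # nothing worth asking
        camera.output(question)                        # step 1615
        response = camera.receive_response()           # step 1620
        camera.perform_actions(question, response, image)  # step 1625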
- Various steps of exemplary processes 1500 and 1600 are described in detail below.
- Images may be captured (e.g., using an imaging device 210 ) in a variety of ways. For example, an image may be captured based on an indication from a user. For instance, a user may operate a control on a camera (e.g., a shutter button) to capture an image. An image may be captured based on one or more settings. For example, an image that is captured by a camera may depend on the current aperture, shutter speed, zoom, focus, resolution, and compression settings. Similarly, an image may be captured based on a current mode of the camera (e.g., “Sunset” mode, “Sunny Beach” mode). For example, the camera may have a “Sunset” mode, which describes settings that are appropriate for capturing images of sunsets.
- an image may be captured automatically (e.g., without any indication from a user).
- the camera may capture images and store them in a buffer even if a user has not pressed the shutter button on the camera.
- images that are captured automatically may be automatically deleted or overwritten. Capturing an image automatically may be particularly helpful in some embodiments for determining the subject(s) of an image the user wishes to record or how a user is composing a photograph. For example, before a user presses the shutter button on the camera, he may aim the camera at a scene (e.g., his girlfriend in front of the Golden Gate Bridge). The camera may capture an image of this scene and then output a question to the user based on the image. In this manner, an image may be captured and a question may be output to a user prior to the user actually taking a picture.
- Capturing an image may include capturing an image based on a condition. This condition may be referred to herein as a capture condition to differentiate it from other described conditions.
- capturing an image may include storing the image in memory.
- various different forms of memory may be used to store an image, including, without limitation: non-volatile memory (e.g., a CompactFlash™ card), volatile memory (e.g., dynamic random access memory (DRAM) for processing by an image recognition process), removable memory (e.g., a SmartMedia™ or CompactFlash™ card), and non-removable memory (e.g., an internal hard drive).
- images may be stored in an image database, such as the one shown in FIG. 8.
- a camera and/or server may output various different types of questions to a user.
- questions to ask a user may be determined (e.g., by a server) in many different ways, as discussed variously herein by way of example and without limitation.
- questions may be determined based on a condition, based on an image, and/or based on a template.
- a camera may determine a question to ask a user based on a variety of different factors, including images stored in the camera's memory, the state of the camera (e.g., the camera's current settings), indications by a user (e.g., responses to previous questions), and information from sensors (e.g., information captured by an image sensor).
- for example, the camera may use image recognition software to determine that a captured image corresponds to a group of people and ask the user a question about this group of people (e.g., “Who's in this picture?”).
- the camera may determine a question based on an image that was captured automatically (i.e., without the user pressing the shutter button on the camera).
- a camera may communicate with a server in order to determine a question to output to a user.
- the camera may transmit any of various information (e.g., images, GPS coordinates) to a computer server.
- the server may then determine one or more questions based on this information.
- the server may then transmit an indication of at least one question to the camera, and the camera may output the question to a user.
- Various types of information that may be collected by a camera are described herein.
- Some examples of information that a camera may transmit to a server include, without limitation: one or more images captured by the camera, indications by a user (e.g., responses to questions, usage of controls), a state of the camera (e.g., current mode, images stored in memory), and information from sensors (e.g., location, orientation, sound and audio).
- a server may determine a question to output in accordance with one or more of the exemplary processes discussed herein. It is worthwhile to note that the computer server may have significantly greater processing power and memory than the camera, and fewer constraints on power consumption (e.g., no batteries) and physical size (e.g., it need not be portable). This may allow the computer server to perform computations and analysis that are more complex or extensive than those that could be performed quickly using the camera's processor. For example, the computer server could run complicated image analysis and pattern matching algorithms to determine an appropriate question to ask a user.
- a computer server may transmit an indication of the question to the camera.
- indications of questions include, without limitation, a question and a question identifier.
- the computer server may transmit an audio clip or text messages corresponding to the question.
- the question may be compressed (e.g., as an MP3) to reduce the bandwidth necessary to transmit the question.
- the camera may store a database of questions, with each question in the database being identified by a question identifier. In order to indicate a question to the camera, the computer server may transmit a question identifier corresponding to the question. The camera may then retrieve the question from the database.
- the camera may output this question to a user as discussed variously herein.
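- The server-to-camera exchange described above can be sketched as follows: the camera resolves a received indication against its local question database when the indication is a question identifier, and otherwise treats it as the question itself. The transport is abstracted away and all names are illustrative:

    LOCAL_QUESTION_DB = {"QUES-123478-01": "Are you indoors or outdoors?"}

    def handle_server_indication(indication):
        """Resolve a server's indication of a question into question text."""
        if indication in LOCAL_QUESTION_DB:    # a bare question identifier
            return LOCAL_QUESTION_DB[indication]
        return indication  # the server transmitted the question text itself

    print(handle_server_indication("QUES-123478-01"))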
- a camera may determine a question to ask a user based on a condition.
- This type of condition may also be referred to as a determination condition to differentiate it from other conditions described elsewhere in this disclosure.
- examples of determining a question based on a condition are discussed herein, without limitation. For instance, if the batteries in the camera are running low (i.e., a condition), then the camera may ask the user, “How many more pictures are you planning on taking?” In one example, if a captured image includes an image of a football player (i.e., a condition), the user is to be asked, “Are you taking pictures of a football game?” In another example, if a captured image includes an image of a candle (i.e., a condition), the user may be asked, “Are you taking pictures of a birthday party?” In one example, if a captured image includes an image of two or more people (i.e., a condition), then the question, “Who are the people in the photo?” is to be output to the user.
- a camera or server may determine a question to ask a user based on one or more images that have been captured. That is, an image may be captured and then the camera may determine a question to ask a user based on this image.
- determining a question based on an image may include processing the image using image recognition software.
- image recognition programs are known to those skilled in the art and need not be described in detail herein.
- a user captures an image using the camera.
- the camera may determine, based on an analysis of the image, that this image shows a person sitting under a tree. Based on this determination, the camera may ask the user a question such as, “Who is the person sitting under the tree in this picture?”
- the user's response to this question (e.g., “Alice”) may be used, for example, to meta-tag the image.
- the camera may automatically capture a plurality of images of an exciting series of plays during a basketball game. Based on these images, the camera may ask the user, “Are you interested in pictures of any of the following players (check all that apply)?” The camera may then save or delete some or all of the captured images based on the user's response to this question.
- a user may capture an image of a child with a present. Based on this image, the camera may ask the user, “Are you taking pictures of a birthday party?” If the user indicates that he is indeed taking pictures of a birthday party, the camera may adjust the shutter speed to be at least 1/125 sec, so as to capture the children who are moving quickly.
- the camera may determine a question based on one or more properties of an image, including, without limitation: exposure (e.g., including brightness, contrast, hue, saturation), framing (e.g., organization of subjects within the image, background), focus (e.g., sharpness, depth of field), digitization (e.g., resolution, quantization, compression), meta-information associated with an image (e.g., camera settings, identities of people in an image, a rating provided by a user), subject(s) (e.g., people, animals, inanimate objects), scenery (e.g., background), and motion relating to an image (e.g., movement of a subject, movement of the camera).
- the camera may determine a question based on exposure of an image. For example, the camera may determine that the background of an image is brighter than the foreground of the image. Based on this, the camera may ask a user, “Would you like to use a fill flash to brighten your subject?” In another example, the camera may determine that an image is too bright. Based on this, the camera may ask a user, “Are you trying to take a picture of a sunrise or sunset?”
- the camera may determine a question based on framing of an image. For example, the camera may determine that an image includes two objects, one in the foreground and one in the background. Based on this, the camera may ask a user, “Which object do you want to focus on, the one in the foreground or the one in the background?” If the objects have been identified (e.g., by the camera using an image recognition program), the different objects may be named in the question.
- the camera may determine a question based on focus of an image.
- the camera may determine that a portion of an image is blurred (e.g., as if by movement). Based on this, the camera may ask a user, “Are you taking pictures of a sporting event?” Based on the user's response, the camera may then adjust a setting on the camera (e.g., increase the shutter speed to at least 1/250 sec) as described herein.
- the camera may determine a question based on meta-information associated with an image (e.g., camera settings, identities of people in an image).
- for example, the camera may recognize that an image has been meta-tagged as being taken at 7:06 p.m. Based on this information, the camera may ask a user, “Is the sun about to go down?” or “How long will it be until the sun goes down?”
- an image may include a meta-tag that indicates that it shows Alice and Bob in a canoe. Based on this tag, the camera may ask a user, “Are you taking pictures at a lake or river?”
- an image may include a meta-tag that indicates the preferred cropping or scale for the image.
- meta-data associated with an image may include a rating of the quality of the image (e.g., a rating provided by the user or determined by the camera). The camera may determine a question based on this rating (e.g., “This image appears to be overexposed. Are you trying to create an artistic effect?”).
- the use of image recognition software may allow for one or more subjects of a captured image to be determined.
- subjects include, without limitation: people, animals, buildings, vehicles, trees, the sky, a ceiling, a landscape, etc.
- Determining a question may comprise determining a type of scenery (e.g., natural landscape) in the image.
- Scenery may include one or more subjects, which may or may not be identified individually.
- one or more questions may be determined based on whether the image matches a template.
- the camera may identify a candle in an image. Based on this determination, the camera may ask a user, “Are you taking pictures at a birthday party?” In another example, the camera may identify a large body of water in an image. Based on this, the camera may ask a user, “Are you on a boat?” Based on a determination that an image includes a building, for example, the camera may ask a user, “Are you outside?” If it is determined that an image includes a mountain, for example, the camera may ask a user, “What mountain is this?”
- determining a question may include one or more of the following steps: determining that an image includes at least one person, identifying at least one person in an image, and determining one or more characteristics of at least one person in an image.
- One or more of the above steps may be performed by a server and/or camera.
- the camera may determine a question based on a person in an image. For example, the camera may identify a person in an image (e.g., based on information received from a server). Based on this identification, the camera may ask a user, “Is this a picture of Alice?” In another example, the camera may ask a user, “Do you already have any pictures of this person?” A camera may also ask a user, “What color skin does Bob have?” The user's answer to this question may be useful in determining how to set exposure settings on the camera when taking a picture of Bob.
- the subject of an image may be identified as a football player and thus may indicate that a user is capturing images of a football game. Accordingly, the camera may ask the user, “Are you taking pictures of a football game?”
- the camera may determine a question based on motion relating to an image (e.g., based on blurring of an image or comparison of a plurality of images). For example, the camera may determine a question based on motion of a subject in an image. For instance, if a subject in an image is moving quickly, the camera may ask a user, “Are you taking sports pictures?” In another example, the camera may determine that the ground plane in an image is moving slightly (e.g., shimmering like water). Based on this, the camera may ask a user, “Are you taking pictures of water (e.g., the ocean, a fountain, a creek)?” An imaging device may determine that it is moving (e.g., using a GPS sensor). Based on this determination, a digital camera may ask a user, “Are you in a vehicle (e.g., a car, a boat, an airplane)?”
- Various embodiments of the present invention allow for a question to be determined based on a plurality of images.
- the camera may capture two images in close succession (e.g., 1/10 of a second apart). The camera may compare these images and may determine that a subject (e.g., a person, an animal) is moving. Based on this determination, the camera may ask a user, “Are you taking pictures of a sporting event?”
- a camera may determine a question based on at least one difference among a plurality of images. For example, the camera may capture a plurality of images in bright light conditions (e.g., outdoors on a sunny day). Then the camera may capture an image in low light conditions (e.g., so little light that a flash is necessary). Based on this, the camera may ask a user a question such as, “Did you just go inside a building?”
- the camera may also be configured so as to determine a question based on at least one similarity among a plurality of images. For example, the camera may determine that two images have the same person wearing a red shirt in them. Based on this, the camera may ask a user, “Who is the person in the red shirt?” In another example, the camera may determine that a first image is of a tiger and a second image is of a polar bear. Since a tiger and a polar bear are both animals, the camera may ask a user, “Are you at the zoo?”
- a question may refer to an image (and thereby be based on the image). Examples include, without limitation:
- one way of determining a question based on an image is to determine if an image matches a template.
- a camera and/or server may store a plurality of templates. Each template may correspond to a different type of image, category of image, or one or more properties of an image. After an image is captured, this image may be compared to one or more of the templates to see if there is a match. If the image matches a template, then an appropriate question may be determined (e.g., to verify that the image does in fact match the template and/or to verify information useful in determining a setting on the camera).
- templates include, without limitation:
- a “football player” template. If an image matches the “football player” template, then the image may include a football player. The camera may then determine to ask the user, “Are you taking pictures of a football game?” Note that in a preferred embodiment a “football player” template may match any picture of a football player, whether the player is facing the camera, facing away from the camera, or lying on the ground. Alternatively, there may be multiple “football player” templates, with each corresponding to a football player in a different position, with different lighting, etc.
- a “candle” template. If an image matches the “candle” template, then the image may include a candle or other point light source (e.g., a light bulb, a flashlight). Based on this, the camera may determine a question to ask a user (e.g., “Are you taking pictures at a birthday party?”).
- a “fluorescent light bulb” template. If an image matches this template, then the scene in the image may have been illuminated with a fluorescent light bulb. The camera may ask a user an appropriate question based on the image and the determined template.
- a camera and/or a server may perform one or more of the following steps in matching an image to a template:
- the matching of an image to a template may be a condition. That is, if an image matches a template, then a condition may be true and a question may be determined based on this condition, as described herein.
- an image may match with multiple templates. For example, a picture of Bob Jones on a beach may match with both the “Bob Jones” template and the “sandy beach” template. In this circumstance, a question may be determined based on one or both of the templates.
- an image may partially match a template.
- matching an image to a template may include determining how much the image matches the template.
- partial matches include, without limitation:
- An image of a light bulb may only be a 65% match for the “candle” template.
- An image may match a first template to a first degree and a second template to a second degree.
- an image may be a 95% match with the “Steve Jones” template and an 80% match with the “Tom Jones” template.
- An image that was taken indoors may be a 5% match for the “sandy beach” template.
- An image may only be considered a match for a template if the amount that the image matches the template is greater than a threshold value (e.g., 95% certainty).
- the camera may determine and/or create a template based on a user's response to a question. For example, the camera may create an “Alice Jones” template based on an indication by a user that an image is of Alice Jones.
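- Template matching with partial matches and a threshold can be sketched in Python as below; match_score is a stand-in for real image recognition or pattern matching, and the 0.95 threshold echoes the 95%-certainty example above:

    def match_score(image, template):
        # placeholder: a real implementation would run image recognition here
        return template.get("score", 0.0)

    def matching_templates(image, templates, threshold=0.95):
        """Return templates the image matches to at least the threshold degree."""
        scores = {name: match_score(image, t) for name, t in templates.items()}
        # an image may match multiple templates, each to a different degree
        return {name: s for name, s in scores.items() if s >= threshold}

    templates = {"Bob Jones": {"score": 0.97},
                 "sandy beach": {"score": 0.96},
                 "candle": {"score": 0.65}}
    print(matching_templates(image=None, templates=templates))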
- a camera may output the question to a user.
- Various different ways that a question may be output to a user of a camera are discussed herein.
- a question may be output to a user of a camera in a variety of different ways.
- a question may be output using one or more of a variety of different output devices.
- Examples of output devices that may output questions to a user include, without limitation:
- an LCD screen. For example, the camera may include a color LCD screen on its back. This LCD screen may display questions to a user (e.g., as text).
- an audio speaker. For example, the camera may use an audio speaker to output an audio clip of a question to a user.
- an LED screen. For example, the camera may include an LED screen that displays questions to users as scrolling text.
- a heads-up viewfinder display. For example, a question may be displayed on the viewfinder of the camera so that a user can view the question while composing a shot (i.e., while the user is preparing to capture an image).
- a printer. For example, the camera may include a thermal printer that may print out a list of questions for a user to answer.
- a question may be represented in one or more of a variety of different media formats, including text, audio, images, video, and any combination thereof.
- Examples of questions being output as text include a question displayed as text on an LCD screen on the back of the camera and a question displayed as text overlaid on the camera's viewfinder.
- a text question may repeatedly scroll across a header bar on the camera's LCD screen. It will be understood that various visual cues may be used to draw a user's attention to a message that is output in text, including different fonts, font sizes, colors, text boxes, and backgrounds.
- Examples of questions being output as audio include an audio recording of a question.
- speech synthesis software may be used to generate an audio representation of a question.
- a “BEEP” sound may be output when a question is displayed on a video screen. Examples of outputting questions using images and video include, without limitation, displaying a sequence of images (e.g., a movie) to a camera user using a video screen.
- a video of an animated cartoon character may indicate a question to a user.
- a message may be presented in a plurality of ways.
- a question may include both a text component and an audio component (e.g., the camera may beep and then display a question on an LCD screen).
- the camera may display an image on its color LCD screen and then play an audio recording of a question.
- a question may be phrased in the first person.
- a question may use the word “I” to refer to the camera or the word “we” to refer to the camera and the user.
- a question may be output in different languages (e.g., depending on who is using the camera). For example, if the current user of the camera speaks English, then a question may be output to the user in English. However, if the current user of the camera speaks Chinese, then the question may be output to the user in Chinese.
- a question may be output by a presenter (e.g., a character that presents the question to a user).
- presenters include, without limitation:
- (i) A male or female voice. For example, the camera may store two recordings of a question: one with a female speaker and one with a male speaker.
- (ii) An animated character in a video message. For example, an avatar, virtual assistant, or other on-screen character may be displayed to a user in conjunction with a question.
- an animated rabbit may be displayed on the camera's LCD screen and “talk” to a user, thereby outputting one or more questions to the user. Indications from the rabbit may be provided as text (e.g., displayed using a speech bubble) and/or audio (e.g., an audio recording may be played, allowing the rabbit to “speak” to the camera user).
- (iii) An actor. For example, a video of an actor presenting a question may be displayed to a user on the camera's LCD screen.
- the camera may store a database of video clips representing different questions.
- (iv) A celebrity. For example, an audio or video recording of a celebrity (e.g., William Shatner) reciting a question may be output to a user.
- the camera may store one or more representations of a question in memory. For example, for a given question (e.g., “Are you at a ski resort?”), the camera may store the following representations: a text version of the question in English, an audio version of the question in English, a text version of the question in Spanish, and an audio version of the question in Spanish.
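- A minimal sketch of such per-language, per-format storage follows; the question identifier, file names, and Spanish wording are illustrative assumptions, not taken from the application.

```python
# Hypothetical storage of several representations of one question,
# keyed by (language, media format).
QUESTION_STORE = {
    "Q-SKI-RESORT": {
        ("en", "text"):  "Are you at a ski resort?",
        ("en", "audio"): "ski_resort_en.wav",
        ("es", "text"):  "¿Está usted en una estación de esquí?",
        ("es", "audio"): "ski_resort_es.wav",
    },
}

def representation(question_id, language="en", media="text"):
    """Pick the representation matching the current user's language and
    the chosen output device, falling back to English text."""
    reps = QUESTION_STORE[question_id]
    return reps.get((language, media), reps[("en", "text")])

print(representation("Q-SKI-RESORT", language="es", media="audio"))
# -> ski_resort_es.wav
```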
- a question may be output to a user at a variety of different times.
- the camera may delay outputting a question until an appropriate time.
- the camera may output a question based on a condition.
- the camera may output other information that may be helpful to the user.
- additional information may include, without limitation: at least one image that relates to the question, potential answers to the question (e.g., for a multiple choice question), a reason for asking the question, current settings on the camera, a default answer to the question, and a category for the question.
- the camera may output a question to a user that relates to at least one image.
- the camera may display at least one image to the user.
- the camera may display a picture of a person on the beach to a user and ask the user, “Is this a picture of Alice or Bob?”
- the camera may display a set of four pictures to a user and ask the user, “All of these pictures are of Alice, right?”
- the camera may display a plurality of pictures to a user and ask the user, “Pick your favorite picture from this group.”
- a camera may display an image to a user and ask the user, “Is this picture underexposed?” or may highlight a portion of an image and ask a user, “Is this the subject of the image?”
- the camera may automatically crop an image and ask a user, “I'm going to automatically crop this image. Is this a good way to crop the image?”
- Indicating potential answers to a question may be helpful in describing to a user how he should answer a question, indicating acceptable answers to a question, and/or reducing the time or effort required for a user to answer a question.
- Examples of potential answers to questions include, without limitation:
- Indicating at least one reason for asking a question may be helpful in explaining the purpose of a question to a user, or in explaining to a user why it would be beneficial for him to answer a question.
- Some examples of the camera indicating one or more reasons for asking a question include:
- Indicating at least one setting on the camera may help the user to understand the context of a question, or to make a decision on how to respond to a question.
- the camera indicates: “The flash is currently off. Are we indoors?”
- the camera displays: “Pictures are currently being captured at 1600×1200 resolution, meaning that I have enough memory left to hold 15 more pictures at this resolution. How many more pictures are you planning on taking?” Providing such additional information may be beneficial to some types of users.
- Some embodiments of the present invention provide for a default or predetermined answer to a question output to a user.
- the default answer may be indicated to the user.
- Some examples of output including default answers to questions include:
- Questions may be categorized, allowing the camera to output an indication of at least one category corresponding to a question. Categorizing a question may be helpful to some users, for example, if there are a plurality of questions that may be output to a user and the user would like to sort these questions. Questions may be categorized based on a variety of different factors, including, without limitation:
- Topic (e.g., “Lighting Questions,” “Focus Questions,” “Meta-Tagging Questions,” “Situational Questions,” “Questions about Future Plans,” “Questions About Past Images”).
- Image (e.g., “Questions About Image #1,” “Questions About The Last 8 Images Captured,” “Questions about Images Captured Yesterday”).
- Priority (e.g., “High-Priority Questions,” “Questions to Answer in the Next 5 Minutes,” “Questions to Answer During Your Free Time,” “Questions to Answer Before Capturing More Images”).
- Type of response (e.g., “Yes/No Questions,” “Free Answer Questions,” “Multiple Choice Questions”).
- the camera may determine a question to output to a user (e.g., based on a determination condition). Alternatively, or in addition, the camera may determine when and/or how to output a question to a user based on one or more conditions. For example, a question may be output as an audio recording while a user is looking through the viewfinder of the camera. In another example, the camera may delay outputting a question to a user until there is a pause in the user's activities.
- the camera thus may output a question based on a condition.
- This condition may also be referred to as an output condition to differentiate it from other types of conditions (e.g., determination conditions) described elsewhere in this disclosure.
- a question may be output: when a condition occurs, when a condition is true, when a condition becomes true, in response to a condition, in response to a condition occurring, in response to a condition being true, because of a condition, because a condition occurred, because a condition is true, according to a condition, at substantially the same time that a condition occurs, and/or at substantially the same time that a condition becomes true.
- A condition may be useful in enabling a variety of different functions. For example, as discussed herein, a condition may be used in determining what question(s) to output and/or for determining an order in which to output a plurality of questions.
- a condition may be useful for determining when to output a question (e.g., determining an appropriate time to output a question).
- output of a question may be delayed until a condition occurs. For example, it may be annoying to output a question to a user when the user is busy taking photographs at a birthday party or busy talking with a friend. Therefore, outputting a question to the user may be delayed until an appropriate time.
- a condition may be used in determining how to output a question. For example, a condition may be used to determine whether a question should be output in text on the camera's LCD display or as audio through the camera's audio speaker. In a second example, the camera may select which personality should be used in outputting a question to a user.
- conditions for outputting a question may be similar to conditions for determining a question.
- an output condition may be a Boolean expression and/or may be based on one or more factors.
- any of the factors described herein as potentially useful in determining a question to ask a user may also be used for determining when and/or how to output a question to a user. Still other factors will be readily apparent to those skilled in the art in light of the present disclosure.
- Some moments may be particularly appropriate for outputting a question to a user.
- Some general examples of appropriate times to output a question include, without limitation: when a user is composing a shot (e.g., is about to capture an image), when a user is inactive, when a user indicates that he is interested in receiving at least one question, when a user is viewing an image, when a user is answering one or more questions, when the camera's resources are running low, and when a user starts to capture images of a scene.
- a question may be output when it is determined (e.g., by the camera) that a user is composing a shot.
- Some exemplary scenarios include, without limitation:
- the camera may include a sensor that determines when the user is looking through the optical viewfinder on the camera.
- the camera may include one or more touch sensors (e.g., heat, continuity, electric field, pressure) that may determine when a user places both of his hands on the camera. This may be considered an indication that the user is composing a shot.
- touch sensors e.g., heat, continuity, electric field, pressure
- the camera may include a button or menu option labeled “Ask Me a Question.” Whenever a user has free time, he may press this button to answer any questions that the camera has. An indication that the user has pressed the button may trigger the camera to provide one or more questions.
- when a question is ready, the camera may output an indication of this (e.g., by beeping once or illuminating an LED). The user may then respond to this indication at his leisure (e.g., once he finishes capturing a sequence of action photos) by providing an indication (e.g., pressing a button on the camera) when he is ready to answer the question.
- a user may provide an indication of what question he would like to answer. For example, a user may select a question or question topic from a list of questions or question topics.
- a user may use the camera to view images that he has already captured. Some images may have questions associated with them, and to indicate this, these images may be highlighted by the camera, for example, with green borders.
- the user may select (e.g., using a touch screen, using a dial) one or more of the highlighted images.
- a user may provide an indication of how a question should be output.
- the camera may normally output questions in an audio format.
- a user who is operating the camera in an opera house may prefer to avoid disturbing other audience members. Therefore, the user may operate a control on the camera to indicate that he would prefer that the camera output the question in text form (e.g., via the camera's LCD display).
- Some factors relating to inactivity by a user may be used in determining how and/or when to output a question (a sketch combining several such factors follows this list). Such factors may include, without limitation:
- the camera may include a timer that monitors the period of time that has elapsed since a user performed an activity (e.g., operated a control on the camera). If a threshold amount of time has elapsed (e.g., sixty seconds), then the camera may determine that a question should be output to the user.
- the camera may monitor a duration since an image has been captured (e.g., sixty-five seconds) or an average rate of capturing images (e.g., one picture every thirty-two seconds).
- the camera may include a microphone that monitors the level of audible background noise around the camera. If the level of background noise falls below a threshold level, the camera may determine that this is an appropriate time to output a question to a user.
- the camera may include one or more motion sensors that may be helpful in determining whether the user is currently composing a shot, moving to a new location, or otherwise engaged in an activity. If the camera determines that the user is not moving the camera, a question may be output to the user.
- the camera may include a color LCD display that allows a user to view images that are stored in the camera's memory (e.g., images that he has already captured using the camera). If the user uses the LCD display to view one or more images, this may indicate that the user is no longer busy with other activities and it may be an appropriate time to output a question to the user. For example, the camera may output any questions relating to an image when the user views that image on the camera.
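- A minimal sketch combining several of the inactivity factors above (an activity timer, background-noise level, and motion); the thresholds and sensor hooks are assumptions for illustration only.

```python
import time

INACTIVITY_THRESHOLD_S = 60   # e.g., sixty seconds since the last control use
NOISE_THRESHOLD = 0.2         # normalized background-noise level (assumed scale)

class InactivityMonitor:
    def __init__(self):
        self.last_activity = time.monotonic()

    def record_activity(self):
        """Call whenever the user operates a control or captures an image."""
        self.last_activity = time.monotonic()

    def good_time_for_question(self, noise_level, camera_moving):
        """True when the user appears idle: no recent control use,
        quiet surroundings, and the camera held still."""
        idle_for = time.monotonic() - self.last_activity
        return (idle_for >= INACTIVITY_THRESHOLD_S
                and noise_level < NOISE_THRESHOLD
                and not camera_moving)
```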
- the camera may include a printer or be connected to a printer.
- the camera may output one or more questions to the user (e.g., questions relating to the one or more images).
- the camera may transfer images to one or more other devices (e.g., a desktop computer using a USB cable, an iPod™ portable wallet, a television set for viewing, a color inkjet printer for printing, a removable flash memory card for storage).
- One or more questions may be output to a user before, during, and/or after the transfer of an image to another device.
- a second question may be output to a user based on a user's response to a first question. For instance, the camera may ask a user, “Where are you in this picture?” If the user responds, “on a beach,” then the camera may ask the user a second, related question: “Are these two pictures taken on the same beach?”
- a determination that the camera's resources are running low may be used in determining when or how to provide a question. For example, if a camera's batteries are running low, the camera may output the question, “The batteries are running low. Do you have any more batteries?” In another example, if a memory resource (e.g., a flash memory card) approaches or is below a predetermined threshold of available memory (e.g., ten Mb) the camera may output the question, “You have only 10 Mb of memory left. How many more images are you planning on capturing?”
- a user captures an image.
- the camera may output a question to a user immediately after the user captures an image of the scene (e.g., in the anticipation that the user will capture additional images of the scene).
- the camera may output a question to a user when the user presses the shutter button on the camera halfway (e.g., to focus the camera's lens on a subject). This may indicate that the user is about to capture an image of the subject.
- the camera may include a GPS sensor or other location sensor that allows it to determine when a user moves the camera to a new location. Since this may be a sign that the user is now capturing images of a new scene, a question may be output to the user (e.g., “Are you still taking pictures of Alice?”).
- a user operates a control.
- the camera may output a question to a user when the user adjusts a setting on the camera, since this may be an indication that some aspect of the camera's settings need to be adjusted (e.g., because the user is capturing images of a different subject).
- the camera may store an output condition database such as the one shown in FIG. 11.
- the output condition database shown in FIG. 11 specifies how one or more questions should be output to a user. For example, if the camera is currently in manual mode, the camera will output a question to a user by beeping and displaying the question as text along the bottom of the camera's viewfinder.
- this exemplary version of the output condition database may be used by the camera to output a question based on a current mode of the camera and possibly one or more other factors. For example, according to the output condition database shown in FIG. 11, the camera is currently in “Manual” mode, so questions may be output to a user when the user looks through the camera's viewfinder. An alternate version of the output condition database might be used to output a question based on other factors.
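- A sketch of how such an output condition database might be represented follows; the modes and rules paraphrase the FIG. 11 examples discussed in this section, and the exact fields are assumptions.

```python
# Illustrative stand-in for the FIG. 11 output condition database.
OUTPUT_CONDITIONS = {
    "Manual":              {"when": "user looks through the viewfinder",
                            "how":  ["beep", "text along bottom of viewfinder"]},
    "Sports":              {"when": "camera held still for a period of time",
                            "how":  ["text on LCD"]},
    "Do Not Disturb":      {"when": "never", "how": []},
    "Output Upon Request": {"when": "user presses 'Ask Me a Question'",
                            "how":  ["beep", "text on LCD"]},
    "Silent":              {"when": "question ready", "how": ["blink LED"]},
}

def output_plan(camera_mode):
    """Look up when and how a question should be output in a given mode."""
    return OUTPUT_CONDITIONS.get(
        camera_mode, {"when": "immediately", "how": ["text on LCD"]})

print(output_plan("Manual"))
```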
- a question may be determined, as discussed variously herein, but its output may be suppressed, delayed, or cancelled. Also, suppression of a question does not necessarily mean that no questions are output at all. Suppression may include suppressing one or more questions (or types of questions) while one or more other questions are output. For example, two questions may be identified at about the same time for output to a user, with one being output immediately and output of the second question being delayed until a more appropriate time.
- Conditions for suppressing questions may be similar to those described above for outputting questions. For example, a question may be suppressed based on an indication from a user or because a time limit expires.
- a question may be suppressed because it comes at an inappropriate time. For example, it may be determined that a user may currently be busy with another activity. In another example, if a user is busy adjusting the camera's settings, then the camera may delay outputting a question until the user finishes adjusting the camera's settings and captures the image that he was busy composing.
- one or more questions may be suppressed because of inappropriate content.
- a determined question may be a duplicate of a recently-asked question. Since the user has already answered the question, it may be bothersome to ask the user the same question again.
- the answer to (or purpose of) a question may no longer be relevant. For instance, while a user is indoors, the camera may determine a question to ask the user: “What type of light bulb does this room have, tungsten or fluorescent?” However, the camera may have delayed outputting this question, because the user is in the middle of a conversation with a friend, for example. If the user then moves outside before the question is output, that question is no longer relevant.
- a user may indicate that one or more messages (or types of messages) should be suppressed. For example, a user who is capturing images at a golf tournament may indicate that no audio message should be output (e.g., so that the user does not disturb the golfers).
- Suppressing a question may include removing the question from an output queue or other list of questions to be output to a user.
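- A sketch of an output queue supporting such suppression; the predicate-based interface is an assumption chosen for illustration.

```python
from collections import deque

class QuestionQueue:
    """Minimal output queue from which questions can be suppressed."""

    def __init__(self):
        self._queue = deque()

    def enqueue(self, question):
        self._queue.append(question)

    def suppress(self, predicate):
        """Remove queued questions matching a predicate, e.g. duplicates of
        a recently answered question, or audio questions while the user
        has requested silence; other questions remain queued."""
        self._queue = deque(q for q in self._queue if not predicate(q))

    def next_question(self):
        return self._queue.popleft() if self._queue else None

q = QuestionQueue()
q.enqueue({"id": "Q1", "media": "audio"})
q.enqueue({"id": "Q2", "media": "text"})
q.suppress(lambda question: question["media"] == "audio")
print(q.next_question())  # -> {'id': 'Q2', 'media': 'text'}
```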
- the output condition database shown in FIG. 11 shows an example of delaying output of a question if the camera is in “Sports” mode.
- the camera may delay outputting a question to a user until the camera is held still for a period of time.
- Another example of FIG. 11 relates to canceling output of a question. For example, the camera may refrain from outputting any questions when it is in “Do Not Disturb” mode.
- a camera may output an indication of a question before outputting the question itself. For instance, rather than outputting a determined question immediately, the camera may output an indication that the question is ready to be output. In some embodiments, the camera will then wait for a user to indicate that he is ready for the question itself to be output. For example, the camera may beep when it determines a question and then wait for the user to respond to this beep before outputting the question. In a second example, an LED on the camera may flash whenever the camera has a question ready to output to a user.
- outputting a question to a user may include one or more of the following steps: outputting an indication that a question is ready, receiving a response to the indication from the user, and outputting the question based on the response. Examples of each of these steps are provided below.
- Some examples of outputting an indication that a question is ready include, without limitation: outputting an indication that a question has been determined, outputting an indication that a question has been queued for output, outputting an indication that a question has been retrieved from memory, outputting an indication that a question is ready to be output, outputting a request for the user's attention, and outputting a request for the user's attention regarding a question.
- outputting an indication may comprise illuminating an LED on a camera.
- a user may understand that whenever this LED is illuminated, the camera has a question queued to ask the user.
- an audible “BEEP” or bell sound may be output for the user to hear.
- a user may understand that whenever this beep sounds, the camera has a question queued to ask the user.
- a message may be displayed on an LCD screen.
- an LCD screen on the back of the camera may display the text, “I've got a question. Press the ‘Ask Me a Question’ button to have this question output to you.”
- a portion of an image may be highlighted (e.g., in the camera's viewfinder). For instance, if the camera has a question about a particular subject in an image, the camera may highlight that subject in red when the image is displayed on the camera's LCD viewfinder. Different types of visual indicators may be used to alert a user that at least one question is pending. In one example, when a user views an image using the camera, the camera may place a green border around the image to indicate that there is a question associated with the image. In another example, a red question mark may be overlaid on the corner of a displayed image to indicate that the camera has a question related to the image.
- an indication that a question is ready may include a presenter (e.g., an animated character, a celebrity voice).
- the camera may output an indication that a question is ready based on one or more conditions. Note that the various output conditions described herein for outputting a question may also be used to control the output of an indication that a question is ready.
- a user may respond at his leisure to an indication that a question is ready. For example, at 2:12 p.m. the camera may beep to indicate that it has a question to ask a user. The user may be busy with some other activity at this time (e.g., capturing an important sequence of images of a sporting event) and so ignores this beep until 2:17 p.m., when he is free to pay attention to the question that the camera outputs. To indicate that he would like to answer a question, the user may operate a control on the camera (e.g., press a button).
- Receiving a response to an indication that a question is ready may include, without limitation: receiving an indication from a user, receiving an indication that a user would like to view a question, and receiving an indication that a user is inactive.
- a user may provide a response by operating a control or other input device on the camera.
- Various types of input devices are discussed herein, and others may be readily apparent to one skilled in the art in light of the present disclosure.
- a user's response may include an indication of which question should be output. For example, a user may indicate that he is ready to answer questions about lighting, scenes, and future plans, but not about meta-tagging. In a second example, a user may select a question to answer from a list of questions that have been determined.
- a user's response may include an indication of how to output a question. For example, a user may prefer to have a question output to him with both audio (e.g., through a speaker on the camera) and text (e.g., through an LCD display on the camera).
- Some types of users may prefer to have the camera output an indication that a question is ready before outputting the question.
- Other users may prefer to have the camera determine the best time(s) to output a question. Alternatively, allowing the user to control when and how a question is output may be preferred because of the control and simplicity such a system offers. For instance, a user may wish to have control over the outputting of questions that he finds annoying.
- the output condition database in FIG. 11 shows a number of examples of how the camera may output an indication that a question is ready before outputting the question itself.
- when the camera is in “Output Upon Request” mode, the camera will beep when a question is ready to be output and then wait until a user presses the camera's “Ask Me a Question” button before outputting a question.
- when the camera is in “Silent” mode, the camera will cause an LED to blink to indicate that a question is ready.
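- The two-phase flow of these examples (signal that a question is ready, then output it only on request) can be sketched as follows; all hardware hooks are hypothetical callables standing in for the camera's actual beeper, LED, button, and display.

```python
def offer_question(question, mode, beep, blink_led, wait_for_ask_button, show):
    # Phase 1: indicate that a question is queued.
    if mode == "Silent":
        blink_led()       # visual "question ready" cue
    else:
        beep()            # audible "question ready" cue
    # Phase 2: output the question itself only after the user presses
    # the camera's "Ask Me a Question" button, at his leisure.
    wait_for_ask_button()
    show(question)

offer_question("Are we on a beach?", "Output Upon Request",
               beep=lambda: print("BEEP"),
               blink_led=lambda: print("(LED blinks)"),
               wait_for_ask_button=lambda: None,
               show=print)
```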
- a user may indicate a response to a question that is output to him.
- the camera may output a question to a user, “Are we on a beach?” and the user may reply “No.”
- a user may operate one or more controls or other input devices of the camera to indicate a response to a question.
- Various types of input devices and controls that the camera may include are described herein, and other types may be readily apparent to those skilled in the art in light of the present disclosure.
- a user may use one or more buttons on the camera to select a response from a plurality of responses (e.g., an answer to a multiple-choice question).
- a user may select a response from the list of choices: “sunny,” “partly cloudy,” “light rain,” “heavy rain,” “light snow,” “heavy snow.”
- a user may press a button on the camera to indicate “Yes” in response to a question of whether he is capturing images of a sporting event.
- the camera may include a microphone that allows a user to respond to a question verbally (e.g., a user may indicate the weather outside is sunny by saying the word “Sunny”).
- a user may use a stylus or other device to spell out a response on a touch screen on the camera. For example, a user may use the Graffiti™ alphabet to spell out a textual response to a question.
- Some embodiments of the present invention allow for a user to speak a response to a question. Such a response may be recorded using a microphone on the camera. The camera may then process the response using voice recognition software. For example, a user may indicate that he is at a “birthday party” and the weather is “raining.”
- a user may use any of a plurality of input devices (e.g., buttons) on the camera to highlight a portion of an image displayed (e.g., on the camera's color LCD display). For example, the camera may ask a user to indicate where a subject's face is in an image. Based on the user's response, the camera and/or a server may determine that this area of the image is properly exposed.
- a user's response to a question may take a variety of different forms (e.g., depending on the type of question).
- Some examples of forms of answers include, without limitation:
- the camera may also receive or otherwise determine a default response from a user. For example, if a user does not respond to a question in a certain period of time, the camera may assume that the user answered the question in a certain way (e.g., in accordance with a default answer, as discussed herein). For example, the camera may ask a user the question “You're at a ski resort, aren't you?” If the user does not respond to this question within ten seconds, then the camera may assume that the user answered “Yes” to this question.
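- A sketch of the timeout-and-default behavior just described, assuming a hypothetical non-blocking `poll_response` hook for user input.

```python
import time

def ask_with_default(output_question, poll_response,
                     default_answer="Yes", timeout_s=10.0):
    """Output a question, wait up to timeout_s for an answer, and
    otherwise assume the default (e.g., treat silence as "Yes")."""
    output_question()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        answer = poll_response()
        if answer is not None:
            return answer
        time.sleep(0.05)
    return default_answer

# No response arrives within 0.2 s, so the default "Yes" is assumed.
print(ask_with_default(
    lambda: print("You're at a ski resort, aren't you?"),
    lambda: None, timeout_s=0.2))
```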
- a camera and/or server may verify a response from a user. Verifying a response may include outputting an indication of the response that the user provided, or asking the same question or a similar question.
- the camera may verify a response to ensure that it did not misunderstand the user's response. For example, a user may respond “Yeah” or “Nah” to a Yes/No question, making it difficult for the camera to use voice recognition software to determine the user's response.
- the camera may output an indication of the user's response by displaying the camera's best guess as to the response (e.g., “Yes”) on the camera's LCD display.
- the camera may verify a response to ensure that a user did not make a mistake in responding to a question. For example, the camera may output a second question to verify that a user did not accidentally press the wrong button on the camera when indicating his response (e.g., “Are you sure that this room has fluorescent light bulbs?”).
- the camera may verify a response to confirm that the user understands the ramifications of his response. For example, a user may indicate that he only plans to capture five more images on the camera's current memory card. Based on this, the camera may output a warning or reminder: “Are you sure that you only want to capture 5 more images? If you capture 5 more images at high resolution, then the camera will be out of memory and will not be able to store any more images.” According to some embodiments, the camera may verify a response by displaying an image to a user. For example, a user may respond to a question by indicating that he is at a beach. Based on this, the camera may display an image to a user along with the message, “This is what the picture would look like in Beach Mode. Is this correct?” The user may then respond to the verification by indicating whether the displayed image is correct.
- a user may not respond to a question. For example, a user may ignore a question that is output by the camera. For instance, the camera may output a question when a user is busy with another activity (e.g., capturing an important sequence of action shots at a sporting event). Instead of responding to the question, the user may ignore the question.
- a user may not know that the camera output a question. For instance, a user may not be looking at the camera's LCD screen when a question is displayed on the LCD screen. Since the user does not know that the camera has output a question, he will not know to respond to the question.
- Determining that a user has not responded to a question may include determining that a period of time has elapsed since the question was output. For example, if a user does not respond within twenty seconds of when a question is output, then the camera may determine that the user has not responded. The camera may then perform an action (e.g., output the question again).
- determining that a user has not responded to a question may comprise determining that a condition has occurred (e.g., a user presses a button on the camera, the camera is held motionless for a period of time, a user provides a response to a different question). For example, the camera may stop displaying a question when a user captures an image.
- the camera may take one or more of the following actions:
- (i) Continue outputting the question. For example, the camera may display a question on an LCD screen until the user responds to the question.
- (ii) Stop outputting the question after a period of time. For example, the camera may display a question on an LCD screen until either (a) a user responds to the question, or (b) thirty seconds have elapsed.
- (iii) Output the question again.
- the camera may output an audio recording of a question. If a user does not respond, then the camera may output the question again (perhaps in a different form).
- (iv) Assume a default response to the question. For example, the camera may output a question to a user, “You're taking a picture of a sunset, aren't you?” If the user does not respond, the camera may assume that the answer to the question is “Yes” (e.g., based on a determined default answer) and perform an appropriate action based on this default response.
- (v) Output a different question.
- the camera may ask a user a first question, “Are you at the beach?” If the user does not respond to the question, then the camera may ask the user a second question (e.g., based on the same image), “Are you at a ski resort?”
- a user may provide a response that does not answer a question. For example, a user may indicate that he does not want to answer a question. Accordingly, a user may press a “Cancel Question” button on the camera. In a second example, a user may use a control on the camera to put the camera in “Do Not Disturb” mode, thereby canceling the current question and any future questions. According to some embodiments, a user may indicate that he would prefer to answer a question at a later time.
- the camera may have a “snooze button” that allows a user to indicate that the camera should stop outputting a question and then output the question again when a condition occurs (e.g., a period of time has elapsed, an output condition occurs).
- a user may indicate whether a question is inappropriate or unhelpful.
- the camera may have a “thumbs down” button or a “stupid question” button that a user may press when the camera outputs a question that the user determines is not worth answering. This may be particularly useful if the camera tends to make mistakes when determining questions to output to users.
- a user may capture a plurality of images indoors and then move outdoors to capture more images.
- the camera may ask the user, “What kind of lighting does this room have?” Since the user is currently outdoors, this question is inappropriate, so the user may press the “thumbs down” button to indicate that the camera should discard the question about lighting as being irrelevant to the current situation.
- the camera may have a “reset questions” button that allows a user to indicate that the camera should restart its line of questioning.
- the camera may output the question again.
- the second output of the question may be similar to or different from the first output of the question.
- a question may be output a second time based on a different output condition. For example, a question may be output when a user presses the shutter button halfway down (an output condition). If a user does not respond to this output of the question, then the camera may output the question again in fifteen seconds (a second output condition).
- a question may be output a second time based on the same output condition. For example, a question may be output thirty seconds after a user operates the camera (an output condition). If the user does not respond to this first output of the question, then the camera may output the question a second time in response to a second occurrence of the output condition. That is, the camera may wait until the user operates the camera again, and then output the question thirty seconds after the user stops operating the camera.
- a question may be output in the same manner as it was previously output. For example, a question may be output a first time as an audio prompt. If a user does not respond to the question, then the camera may repeat the audio prompt. In another embodiment, a question may be output in a manner that is different from the one that was previously used for the question. For example, a question may be output a first time as text displayed on the camera's LCD screen. If a user does not respond, then the camera may output the question as holographic text overlaid on the camera's optical viewfinder.
- a question may be output a first time as “Are you taking a picture of a sunset?” If a user does not respond to this first output of the question, then the camera may output the question again, this time providing additional information: “Are you taking a picture of a sunset? If so, then let me know now—otherwise your picture will be underexposed.”
- the camera may store an indication of a user's response to a question.
- the camera may store an indication of a response in a response database such as the one shown in FIGS. 12A and 12B.
- a stored indication of a response may be useful in performing an action based on multiple responses.
- the camera may ask a user a plurality of questions and receive a plurality of responses from the user.
- the camera may then perform an action (e.g., adjust a setting, meta-tag an image, guide a user in operating the camera) based on the plurality of responses.
- an action e.g., adjust a setting, meta-tag an image, guide a user in operating the camera
- a user may indicate in a first response that he is capturing pictures at a ski resort, and this first response may be stored in a database (such as the response database). Later, in a second response, the user may indicate that the weather is cloudy. Based on these two responses, the camera may adjust the settings on the camera to appropriate values for capturing images at a ski resort during cloudy weather.
- Indications of responses may be beneficial in determining future questions to ask a user.
- the camera may ask a user a first question (e.g., “Are you indoors or outdoors?”). The user may then respond to this question (e.g., “Indoors”) and the camera may store this response. Based on the stored response, the camera may ask the user a second question (e.g., “What kind of lightbulbs does this room have?”).
- Storing an indication of a user's response may assist a computing device (e.g., a camera, a server) in avoiding repeating questions or asking unnecessary questions.
- the camera may avoid asking a user the same question twice in close succession by checking to see if the user has already answered the question recently. If the user has answered the question recently, then the camera may assume that the user's answer is unchanged.
- the camera may ask a user if he is on a beach and store the user's response (e.g., “Yes”). Ten minutes later, the camera may refrain from again asking the user if he is on a beach and instead assume that the answer from ten minutes before is still valid and that the user is still on the beach.
- information provided by a user may expire after a certain period of time or based on some other condition.
- a camera may store an indication of a user's response in an expiring information database, such as the one depicted in FIG. 14.
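- A sketch of an expiring information database in the spirit of FIG. 14; the field names and lifetimes are illustrative assumptions.

```python
import time

class ExpiringResponses:
    """Stores answers that remain valid only for a limited time."""

    def __init__(self):
        self._entries = {}  # question_id -> (answer, expires_at)

    def store(self, question_id, answer, lifetime_s=600.0):
        """Record an answer valid for lifetime_s seconds, e.g. "on a
        beach" assumed true for ten minutes."""
        self._entries[question_id] = (answer, time.monotonic() + lifetime_s)

    def lookup(self, question_id):
        """Return a still-valid answer, or None if absent or expired
        (in which case the camera may need to re-ask the question)."""
        entry = self._entries.get(question_id)
        if entry is None:
            return None
        answer, expires_at = entry
        return answer if time.monotonic() < expires_at else None

db = ExpiringResponses()
db.store("Q-BEACH", "Yes", lifetime_s=600.0)
print(db.lookup("Q-BEACH"))  # -> Yes (still fresh)
```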
- a camera and/or a server may perform various actions based on a response from a user, including one or more of: meta-tagging an image, adjusting a setting, guiding a user in operating a camera, outputting a second question, and determining a template.
- a computer server may assist the camera in processing a user's response to a question. Examples include, without limitation:
- the camera may receive a user's response to a question and transmit an indication of this response to the computer server.
- the computer server may then process this response (e.g., using voice recognition software) to determine an appropriate action based on the response (e.g., adjusting a setting on the camera).
- An indication of the action may then be transmitted to the camera and performed by the camera.
- the computer server may transmit instructions to the camera describing how to process a response by a user. For example, in addition to transmitting an indication of a question to the camera (as described above), the computer server may transmit one or more sets of instructions describing how to process a user's potential responses to the question. For example, the computer server may indicate that if the user responds “Yes” to a question, then the camera should put the flash in slow-syncro mode; if the user responds “No” to the question, then the camera should ask the user whether he is outdoors.
- instructions may be transmitted to a camera in a variety of different forms, including a computer program (e.g., in C or Java), a script (e.g., in JavaScript or VBScript), or machine code (e.g., x86 assembly).
- Meta-tagging may be used herein to refer generally to a process of associating supplementary information with an image (e.g., that is captured by a camera or by some other imaging device).
- the supplementary information associated with an image may be referred to as meta-data, meta-information, or a meta-tag.
- meta-data include, without limitation:
- A location where an image was captured (e.g., latitude and longitude coordinates obtained from a GPS sensor, an indication of a city, state, park, or other region provided by a user, an altitude determined using an altimeter). For example, a user may indicate that an image was captured in the SoHo area of New York City.
- One or more subjects of an image (e.g., people, objects, locations, animals, etc.).
- a subject may be uniquely identified (e.g., “Alice Jones,” “Grand Canyon”) and/or categorized (e.g., “a squirrel,” “national park”).
- A scene in an image (e.g., a rainbow next to a waterfall, a group of friends at a restaurant, a baby and a dog, a family portrait, a reflection in a mirror).
- Lighting (e.g., daylight, a tungsten light bulb, a fluorescent light bulb, flash intensity, locations of light sources).
- One or more settings on the camera (e.g., aperture, shutter speed, flash, mode).
- meta-data associated with an image may indicate that the image was captured at f/2.8 with a CCD sensitivity of 100 ISO and a shutter speed of 1/250 sec.
- (xii) Preferred cropping or scale.
- meta-data associated with an image may indicate what portion of the image should be printed and/or how large the image should be printed.
- A category for an image. For example, an image may be categorized based on its intended usage (e.g., part of a slide show), based on its subject (e.g., images of Alice), and/or based on how or when it was captured (e.g., captured during a ski trip on Dec. 7, 2002).
- a user's intentions (or other notes from a user). For example, a user may indicate, “I'm trying to get a picture of the baby with its eyes open” or, “I want to capture the reflection of the mountain in the water of the lake” or, “Getting the exposure right for the subject's face is most important; I don't care whether the background is in focus.”
- (xvi) Acceptable to delete. For example, in his response to a question, a user may indicate to the camera that it is acceptable to delete one or more images from the camera's memory in order to make room for images that may be captured in the future. Based on this response, the camera may meta-tag one or more images for “deletion,” meaning that these images may be deleted if the camera begins to run out of memory.
- the camera may meta-tag one or more images as being “protected,” meaning that these images should not be deleted or altered in any way.
- the “protected” meta-tag may be helpful in ensuring that the user or the camera does not inadvertently delete one of the protected images.
- A rating. For example, the camera may determine a rating of an image and store this rating with the image.
- a rating may be an indication of the quality of the image and may be based on a variety of different factors, including: exposure, sharpness, composition, subject, and indications from a user. Ratings may be helpful in allowing the camera to sort images.
- meta-data that is associated with an image may be determined based on one or more responses indicated by a user. For example, a user may indicate in a first response that he is in Maui. In a second response, the user may indicate that he is at the beach. Based on these two responses, the camera may meta-tag an image as being taken “On the beach in Maui.”
- a camera may meta-tag an image based on a user's response to a question.
- a server may transmit to a camera a signal indicating that a recorded image is most likely of Alice (e.g., based on an image recognition program).
- the camera may then ask a user to verify that the image is of Alice. If the user indicates that the image is of Alice, then the camera may meta-tag the image as “Subject of Image: Alice.” In another example, if a user indicates that an image shows “Alice and Bob in Yosemite,” then the camera may meta-tag the image as “Subjects: Alice, Bob//Location: Yosemite National Park.”
- an image database may be used by a server 110 in performing an image recognition process on a captured image.
- the server 110 may suggest some or all of the meta-data 830 associated with the stored image to a user (e.g., by transmitting an indication of the meta-data 830 to the camera 130 ). The user may then conveniently agree (e.g., by pressing an “Ok” button) to have the suggested meta-data associated with the new image. In this manner, a user may avoid some of the tedium of creating meta-tags.
- any new images may be stored in the image database and thus may be made available to an image recognition process.
- an image recognition process and/or a process for meta-tagging images may be refined or customized in accordance with the stored meta-information associated with a particular user's images. For example, a first user may have captured an image of a particular scene, associated meta-data including the description “Grand Canyon” with the image, and stored the image on his personal computer.
- a second user may have captured and stored a very similar (or identical) image, but associated with the image (e.g., as meta-data 830 ) the description, “Arizona, Grand Canyon, March 1999.” If the first user transmits a second image similar to the stored image to his personal computer (e.g., from the camera 130 via communications network 120 ) for image recognition, the computer may identify the same scene or subject based on the stored image, and suggest “Grand Canyon” to the first user.
- the second user's server 110 might suggest, for example, one or more of “Arizona,” “Grand Canyon,” and “March 1999” for the same second image.
- an image database may be useful in accordance with some embodiments of the present invention for generating and/or suggesting personalized meta-information for a particular user (or group of users).
- a plurality of images may be meta-tagged based on at least one response from a user. For example, a user may indicate that he is at the beach. Based on this, all images captured by the camera may be meta-tagged as being captured “At the Beach.” In another example, a user may indicate in a response to a question that Alice is the only blonde woman who he has captured any images of today. Based on this, the camera may automatically meta-tag all images of blonde women taken today as being images of Alice.
- meta-data may be represented in a variety of different formats, e.g., text (e.g., a current date, a GPS location, the name of a subject, a current lighting condition), audio (e.g., an audio recording of a user's response to a question), images (e.g., a user's response may be a highlighted portion of an image), binary or other machine-readable formats (e.g., 100 bytes of information at the start of an image file), and any combination thereof.
- Meta-data may also be stored in a variety of different ways.
- meta-data may be stored in a file that is separate from an image file to which it pertains.
- a “BOB23.TXT” file may store meta-data that pertains to a “BOB23.JPG” image that is stored by the camera.
- meta-data may be stored in an image file.
- the start of an image file may include a plurality of meta-tags that provide information based on a user's responses to one or more questions.
- a single meta-data file may store information for a plurality of images.
- the camera may store a response database that includes meta-data for a plurality of images (see below for further details).
- a camera may store an audio clip of the user's response to a question and associate this audio clip with an image as meta-data.
- a camera may set the file name of an image based on a user's response to a question. For instance, if a user indicates that an image is of Alice, then the camera may store this image with the filename “ALICE-01.JPG.”
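- The separate-file approach described above (e.g., the BOB23.TXT/BOB23.JPG pairing) can be sketched as a sidecar file; using JSON for the sidecar's contents is an illustrative choice, not something the application specifies.

```python
import json
from pathlib import Path

def write_meta_sidecar(image_path, meta):
    """Write meta-data for an image into a separate file that shares
    the image's base name (BOB23.JPG -> BOB23.TXT)."""
    sidecar = Path(image_path).with_suffix(".TXT")
    sidecar.write_text(json.dumps(meta, indent=2))

write_meta_sidecar("BOB23.JPG", {
    "subjects": ["Bob"],
    "location": "SoHo, New York City",
    "source_question": "QUES-123478-03",
})
```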
- the response database shown in FIG. 7 and the image database shown in FIG. 8 depict a few examples of a camera meta-tagging an image based on a user's response to a question.
- the image “WEDDING-02” was meta-tagged as including “Alice” and “Bob” (e.g., based on a user's response to question “QUES-123478-03”).
- the exemplary images “BEACHTRIP-05” and “BEACHTRIP-06” were meta-tagged as images of a beach (e.g., based on a user's response to “QUES-123478-06”).
- a flowchart illustrates a process 1700 that is consistent with one or more embodiments of the present invention.
- the process 1700 is a method for determining a question based on information determined based on an image recognition process performed by a server.
- the process 1700 is described as being performed by a camera 130 in communication with a server 110 .
- the process 1700 may be performed by any type of imaging device 210 in communication with a computing device 220.
- in step 1705, the camera 130 captures an image.
- the camera 130 transmits the image to the server 110 for image recognition processing.
- the server 110 may compare the captured image to a database of images stored for the user.
- the camera receives information determined by the server 110 based on the image recognition process.
- the server 110 may have matched the captured image to a stored image, retrieved the meta-information associated with the stored image, and forwarded the meta-information to the camera 130 .
- the server 110 may have been unable to identify a match and may have transmitted a signal to the camera 130 directing the camera 130 to ask the user if the user would like to apply the same camera settings in the future to any similar images.
- in step 1720, the camera 130 determines a question based on the information from the server. For example, the camera may generate a question asking if the user would like to associate meta-information received from the server 110 with the newly-captured image. The question is output to the user in step 1725.
- in step 1730, the camera 130 receives a response from the user and the camera performs an action based on the response (step 1735). For example, the camera may associate meta-data with the captured image based on the response.
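- As a high-level sketch, process 1700 reduces to the following sequence; the camera and server interfaces here are hypothetical placeholders for the devices described above.

```python
def process_1700(camera, server):
    image = camera.capture()               # step 1705: capture an image
    info = server.recognize(image)         # steps 1710-1715: image recognition
    question = camera.question_from(info)  # step 1720: determine a question
    camera.output(question)                # step 1725: output the question
    response = camera.await_response()     # step 1730: receive a response
    camera.act_on(response, image)         # step 1735: e.g., meta-tag the image
```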
- a flowchart illustrates a process 1800 that is consistent with one or more embodiments of the present invention.
- the process 1800 is a method for determining meta-information.
- the process 1800 is described as being performed by a server 110 in communication with a camera 130 .
- the process 1800 may be performed by any type of imaging device 210 in communication with a computing device 220.
- the server 110 receives an image captured by a user of a camera 130 .
- the server 110 determines at least one of a plurality of images meta-tagged by the user.
- the server 110 may access the user's personal images database that contains images previously meta-tagged by the user.
- the server 110 determines meta-information to suggest to the user based on the captured image and the at least one image meta-tagged by the user. For example, using an image recognition program, the server 110 identifies one or more matches for the captured image in the user's database of images and retrieves some or all of the meta-information associated with those matching images.
- the server 110 transmits an indication of the meta-information to be suggested to the user.
- the server 110 receives an indication from the camera 130 of meta-information associated with the captured image by the user.
- the camera 130 may transmit a signal indicating that the user has accepted the suggested meta-information, or, alternatively, may transmit a signal indicating other meta-information the user has decided to associate with the image (e.g., the user may have rejected all or some of the suggested information and may have provided other supplemental information).
- a camera may automatically adjust one or more of its settings based on a response from a user and/or based on a signal from a server. For example, if a user indicates in his response to a question that the weather is sunny, then the camera may adjust the aperture on the camera to be f/5.6 and the shutter speed to be 1/250 sec.
- settings on the camera include, without limitation: exposure settings, lens settings, digitization settings, flash settings, multi-frame settings, power settings, output settings, function settings, and modes.
- Adjusting a setting may include, without limitation, one or more of: turning a setting on or off, increasing the value of a setting, decreasing the value of a setting, modifying a setting, changing a setting, revising a setting, and setting the camera to capture an image in a particular manner.
- Each exemplary scenario comprises an exemplary question asked by a camera, a response from a user, and action(s) performed by the camera. Exemplary actions include:
- Put the camera in burst mode to take 3 pictures every time the shutter button is pressed, and increase the shutter speed so that it is faster than 1/250 sec.
- Adjust flash timing and shutter speed for slow synchro.
- Set the aperture on the camera to f/8 (or less). Adjust shutter speed or film speed accordingly.
- Adjust white balance and exposure metering for subjects on bright white backgrounds.
- Crop the image to include everybody in the group.
- a camera may adjust a setting for one or more images. For example, the camera may output a question to a user, “Is this a group photo?” If the user responds “Yes” to the question, then the camera may adjust one or more settings and enable the user to capture one image based on these settings. After capturing the image of the group of people, the camera may revert to its original settings, for example, or determine one or more new settings for capturing images in the future. In some embodiments, settings may be adjusted for a plurality of images.
- the camera may output a question to a user, “Are we at the beach?” If the user responds “Yes” to the question, then the camera may put the camera in “Beach” mode for the remainder of the user's image-capturing session. The camera may remain in “Beach” mode until the user turns the camera off or until the user begins capturing images of a different scene.
- An adjustment to a setting may persist until a condition occurs.
- Various examples of conditions are described herein, and others may be readily apparent to one skilled in the art in light of the present disclosure.
- a camera may not immediately adjust a setting based on a user's response to a question. For example, a camera may ask the user a plurality of questions and then adjust at least one setting based on the user's responses to the plurality of questions. In a second example, the camera may not adjust a setting based on a first question until after a user has answered a second, related question.
- the camera may indicate to a user an adjustment made to a setting. For example, a user may respond to a question by indicating that he is at the beach. Based on this, the camera may increase the color saturation setting on the camera by 5%. In addition, the camera may output a message to the user “Increasing color saturation 5%.”
- Indicating an adjustment to a setting may be helpful for a variety of reasons, such as by assuring the user that the camera is in fact making use of his responses to questions. For example, even if a user does not understand what adjustment the camera is making, he may find it comforting to be informed that the camera is making use of his responses to questions. If a user were to feel that his responses to questions were being ignored by the camera, then he might ignore future questions that are output by the camera.
- Indication of an adjustment may be helpful to the user in verifying that the camera has not misunderstood or misinterpreted a user's response to a question. For example, a user may respond to a question by indicating that he is at a football game. Based on this indication and the current time of day (e.g., 2 p.m.), the camera may assume that the game is being illuminated by sunlight. However, in fact, this may not be the case (e.g., the football game may be played in a domed stadium). When the camera indicates that it is “Adjusting the camera for sports during daylight conditions,” the user may notice this mistake and correct the camera by indicating that the football game is in fact illuminated by halogen light bulbs.
- Informing the user of any adjustments that are made to the camera may also help the user in composing a shot or in making further adjustments to the camera's settings. For example, informing the user that the “flash brightness has been set for subjects 10-12 feet away” may be helpful to a user if the user decides to move or recompose an image.
- a camera may ask for a user's permission before making an adjustment to a setting. If the user indicates that it is acceptable to adjust the setting on the camera, then the camera may adjust the setting. If the user indicates that he would rather not adjust the setting on the camera, then the camera may not adjust the setting. For example, based on a user's response, the camera may determine that the camera's flash should be turned on. Before turning on the flash, the camera may output a message to the user, “I'm about to turn on the flash. Is this okay?” If the user responds “Yes,” then the camera may turn the flash on.
- the camera may determine that a user is capturing an image of a sunset based on the user's responses to one or more questions. Based on this, the camera may output a question to the user, “Would you like to put the camera into Sunset Mode? Sunset Mode is specially designed to make sure that pictures of sunsets are exposed correctly.” The user may then press a “Yes” button on the camera to indicate that he would like to put the camera into “Sunset” mode.
- Asking for a user's permission to adjust a setting on the camera may be similar to providing advice to a user about adjusting a setting.
- Various ways of providing advice to a user based on the user's response to a question are discussed herein.
- the camera may implement one or more rules based on a user's response to a question.
- a rule may be a guideline or other indication that may be used to determine a setting on the camera.
- Implementing a rule may include one or more of: storing an indication of a rule in memory, automatically adjusting a setting of the camera based on a rule, and restricting operation of the camera based on a rule.
- a user may respond to a question by indicating that he is capturing images of a child's birthday party.
- the camera may store a rule that requires that the camera maintain a shutter speed of at least 1/125 sec (because children at a birthday party tend to move quickly), except when the camera determines that an image includes a birthday cake with candles, in which case the camera should set the aperture to be as large as possible and not use a flash.
- An indication of a rule may be stored, for example, in a rules database (not shown) or a settings database such as the one depicted in FIG. 7.
- a rule may be a required relationship between one or more settings. For example, based on a user's indication that he is taking pictures at the beach, the camera may ensure that the subject of an image is always correctly exposed, even if the background of the image is overexposed. In a related example, the camera may use an automatic neutral density feature to automatically vary the exposure of the subject relative to the exposure of the background. In another example, a user may respond to a question by indicating that he is capturing images at a zoo. Based on this, the camera may implement a rule that, if the user is outdoors, the camera's aperture should be smaller than f/8 (to ensure good depth of field). If the user is indoors, a rule may establish that the camera should increase the CCD sensitivity as much as possible and never use a flash (to avoid frightening the animals).
- a rule may indicate how a setting on the camera should be adjusted. For example, based on an indication from a user that Alice is standing in front of a tree, the camera may implement a rule to shift the hue of an image by +5% anytime the camera is used to capture an image of Alice wearing her green jacket (e.g., to avoid having Alice's green jacket blend into the background).
- a user may respond to a question by indicating that he is at a ski resort. Based on this, the camera may implement a rule that until 5 p.m. that day, all images captured by the camera should be meta-tagged as being “skiing/snowboarding” images.
- the camera may automatically adjust the white balance setting to 7000K based on an indication by a user.
- a rule may indicate how one or more images of a subject should be captured.
- the camera may store a rule that all images of Alice should be taken from the left side, since Alice has a birthmark on her right arm that she prefers to have hidden in images of her. Based on the rule, the camera may prevent the capturing of an image of Alice's right side and/or may prompt the user to verify that he wishes to take a picture of Alice's right side.
- a rule may prevent the camera from performing one or more operations, such as using a flash while the user is capturing images of a sporting event.
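- Purely as an illustration of the rule concept above, a rule might be stored as a condition/action pair evaluated against the camera's current state. The dataclass, helper names, and thresholds below are assumptions that echo the birthday-party example:

```python
# Illustrative rule store: each rule pairs a predicate over the camera's
# state with an action that adjusts settings. Identifiers and thresholds
# are assumptions that merely echo the birthday-party example above.

from dataclasses import dataclass
from typing import Any, Callable, Dict

State = Dict[str, Any]

@dataclass
class Rule:
    description: str
    condition: Callable[[State], bool]
    action: Callable[[State], None]

def fast_shutter(state: State) -> None:
    # "Shutter speed of at least 1/125 sec" = exposure time of at most 1/125 s.
    state["shutter_speed"] = min(state.get("shutter_speed", 1 / 60), 1 / 125)

def cake_settings(state: State) -> None:
    # Large aperture and no flash when a candle-lit cake is detected.
    state["aperture"] = "f/1.8"
    state["flash"] = False

RULES = [
    Rule("birthday party: keep the shutter fast",
         lambda s: s.get("scene") == "birthday" and not s.get("cake_detected"),
         fast_shutter),
    Rule("birthday cake with candles: wide aperture, no flash",
         lambda s: s.get("scene") == "birthday" and s.get("cake_detected"),
         cake_settings),
]

def apply_rules(state: State) -> None:
    for rule in RULES:
        if rule.condition(state):
            rule.action(state)

state = {"scene": "birthday", "cake_detected": True, "shutter_speed": 1 / 60}
apply_rules(state)
print(state)  # aperture f/1.8, flash disabled
```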
- the exemplary response database depicted in FIGS. 12A and 12B shows a few examples of how a camera may adjust a setting based on a user's response to a question.
- the camera adjusted its settings to “Fluorescent Light” mode based on a user responding “Fluorescent” to question “QUES-123478-02.”
- the camera adjusted the white balance setting to “5200 K” based on a user responding “Sunny” to question “QUES-123478-05.”
- the camera adjusted the image size setting to “1600×1200” and the image compression setting to “Fine” based on a user responding “15” to question “QUES-123478-07.”
- a camera may guide a user in operating the camera based on one or more responses from the user. Guiding a user may include, without limitation, one or more of: recommending that a user adjust a setting, prompting a user to adjust a setting, guiding a user in composing a shot, and outputting a message that guides a user in operating the camera.
- Recommending an adjustment to a setting may include, without limitation, one or more of: outputting an indication that an adjustment to a setting is recommended, outputting a message describing an adjustment to a setting, outputting an indication of a setting to be adjusted, and outputting an indication of a value of a setting (e.g., a current value, a recommended value).
- the camera may guide a user by recommending an adjustment to at least one setting based on at least one response from the user. For example, the camera may recommend that a user increase his shutter speed to at least 1/250 of a second when taking sports pictures.
- The response database in FIG. 7 also shows an example of outputting a recommendation of a setting to a user based on the user's response to a question. Based on the user's response “Yes” to “QUES-123478-09” at 4:11 p.m. on Aug. 17, 2002, the camera advised the user to adjust the camera's shutter speed to be faster than 1/250 of a second.
- the camera may actually change a setting on the camera based on a user's response to a question.
- recommending that a user adjust a setting may include simply outputting a message describing a potential adjustment to a setting, leaving it up to the user to actually adjust the setting.
- a camera may output a message to a user, “I suggest that you use a smaller aperture to ensure that both the foreground and the background of your photo are in focus. An aperture of f/8 or smaller would be good for this photo.” The user may or may not make the suggested adjustment.
- the camera may output a message to a user, “If you're taking pictures of animals in the wild, then you should probably put the camera in ‘Wildlife’ mode.”
- an output device may be used to output an indication of a suggestion of a setting adjustment.
- a warning LED in the camera's viewfinder may blink to indicate to a user that an image the user is about to capture may be underexposed (e.g., suggesting an adjustment should be made). Note that this recommendation (i.e., the blinking LED) may simply suggest that a user make an adjustment to a setting without indicating any specific adjustment to make.
- a camera may output a message describing a setting that should not be used.
- the camera may output a message to the user, “It is not advisable to use a flash when capturing an image of a mirror. The flash could reflect off the mirror back at the camera, causing the image to be overexposed.”
- a camera may prompt a user to adjust a setting based on at least one response from the user.
- Prompting a user to adjust a setting may include outputting a question to the user asking him if he would like to adjust the setting.
- a camera may output a message to a user, “Since you're taking pictures at a ski slope, you should probably turn on the camera's Auto-Neutral Density feature.” Note that in this example, it is left to the user to determine whether he would like to turn on the Auto-Neutral Density feature and operate the controls of the camera to enable the feature.
- a user may respond to a question by indicating that he is in a room with fluorescent lights.
- the camera may output a message to the user, “Would you like to put the camera in “Fluorescent Light” mode? This mode is specifically designed for rooms with fluorescent lights and will help to ensure that your images are exposed correctly.” If the user responds “Yes” to this question, then the camera may be adjusted to “Fluorescent Light” mode.
- a camera may assist a user in adjusting a setting (e.g., without the camera actually performing the adjustment of the setting). For example, a camera may output a message to a user, “Since you're on a sunny beach, you should probably put me in ‘Sunny Beach’ mode. Press ‘Ok’ to put the camera in ‘Sunny Beach’ mode.” Note that in this example, the camera has adjusted its controls to simplify for the user the process of putting the camera in “Sunny Beach” mode. For instance, instead of selecting “Sunny Beach” mode (e.g., from a menu of modes on the camera), all the user has to do is press the “OK” button on the camera's touch screen.
- a camera may output a message: “You may want to adjust the white balance setting on the camera based on the color of light emitted by the light bulb in this room. Press the ‘up’ and ‘down’ buttons to adjust the white balance.”
- the camera has simplified the process of adjusting the white balance on the camera by automatically enabling the “up” and “down” buttons on the camera to control the white balance.
- a camera and/or a server may guide a user in composing a shot based on at least one response from the user.
- Various types of software and/or hardware useful in assisting a user in composing a shot are known to those skilled in the art, including systems described in U.S. Pat. No. 5,831,670 to Suzuki, entitled “Camera capable of issuing composition information,” and U.S. Pat. No. 5,266,985 to Takagi, entitled “Camera with optimum composition determinator,” among others.
- a camera may determine an optimum framing for a scene (e.g., with a subject slightly off center and an interesting tree in the background). Based on this determined framing, the camera may provide instructions to a user on how to aim the camera to obtain this framing. For example, the camera may output audio instructions to the user, such as, “Pan the camera a little more to the left . . . Okay, that's good. Now zoom in a little bit . . . Whoops, that's too much . . . Okay, that's good. Now you're set to take the picture.”
- a camera may include a mechanism that allows the camera to aim itself. For example, a user of the camera may be asked to hold the camera steady, and then the camera may adjust a mirror, lens, light sensor, and/or other optical device(s) so as to capture an image at a certain angle from the camera. For example, the camera may rotate a mirror five degrees to the left to capture an image that is to the left of where a user is aiming the camera.
- the camera may be configured so as to be manipulated remotely (e.g., by a server).
- a server may be able to view a representation of the camera's viewpoint over a network connection.
- the server may instruct a user to hold the camera steady (e.g., via the camera's LCD display) (or direct the camera to provide such an instruction), and then the server may adjust remotely the camera's mirror to obtain an optimal framing of a picture.
- the camera may provide directions to one or more subjects of an image. For example, a user may be capturing an image of a group of friends at a restaurant. Based on the user's response to a question and/or based on image recognition (e.g., performed by the camera and/or a server), the camera may provide directions relating to the group, such as, without limitation: “Everybody needs to get closer together”; “Tell Alice to take a step back”; and “Bobby is giving rabbit ears to Alice.” Similarly, a camera may output directions addressed to the group rather than the user (e.g., using an audio speaker).
- a camera's viewfinder may display a blinking arrow pointing to the left to indicate to a user that he should pan left to capture the best possible image of a particular scene.
- a user may indicate that he is capturing an image through a glass window and would like to use a flash. Based on this, the camera may provide instructions to the user on how to compose the shot so as to avoid having the flash reflect off of the glass window.
- one or more questions may be determined based on a user's response to a previous question. For example, the camera may ask a user a first question: “Are you indoors or outdoors?” If the user responds “Indoors” to this question, then the camera may store this response and ask the user a second question based on the response: “What kind of lightbulbs does this room have?”
- Each exemplary scenario describes at least one question output, a response by a user, and a subsequent question determined (e.g., by a camera, by a server) based on the response to at least one previous question:
- Second question determined “Is Bob currently wearing the same shirt as he was in this picture? ⁇ display picture of Bob>”
- an entire series of questions may be output based on a user's response to a question. For example, in response to a user indicating that he is indoors, the camera may ask the user a number of questions about the lights of the room in order to determine what kind of lights there are, where the lights are located, and what sort of lighting effect the user is hoping to achieve. The user's responses to these questions may then be used to determine one or more settings for the camera, as discussed herein.
- a computing device may use a decision tree to determine one or more questions to ask a user. For example, a camera may ask a user a first question. If the user gives a first response to the first question (e.g., “Yes”), then the camera may ask the user a second question (e.g., the question from the “Yes” branch of the decision tree). If the user gives a second response to the first question (e.g., “No”), then the camera may ask the user a third question (e.g., the question from the “No” branch of the decision tree). This process may repeat until the camera determines enough information to perform one or more actions (e.g., adjust a setting, guide a user in operating the camera).
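- One hedged way to realize such a decision tree is a small tree of question nodes whose branches are keyed by normalized responses. The node type and example tree below are illustrative assumptions, not the patent's data structure:

```python
# Illustrative decision tree for follow-up questions; the node type and
# the tree's contents are invented examples, not the patent's structure.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class QuestionNode:
    text: str
    # Branches keyed by normalized response; a missing key means the
    # dialogue has gathered enough information and stops.
    branches: Dict[str, "QuestionNode"] = field(default_factory=dict)

TREE = QuestionNode(
    "Are you indoors or outdoors?",
    {
        "indoors": QuestionNode("What kind of lightbulbs does this room have?"),
        "outdoors": QuestionNode("Is the weather sunny or cloudy?"),
    },
)

def ask(node, get_response):
    """Walk the tree, asking each question and following the response."""
    responses = []
    while node is not None:
        answer = get_response(node.text).strip().lower()
        responses.append((node.text, answer))
        node = node.branches.get(answer)
    return responses

# Canned responses stand in for user input in this sketch:
canned = iter(["Indoors", "Fluorescent"])
print(ask(TREE, lambda text: next(canned, "")))
```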
- a user's response to a question may be a factor in determining a question to ask the user.
- a determination condition may be satisfied based on a user's response to one or more questions.
- the information that a user is outdoors may have been received, for example, from a user as a response to a question.
- the information that a user is at the zoo may have been determined based on a response to a first question, and the information that the user is capturing an image of an animal may be determined based on a response to a second question.
- the determination condition database shown in FIG. 10 includes a number of examples describing how the camera may determine a question based on a user's response to a previous question.
- the camera may determine to output question “QUES-123478-02” (i.e., “What kind of lightbulbs does this room have?”) to a user.
- the camera may output “QUES-123478-04” (i.e., “Please use the cursor to point to Alice in this picture.”).
- Some embodiments of the present invention may be advantageous in that by asking a user a plurality of questions (e.g., a series of related questions), a computing device (e.g., of a camera, a server) may determine enough information about a scene to perform one or more other actions (e.g., adjusting a setting, meta-tagging an image, guiding a user in operating the camera).
- a camera, server or other computing device may use one or more templates to perform image recognition on captured images.
- the camera may store a “beach template” that may be used to determine whether an image includes (and thus may have been captured on) a beach.
- a wide variety of other templates are possible.
- the camera may use information provided by a user (e.g., a user's response to a question) to determine a template.
- the template may then be stored and used for processing images and/or for asking questions in the future.
- a camera may output a question, “Who is the subject of this image?” and a user may respond: “Alice.” Based on the user's response and the image, the camera may create a template suitable for recognizing images of Alice (e.g., an “Alice template”). At a later time, the camera may use the “Alice template” to determine that Alice is in an image. A question may then be asked based on this determination (e.g., “Who is standing next to Alice in this picture?”).
- a camera may display a plurality of images to a user and ask the user, “Were all of these images captured in a gymnasium?” If the user responds “Yes” to the question, then the camera may create a “gymnasium template” based on similarities among the plurality of images (e.g., the color of the fluorescent lighting, the color of the wood floor, etc.). If the user later returns to the gymnasium to capture more images, the camera may recognize that it is in the gymnasium and ask the user a question based on this (e.g., “You're in a gymnasium, aren't you?”).
- a camera or server may store a “group photo template” that may be used for recognizing images of groups of people and for adjusting the settings of the camera so as to best capture images of groups.
- some group photos may not match the group photo template.
- an image of a group of people in which people are lying down may not match the group photo template.
- the camera may output a question to a user “Is this a group photo?” If the user responds “Yes,” then the camera may determine a new group photo template and use this new group photo template to replace the old group photo template.
- two templates may be retained (e.g., one being for group photos where the subjects are lying down). In the future, the camera may recognize images of people lying down as group photos as well and ask appropriate questions based on these photos.
- a template may also be determined based on a variety of other factors (i.e., factors other than a user's response to a question).
- a template may be determined based on at least one image.
- the camera may capture a plurality of images at a ski resort and determine a “ski resort template” based on these images (e.g., based on similarities between the images).
- This “ski resort template” may be used to recognize images in which people are shown skiing or snowboarding on snow. Note that snow provides a bright white background for such images, which may be helpful in distinguishing images of people at a ski resort, for example, from images of people engaged in other activities.
- Some embodiments provide for determining a template based on other indications by a user. For example, a user may use buttons on the back of the camera to select a plurality of images that are stored in the camera and may indicate that the camera should determine a template based on these images. For instance, the user may select a plurality of images captured at a dance party and ask the camera to create a “dance party template” based on the selected images.
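- As a sketch only, a scene template might be approximated by a mean color histogram built from user-confirmed example images and compared to new images by cosine similarity; a deployed system would presumably use richer image-recognition features. Function names are hypothetical, and numpy is assumed to be available:

```python
# Hedged sketch of creating a scene "template" from example images and
# matching new images against it; here a template is just a mean color
# histogram, standing in for whatever features a real system would use.

import numpy as np

def histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized per-channel color histogram of an RGB image array."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def make_template(images) -> np.ndarray:
    """Build a template from the similarities among example images."""
    return np.mean([histogram(img) for img in images], axis=0)

def matches(template: np.ndarray, image: np.ndarray, threshold: float = 0.9) -> bool:
    """Cosine similarity between the template and a candidate image."""
    h = histogram(image)
    sim = float(np.dot(template, h) / (np.linalg.norm(template) * np.linalg.norm(h)))
    return sim >= threshold

# e.g., build a "gymnasium template" from images the user confirmed:
gym_images = [np.random.randint(0, 256, (480, 640, 3)) for _ in range(4)]
gym_template = make_template(gym_images)
new_image = np.random.randint(0, 256, (480, 640, 3))
if matches(gym_template, new_image):
    print("You're in a gymnasium, aren't you?")
```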
- a camera or other imaging device may transfer one or more images to a second device (e.g., a server).
- a camera may determine whether to transmit one or more images to a second electronic device. For example, the camera may determine whether it is running low on memory and therefore should free up some memory by transmitting one or more images to a second electronic device and then deleting them. Such a determination may be based on a variety of factors, including, without limitation:
- an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second device that is free); and
- an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second device).
- a camera may determine which images to transmit to a second device. For example, the camera may free up some memory by transmitting images of Alice to a second electronic device, but keep all images of Bob stored in the camera's secondary memory for viewing using the camera. Such a determination may be based on a variety of factors, including, without limitation:
- an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second electronic device that is free); and
- an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second electronic device).
- the camera may compress one or more images when transmitting them to a second device.
- the camera may determine whether to compress an image when transmitting it to a second device. For example, low quality images may be compressed before being transmitted to a second device, whereas high quality images may be transmitted at full resolution to the second device.
- the camera may determine how much to compress one or more images when transmitting the one or more images to a second device. Determining whether to compress an image (and/or how much to compress the image) may be based on a variety of factors, including, without limitation:
- an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second electronic device that is free); and
- an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second electronic device).
- a camera may delete or compress an image after transmitting it to a second electronic device, thereby saving memory. Since a copy of the image may be stored on the second device (e.g., in a server database), there may be no danger of losing or degrading the image by deleting or compressing it on the camera. Of course, in some circumstances it may not be desirable to delete or compress an image after transmitting the image to a second device. For example, a camera may transmit an image to a second electronic device in order to create a backup copy of the image.
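- The factors listed above might feed a transmit/compress/delete decision along the following lines; the thresholds, field names, and the plan_transfer helper are invented for illustration:

```python
# Sketch of the transmit/compress decision described above, based on
# free memory and available bandwidth. Thresholds and names are
# hypothetical, chosen only to illustrate the factors listed.

def plan_transfer(image_bytes: int, quality: str,
                  camera_free_bytes: int, bandwidth_bps: int) -> dict:
    low_memory = camera_free_bytes < 2 * image_bytes
    slow_link = bandwidth_bps < 1_000_000  # under ~1 Mbit/s

    # Low-quality images may be compressed; high-quality images may be
    # sent at full resolution, per the behavior described above.
    compress = quality == "low" or slow_link
    return {
        "transmit": low_memory,          # free memory by offloading
        "compress": compress,
        "delete_after": low_memory,      # safe once the server holds a copy
    }

print(plan_transfer(image_bytes=3_000_000, quality="low",
                    camera_free_bytes=4_000_000, bandwidth_bps=256_000))
```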
- Capturing an image manually may include receiving an indication from a user that an image should be captured.
- Some examples of receiving an indication from a user include, without limitation: a user pressing a shutter button on a camera, thereby manually capturing an image; a user setting a self-timer on a camera, thereby indicating that the camera should capture an image in fifteen seconds; a user holding down the shutter button on a camera, indicating that the camera should capture a series of images (e.g., when taking pictures of a sporting event); a user putting a camera in a burst mode, in which the camera captures a sequence of three images each time the user presses the shutter button; and a user putting a camera into an auto-bracketing mode, in which the camera captures a series of images using different exposure settings each time the user presses the shutter button on the camera.
- a camera or other imaging device may capture an image automatically and then determine a question to ask a user based on that image.
- the image database in FIG. 8 depicts an example of an image “BEACHTRIP-04” that was captured automatically by a camera.
- capturing an image manually may involve receiving an indication from a user that an image should be captured (e.g., by the user pressing a shutter button).
- automatically capturing an image may not involve receiving any such indication.
- the camera may capture an image automatically without the user ever pressing the shutter button on the camera.
- capturing an image automatically may include, without limitation, one or more of a variety of operations described herein.
- a user may or may not be aware that an image has been captured automatically.
- the user's camera may not beep, display an image that has been captured, or provide any other indication that it has captured an image, as is typically done by digital cameras that capture images manually.
- Automatically capturing an image quietly and inconspicuously may help to prevent the camera from distracting a user who is in the midst of composing a shot.
- a user may find it annoying or distracting to have the camera automatically flash or beep when he is about to capture an important image.
- capturing images without a user's knowledge may allow the camera to give the user a pleasant surprise at the end of the day when the user reviews his captured images and finds that the camera captured sixty-eight images automatically in addition to the nineteen images that he captured manually.
- a user may manually capture a plurality of images at a birthday party, but miss capturing an image of the birthday boy opening one of the gifts. Fortunately, the camera may have automatically captured one or more images of this special event without the user's knowledge.
- a camera may capture an image automatically while a user is composing a shot.
- a user may aim the camera at a subject and begin to compose a shot (e.g., adjusting the zoom on the camera, etc.).
- the camera may capture one or more images automatically.
- the camera may capture images of scenes that the user aims the camera at, even if the user does not press the shutter button on the camera.
- one or more images may be captured based on a condition.
- a condition may be referred to herein as a capture condition.
- Capture conditions may be useful in triggering or enabling a variety of different functions, including, without limitation: determining when to capture an image, determining what image to capture, and determining how to capture an image.
- capturing an image based on a condition may include, without limitation, capturing an image when a condition occurs, in response to a condition, when a condition is true, etc.
- a capture condition may comprise a Boolean expression and/or may be based on one or more factors.
- factors upon which a condition may be based are discussed herein.
- a camera may automatically capture an image and store this image in RAM for further processing.
- a camera may include an orientation sensor that determines when the camera is being aimed horizontally and has not moved in the last two seconds. Based on this determination, the camera may capture an image, since a user of the camera may be composing a shot and the captured image may be useful in determining a question to ask the user about the shot.
- a camera may include a microphone. If this microphone senses an increase in the noise level, then this may be a sign that an event is occurring. Based on the increase in noise level, the camera may capture an image, which may be useful in determining the situation and asking the user a question.
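- A capture condition comprising a Boolean expression over sensor readings, as in the orientation and microphone examples above, might be sketched as follows; the sensor interfaces and thresholds are assumptions:

```python
# Illustrative capture conditions as Boolean expressions over sensor
# readings, echoing the orientation and microphone examples above.
# The sensor interfaces and thresholds are assumptions.

import time

def orientation_steady(sensor) -> bool:
    """True if the camera is horizontal and unmoved for two seconds."""
    return sensor.is_horizontal() and (time.time() - sensor.last_moved) >= 2.0

def noise_spike(mic, baseline_db: float = 50.0) -> bool:
    """True if the ambient noise level jumps well above a baseline."""
    return mic.level_db() > baseline_db + 15.0

def should_auto_capture(sensor, mic) -> bool:
    # Capture conditions may be combined with Boolean operators.
    return orientation_steady(sensor) or noise_spike(mic)

# Stub sensors so the sketch runs on its own:
class StubOrientationSensor:
    def __init__(self):
        self.last_moved = time.time() - 3.0  # held steady for three seconds
    def is_horizontal(self):
        return True

class StubMicrophone:
    def level_db(self):
        return 48.0

if should_auto_capture(StubOrientationSensor(), StubMicrophone()):
    print("capturing an image to help determine a question")
```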
- automatically capturing one or more images based on a capture condition may be particularly helpful for the camera in determining one or more questions to ask a user. For example, whenever a user raises the camera to a horizontal position and holds it steady, the camera may capture an image. This image may then be used to determine an appropriate question to ask the user (e.g., a question relating to the image that the user is about to capture).
- Various exemplary ways of determining a question based on an image that has been captured are discussed herein, and may be applied in accordance with some embodiments with respect to images captured automatically.
- An image that is captured based on a capture condition may be stored in memory temporarily or permanently.
- a camera may automatically delete an image that is captured automatically after the camera has determined and output a question based on this image.
- the camera may automatically capture one or more images while a user is composing a shot. These images may be stored in memory temporarily and used for determining one or more questions to ask the user. These questions may be output to the user while he is composing the shot. The user's responses to these questions may then be used to adjust one or more settings on the camera, as discussed herein.
- the user may finish composing the shot and capture an image (e.g., based on the adjusted settings). Afterwards, the automatically captured images may be deleted from memory to free up space.
- an imaging device may capture images semi-continuously (e.g., like a video camera), and a capture condition may be used to select an image for further processing.
- when a capture condition occurs (e.g., a user pressing the shutter button halfway), the camera may select one of the previously captured images and determine a question to ask the user based on this image.
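- A minimal sketch of such semi-continuous capture is a ring buffer of recent frames from which a capture condition selects an image; the names below are invented for this example:

```python
# Illustrative ring buffer for semi-continuous capture: the camera keeps
# the last few frames, and a capture condition (here, a half-press of
# the shutter) selects the newest frame for question determination.

from collections import deque

class FrameBuffer:
    def __init__(self, capacity=30):
        self.frames = deque(maxlen=capacity)  # old frames fall off the end

    def add(self, frame):
        self.frames.append(frame)

    def select_latest(self):
        return self.frames[-1] if self.frames else None

buffer = FrameBuffer()
for i in range(100):                 # camera capturing semi-continuously
    buffer.add(f"frame-{i}")

shutter_half_pressed = True          # the capture condition occurs
if shutter_half_pressed:
    frame = buffer.select_latest()
    print("determine a question based on", frame)  # frame-99
```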
- an image that is captured automatically may be meta-tagged.
- an image that is captured automatically may be meta-tagged to indicate that it can later be deleted (e.g., if the camera starts to run out of memory).
- a flowchart illustrates a process 1900 that is consistent with one or more embodiments of the present invention.
- the process 1900 is a method for automatically capturing an image based on a capture condition.
- the process 1900 is described as being performed by a camera 130 .
- the process 1900 may be performed by any type of imaging device 210 .
- in step 1905, the camera 130 automatically captures an image based on a capture condition.
- in step 1910, the camera 130 determines a question based on the image.
- in step 1915, the camera 130 outputs the question based on an output condition.
- Various types of output conditions are discussed herein.
- in step 1920, the camera 130 receives a response to the question, and in step 1925, the camera 130 adjusts a setting based on the response.
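- For concreteness, the steps of process 1900 might be strung together as in the following sketch; DemoCamera and its methods are stand-ins invented for this example, not elements of the disclosure:

```python
# Illustrative end-to-end sketch of process 1900; DemoCamera and its
# methods are hypothetical stand-ins.

def process_1900(camera):
    if camera.capture_condition():                   # step 1905
        image = camera.capture_automatically()
        question = camera.determine_question(image)  # step 1910
        if camera.output_condition():                # step 1915
            camera.output(question)
            response = camera.receive_response()     # step 1920
            camera.adjust_setting(response)          # step 1925

class DemoCamera:
    def __init__(self):
        self.settings = {"mode": "default"}
    def capture_condition(self):
        return True  # e.g., camera held steady and horizontal
    def capture_automatically(self):
        return "raw-image-data"
    def determine_question(self, image):
        return "Are we at the beach?"
    def output_condition(self):
        return True  # e.g., user is not in the middle of a shot
    def output(self, question):
        print(question)
    def receive_response(self):
        return "Yes"
    def adjust_setting(self, response):
        if response.strip().lower() == "yes":
            self.settings["mode"] = "beach"

camera = DemoCamera()
process_1900(camera)
print(camera.settings)  # {'mode': 'beach'}
```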
- a user may provide information by responding to a question that is output by a camera.
- This information provided by the user may be used by the camera to determine one or more actions to perform (e.g., adjusting settings on the camera, guiding a user in operating the camera).
- information may expire.
- a user may respond to a question by indicating that he is “at the beach.” This response may be stored in the response database (e.g., such as the one shown in FIG. 7) and an action may be performed based on the response (e.g., the camera may be adjusted to “Sunny Beach” mode).
- the camera may be adjusted to “Sunny Beach” mode.
- the information that the user is “at the beach” will no longer be valid.
- the user may go to a restaurant to eat lunch, or go home after visiting the beach all day.
- information about the weather outside being sunny may expire at the end of the day when the sun goes down.
- the camera may perform an appropriate action such as adjusting a setting on the camera or outputting an additional question to a user.
- a server or other computing device may determine that information has expired. In some embodiments, one or more actions may be performed based on the expiration of information.
- Information may expire for a variety of different reasons.
- the information may no longer be correct, for example.
- information that the user is at the zoo is no longer correct if the user has left the zoo.
- information may no longer be applicable.
- information about how to crop an image of a group of people may no longer be applicable if a user is not capturing an image of a group of people.
- more recent information may be available. For example, two hours ago the weather outside may have been cloudy and raining, but now the weather is sunny.
- information may be time-sensitive and/or may be updated periodically, according to a schedule, from time to time, or at any time.
- information should be verified before being used again.
- a user may indicate that he is interested in capturing an image with a slow shutter speed and maximum depth of field.
- these settings may or may not be appropriate for a new scene that the user is capturing.
- it may be appropriate to verify whether the user is still interested in using the same settings (i.e., determine whether that information has expired, is still valid, and/or is still applicable).
- a computing device may determine when information provided by a user expires and/or perform an action based on the information's expiration. For example, the camera may ask the user another question and/or may adjust a setting based on the expiration of the information.
- Different pieces of information may expire at different times (e.g., independently of each other). For example, information about the subject(s) of one or more images that a user is currently capturing (e.g., the identities of people in a group photo) may expire when the user ceases to aim the camera at the group of people. In another example, information about the location of the camera may expire when the camera is moved more than one hundred feet from its original location. Information about the current weather conditions may expire after two hours, for example. In some embodiments, information about a user of the camera may expire when the camera is turned off.
- different pieces of information may expire at the same time. For example, all information about capturing images of Alice may expire if a user is now capturing an image of Bob. In another example, all information about a scene that a user was capturing may expire if the user turns the camera off for more than thirty minutes. In another example, all information about a user of the camera may expire if a user presses the “Reset User Information” button on the camera.
- a computing device may determine when one or more pieces of information expire based on a condition.
- This condition may also be referred to as an expiration condition to differentiate it from other types of conditions described elsewhere in this disclosure. Conditions are discussed in further detail variously herein.
- a condition may be a Boolean expression and/or may be based on one or more factors.
- any of the factors described herein may also be useful in determining when information expires. Some additional examples of factors are provided below.
- any information about a scene that the user was capturing may be deemed to be expired.
- the camera may ignore the expired information or, alternatively, ask a user a question to verify that the expired information is still relevant.
- the camera may determine that the response has expired and perform an action (e.g., ask the question again).
- the camera may determine that information relating to the user's original location is no longer applicable.
- Information may expire (or not expire) based on one or more indications by a user. For example, a user may respond to a question by indicating that information about a scene is or is not expired. For instance, the camera may ask a user, “Are we still at the beach?”
- information that affects a setting on the camera may expire based on a user adjusting the setting on the camera. For example, information about the lighting in a room may cause the camera to adjust its white balance setting. If the user later adjusts (e.g., using a control) the white balance setting on the camera to “Sunny,” then this may indicate that the user is no longer indoors and that the information about the lighting of the room is no longer relevant.
- a user may press a “Reset Scene Information” button on the camera to indicate that information relating to a past scene is expired (e.g., meaning that the camera should disregard the information relating to the past scene).
- a user may use the voice command “Same Scene” to indicate that information about a previous scene has not expired (e.g., even if the camera would otherwise have considered the information to be expired).
- information related to an image may expire when or after an image is captured (e.g., the information about the scene may only be applicable to that image).
- information about a current scene that the user is capturing may expire when the camera is turned off.
- some information may expire (or not expire) based on one or more images.
- a computing device may use a face recognition program to analyze an image and to determine that an image is an image of Bob. Based on this, the camera may determine, for example, that information about capturing images of Alice is expired. In another example, a user may have indicated that he is at the beach. However, thirty minutes later, the camera may determine that one or more images captured recently do not match any of the “beach templates” stored by the camera. Based on this, the camera may determine that the information that the user is on the beach may have expired.
- Some types of information may expire (or not) based on the state of the camera. For example, a camera may keep track of how many images have been captured since a piece of information was received. After a threshold number of images (e.g., ten images) have been captured, the information may expire. In another example, information may expire whenever the camera is turned off. Note that the camera may be turned off based on an indication by a user (e.g., the user presses the power button on the camera) and/or based on other conditions (e.g., the camera may automatically turn itself off after five minutes of inactivity).
- information may expire when the camera's batteries are replaced, when the camera is plugged into a wall outlet to recharge, or when images are downloaded from the camera (e.g., for storage on a personal computer).
- Information may expire based on a user.
- a camera may store information about its current user (e.g., the user's identity, the user's preferences and habits when capturing images, a list of images that have already been captured by the user). If the camera is later given to a new user, information about the previous user may expire, since it is not applicable to the new user. For example, Alice and Bob may share a camera. When Alice is capturing images using the camera, the camera may adjust one or more settings based on Alice's user preferences. If Alice then hands the camera to Bob, the information about Alice's user preferences may expire and be replaced with information about Bob's user preferences.
- information may expire or not expire based on one or more of a variety of time-related factors. Some examples of time-related factors are described herein without limitation.
- Information may expire, for example, after a duration of time. For instance, information that a user provides about a scene may expire after thirty minutes unless it is reaffirmed by the user (e.g., by indicating the information is still valid, by providing additional information about the scene).
- information may expire at a specific time. For instance, information about whether the sky is sunny, partially cloudy, or overcast may expire at 6:34 p.m. (e.g., when the sun goes down).
- information may expire based on a condition existing for a duration of time. For example, information about the lighting in a room may expire if the camera's light meter reads bright (outdoors) light for more than thirty seconds.
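- The three time-related expiration styles just described (after a duration, at a specific time, and after a condition has held for a duration) might be sketched as predicates; names and values are illustrative:

```python
# Sketch of the three time-related expiration styles described above:
# after a duration, at a specific time, and after a condition has held
# for a duration. Names and values are illustrative.

import time

def expires_after(duration_s):
    deadline = time.time() + duration_s
    return lambda: time.time() > deadline          # e.g., 30-minute scene info

def expires_at(timestamp):
    return lambda: time.time() > timestamp         # e.g., 6:34 p.m. sunset

def expires_if_held(condition, duration_s):
    state = {"since": None}
    def check():
        if condition():
            if state["since"] is None:
                state["since"] = time.time()
            return time.time() - state["since"] >= duration_s
        state["since"] = None
        return False
    return check                                   # e.g., bright light for 30 s

scene_info_expired = expires_after(30 * 60)
print(scene_info_expired())  # False until thirty minutes have passed
```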
- Examples of information expiring or not expiring based on information from sensors include, without limitation: determining a location, determining an orientation of a camera, and determining information about light.
- a camera may use a GPS device to determine how far it has been moved from a location where a user provided a response to a question. If the camera has been moved more than a threshold distance from the location where the user provided the response, then the information provided by the response may be determined to be expired.
- a camera may use an orientation sensor to determine when a user is aiming the camera at a scene.
- an imaging device may use a light sensor to determine the color of light that is shining on the camera. If the color of light shining on the camera is 5200K (daylight), then the camera may determine that information indicating the camera is under fluorescent light bulbs (4000K) is expired.
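- The GPS-based expiration check above might be sketched as a distance threshold; the equirectangular approximation, the helper name, and the one-hundred-foot threshold are illustrative assumptions:

```python
# Sketch of the GPS-based expiration check above: information expires
# once the camera moves beyond a threshold distance from where the
# response was given. Uses an equirectangular approximation.

import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters for small separations."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6_371_000
    dy = math.radians(lat2 - lat1) * 6_371_000
    return math.hypot(dx, dy)

THRESHOLD_M = 30.5   # roughly one hundred feet

response_location = (40.7128, -74.0060)
current_location = (40.7131, -74.0060)
moved = distance_m(*response_location, *current_location)
print("expired" if moved > THRESHOLD_M else "still valid", f"({moved:.1f} m)")
```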
- information expiring or not expiring may be based on one or more characteristics of a user. For example, a user may be in the habit of turning his camera off anytime he does not anticipate capturing an image in the next minute. Based on this, the camera may adjust its conditions for expiring information so that information about a scene does not expire unless the user turns the camera off for an extended period of time (e.g., fifteen minutes). In another example, a user may prefer that he not be asked the same question twice in close succession (e.g., within ten minutes). Based on this, the camera may prolong the time that it takes for a piece of information to expire (e.g., to more than ten minutes). In this way, the camera may effectively postpone asking the user a second question relating to the information in accordance with the user's preference.
- information may expire or not expire based on information from one or more databases.
- a first piece of information may be based on the validity of one or more second pieces of information. If the second pieces of information expire, then the first piece of information may also expire.
- a camera may store two pieces of information: a) the camera is currently indoors, and b) the room has fluorescent lighting. If the first piece of information (i.e., the camera is indoors) expires because the user moves outdoors, then this may cause the second piece of information (i.e., that the room has fluorescent lighting) to expire also, since it is unlikely that there is also fluorescent lighting outdoors. However, it will be recognized that the inverse may not be true. That is, the expiration of the information that the room has fluorescent lighting may not mean that the information that the camera is indoors has also expired. For example, a user of the camera may have just moved to another room.
- an imaging device may determine that information has expired based on a change to an image template stored in a database. For example, the imaging device may determine a revised image template for Alice (e.g., a revised “Alice template”) because Alice has put on a blue sweater over her green tank top. Based on this, the imaging device may determine that information about the subject of the image (e.g., a girl in a green tank top) is expired.
- a camera may store information about when the sun comes up or goes down. When the sun goes down, the camera may then expire any information about the current weather conditions (e.g., sunny, cloudy, etc.).
- a camera may receive weather reports via a radio link (and optionally may store an indication of the received information). For instance, the camera may receive an updated weather report indicating that the weather outside is no longer sunny and is now raining. Based on this updated report, the camera may determine that information indicating that light is shining on subjects sitting outside should be expired.
- an imaging device and/or computing device may perform one or more of a variety of different actions, including, without limitation: ceasing to perform an action (e.g., an action that was performed based on the information), outputting a question, adjusting a setting, meta-tagging an image, guiding a user in operating the camera, storing information, and any combination thereof.
- a camera may cease to perform the action (and may optionally perform a second action instead). Examples of ceasing to perform an action based on expiring information include, without limitation: readjusting a setting, ceasing to meta-tag images, and ceasing to guide a user in operating a camera.
- an imaging device such as a camera may adjust a setting (e.g., the mode of the camera) based on a user's response to a question.
- the camera may adjust the setting again (e.g., returning the setting to its original value or adjusting to a new value).
- the camera may adjust itself to “Ski Slope” mode based on a user's indication that he is on a ski slope.
- the camera may determine that the user's indication that he is on a ski slope has expired. Accordingly, the camera may readjust itself to cancel “Ski Slope” mode and put the camera in “Default” mode instead.
- a camera may cease to meta-tag images when information expires.
- a camera may cease to meta-tag images with information that has expired.
- the camera may meta-tag one or more images based on a user's response to a question, as described herein. If the information upon which the meta-data is based expires, then the camera may cease to associate that meta-data with images.
- the camera may receive information from a user that the user is capturing images of a group of people: Alice, Bob, Carly, and Dave. This information may be used to meta-tag the captured images.
- the information about the group of people may expire (e.g., based on an expiration condition). Accordingly, future images captured by the camera will not be tagged as images of Alice, Bob, Carly, and Dave.
- Expiration of information may cause an imaging device to cease guiding a user in operating the camera.
- a camera may guide a user in operating the camera based on the user's response to a question, as described herein.
- the camera may guide a user in adjusting the shutter speed of the camera based on a user's indication that he is capturing images of wildlife. If an image recognition program running on the camera (or on a server in communication with the camera, as discussed herein) later determines that the user is about to capture an image of a person, then the camera may cease to provide instructions to the user about how to capture images of wildlife (e.g., because the wildlife-related information has effectively expired).
- a camera may output a question to a user based on information expiring. For example, in response to a piece of information expiring, the camera may ask a user a question relating to the information that expired. The user's response to this question may be helpful in replacing the information that expired and/or in guiding the camera in performing additional actions to assist the user.
- a determination condition may be based on information expiring. For example, the camera may determine to output the question, “Are you indoors or outdoors?” based on the determination condition: expired (indoors_or_outdoors_response).
- Information about a scene may expire when a user stops aiming a camera at the scene.
- the camera may then remain idle, for example, until the user begins to aim the camera at a new scene, at which point the camera may determine that a) the information about the old scene has expired and b) the camera does not have any information about the new scene. Based on this, the camera may determine and output an appropriate question to the user.
- Information about the lighting of a scene may expire whenever a light sensor on the camera determines that the light color or intensity of a scene has changed dramatically. Accordingly, the camera may output the question, “What type of lighting does this scene have?” whenever the light sensor causes information to expire.
- a camera may adjust one or more settings based on information expiring.
- information about the current lighting conditions may expire. Based on this, the camera may adjust its settings to auto-exposure and auto-white balance. In another example, information about who is the current user of the camera may expire. Based on this, the camera may revert to its default user preferences. In another example, information about what object in the field of view (e.g., the foreground, the background) a user would like to focus on may expire. Based on this expiration, the camera may adjust its focus settings (e.g., to five-region, center-weighted auto-focus).
- information about a user being on a boat may expire. Based on this, the camera may adjust its digital stabilization setting to “regular.” Information about the weather outside being sunny may expire because the current time of day is after sunset, for example. Based on this, the camera may assume that it is indoors or nighttime and turn its flash on.
- a device such as a camera or server may meta-tag one or more images based on information expiring. For example, information about the subject of an image may expire. Based on this, a camera may meta-tag an image as “Subject: Unknown.” According to some embodiments, a user can later review the image and provide meta-data about the subject of the image. In another example, information about a location of the camera may expire. After determining the location information has expired, the camera may omit location information when meta-tagging an image (or may not meta-tag an image at all).
- a camera may guide a user in operating the camera based on information expiring. For instance, information about whether the subject of an image is in the shade or the sun may expire. Based on this, the camera may output a message to guide a user: “If your subject is in the shade, you may want to adjust the white balance setting on the camera to 7000K or use a flash. If your subject is in the sun, you may want to adjust the white balance setting on the camera to 5200K and make sure that your subject is facing towards the sun.” In another example, information about the subject of one or more images may expire. In response, the camera may output a message to a user: “You can meta-tag your images with information about their subjects by pressing the ‘Meta-Tag’ button and saying the name of the subject.”
- the camera may store information based on information expiring. That is, according to some embodiments of the present invention, the camera may store a first piece of information based on the expiration of a second piece of information. For example, information about the camera being underwater may expire based on a conductivity sensor on the body of the camera. Based on information received via the conductivity sensor, the camera may store an indication that it is not underwater.
- if an expiration condition occurs, then related information may be determined to be expired (and optionally the camera may perform one or more actions), as described herein.
- information may expire based on other information expiring. For example, if a camera is turned off for more than sixty minutes, then the information that the camera is outdoors may expire. Based on this, the information that the weather is sunny and that the user is on a beach may also expire.
- a single condition might cause multiple pieces of information to expire. For example, the information that an image includes a body of water and the piece of information that the user prefers to have no reflections may both expire if an image does not match a water template or if a user presses the “Reset Image Preferences” button on the camera.
- Different actions may be performed based on what causes information to expire. For example, if information indicating that the subject of an image is Alice expires because a user is no longer aiming the camera or because thirty seconds have elapsed, then the camera may stop meta-tagging images as “Subject: Alice.” However, if that information expires because an image recognition program (e.g., executed by the camera, executed by a server in communication with the camera) does not recognize the subject of an image as being Alice, then the camera may stop meta-tagging the image and ask: “Who are you photographing?” Thus, an action (e.g., determining and/or outputting a question) may be performed based on information expiring and/or based on the particular circumstance(s) that caused the information to expire.
- a camera or other device may store an expiring information database that stores information about conditions that may cause information to expire.
- an expiring information database is shown in FIG. 14. Note that the example of the expiring information database shown in FIG. 14 may store at least one expiration condition for each piece of information that is stored by the camera.
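- An expiring information store in the spirit of FIG. 14 might pair each stored value with an expiration condition and an optional follow-up action, as in this hypothetical sketch (FIG. 14 itself is not reproduced here):

```python
# Sketch of an expiring information store: each piece of information
# carries an expiration condition, and expired entries can trigger a
# follow-up action. Names are illustrative, not taken from FIG. 14.

import time

class InfoStore:
    def __init__(self):
        self._entries = {}   # key -> (value, expiration_condition, on_expire)

    def put(self, key, value, expires, on_expire=None):
        """expires: a zero-argument predicate; True means expired."""
        self._entries[key] = (value, expires, on_expire)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires, on_expire = entry
        if expires():
            del self._entries[key]
            if on_expire:
                on_expire()     # e.g., ask the question again
            return None
        return value

store = InfoStore()
deadline = time.time() + 30 * 60      # "at the beach" expires in 30 minutes
store.put("location", "beach",
          expires=lambda: time.time() > deadline,
          on_expire=lambda: print("Are we still at the beach?"))
print(store.get("location"))          # "beach" until the condition occurs
```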
- a flowchart illustrates a process 2000 that is consistent with one or more embodiments of the present invention.
- the process 2000 is a method for performing an action based on information expiring.
- the process 2000 is described as being performed by a camera 130 .
- the process 2000 may be performed by any type of imaging device 210 and/or computing device 220 .
- in step 2005, the camera 130 receives information related to use of the camera 130.
- the camera determines or otherwise receives (e.g., from a sensor, from a server 110 , from a user) any of the various types of information described herein.
- the camera 130 receives an indication that it is raining, a user preference, a signal that the memory is low, etc.
- in step 2010, the camera 130 determines an expiration condition for the information. For example, the camera 130 determines that the piece of information should expire after thirty minutes.
- in step 2015, the camera 130 stores an indication of the information and an indication of the expiration condition (e.g., in an expiration condition database).
- in step 2020, the camera 130 determines if the information has expired (e.g., based on the expiration condition). If the information has not expired, the camera 130 performs a first action in step 2025. If the information has expired, the camera 130 performs a second action based on the information expiring (e.g., a corresponding action indicated in an expiration condition database).
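- The steps of process 2000 might likewise be sketched as follows, again with invented helper names and the thirty-minute expiration taken from the example above:

```python
# Illustrative sketch of process 2000; helper names and the
# thirty-minute expiration are assumptions taken from the example above.

import time

def process_2000(camera):
    info = camera.receive_information()       # step 2005
    expires_at = time.time() + 30 * 60        # step 2010: expire in 30 minutes
    camera.store(info, expires_at)            # step 2015
    if time.time() < expires_at:              # step 2020: has it expired?
        camera.first_action(info)             # step 2025: information still valid
    else:
        camera.second_action(info)            # act on the expiration instead

class DemoCamera2000:
    def __init__(self):
        self.database = {}
    def receive_information(self):
        return ("weather", "raining")         # e.g., from a radio weather report
    def store(self, info, expires_at):
        self.database[info[0]] = (info[1], expires_at)
    def first_action(self, info):
        print("using information:", info)
    def second_action(self, info):
        print("information expired; asking the user again:", info)

process_2000(DemoCamera2000())
```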
- In some embodiments, a camera may output a question to a party other than a user of the camera. For example, a camera may output a question to a human subject (i.e., a person) of one or more images captured (or to be captured) by the camera.
- Questions may be output to subjects of images for a variety of different purposes.
- For example, a question may be output to verify that an image was captured properly.
- For instance, a camera may be used to capture an image of a group of people (e.g., Alice, Bob, and Carly). Immediately after capturing the image of the group, the camera may output a question to the group: “Did anybody blink?” If one or more people in the group answer “Yes,” then the camera may capture one or more additional images of the group, in the hope of capturing at least one image in which nobody is blinking.
- In another example, a preference of a subject may be determined. For instance, the camera may output a question to a subject of an image: “Do you want this to be a close-up shot from the waist up, or a full-body shot that includes your feet?” The camera may then adjust one or more settings (e.g., a zoom setting) based on the subject's response to this question.
- Thus, one or more actions (e.g., adjusting a setting, meta-tagging an image) may be performed based on a subject's preferences.
- In some embodiments, an imaging device may assist a subject in posing. For example, a camera may output a question to a subject of an image: “It looks like there's a piece of paper sticking to your shoe. Do you want to remove this before the photo is taken?” Based on the subject's response, the camera may pause to allow the subject to remove the piece of paper from his shoe.
- A camera may include an audio speaker to play an audio recording of a question loudly enough for a subject of an image to hear it. Alternatively, the camera may use a HyperSonic Sound® directional sound system by American Technology Corporation to output a question so that it is heard by one subject but not by other subjects of the image, the user, or bystanders.
- A camera may include an LCD display or other type of video monitor that faces (or may be configured to face) a subject of the camera (e.g., facing away from a user of the camera). The camera may use this display, for instance, to present a text question to the subject.
- In some embodiments, a subject of a camera may carry a wireless handheld device (e.g., a remote control, a cell phone, a PDA) that communicates with the camera (e.g., using an infrared or radio communications link). The camera may output a question to the subject by transmitting the question to the wireless handheld device, which may then output the question to the subject (e.g., using an audio speaker, an LCD display screen, or another output means).
- Other embodiments for outputting a question to a subject of an image may be similar to those described herein for outputting a question to a user of the camera.
- A subject may respond to a question using an input device, such as, without limitation, a microphone, an image sensor, or a wireless handheld device.
- For example, a subject of an image may respond to a question by speaking the answer aloud. The camera may use a microphone and voice recognition software to record and determine the subject's response to the question.
- In another example, a subject of an image may respond to a question by making an appropriate hand signal to the camera (e.g., thumbs-up for “Yes,” thumbs-down for “No”). The camera may use an image sensor to capture one or more images of the subject making the hand signal and then process the images using an image recognition program to determine the subject's response to the question.
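- A Python sketch of this hand-signal flow follows; recognize_gesture() is a hypothetical stand-in for the image recognition program, which is not specified here.

    def recognize_gesture(image):
        # Placeholder: a real implementation would run image recognition here.
        return image.get("gesture")

    def subject_response(images):
        signals = {"thumbs_up": "Yes", "thumbs_down": "No"}
        for image in images:
            answer = signals.get(recognize_gesture(image))
            if answer:
                return answer
        return None  # no recognizable signal; the camera might repeat the question

    print(subject_response([{"gesture": "thumbs_up"}]))  # Yes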
- In yet another example, a subject of an image may respond to a question using a wireless handheld device (e.g., a remote control, a cell phone, a PDA) operable to communicate with the camera.
- For instance, a subject of an image may press a button on his PDA or speak into a microphone on his cell phone to provide a response to a question. The PDA or cell phone may then transmit an indication of the response to the camera (e.g., via a communications network).
- A variety of exemplary actions that may be performed based on a user's response to a question are discussed herein (e.g., adjusting a setting, meta-tagging an image, outputting a second question). Other actions are also possible. Additional actions that may be performed by the camera based on a user's response to a question include automatically capturing an image and managing images stored in memory.
- In some embodiments, an imaging device may be configured to capture an image automatically. For instance, a camera may automatically capture one or more images based on a user's response to a question. For example, if the camera asks a user, “Are we at a football game?” and the user responds, “Yes,” then the camera may automatically capture one or more images whenever the players on the football field are moving.
- Various exemplary processes for automatically capturing an image are discussed herein.
- Automatically capturing one or more images based on a user's response to a question may comprise one or more of, without limitation: determining whether the camera should automatically capture one or more images based on a user's response to a question; determining what images the camera should automatically capture based on a user's response to a question; and determining how the camera should treat the one or more automatically-captured images (e.g., compressing them) based on a user's response to a question.
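- The three determinations listed above might be wired together as in the following sketch; the setting names and values are illustrative assumptions rather than settings named herein.

    def configure_auto_capture(response, settings):
        # A "Yes" to "Are we at a football game?" enables automatic capture,
        # sets what to capture, and sets how captured images are treated.
        if response == "Yes":
            settings["auto_capture_enabled"] = True           # whether to capture
            settings["capture_condition"] = "players_moving"  # what to capture
            settings["auto_capture_compression"] = 0.8        # treatment of captures
        return settings

    print(configure_auto_capture("Yes", {}))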
- One way for a user's response to a question to affect a process of automatically capturing images is for the camera to adjust a setting relating to automatic capture. One example of such a setting is a condition for automatically capturing images: the camera may automatically capture an image when the condition is true, and a user's response to a question may be a factor that affects whether the condition is true.
- Another such setting is a threshold value for determining whether to store an automatically-captured image. For example, the camera may capture an image and then determine a rating for the image based on its quality. If the rating is higher than the threshold value, then the camera may store the automatically-captured image; otherwise, the automatically-captured image may be compressed or deleted.
- Similarly, a parameter that affects how much an automatically-captured image is compressed may be adjusted. For instance, the camera may automatically compress an automatically-captured image based on a compression setting: images with greater compression settings may be compressed more, and images with lesser compression settings may be compressed less.
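- The threshold-and-compression logic just described can be sketched as follows; the numeric scales are invented for illustration.

    def handle_auto_capture(image_name, rating, threshold=0.7, compression_setting=0.8):
        # Store the automatically-captured image only if its quality rating
        # clears the threshold; otherwise compress (or delete) it.
        if rating >= threshold:
            return ("store", image_name)
        if compression_setting > 0.5:  # greater settings compress more
            return ("store_compressed", image_name)
        return ("delete", image_name)

    print(handle_auto_capture("img_001.jpg", rating=0.9))  # ('store', 'img_001.jpg')
    print(handle_auto_capture("img_002.jpg", rating=0.4))  # ('store_compressed', ...)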
- In some embodiments, a camera may manage one or more images stored in memory based on a user's response to a question. Managing images stored in memory may include, without limitation, one or more of: transmitting images to another device, modifying images, and compressing or deleting images.
- For example, a camera may include a radio modem, cell phone, or other wireless network connection that allows the camera to transmit images to a second electronic device (e.g., a computer server, a laptop computer, a cell phone).
- Uploading an image using a network may be particularly useful in sharing images with other people (e.g., friends of a user) or in freeing up memory on the camera (e.g., since an image need no longer be stored on the camera once it is stored on the second electronic device).
- In another example, a camera may use image editing software to modify one or more images that are stored in memory. Examples of modifications that may be made to images include cropping, removing red-eye, color balancing, removing shadows, removing objects from the foreground or background, adding or removing meta-data, and combining images into a panorama.
- In a further example, a camera may automatically compress or delete one or more images stored in memory in order to make room for additional images that may be captured by the camera.
- Several exemplary scenarios illustrate such management of images. Each scenario includes a question (e.g., asked by a camera), a response from a user, and a resulting action, such as the following exemplary actions:
- Action: Use a wireless network connection (e.g., a 3G cellular network) to upload images from the camera to a central computer, then delete the images that have been uploaded, thereby freeing up space in the camera's memory for the 20-30 additional images that the user plans on capturing.
- Action: Process the captured image to remove shadows that fall across people's faces and red-eye that may have resulted from using a flash.
- Action: Sort the images in memory into images of Alice and images of Bob. Re-compress all the images of Alice using a JPEG compression setting of 80%, thereby reducing the file sizes of these images and freeing up memory space in the camera. Do not perform any additional compression on the images of Bob that are stored in memory.
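- A sketch of this last action, assuming each image record carries a "subject" meta-tag and using a hypothetical recompress() helper in place of a real JPEG re-encode:

    def recompress(image, quality):
        # Placeholder for a real JPEG re-encode at the given quality setting.
        image["quality"] = quality
        return image

    def manage_memory(images):
        for image in images:
            if image.get("subject") == "Alice":
                recompress(image, quality=80)  # re-compress images of Alice at 80%
            # Images of Bob receive no additional compression.
        return images

    photos = [{"subject": "Alice", "quality": 100}, {"subject": "Bob", "quality": 100}]
    print(manage_memory(photos))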
- One or more embodiments of the present invention may enable a camera or other imaging device to more easily determine a scene to be captured in an image and a user's intentions in capturing an image. Such determinations may enable some types of users to adjust the settings on their cameras more easily, making capturing images a simpler and more enjoyable process. In addition, some embodiments of the invention may allow a user to capture better images, even if he does not have a detailed knowledge of photography.
Abstract
According to one embodiment of the invention, a camera captures an image. The image is transmitted to a server for image recognition processing. The camera receives information from the server, including an indication of information to suggest to a user for meta-tagging the image. The suggested information may be based, for example, on a comparison of the image with meta-information stored by the server and/or a database of stored images. The camera asks the user if the user would like to meta-tag the image with the information. Optionally, the camera receives an indication from the user that the user would like to meta-tag the image with the suggested information, and the camera meta-tags the image with the information.
Description
- The present Application claims the benefit of U.S. Provisional Application Serial No. 60/434,475 filed Dec. 18, 2002, in the name of Walker et al. The entirety of this provisional application is incorporated by reference herein for all purposes.
- FIG. 1 shows a block diagram of a system that is consistent with at least one embodiment of the present invention.
- FIG. 2 shows a block diagram of a system that is consistent with at least one embodiment of the present invention.
- FIG. 3 shows a block diagram of a camera in communication with a computing device that is consistent with at least one embodiment of the present invention.
- FIG. 4 shows a block diagram of a computing device that is consistent with at least one embodiment of the present invention.
- FIG. 5 shows a block diagram of a camera that is consistent with at least one embodiment of the present invention.
- FIG. 6 shows a block diagram of a camera that is consistent with at least one embodiment of the present invention.
- FIG. 7 is a table illustrating an exemplary data structure of a settings database consistent with at least one embodiment of the present invention.
- FIG. 8 is a table illustrating an exemplary data structure of an image database consistent with at least one embodiment of the present invention.
- FIG. 9 is a table illustrating an exemplary data structure of a question database consistent with at least one embodiment of the present invention.
- FIG. 10 is a table illustrating an exemplary data structure of a determination condition database consistent with at least one embodiment of the present invention.
- FIG. 11 is a table illustrating an exemplary data structure of an output condition database consistent with at least one embodiment of the present invention.
- FIGS. 12A and 12B are a table illustrating an exemplary data structure of a response database consistent with at least one embodiment of the present invention.
- FIG. 13A is a table illustrating an exemplary data structure of an event log corresponding to capturing images at a wedding, in accordance with at least one embodiment of the present invention.
- FIG. 13B is a table illustrating an exemplary data structure of an event log corresponding to capturing images on a sunny beach, in accordance with at least one embodiment of the present invention.
- FIG. 14 is a table illustrating an exemplary data structure of an expiring information database consistent with at least one embodiment of the present invention.
- FIG. 15 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- FIG. 16 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for performing an action based on a response.
- FIG. 17 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for performing an action based on a response.
- FIG. 18 is a flowchart illustrating a process consistent with at least one embodiment of the present invention for suggesting meta-information.
- FIG. 19 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- FIG. 20 is a flowchart illustrating a process consistent with at least one embodiment of the present invention.
- Applicants have recognized that, in accordance with some embodiments of the present invention, some types of users of cameras and other imaging devices may find it appealing to have a camera that is able to determine a variety of different types of information that may be useful in performing a variety of functions and/or assisting a user in the performance of various actions. Also, some types of users may find it appealing to use a camera having enhanced features to facilitate information gathering (e.g., via interaction with a user, by detection of environmental conditions, by communication with other devices). In accordance with some embodiments, such information may be used, for example, in managing images (e.g., suggesting a meta-tag for an image) and in improving the quality of images (e.g., by adjusting a camera setting).
- Applicants have also recognized that some types of users of cameras and other imaging devices may find it appealing to be able to receive a variety of different types of questions (e.g., open-ended questions) and/or suggestions (e.g., suggested meta-data to associate with an image) from a camera, as provided for in accordance with at least one embodiment of the present invention. Some types of users may also find it appealing to be able to provide responses to questions output by a camera.
- Some users of cameras (e.g., casual users) seldom adjust their cameras to capture images in the best possible way. Even automatic cameras, for example, may still make mistakes in estimating what images a user wants to capture and what settings are best for capturing them. Further, even if a user knows how to adjust his camera correctly, he may occasionally forget to do so when he is capturing images. Accordingly, Applicants have recognized that some types of users may find it appealing to use a camera having an interface that makes adjusting the camera convenient and not time-consuming, and that may optionally suggest (or automatically make) settings adjustments, as provided for in some embodiments of the present invention.
- At least one embodiment of the invention includes a camera that may output questions to a user. The user may respond to these questions (e.g., providing information about a scene that he is interested in photographing) and one or more settings on the camera may be adjusted based on the user's response.
- For example, a camera may ask a user: “Are you at the beach?” If the user responds “Yes” to this question, then the camera may adjust one or more of its settings (e.g., aperture, shutter speed, white balance, automatic neutral density) based on the user's response. In a second example, the camera may ask a user a plurality of questions, starting with “Are you indoors?” If the user responds that he is indoors, then the camera may ask the user a second question: “What type of lights does this room have?” In addition to outputting the question, the camera may output a list of potential answers to the question (e.g., “Fluorescent,” “Tungsten,” “Halogen,” “Skylight,” and “I don't know”). The user may respond to the question by selecting one of the potential answers from the list. For example, if the user responds “Fluorescent” to this question, then the camera may adjust its settings to “Fluorescent Light” mode, in which the camera's white balance, aperture, shutter speed, image sensor sensitivity and other settings are adjusted for taking pictures in a room that is lit with fluorescent light bulbs.
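- The two-step dialogue above suggests a simple question tree. The following Python sketch is one hypothetical encoding; the setting values applied for “Fluorescent Light” mode are invented for illustration and are not taken from the databases described herein.

    FOLLOW_UPS = {
        ("Are you indoors?", "Yes"): (
            "What type of lights does this room have?",
            ["Fluorescent", "Tungsten", "Halogen", "Skylight", "I don't know"],
        ),
    }

    MODES = {
        "Fluorescent": {"white_balance": "fluorescent", "shutter_speed": "1/60"},
    }

    def handle_answer(question, answer, settings):
        settings.update(MODES.get(answer, {}))     # adjust settings if applicable
        return FOLLOW_UPS.get((question, answer))  # next question, if any

    settings = {}
    follow_up = handle_answer("Are you indoors?", "Yes", settings)
    print(follow_up)  # the follow-up question and its list of potential answers
    handle_answer(follow_up[0], "Fluorescent", settings)
    print(settings)   # {'white_balance': 'fluorescent', 'shutter_speed': '1/60'}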
- Various embodiments of the present invention are described herein with reference to the accompanying drawings. The leftmost digit(s) of a reference numeral typically identifies the figure in which the reference numeral first appears.
- Embodiments of the present invention will first be introduced by means of block diagrams of exemplary systems and devices that may be utilized by an entity practicing the present invention. Exemplary data structures illustrating tables that may be used when practicing various embodiments of the present invention will then be described, along with corresponding flowcharts that illustrate exemplary processes with reference to the exemplary devices, systems, and tables.
- 1. Systems and Devices
- Referring now to FIG. 1, a block diagram of a system 100 according to at least one embodiment of the present invention includes one or more servers 110 (e.g., a personal computer, a Web server) in communication, via a communications network 120, with one or more cameras 130 (e.g., digital camera, video camera, wireless phone with integrated digital camera). Each of the servers 110 and cameras 130 may comprise one or more computing devices, such as those based on the Intel Pentium® processor, that are adapted to communicate with any number and type of devices (e.g., other cameras and/or servers) via the communications network 120. Although only two cameras 130 and two servers 110 are depicted in FIG. 1, it will be understood that any number and type of cameras 130 may communicate with any number of servers 110 and/or other cameras 130 (and vice versa).
- According to one or more embodiments of the present invention, a camera 130 may communicate with a server 110 in order to determine a question to output to a user. For example, the camera 130 may transmit various information (e.g., images, GPS coordinates) to a computer server 110. The server 110 may then determine a question based on this information. The server 110 may then transmit the question to the camera 130, and the camera 130 may output the question to a user.
- Communication among the cameras 130 and the servers 110 may be direct or may be indirect, and may occur via a wired or wireless medium. Some, but not all, possible communication networks that may comprise network 120 (or may otherwise be part of system 100 and/or other exemplary systems described herein) include: a local area network (LAN), a wide area network (WAN), the Internet, a telephone line, a cable line, a radio channel, an optical communications line, and a satellite communications link. In yet other embodiments, the devices of the system 100 may communicate with one another over RF, cable TV, satellite links and the like. Some possible communications protocols that may be part of system 100 include, without limitation: Ethernet (or IEEE 802.3), SAP, ATP, Bluetooth™, IEEE 802.11, CDMA, TDMA, ultra-wideband, universal serial bus (USB), and TCP/IP. Optionally, communication may be encrypted to ensure privacy and to prevent fraud in any of a variety of ways well known in the art. Of course, in lieu of or in addition to the exemplary communications means described herein, any appropriate communications means or combination of communications means may be employed in the system 100 and in other exemplary systems described herein.
- For example, communication may take place over the Internet through a Web site maintained by a server 110 on a remote server, or over an on-line data network including commercial on-line service providers, bulletin board systems and the like. In another example, using the wireless capabilities of his mobile phone, a user may upload an image captured using the integrated digital camera to his personal computer, or to a personal database of images on a Web server maintained by his telecommunications company. In another example, while a user is still away from home on vacation, the user's personal computer may receive, via a cable modem, a series of vacation snapshots taken by the user, and may also transmit information about those snapshots and/or questions related to those snapshots back to the user's digital camera.
- According to one or more embodiments of the present invention, a server 110 may comprise an external or internal module associated with one or more of the cameras 130 that is capable of communicating with one or more of the cameras 130 and of directing the one or more cameras 130 to perform one or more functions. For example, a server 110 may be configured to execute a program for controlling one or more functions of a camera 130 remotely. Similarly, a camera 130 may comprise a module associated with one or more servers 110 that is capable of directing one or more servers 110 to perform one or more functions. For example, a camera 130 may be configured to direct a server 110 to execute a facial recognition program on a captured image and to return an indication of the best matches to the camera 130 via the communication network 120.
- A camera 130 may be operable to access one or more databases (e.g., of server 110) to provide suggestions and/or questions to a user of the camera 130 based on, for example, an image captured by the camera 130 or on information gathered by the camera 130 (e.g., information about lighting conditions). A camera 130 may also be operable to access a database (e.g., an image database) via the network 120 to determine what meta-information (e.g., information descriptive of an image) to associate with one or more images. For example, as discussed further herein, a database of images and/or image templates may be stored for a user on a server 110. Various functions of a camera 130 and/or the server 110 may be performed based on images stored in a personalized database. For instance, an image recognition program running on the server 110 (or on the camera 130) may use the user's personalized database of images for reference in identifying people, objects, and/or scenes in an image captured by the user. If, in accordance with a preferred embodiment, the user has identified the content of some of the images in the database himself (e.g., by associating a meta-tag with an image), a match determined by the image recognition software with reference to the customized database is likely to be acceptable to the user (e.g., the user is likely to agree to a suggestion to associate a meta-tag from a stored reference image with the new image also).
- Information exchanged by the exemplary devices depicted in FIG. 1 may include, without limitation, images and indications of changes in settings or operation of a camera 130 (e.g., an indication that a user or the camera 130 has altered an exposure setting). Other exemplary types of information that may be determined by the camera 130 and/or the server 110 and communicated to one or more other devices are described herein. The server 110, for example, may monitor operations of a camera 130 (and/or activity of a user) via the network 120. For instance, the server 110 may identify a subject a user is recording images of and, optionally, use that information to direct the camera 130 to ask if the user would like to e-mail or otherwise transmit a copy of the captured image to the subject.
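- One way to picture the personalized-database matching described above is the following sketch, in which match() is a hypothetical stand-in for image recognition scoring and the data fields are invented for illustration.

    def match(new_image, reference):
        # Placeholder for real image-recognition scoring (0.0 to 1.0).
        return 1.0 if new_image["features"] == reference["features"] else 0.0

    def suggest_meta_tag(new_image, personal_db, threshold=0.8):
        best = max(personal_db, key=lambda ref: match(new_image, ref), default=None)
        if best is not None and match(new_image, best) >= threshold:
            return best["meta_tag"]  # e.g., suggest tagging the new image "Alice"
        return None

    db = [{"features": "f1", "meta_tag": "Alice"}, {"features": "f2", "meta_tag": "Beach"}]
    print(suggest_meta_tag({"features": "f1"}, db))  # Alice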
- According to some embodiments, various processes may be performed by the
camera 130 in conjunction with theserver 110. For example, some steps of a described process may be performed by thecamera 130, while other steps are performed by theserver 110. As discussed herein, data useful in providing some of the described functionality may be stored on one of or both of thecamera 130 and server 110 (or other devices). - In some embodiments, as discussed herein, the
servers 110 may not be necessary and/or may not be preferred. For example, some embodiments of the present invention may be practiced using acamera 130 alone, as described herein. In such embodiments, one or more functions described as being performed by theserver 110 may be performed by thecamera 130, and some or all of the data described as being stored on aserver 110 may be stored on thecamera 130 or on another device in communication with the camera 130 (e.g., another camera, a personal digital assistant (PDA)). - Similarly, in some embodiments the
cameras 130 may not be necessary and/or may not be preferred. Accordingly, one or more functions described herein as being performed by thecamera 130 may be performed by theserver 110, and some or all of the described as being stored on thecamera 130 may be stored on theserver 110 or on another device in communication with the server 110 (e.g., a PDA, a personal computer). - A
server 110 may be embodied in a variety of different forms, including, without limitation, a mainframe computer (e.g., an SGI Origin™ server), a personal computer (e.g., a Dell Dimension™ computer), and a portable computer (e.g., an Apple iBook™ laptop, a Palm m515™ PDA, a Kyocera 7135™ cell phone). Several examples of types of cameras, servers, and other devices are discussed herein, and other types consistent with various embodiments of the present invention will be readily understood by those of skill in the art in light of the present disclosure. - Referring now to FIG. 2, a block diagram of a
system 200 according to at least one embodiment of the present invention includes animaging device 210 in communication (e.g., via a communications network or system bus) with acomputing device 220. Various exemplary means by which devices may communicate are discussed above with respect to FIG. 1. Although only oneimaging device 210 and onecomputing device 220 are depicted in FIG. 2, it will be understood that any number and type ofimaging devices 210 may communicate with any number ofcomputing devices 220. - Various types of
imaging devices 210 andcomputing devices 220 are discussed herein. Theimaging device 210 preferably comprises at least one device or component for recording an image, such as, without limitation, an image sensor, a camera, or a handheld device having an integrated camera. It will be understood, therefore, that a lens and an image sensor, for example, may each be referred to individually as an imaging device, or, alternatively, two or more such components may be referred to collectively as an imaging device (e.g., as embodied in a camera or PDA). Further, it will be understood, as discussed further below with respect to FIG. 3, that a device embodying any such components (e.g., a camera) may itself be referred to as an imaging device. - The
imaging device 210 may further comprise one or more types of computing devices, such as those based on the Intel Pentium® processor, adapted to communicate with thecomputing device 220. For example, as will be readily apparent to those skilled in the art, many types of cameras include an imaging device (e.g., an image sensor for capturing images) and a computing device (e.g., a processor for executing camera functions). For example, referring now to FIG. 3, a block diagram of asystem 300 according to at least one embodiment of the present invention includes acamera 310 in communication (e.g., via a communications network) with aserver 340. Thecamera 310 itself comprises an imaging device 320 (e.g., an image sensor and/or lens) and a computing device 330 (e.g., a camera processor) that is in communication (e.g., via a communication port of the computing device 330) with the server 340 (e.g., a Web server). It will be understood that a device such as thecamera 310, comprising both an imaging device and a computing device, may itself be referred to, alternatively, as an imaging device or a computing device. - Referring again to FIG. 2, a computer or
computing device 220 may comprise one or more processors adapted to communicate with the imaging device 210 (or one or more computing devices of the imaging device 210). As discussed herein, a computer orcomputing device 220 preferably also comprises a memory (e.g., storing a program executable by the processor) and may optionally comprise a communication port (e.g., for communication with an imaging device 210). Some examples of a computer orcomputing device 220 include, without limitation: a camera processor, a camera, a server, a PDA, a personal computer, a computer server, personal computer, portable hard drive, digital picture frame, or other electronic device. Thus, acomputing device 220 may but need not include any devices for capturing images. Some exemplary components of a computing device are discussed in further detail below with respect to FIGS. 4-6. - In some exemplary embodiments of the present invention, as discussed herein,
imaging device 210 comprises a camera (e.g., acamera 130 of FIG. 1) and thecomputing device 220 comprises a server (e.g., aserver 110 of FIG. 1). In another example consistent with at least one embodiment of the present invention, thesystem 200 depicts components of a camera or other device capable of recording images. For instance, theimaging device 210 may comprise an image sensor or lens in communication via a camera system bus with acomputing device 220 such as a camera computer or integrated communication device (e.g., a mobile phone). - An
imaging device 210 orcamera 310 may communicate with one or more other devices (e.g.,computing device 220, server 340) in accordance with one or more systems and methods of the invention. Examples of devices that an imaging device may communicate with include, without limitation: - (i) a personal digital assistant (PDA)
- (ii) a cellular telephone
- (iii) a digital wallet (e.g., the iPod™ by Apple, the MindStor™ from Minds@Work, Nixvue's Digital Album™)
- (iv) a portable stereo (e.g., an MP3 music player, a Sony Discman™)
- (v) a notebook computer
- (vi) a tablet computer
- (vii) a digital picture frame (e.g., Iomega's FotoShow™, NORDview's Portable Digital Photo Album™)
- (viii) a GPS device (e.g., such as those manufactured by Garmin)
- (ix) a personal computer
- According to various embodiments of the present invention, an
imaging device 210 may transfer one or more images to a second device (e.g., computing device 220). Some examples are provided with reference to FIGS. 1-3. In one example, animaging device 210 may include a wireless communication port that allows the camera to transmit images to a second electronic device (e.g., a computer server, personal computer, portable hard drive, digital picture frame, or other electronic device). The second electronic device may then store copies of the images. After transferring the images to this second electronic device, theimaging device 210 may optionally delete the images, since the images are now stored securely on the second electronic device. - According to another exemplary embodiment, the
camera 310 may include a cellular telephone or be connected to a cellular telephone with wireless modem capabilities (e.g., a cellular telephone on a 2.5G or 3G wireless network). Using the cellular telephone, the camera may transmit one or more images to thecomputer server 340, which may store the images. - In another example, an
imaging device 210 may communicate with a portable hard drive such as an Apple iPod™. To free up memory on theimaging device 210, theimaging device 210 may transfer images to the portable hard drive. - In another example, the
camera 130 may have a wireless Internet connection (e.g., using the 802.11 wireless protocol) and use this connection to transmit images to a personal computer that is connected to the Internet. Note that by transferring an image from a camera to a second electronic device, the camera may effectively expand its available memory. That is, some or all of the memory on the second electronic device may be available to the camera for storing images. - According to various embodiments of the present invention, a
camera 310 orother imaging device 210 may communicate with an electronic device to output a question to a user. For example, a camera may transmit a question to a user's PDA. The question may then be displayed to the user by the PDA. Using a PDA or other device with a relatively large display may make it easier for a user to view a question (e.g., a question that includes a large amount of text or a question which is based on an image). - In another example, a digital camera may queue up a plurality of questions and output these questions to a user's personal computer when the user uploads photos from the camera to the personal computer. The personal computer may run software that outputs the questions to the user and enables the user to respond to the questions. Viewing questions on a personal computer may be more convenient than viewing questions using the digital camera. Of course, a user's response to a question may be less useful to the camera (e.g., in adjusting settings on the camera) if this response is provided after the user has already finished capturing images.
- According to some embodiments, a camera or other imaging device may communicate with an electronic device to receive an input to a user. For example, a user may use a PDA to indicate a response to a question and then the PDA may transmit an indication of this response to the camera using a Bluetooth communication link. For example, a user may highlight a portion of an image, select a response from a list of responses, or write a free form response using the stylus on his PDA. Providing an input to the camera using a PDA or other electronic device may be particularly convenient for a user because the PDA may include one or more input devices that are not present on the camera (e.g., a touch screen, a GPS device).
- In another example, a user may carry a GPS device that is separate from the camera but that communicates with the camera using a USB cable. In order to indicate his location, the user may transmit an indication of his latitude and longitude from the GPS device to the camera. In yet another example, all user control of a camera may be implemented through a user's cellular telephone. For example, the user may use his cellular telephone to remotely operate the camera, pressing the “1” and “2” keys to zoom in and zoom out, the “3” key to capture a picture, and the “4” and “5” keys to answer “Yes” and “no” to questions output by the camera. One advantage of having a second device implement a large number of controls for the camera is that the camera can have a very small form factor, but still be operable by a large number of controls because all of these controls are on the second device.
- 1.1. Computing Device
- Referring now to FIG. 4, illustrated therein is a block diagram of an
embodiment 400 of computing device 220 (FIG. 2) or computing device 330 (FIG. 3). Thecomputing device 400 may be implemented as a system controller, a dedicated hardware circuit, an appropriately programmed general-purpose computer, or any other equivalent electronic, mechanical or electromechanical device. Thecomputing device 400 may comprise, for example, a server computer operable to communicate with one or more client devices, such as animaging device 210. Thecomputing device 400 may be operative to manage thesystem 100, thesystem 200, thesystem 300, and/or thecamera 310 and to execute various methods of the present invention. - In operation, the
computing device 400 may function under the control of a user, remote operator, image storage service provider, or other entity that may also control use of animaging device 210 and/orcomputing device 220. For example, thecomputing device 400 may be a Web server maintained by an Internet services provider, or may be a computer embodied in acamera 310 orcamera 130. In some embodiments, thecomputing device 400 and animaging device 210 may be different devices. In some embodiments, thecomputing device 400 and theimaging device 210 may be the same device. In some embodiments, thecomputing device 400 may comprise more than one computer operating together. - The
computing device 400 comprises aprocessor 405, such as one or more Intel Pentium® processors. Theprocessor 405 is in communication with amemory 410 and with a communication port 495 (e.g., for communicating with one or more other devices). - The
memory 410 may comprise an appropriate combination of magnetic, optical and/or semiconductor memory, and may include, for example, Random Access Memory (RAM), Read-Only Memory (ROM), a compact disc and/or a hard disk. Theprocessor 405 and thememory 410 may each be, for example: (i) located entirely within a single computer or other device; or (ii) connected to each other by a remote communication medium, such as a serial port cable, telephone line or radio frequency transceiver. In one embodiment, thecomputing device 400 may comprise one or more devices that are connected to a remote server computer for maintaining databases. - The
memory 410 stores aprogram 415 for controlling theprocessor 405. Theprocessor 405 performs instructions of theprogram 415, and thereby operates in accordance with the present invention, and particularly in accordance with the methods described in detail herein. Theprogram 415 may be stored in a compressed, uncompiled and/or encrypted format. Theprogram 415 furthermore includes program elements that may be necessary, such as an operating system, a database management system and “device drivers” for allowing theprocessor 405 to interface with computer peripheral devices. Appropriate program elements are known to those skilled in the art, and need not be described in detail herein. - The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor405 (or any other processor of a device described herein) for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as
memory 410. Volatile media include dynamic random access memory (DRAM), which typically constitutes the main memory. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to theprocessor 405. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. - Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to processor405 (or any other processor of a device described herein) for execution. For example, the instructions may initially be borne on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to a computing device 400 (or, e.g., a server 340) can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal and place the data on a system bus for
processor 405. The system bus carries the data to main memory, from whichprocessor 405 retrieves and executes the instructions. The instructions received by main memory may optionally be stored inmemory 510 either before or after execution byprocessor 405. In addition, instructions may be received viacommunication port 495 as electrical, electromagnetic or optical signals, which are exemplary forms of carrier waves that carry data streams representing various types of information. Thus, thecomputing device 400 may obtain instructions in the form of a carrier wave. - According to an embodiment of the present invention, the instructions of the
program 415 may be read into a main memory from another computer-readable medium, such from a ROM to RAM. Execution of sequences of the instructions inprogram 415 causesprocessor 405 to perform the process steps described herein. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of the present invention. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software. - The
memory 410 also preferably stores a plurality of databases, including asettings database 420, animage database 425, aquestion database 430, adetermination condition database 435, anoutput condition database 440, aresponse database 445, anevent log 450, and an expiringinformation database 455. Examples of each of these databases is described in detail below and example structures are depicted with sample entries in the accompanying figures. - As will be understood by those skilled in the art, the schematic illustrations and accompanying descriptions of the sample databases presented herein are exemplary arrangements for stored representations of information. Any number of other arrangements may be employed besides those suggested by the tables shown. For example, even though eight separate databases are illustrated, the invention could be practiced effectively using any number of functionally equivalent databases. Similarly, the illustrated entries of the databases represent exemplary information only; those skilled in the art will understand that the number and content of the entries can be different from those illustrated herein. Further, despite the depiction of the databases as tables, an object-based model could be used to store and manipulate the data types of the present invention and, likewise, object methods or behaviors can be used to implement the processes of the present invention.
- Note that, although these databases are described with respect to FIG. 4 as being stored in one computing device, in other embodiments of the present invention some or all of these databases may be partially or wholly stored in another device, such as one or
more imaging devices 210, one or more of thecameras 130, one or more of theservers memory 410 of the computing device 400) in a memory of one or more other devices. - 1.2. Camera
- Referring now to FIG. 5, illustrated therein is a block diagram of an
embodiment 530 of a camera (e.g.,camera 130 of FIG. 1,camera 310 of FIG. 3) in communication (e.g., via a communications network) with aserver 550. Thecamera 530 may be implemented as a system controller, a dedicated hardware circuit, an appropriately configured computer, or any other equivalent electronic, mechanical or electromechanical device. Thecamera 530 may comprise, for example, any of various types of cameras well known in the art, including, without limitation, a still camera, a digital camera, an underwater camera, and a video camera. A still camera, for example, typically includes functionality to capture images that may be displayed individually. A single lens reflex (SLR) camera is one example of a still camera. A video camera typically includes functionality to capture movies or video (i.e., one or more sequences of images typically displayed in succession). A still image, movie file or video file may or may not include or be associated with recorded audio. It will be understood by those skilled in the art that some types of cameras, such as the Powershot A40™ by Canon U.S.A., Inc., may include functionality to capture movies and functionality to capture still images. - The
camera 530 may comprise any or all of thecameras 130 of system 100 (FIG. 1) or the imaging device 210 (FIG. 2). In some embodiments, a user device such as a PDA or cell phone may be used in place of, or in addition to, some or all of thecamera 530 components depicted in FIG. 5. Further, a camera may comprise a computing device or other device operable to communicate with another computing device (e.g., a server 110). - The
camera 530 comprises aprocessor 505, such as one or more Intel Pentium™ processors. Theprocessor 505 is in communication with amemory 510 and a communication port 520 (e.g., for communicating with one or more other devices). Thememory 510 may comprise an appropriate combination of magnetic, optical and/or semiconductor memory, and may include, for example, Random Access Memory (RAM), Read-Only Memory (ROM), a programmable read only memory (PROM), a compact disc and/or a hard disk. Thememory 510 may comprise or include any type of computer-readable medium. Theprocessor 505 and thememory 510 may each be, for example: (i) located entirely within a single computer or other device; or (ii) connected to each other by a remote communication medium, such as a serial port cable, telephone line or radio frequency transceiver. In one embodiment, thecamera 530 may comprise one or more devices that are connected to a remote server computer for maintaining databases. - According to some embodiments,
memory 510 ofcamera 530 may comprise an image buffer (e.g., a high-speed buffer for transferring images from an image sensor) and/or a flash memory (e.g., a high-capacity, removable flash memory card for storing images). A wide variety of different types of memory are possible and are known to those skilled in the art. For example, memory may be volatile or non-volatile; may be electronic, capacitive, inductive, or magnetic in nature; and may be accessed sequentially or randomly. - Memory may or may not be removable from a camera. Many types of cameras may use one or more forms of removable memory, such as chips, cards, and/or discs, to store and/or to transfer images and other data. Some examples of removable media include CompactFlash™ cards, SmartMedia™ cards, Sony Memory Sticks™, MultiMediaCards™ (MMC) memory cards, Secure Digital™ (SD) memory cards, IBM Microdrives™, CD-R and CD-RW recordable compact discs, and DataPlay™ optical media.
- The
memory 510 stores aprogram 515 for controlling theprocessor 505. Theprogram 515 may comprise instructions (e.g., Digita® imaging software, image recognition software) for capturing images and/or for one or more other functions. Theprocessor 505 performs instructions of theprogram 515, and thereby operates in accordance with the present invention, and particularly in accordance with the methods described in detail herein. Theprogram 515 may be stored in a compressed, uncompiled and/or encrypted format. Theprogram 515 furthermore includes program elements that may be necessary, such as an operating system, a database management system and “device drivers” for allowing theprocessor 505 to interface with computer peripheral devices. Appropriate program elements are known to those skilled in the art, and need not be described in detail herein. - According to one embodiment of the present invention, the instructions of the
program 515 may be read into a main memory from another computer-readable medium, such from a ROM to RAM. Execution of sequences of the instructions inprogram 515 causesprocessor 505 to perform the process steps described herein. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of the present invention. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software. As discussed with respect tosystem 100 of FIG. 1, execution of sequences of the instructions in a program of aserver 110 in communication withcamera 530 may also causeprocessor 505 to perform some of the process steps described herein. - The
memory 510 optionally also stores one or more databases, such as the exemplary databases described in FIG. 4. An example of acamera memory 510 storing various databases is discussed herein with respect to FIG. 6. - The
processor 505 is preferably also be in communication with one or more imaging devices 535 (e.g., a lens, an image sensor) embodied in thecamera 530. Various types of imaging devices are discussed herein and in particular with respect to FIG. 6. - The
processor 505 is preferably also in communication with one or more input devices 525 (e.g., a button, a touch screen) andoutput devices 540. Various types of input devices and output devices are described herein and in particular with respect to FIG. 6. - Such one or
more output devices 540 may comprise, for example, an audio speaker (e.g., for outputting a question to a user), an infra-red transmitter (e.g., for transmitting a suggested meta-tag to a user's PDA), a display device (e.g., a liquid crystal display (LCD)), a radio transmitter, and a printer (e.g., for printing an image). - An
input device 525 is capable of receiving an input (e.g., from a user or another device) and may be a component ofcamera 530. An input device may communicate with or be part of another device (e.g. a server, a PDA). For cameras, common input devices include a button or dial. Some other examples of input devices include: a keypad, a button, a handle, a keypad, a touch screen, a microphone, an infrared sensor, a voice recognition module, a motion detector, a network card, a universal serial bus (USB) port, a GPS receiver, a radio frequency identification (RFID) receiver, an RF receiver, a thermometer, a pressure sensor, and an infra-red port (e.g., for receiving communications from with a second camera or another device such as a smart card or PDA of a user). - Referring now to FIG. 6, illustrated therein is a more detailed block diagram of a
embodiment 600 of a camera (e.g.,camera 130 of FIG. 1,camera 530 of FIG. 5). Thecamera 630 comprises aprocessor 605, such as one or more Intel Pentium™ processors. Theprocessor 605 is in communication with amemory 610 and a communication port 695 (e.g., for communicating with one or more other devices). Thememory 610 may comprise or include any type of computer-readable medium, and stores aprogram 615 for controlling theprocessor 605. Theprocessor 605 performs instructions of theprogram 615, and thereby operates in accordance with various processes of the present invention, and particularly in accordance with the methods described in detail herein. - The
memory 610 stores a plurality of databases, including asettings database 620, animage database 625, aquestion database 630, adetermination condition database 635, anoutput condition database 640, aresponse database 645, anevent log 650, and an expiringinformation database 655. Examples of each of these databases is described in detail below and example structures are depicted with sample entries in the accompanying figures. - The
processor 605 is preferably also in communication with a lens 660 (e.g., made of glass), animage sensor 665, one or more controls 670 (e.g., an exposure control), one ormore sensors 675, one or more output devices 680 (e.g., a liquid crystal display (LCD)), and a power supply 685 (e.g., a battery, a fuel cell, a solar cell). Various examples of these types of components are described herein. - A processor of a
camera 600 may be capable of executing instructions (e.g., stored in memory 610) such as software (e.g., for wireless and/or digital imaging, such as Digita® software from Flashpoint Technology, Inc.). - As indicated in FIG. 6, a camera may include one or more input devices capable of receiving data, signals, and indications from various sources. Lenses, sensors, communication ports and controls are well known types of input devices.
- Various types of lenses that may be used with cameras are well known, including telephoto, wide-angle, macro, and zoom lenses.
- As will be understood by those of skill in the art, an image sensor may be an area that is responsive to light and may be used to capture an image. An image sensor may or may not be an electronic device. Some examples of image sensors include, without limitation: a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor) image sensor, such as the X3® PRO 10M™ CMOS image sensor by Foveon. An image sensor may comprise software or other means for image identification/recognition. “Image sensor” may be most often used to refer to an electronic image sensor, but those skilled in the art will recognize that various other technologies (e.g., a light sensitive film like that used in analog cameras) may also function as image sensors.
- A camera may include one or more output devices. Examples of output devices include, without limitation: a display (e.g., a color or black-and-white liquid crystal display (LCD) screen), an audio speaker (e.g., for outputting questions), a printer (e.g., for printing images), a light emitting diode (LED) (e.g., for indicating that a self-timer is functioning, for indicating that a question for the user is pending), and a touch screen. A display may be useful, for example, for displaying images and/or for displaying camera settings.
- The camera may also include one or more communication ports for use in communicating with one or more other devices. For example, a USB (universal serial bus) or Firewire® (IEEE-1394 standard) connection port may be used to exchange images and other types of data with a personal computer or digital wallet (e.g., an Apple iPod™). The camera may be in communication with a cellular telephone, personal digital assistant (PDA) or other wireless communications device. Images and other data may be transmitted to and from the camera using this wireless communications device. For example, the SH251I™ cellular telephone by Sharp Corporation includes a 3.1 megapixel CCD camera, and allows users to receive image files via e-mail. In yet another example, a camera may include a radio antenna for communicating with a radio beacon. For instance, a subject of a photo may carry a radio beacon that may communicate with the camera and provide information that is useful in determining settings for the camera (e.g., information about the light incident on the subject).
- As will be understood by those skilled in the art, a camera may include one or more controls or other input devices. Examples of controls include, without limitation: a button (e.g., a shutter button), a switch (e.g., an on/off switch), a dial (e.g., a mode selection dial), a keypad, a touch screen, a microphone, a bar code reader (e.g., such as the one on the 1991 version of the Canon EOS Elan™), a remote control (e.g., such as the one on the Canon Powershot G2™), a sensor, a trackball, a joystick, a slider bar, and a continuity sensor.
- Controls on a camera or other type of imaging device may be used to perform a variety of functions. In accordance with various embodiments of the present invention, a control may be used, without limitation, to adjust a setting or other parameter, provide a response to a question, or operate the camera. For example, a user may press the shutter button on the camera to capture an image. Controls may be used to adjust one or more settings on the camera. For example, a user may use “up” and “down” buttons on a camera to adjust the white balance on the camera. In another example, a user may use a mode dial on the camera to select a plurality of settings simultaneously. For example, a user may use a control to indicate to the camera that he would like a question to be output as an audio recording, or to any adjust any of various other types of parameters of how the camera is to operate and/or interact with the user. As discussed herein controls may be used to provide an indication to the camera. For example, a user may use a control to indicate that he would like to have a question output to him. In still another example, a user may use a control to provide a response to a question or to provide other information, such as indicating that the user is in a room with fluorescent lights, at a beach, or capturing images of a football game.
- Various types of sensors that may be included in a camera include, without limitation: a light sensor, an image sensor, a range sensor (e.g., for determining the distance to a subject), a microphone (e.g., for recording audio that corresponds to a scene), a global positioning system (GPS) device (e.g., for determining a camera's location), a camera orientation sensor (e.g., an electronic compass), a tilt sensor, an altitude sensor, a humidity sensor, a clock (e.g., indicating the time of day, day of the week, month, year), and a temperature/infrared sensor.
- According to some embodiments, a microphone may be useful for allowing a user to control the camera using voice commands. Voice recognition software (e.g., ViaVoice™ from IBM Voice Systems) is known to those skilled in the art and need not be described further herein.
- 1.3. Databases
- As will be understood by those skilled in the art, a setting for a camera may be a parameter that affects how the camera operates (e.g., how the camera captures at least one image). Examples of types of settings on a camera include, without limitation: exposure settings, lens settings, digitization settings, flash settings, multi-frame settings, power settings, output settings, function settings, and mode settings. Some more detailed examples of these types of settings are discussed further below.
- Exposure settings may affect the exposure of a captured image. Examples of exposure settings include, without limitation: shutter speed, aperture, image sensor sensitivity (e.g., measured as ISO or ASA), white balance, color hue, and color saturation. Lens settings may affect properties of a lens on the camera. Examples of lens settings include, without limitation: focus (e.g., near or far), optical zoom (e.g., telephoto, wide angle), optical filters (e.g., ultraviolet, prism), an indication of which lens to use (e.g., for a camera that has multiple lenses) or which portion of a lens, field of view, and image stabilization (e.g., active or passive image stabilization).
- Digitization settings may affect how the camera creates a digital representation of an image. Examples of digitization settings include, without limitation: resolution (e.g., 1600×1200 or 640×480), compression (e.g., for an image that is stored in JPG format), color depth/quantization, digital zoom, and cropping. For instance, a cropping setting may indicate how the camera should crop an acquired digital image when storing it to memory.
- Flash settings may affect how the flash on the camera operates. Examples of flash settings include, without limitation: flash brightness, red-eye reduction, and flash direction (e.g., for a bounce flash). Multi-frame settings may affect how the camera captures a plurality of related images. Examples of multi-frame settings include, without limitation: a burst mode (e.g., taking a plurality of pictures in response to one press of the shutter button), auto-bracketing (e.g., taking a plurality of pictures with different exposure settings), a movie mode (e.g., capturing a movie), and image combination (e.g., using Canon's PhotoStitch™ program to combine a plurality of images into a single image).
- Power settings may affect the supply of power to one or more of the camera's electronic components. Examples of power settings include, without limitation: on/off and “Power-Save” mode (e.g., various subsystems on a camera may be put into “Power-Save” mode to prolong battery life).
- Output settings may affect how the camera outputs information (e.g., to a user, to a server, to another device). Examples of output settings include, without limitation: language (e.g., what language is used to output prompts, questions, or other information to a user), viewfinder settings (e.g., whether a digital viewfinder on the camera is enabled, how a heads-up-display outputs information to a user), audio output settings (e.g., whether the camera beeps when it captures an image, whether questions may be output audibly), and display screen settings (e.g., how long the camera displays images on its display screen after capturing them).
- In accordance with one or more embodiments of the present invention, a camera may be operable to capture images and to perform one or more of a variety of other functions. A function setting may cause one or more functions to be performed (and/or prevent one or more functions from being performed). For example, if an auto-rotate setting on a camera is enabled, then the camera may automatically rotate a captured image so that it is stored and displayed right side up, even if the camera was held at an angle when the image was captured. Other examples of functions that may be performed by a camera include, without limitation: modifying an image (e.g., cropping, filtering, editing, adding meta-tags), cropping an image (e.g., horizontal cropping, vertical cropping, aspect ratio), rotating an image (e.g., 90 degrees clockwise), filtering an image with a digital filter (e.g., emboss, remove red-eye, sharpen, add shadow, increase contrast), adding a meta-tag to an image, displaying an image (e.g., on a LCD screen of the camera), and transmitting an image to another device (e.g., a personal computer, a printer, a television).
- One way to adjust a setting on the camera is to change the camera's mode. For example, if the camera were to be set to “Fluorescent Light” mode, then the settings of the camera would be adjusted to the exemplary values listed for that mode in the settings database described below (i.e., the aperture would be set to automatic, the shutter speed would be set to 1/125 sec, the film speed would be set to 200 ASA, etc.).
- In accordance with some embodiments of the present invention, a mode refers to one or more parameters that may affect the operation of the camera. A setting may be one type of parameter. Indicating a mode to the camera may be a convenient way of adjusting a plurality of settings on the camera (e.g., as opposed to adjusting each setting individually). There are many types of modes. Some types, for example, may affect settings (e.g., how images are captured) and other modes may affect outputting questions. Some exemplary modes are discussed herein, without limitation, and other types of modes will be apparent to those skilled in the art in light of the present disclosure. A “Sports” mode, for example, may describe settings appropriate for capturing images of sporting events (e.g., fast shutter speeds). For instance, a user may operate a control (e.g., a dial) to indicate that the camera should be in “Sports” mode, in which the shutter speed on the camera is faster than 1/250 sec and burst capturing of three images is enabled. An exemplary “Fluorescent Light” mode may establish settings appropriate for capturing images under fluorescent lights (e.g., white balance). A “Sunny Beach” mode may describe settings appropriate for capturing images on sunny beaches, and a “Sunset” mode may describe settings appropriate for capturing images of sunsets (e.g., neutral density filter). An exemplary “Portrait” mode may establish settings appropriate for capturing close-up images of people (e.g., adjusting for skin tones).
- Referring now to FIG. 7, an exemplary tabular representation 700 illustrates one embodiment of settings database 420 (or settings database 620) that may be stored in an imaging device 210 and/or computing device 220. The tabular representation 700 of the settings database includes a number of example records or entries, each defining a setting that may be enabled or adjusted on an imaging device such as camera 130 or camera 600. Those skilled in the art will understand that the settings database may include any number of entries.
- The tabular representation 700 also defines fields for each of the entries or records. The exemplary fields specify: (i) a setting 705, (ii) a current value 710 that indicates the present value or state of the corresponding setting, (iii) a value 715 that indicates an appropriate value for when the camera is in a “Fluorescent Light” mode, (iv) a value 720 that indicates an appropriate value for when the camera is in a “Sunny Beach” mode, and (v) a value 725 that indicates an appropriate value for when the camera is in a “Sunset” mode.
- The settings database may be useful, for example, for determining the current value 710 of a given setting (e.g., “aperture”). Also, as depicted in tabular representation 700, one or more values may be established for association with a given mode. For example, tabular representation 700 indicates that if the mode of the camera is “Fluorescent Light,” the “aperture” setting will be changed to “auto.”
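- By way of non-limiting illustration, a settings database patterned on tabular representation 700 might be modeled as a table mapping each setting to its current value and to per-mode values, so that indicating a mode adjusts a plurality of settings at once. The following Python sketch is hypothetical; the identifiers and values are invented stand-ins rather than part of the disclosed embodiment:

```python
# Hypothetical settings database patterned on tabular representation 700.
# Each setting maps to its current value plus a value for each mode.
SETTINGS_DB = {
    "aperture":      {"current": "f/5.6", "Fluorescent Light": "auto",
                      "Sunny Beach": "f/11", "Sunset": "f/8"},
    "shutter speed": {"current": "1/60", "Fluorescent Light": "1/125",
                      "Sunny Beach": "1/250", "Sunset": "1/30"},
    "film speed":    {"current": "100", "Fluorescent Light": "200",
                      "Sunny Beach": "100", "Sunset": "400"},
}

def set_mode(db, mode):
    """Switch modes by copying each setting's per-mode value (when one
    is defined) into that setting's current value."""
    for values in db.values():
        if mode in values:
            values["current"] = values[mode]

set_mode(SETTINGS_DB, "Fluorescent Light")
print(SETTINGS_DB["aperture"]["current"])  # -> auto
```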
- Referring now to FIG. 8, an exemplary tabular representation 800 illustrates one embodiment of image database 425 (or image database 625) that may be stored, for example, in a server 110 and/or camera 130. The tabular representation 800 of the image database includes a number of example records or entries, each defining a captured image. Those skilled in the art will understand that the image database may include any number of entries.
- The tabular representation 800 also defines fields for each of the entries or records. The exemplary fields specify: (i) an image identifier 805 that uniquely identifies an image, (ii) an image format 810 that indicates the file format of the image, (iii) an image size 815, (iv) a file size 820, (v) a time 825 that indicates when the image was captured, and (vi) meta-data 830 that indicates any of various types of supplemental information (e.g., keyword, category, subject, description, location, camera settings when the image was captured) associated with the image.
- It will be understood by those skilled in the art that a variety of different types of meta-data 830 are possible, including position (e.g., GPS), orientation, altitude, exposure settings (aperture/shutter speed), illumination (daylight/tungsten/fluorescent/IR/flash), lens setting (distance/zoom position/macro), scene data (blue sky/water/grass/faces), subject motion, image content (e.g., subjects), sound annotations, date and time, preferred cropping, and scale. Other types of meta-data are discussed herein.
- With respect to the image identifier 805, a camera may automatically assign an identifier to an image, or a user may use a control (e.g., a keypad) on a camera to indicate an identifier for an image.
- In accordance with various embodiments of the present invention, a camera and/or server may output various different types of questions to a user.
- A question may comprise a request for information from a user. For example, a camera may output a question to a user in order to determine information useful in applying meta-information to an image or in capturing one or more images (e.g., information about lighting, information about subjects, information about a scene).
- Examples of different types of questions include, without limitation: questions about lighting, questions about people and subjects of images, questions about focus and depth of field, questions about meta-tagging and sorting, questions about events and locations, questions about the environment, questions about scenes, questions about future plans, questions about priorities, and questions about images.
- Some examples of questions about lighting include, without limitation:
- (i) “Are you indoors?”
- (ii) “What kind of light bulb(s) does this room have?” (e.g., with multiple choice answers “tungsten,” “fluorescent,” “halogen,” and “I don't know”)
- (iii) “Are we in the shade?”
- (iv) “It's very dark here. Is it nighttime?”
- Some examples of questions about people and subjects of images include:
- (i) “Who is in this picture?”
- (ii) “What other pictures is <person> in?”
- (iii) “It looks like you're taking a picture of Alice. Are you taking a picture of Alice?”
- (iv) “Is this the same person that was in picture #42 <show picture #42>?”
- (v) “It looks like you're taking a picture of an animal. Are you taking a picture of your pet?”
- Examples of questions about focus and depth of field include:
- (i) “What are you trying to focus on?”
- (ii) “The background of this image looks complicated. Do you want the background to be in focus?”
- (iii) “Do you want the background to be sharp or blurred?”
- (iv) “Do you want <object> to be in focus?”
- (v) “Do you want <object> to be sharp or blurred?”
- (vi) “There's a small object in the foreground (left side) of your image. Do you want this object to be in focus?”
- Examples of questions about meta-tagging and sorting include, without limitation:
- (i) “Do you want to automatically store images of <subject> in a separate directory?”
- (ii) “Where should I store images of <subject>?”
- (iii) “How would you characterize this image?”
- Examples of questions about events and locations include, without limitation:
- (i) “I think I see candles. Are we at a birthday party?”
- (ii) “Are you taking pictures of a sporting event?”
- (iii) “It appears that you're taking lots of pictures of animals. Are you at the zoo?”
- (iv) “I saw a bright flash and heard an explosion. Are you taking pictures of fireworks?”
- Examples of questions about the environment include, without limitation:
- (i) “The current humidity level of the air is 90%. Is it raining outside?”
- (ii) “This image seems cloudy. Am I underwater?”
- (iii) “The sky has an orange tint. Is the sun setting?”
- (iv) “Is it cloudy outside?”
- (v) “Are we in the shade?”
- (vi) “The camera is rocking back and forth. Are you on a boat?”
- (vii) “We're traveling at 200 mph. Are you in an airplane?”
- (viii) “Are you in a moving vehicle (e.g., a car)?”
- (ix) “Is the camera on a tripod?”
- Some examples of questions about scenes include:
- (i) “I think I see a rainbow. Are you trying to take a picture of a rainbow?”
- (ii) “I think I see running water. Are you trying to take a picture of a waterfall?”
- (iii) “I think I see sand. Are you trying to take a picture of a beach?”
- (iv) “I think I see a body of water. Are you trying to take a picture of an ocean, lake, or pond?”
- (v) “Are you taking a picture through a window?”
- (vi) “I think I see a reflection. Will this picture have a mirror in it?”
- (vii) “Are you taking a picture of a sunset?”
- (viii) “Are you taking a picture of stars in the sky?”
- (ix) “What is the most important element in this scene?”
- (x) “Are you taking a picture of a reflection?”
- Examples of questions about future plans include:
- (i) “There are 23 Mb of memory remaining. How many more pictures are you planning on taking today?”
- (ii) “How many pictures are you planning on taking of this scene?”
- (iii) “You've already captured 23 pictures of Alice. How many pictures are you planning on taking of Alice?”
- (iv) “How much longer are you planning on using the camera?” (e.g., with multiple choice answers “less than 10 minutes,” “10-30 minutes,” “30-60 minutes,” and “more than 60 minutes”)
- (v) “This memory card has only 10 Mb of space left. Do you have any blank memory cards with you?”
- (vi) “My batteries will run out in less than 30 minutes. Do you have any more charged batteries with you?”
- (vii) “Are you done taking pictures of this scene?”
- (viii) “Do you need a good picture of every subject?”
- (ix) “Who do you want to capture pictures of today?”
- (x) “Who should receive a copy of this image? (e.g., Grandma, Uncle Joey, Kodak.com's Picture of the Day contest)”
- (xi) “Are you planning on using this image in a slide show?”
- (xii) “Are you planning on emailing this image to somebody?”
- Some examples of questions about priorities include, without limitation:
- (i) “Are you more concerned about exposure or sharpness?”
- (ii) “Are you more concerned about framing or resolution?”
- (iii) “I think I see Alice and Bob. Who is the subject of this photo? (e.g., with multiple choice answers “Alice,” “Bob,” “both Alice and Bob,” and “neither Alice nor Bob”)”
- (iv) “Do you want to focus on Alice or the mountains?”
- Some examples of questions about images include:
- (i) “Who is the subject of this image?”
- (ii) “How would you rate this image on a 1-10 scale?”
- (iii) “How would you rate the exposure of this image on a 1-10 scale?”
- (iv) “Would you prefer the background of this image to be sharper or more blurred?”
- (v) “Is this an image of Alice or Amy?”
- (vi) “Does Alice's skin tone look correct in this image?”
- (vii) “Which is your favorite picture from this group?”
- (viii) “Rank these images from best to worst, starting with your favorite image.”
- (ix) “Is this image overexposed?”
- (x) “Should this image be categorized automatically?”
- Different types of questions may elicit different types of responses from a user. For example, questions may be classified according to the types of responses they are designed to elicit. Some examples of questions classified in this manner include (a brief sketch follows the list below):
- (i) Yes/No questions (e.g., “Is this a picture of Alice?”)
- (ii) open-ended questions (e.g., “Who is in this picture?”)
- (iii) multiple-choice questions in which the user is presented with a plurality of options and prompted to choose at least one of the options (e.g., “Who is in this picture? a) Alice b) Bob c) both d) neither.”)
- (iv) true/false questions (e.g., “True or False?: This is a picture of Alice.”)
- (v) graphical response questions (e.g., “Point to Alice in this picture.”)
- (vi) ratings (e.g., a user may be asked to rate how much he likes an image)
- Note that a question may be phrased in the first person. Some types of users may find this personification of the camera appealing. Various exemplary ways that a question may be output to a user are discussed herein.
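- To make the classification above concrete, a question record might carry its expected response type alongside its text. The minimal sketch below is hypothetical and is not drawn from the question database of FIG. 9:

```python
# Hypothetical question records carrying an expected response type.
questions = [
    {"id": "Q1", "text": "Is this a picture of Alice?", "type": "yes/no"},
    {"id": "Q2", "text": "Who is in this picture?", "type": "open-ended"},
    {"id": "Q3", "text": "Who is in this picture?", "type": "multiple-choice",
     "choices": ["Alice", "Bob", "both", "neither"]},
]

def render(question):
    """Format a question for display, appending lettered choices when
    the question is multiple-choice."""
    text = question["text"]
    if question["type"] == "multiple-choice":
        letters = " ".join(f"{chr(97 + i)}) {c}"
                           for i, c in enumerate(question["choices"]))
        text = f"{text} {letters}"
    return text

print(render(questions[2]))
# -> Who is in this picture? a) Alice b) Bob c) both d) neither
```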
- Referring now to FIG. 9, an exemplary tabular representation 900 illustrates one embodiment of the question database 430 (FIG. 4) that may be stored in the computing device 400. The tabular representation 900 of the question database includes a number of example records or entries, each defining a question that may be output by a server 110 or by a camera 130. Those skilled in the art will understand that the question database may include any number of entries.
- The tabular representation 900 also defines fields for each of the entries or records. The exemplary fields specify: (i) a question identifier 905 that uniquely identifies a particular question, (ii) a question to ask 910 that includes an indication (e.g., in text) of a question to output (e.g., to a user, to transmit to a camera 130), and (iii) a potential response 915 that indicates one or more potential responses to the corresponding question (e.g., multiple-choice answers, acceptable answers, answers to suggest).
- The question to ask 910 and potential responses 915 may be used, for example, in accordance with various embodiments described herein for outputting a question to a camera user. According to some embodiments of the present invention, if the question is a multiple-choice question, then a plurality of potential answers 915 may be presented to the user. The user may then answer the question by selecting one of the potential answers. In another example, question identifier 905 may be used (e.g., in conjunction with a response database) in ensuring that a camera does not repeat the same question.
- According to at least one embodiment, a condition may comprise a Boolean expression. This Boolean expression, for example, may reference one or more variables (i.e., factors) and may include Boolean modifiers and conjunctions (e.g., AND, OR, XOR, NOT, NAND), comparators (e.g., >, <, =, >=, <=, !=), mathematical operations (e.g., +, −, *, /, mean, standard deviation, logarithm, derivative, integral), functions (e.g., search_term_in_database( ), autocorrelation( ), dilate( ), fourier_transform( ), template_match( )), and constants (e.g., 10, 20 pixels, 300 milliseconds, 4 lumens, 0.02, 15%, pi, TRUE, yellow, “raining,” 5200 K). Some examples of conditions comprising Boolean expressions include, without limitation (a minimal evaluation sketch follows these examples):
- (i) picture_of_football_player (image2394) AND (NOT Date=“Halloween”)
- (ii) (identify_person_in_image (image736234)=“FAILED”)
- (iii) (subject_of_image (image420378)=“dog”)
- (iv) (average_rate_of_capturing_images >2.1 per minute) OR (empty_memory <10 Mb)
- (v) (number_of_images>10) AND (percentage_of_images_captured_using_flash >80%)
- (vi) (average_audio_noise_level >=70 dB)
- (vii) (average_rating (image45618, image86481, image18974)<5)
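- As a rough, non-authoritative sketch of how such expressions might be evaluated, the Python fragment below models each factor as a named value and each condition as a predicate over those factors. The factor names and thresholds are hypothetical stand-ins patterned on examples (iv) and (v) above:

```python
# Hypothetical snapshot of factors; in practice these values would come
# from sensors, the image database, and the state of the camera.
factors = {
    "average_rate_of_capturing_images": 2.4,  # images per minute
    "empty_memory_mb": 8,
    "number_of_images": 14,
    "percentage_captured_using_flash": 85.0,
}

# Conditions as Boolean expressions over the factors, loosely mirroring
# examples (iv) and (v) above.
conditions = {
    "running_out_of_memory": lambda f: (
        f["average_rate_of_capturing_images"] > 2.1 or f["empty_memory_mb"] < 10),
    "mostly_flash_pictures": lambda f: (
        f["number_of_images"] > 10 and f["percentage_captured_using_flash"] > 80),
}

true_conditions = [name for name, test in conditions.items() if test(factors)]
print(true_conditions)  # -> ['running_out_of_memory', 'mostly_flash_pictures']
```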
- A condition may be based on one or more factors. Examples of factors include, without limitation: factors affecting the occurrence of a condition, factors affecting whether a condition is true, factors causing a condition to occur, factors causing a condition to become true, and factors affecting the output of a message.
- Some general categories of factors include, without limitation: factors related to images, indications by a user, time-related factors, factors relating to a state of the camera, information from sensors, characteristics of a user, and information from a database.
- Examples of factors relating to indications by a user include, without limitation: usage of controls (e.g., a shutter button, an aperture setting, an on/off setting), voice commands (e.g., recorded by a microphone), movement of the camera (e.g., observed using an orientation sensor), ratings provided (e.g., a user may rate the quality of an image or how much he likes an image), and responses to previous questions. For example, the camera may use feedback to determine the next question in a series of questions to ask a user.
- Examples of time-related factors include, without limitation: the duration of a condition (e.g., for the last ten seconds, for a total of fifteen minutes), the current time of day, week, month, or year (e.g., 12:23 pm Sep. 6, 2002), a duration of time after a condition occurs (e.g., two seconds after a previous image is captured), an estimated amount of time until a condition occurs (e.g., ten minutes until the camera's batteries run out, twenty minutes before the sun goes down).
- Some examples of factors relating to the state of the camera include, without limitation: current and past settings (e.g., shutter speed, aperture, mode), parameters that affect the operation of the camera, current and past modes (e.g., “Sports” mode, “Manual” mode, “Macro” mode, “Twilight” mode, “Fluorescent Light” mode, “Silent” mode, “Portrait” mode, “Output Upon Request” mode, “Power-Save” mode), images stored in memory (e.g., total images stored, amount of memory remaining), and battery charge level.
- Examples of factors relating to information from sensors include, without limitation: location of the camera (e.g., determined with a GPS sensor), orientation of the camera (e.g., determined with an electronic compass), ambient light (e.g., determined with a light sensor), sounds and audio (e.g., determined with a microphone), the range to a subject (e.g., as determined using a range sensor), a lack of movement of the camera (e.g., indicating that the user is aiming the camera), signals from other devices (e.g., a radio beacon carried by a subject, a second camera), and temperature (e.g., determined by a temperature/infrared sensor).
- The camera may ask different questions to different types of users. Some examples of factors relating to characteristics of a user include, without limitation: preferences for capturing images (e.g., likes high contrast pictures, likes softening filters, saves images at best quality JPG compression), habits when operating camera (e.g., forgets to take off the lens cap, turns camera on and off a lot), appearance (e.g., is the user of the camera in an image captured using a self-timer?), characteristics of users other than the current user of the camera (e.g., past users, family members), family and friends, and skill level. For example, a skilled user may tend to capture images that are well-composed, correctly exposed, and not blurry. In contrast, a less-experienced user may tend to have trouble capturing high-quality images.
- As discussed variously herein, various embodiments of the present invention provide for information to be stored in one or more databases. Some examples of factors relating to information stored in a database include, without limitation:
- (i) templates or other information useful in recognizing or processing images
- (ii) images stored in the camera's memory (e.g., an image database such as the one shown in FIG. 8)
- (iii) indications by a user (e.g., previous answers to questions)
- (iv) predicted weather conditions
- (v) current weather conditions
- (vi) topography, vegetation
- (vii) locations of landmarks
- (viii) light sources (e.g., all of the lights in this building are fluorescent)
- (ix) anticipated events (e.g., Old Faithful at Yellowstone National Park erupting)
- (x) the current score of a baseball game
- (xi) sunrise and sunset times
- (xii) high and low tide times
- The camera may store a determination condition database for storing information related to such determination conditions, such as the one shown in FIG. 10. For each determination condition stored in the condition database, a corresponding question may be output if the determination condition is true. For example, QUES-123478-02 (“What kind of light bulbs does this room have?”) may be output to a user if the camera is indoors and the flash is turned off.
- Referring now to FIG. 10, an exemplary tabular representation 1000 illustrates one embodiment of the determination condition database 435 (FIG. 4) that may be stored in the computing device 400. The tabular representation 1000 of the determination condition database includes a number of example records or entries, each defining a determination condition that may be useful in determining one or more questions to ask a user of a camera. Those skilled in the art will understand that the determination condition database may include any number of entries.
- The tabular representation 1000 also defines fields for each of the entries or records. The exemplary fields specify: (i) a determination condition 1005 that defines a particular determination condition, and (ii) a question to ask 1010 that includes an identifier of a question corresponding to the determination condition.
tabular representation 1000, for example, “QUES-123478-02” may be output to a user if the user is indoors and the camera's flash is turned off. According to another example, a question may be output to a user if a condition is true. For example, “QUES-123478-02” of FIG. 9 (“What kind of light bulbs does this room have?”) may be output to a user if the camera is indoors and the flash is turned off. Note that the question identifier listed in the exemplary question to askfield 1010 may correspond to a question identifier in the question database shown in FIG. 9. For example, “QUES-123478-01” refers to the question “Are you indoors or outdoors?” intabular representation 900. - Referring now to FIG. 11, an exemplary
tabular representation 1100 illustrates one embodiment of the output condition database 440 (FIG. 4) that may be stored in thecomputing device 400. Thetabular representation 1100 of the output condition database includes a number of example records or entries, each defining a output condition that may be useful in determining when and/or how to output a question to a user. Those skilled in the art will understand that the output condition database may include any number of entries. - The
tabular representation 1100 also defines fields for each of the entries or records. The exemplary fields specify: (i) acamera mode 1105, (ii) a questionready indication 1110, (iii) anoutput condition 1115 that indicates a condition for outputting a question, (iv) amethod 1120 that indicates a preferred method for outputting a question, and (v) an enabledfield 1130 that indicates whether the corresponding camera mode (e.g., “Fully Automatic” mode) is presently enabled. - The output condition database stores information that may be useful in determining when and/or how to output at least one question to a user, as discussed herein. For example, an audio recording of a question may be output when a user presses the camera's shutter button halfway down.
- As discussed herein, a mode of a camera may include a collection of one or more parameters that affect when and/or how at least one question is output to a user. For example, the camera may have an “Output Upon Request” mode in which one or more questions may be output to a user when the user presses an “Ask Me a Question” button on the camera. Prior to outputting a question to a user, the camera may output an indication that a question is ready to be output. The question
ready indication 1110 thus describes what indication (if any) may be output to a user to indicate that the camera has a question for the user. For example, as depicted intabular representation 1100 of FIG. 11, if the camera is in “Sports” mode, then the camera may “beep” when it determines a question to be output to a user. The question itself may then be output at a later time (e.g., when an output condition occurs). - As described variously herein, a question may be output to a user upon the occurrence of one or more output conditions. For example, when the camera is in “Manual” mode, the camera may output a question when the viewfinder is in use (i.e., the user is looking through the viewfinder). In a second example, the camera may output a question after thirty seconds of inactivity if the camera is in “Silent” mode. The method of output field indicates how a question may be output. As described herein, a question may be output to a user in a variety of different ways. For example, a text representation of a question may be displayed on the camera's LCD screen, or an audio recording of a question may be output using an audio speaker.
- The currently enabled
field 1130 indicates whether the associated mode (i.e., the mode indicated in the “camera mode” field) is currently enabled. If a mode is enabled, then a question or question ready indication may be output according to the output condition and/or method of output corresponding to that mode. If a mode is disabled, then a question may be output in a different manner (or may not be output at all). It is anticipated that a user may enable and disable modes based on his preferences. For example, if a user is capturing pictures of a musical, then the user may enable “Silent” mode on the camera, so as not to disturb audience members or actors. - Of course, while the exemplary data shown in FIG. 11 indicates that only one mode is enabled (“Manual”), it will be understood that any number of modes (including no modes at all) may be enabled.
- The exemplary embodiment of the output condition database shown in FIG. 11 describes one example of a camera communicating with an electronic device. For example, when the camera is in “PDA Assisted” mode, the camera may output a question by transmitting the question to a user's PDA. The user may then respond to this question using the PDA (e.g., by selecting a response using the PDA's stylus) and the PDA may transmit the user's response back to the camera.
- Referring now to FIG. 12A and FIG. 12B, an exemplary
tabular representation 1200 illustrates one embodiment of the response database 445 (FIG. 4) that may be stored in thecomputing device 400. Thetabular representation 1200 of the response database includes a number of example records or entries, each defining a response that may be useful in recording responses provided by a user (e.g., in response to a question). Those skilled in the art will understand that the response database may include any number of entries. - The
tabular representation 1200 also defines fields for each of the entries or records. The exemplary fields specify: (i) aquestion 1205 that indicates a question that was output (e.g., to a user), (ii) atime 1210 that indicates when the question was output, (iii) a response to thequestion 1215 that includes an indication of the response to the question (e.g., text, an audio file), and (iv) anaction 1220 that indicates what (if any) actions were taken based on the corresponding response. - For example, the first record in the response database shown in FIG. 12A indicates that a user responded “Indoors” to “QUES-123478-01.” An indication of what question was output to a user may comprise a question identifier, for example, which may correspond to a question identifier in the question database (e.g., as represented in FIG. 9). For example, “QUES-123478-01” refers to the question “Are you indoors or outdoors?” in the question database shown in FIG. 4. As indicated in the exemplary table, a question may be output multiple times (e.g., in different situations). For example, “QUES-123478-01” was output to a user at 1:34 p.m. on Aug. 3, 2002 and also at 11:36 p.m. on Aug. 3, 2002.
- The response database may indicate that a user has not responded to a question (e.g., for “QUES-123478-01” at 4:10 p.m. on Aug. 17, 2002). In another example, a user may have provided a response that is not an answer to the question (e.g., for “QUES-123478-08” at 7:21 p.m. on Aug. 11, 2002). Thus, in some instances, the camera may determine to output the question again. As discussed herein, the camera may perform one or more actions based on a user's response to a question. For example, the camera may meta-tag at least one image or adjust one or more settings on the camera based on a user's response to a question.
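- As a sketch only, recording responses and any resulting actions might look like the following; the helper names and the example action text are hypothetical:

```python
import datetime

# Hypothetical response database patterned on FIGS. 12A and 12B.
response_db = []

def record_response(question_id, response, action=None):
    """Append one record: the question asked, when it was output, the
    user's response (None if no response), and any action taken."""
    response_db.append({
        "question": question_id,
        "time": datetime.datetime.now(),
        "response": response,
        "action": action,
    })

def already_answered(question_id):
    """True if the user has responded to this question at least once,
    e.g., so the camera can avoid repeating it."""
    return any(r["question"] == question_id and r["response"] is not None
               for r in response_db)

record_response("QUES-123478-01", "Indoors",
                action="adjusted white balance")  # invented action
record_response("QUES-123478-03", None)  # invented id; no response yet
print(already_answered("QUES-123478-01"))  # -> True
```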
- Referring now to FIG. 13A and FIG. 13B, exemplary tabular representations 1300 and 1350 illustrate embodiments of an event log that may be stored in the computing device 400. An event log may store a list of events that occurred at one or more cameras and/or servers, for example. FIGS. 13A and 13B show two examples of event logs that may be stored by the camera and/or a server. FIG. 13A shows an exemplary event log for events that occurred on Aug. 3, 2002, generally relating to a user capturing images at a wedding. FIG. 13B shows an exemplary event log for events that occurred on Aug. 10, 2002, generally relating to a user capturing images at a beach. As depicted in the figures, event logs preferably store an indication of a time when an event occurred and a description of the event.
- The
tabular representation 1300 also defines fields for each of the entries or records. The exemplary fields specify: (i) a time ofevent 1305 that indicates a time that the corresponding event occurred, and (ii) a description of theevent 1310 that includes (e.g., in text) a description of what the event was.Tabular representation 1350 defines similar fields time ofevent 1355 anddescription 1360. - For convenience of discussion, each of the exemplary events logged has been depicted as occurring at a different time. Of course, two or more events may be logged as occurring at the same time. Note that the event times in the exemplary tables are examples only and do not necessarily represent delays that may be associated with processes on the camera. For example, while the event log in FIG. 13A shows that the camera received a user's response to question “QUES-123478-01” at 1:35 pm on Aug. 3, 2002 and then output “QUES-123478-02” at 1:36 PM on Aug. 3, 2002, this does not necessarily mean that there is a one minute delay between receiving a user's response to question “QUES-123478-01” and outputting “QUES-123478-02.”
- The event logs depicted in FIGS. 13A and 13B may not include one or more events that occur at the camera. For example, the event log shown in FIG. 13A does not include an event of meta-tagging the image “WEDDING-01” as being captured indoors.
- FIG. 14 shows an example of an expiring information database that may be stored by a camera, server, or other computing device. This database preferably stores data about information that is useful to a camera as well as an indication of one more conditions under which this information expires (e.g., and should no longer be used by the camera.) The expiring information database may also indicate one or more actions to perform in response to information expiring.
- Referring now to FIG. 14, an exemplary
tabular representation 1400 illustrates one embodiment of the expiring information database 455 (FIG. 4) that may be stored in thecomputing device 400. Thetabular representation 1100 of the expiring information database includes a number of example records or entries, each defining expiring information that may be useful in determining when information (e.g., as collected by a camera and/or server) should expire. Those skilled in the art will understand that the expiring information database may include any number of entries. - The
tabular representation 1400 also defines fields for each of the entries or records. The exemplary fields specify: (i) theinformation 1405 whose expiration is being monitored, (ii) anexpiration condition 1410 that indicates when or under what circumstances the piece of information should expire, and (iii) anaction 1415 that indicates an action (if any) to be performed (e.g., by a camera, by a server) in response to the information expiring. - For example, the first record in the
tabular representation 1400 of the expiring information database shown in FIG. 14 indicates that the camera will disregard the information that the camera is outdoors and output the question, “Are we outdoors?” if the camera is turned off for more than sixty minutes. - The
information field 1405 preferably includes the piece of information (e.g., determined based on a user's response to a question). For example, the camera may store the information “Camera is Outdoors” based on a user responding “outdoors” to the question “Are you indoors or outdoors?” QUES-123478-Theexpiration condition 1410 preferably indicates one or more conditions under which the information will expire. For example, the information that the weather outside is sunny may be set to expire if the sun goes down or if the camera is taken indoors. - When an
expiration condition 1410 occurs, the camera may perform one or more actions. Exemplary actions include outputting a question, adjusting a setting, or ceasing to perform an action (e.g., an action that was performed based on the information being current). For instance, the camera may cancel “Sunny Beach Mode” (e.g., automatically or after prompting the user) in response to the expiration of the information that a user is on a beach. - 2. Processes
- Methods consistent with one or more embodiments of the present invention may include one or more of the following steps, which are described in further detail herein:
- (i) capturing an image,
- (ii) determining a question,
- (iii) outputting a question,
- (iv) receiving a response to a question,
- (v) performing an action based on a response, and
- (vi) expiring information.
- Referring now to FIG. 15, a flowchart illustrates a
process 1500 that is consistent with one or more embodiments of the present invention. Theprocess 1500 is a method for outputting a question to a user of a camera. Theprocess 1500, and all other processes described herein unless expressly specified otherwise, may be performed by an imaging device (e.g., a camera), a computing device (e.g., a server) in communication with an imaging device, and/or a combination thereof. Each of these devices is described in detail herein. Further, theprocess 1500, and all other processes described herein unless expressly specified otherwise, may include steps in addition to those expressly depicted in the Figures or described in the specification, without departing from the spirit and scope of the present invention. Similarly, the steps ofprocess 1500 and any other process described herein, unless expressly specified otherwise, may be performed in an order other than depicted in the Figures or described in the specification, as appropriate. - Referring to step1505, an image is captured. Various ways of capturing an image, including by use of a camera, are well known to those skilled in the art and some examples are provided herein. In
step 1510, a question is determined based on the captured image (e.g., using a determination condition database of a camera). Instep 1515, the determined question is output to a user (e.g., via an output device of a camera). Various ways of determining and of outputting questions are described in detail herein. - Referring now to FIG. 16, a flowchart illustrates a
process 1600 that is consistent with one or more embodiments of the present invention. Theprocess 1600 is a method for performing an action based on a response from a user. For illustrative purposes only, theprocess 1600 is described as being performed by acamera 130. Of course, theprocess 1600 may be performed by any type ofimaging device 210, or animaging device 210 in conjunction with acomputing device 220. - Referring to step1605, a
camera 130 captures an image. For example, a user presses a shutter button to record an image of a scene. In another example, thecamera 130 automatically captures an image of a scene (e.g., in order to make suggestions that the user adjust one or more settings). Instep 1610, the camera determines a question based on the image. For example, thecamera 130 may determine that the image is underexposed and may determine that it is appropriate to ask the user if the user intended the image to be underexposed. In another example, thecamera 130 may transfer the image or information about the image to aserver 110 for determination of a question. Determining the question may thus include receiving an indication of a question from theserver 110. - In
- In step 1615, the camera 130 outputs the question to a user (e.g., via an LCD device). In step 1620, the camera 130 receives a response from the user. As discussed herein, the user may provide a response by any of a variety of means, including by making a selection on a displayed menu of possible responses to the question. In step 1625, the camera 130 performs one or more actions based on the received response. For example, based on a response that the user intends the image to be underexposed, the camera 130 may store an indication (e.g., in a response database) that questions about exposure should not be output for images of this scene. Various steps of exemplary processes 1500 and 1600 are discussed in further detail herein.
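- The overall flow of process 1600 can be summarized in a few lines of Python; every helper below is a hypothetical placeholder for the camera and/or server behavior described in the text, not an actual API:

```python
# Hypothetical end-to-end sketch of process 1600 (FIG. 16).
def capture_image():                         # step 1605
    return {"id": "IMG-0001", "mean_brightness": 35}  # stub image record

def determine_question(image):               # step 1610
    # e.g., a dark image may prompt a question about the user's intent
    if image["mean_brightness"] < 50:
        return "This image looks underexposed. Did you intend that?"
    return None

def output_question(question):               # step 1615
    print(question)  # e.g., display on the camera's LCD screen

def receive_response():                      # step 1620
    return "yes"  # stub: the user's selection from a displayed menu

def perform_action(image, response):         # step 1625
    if response == "yes":
        print(f"Suppress exposure questions for the scene of {image['id']}")

image = capture_image()
question = determine_question(image)
if question is not None:
    output_question(question)
    perform_action(image, receive_response())
```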
- Alternatively, an image may be captured automatically (e.g., without any indication from a user). For example, the camera may capture images and store them in a buffer even if a user has not pressed the shutter button on the camera. In order to save memory space, images that are captured automatically may be automatically deleted or overwritten. Capturing an image automatically may be particularly helpful in some embodiments for determining the subject(s) of an image the user wishes to record or how a user is composing a photograph. For example, before a user presses the shutter button on the camera, he may aim the camera at a scene (e.g., his girlfriend in front of the Golden Gate Bridge). The camera may capture an image of this scene and then output a question to the user based on the image. In this manner, an image may be captured and a question may be output to a user prior to the user actually taking a picture.
- Capturing an image may include capturing an image based on a condition. This condition may be referred to herein as a capture condition to differentiate it from other described conditions.
- Those skilled in the art will readily understand that capturing an image may include storing the image in memory. As discussed herein, various different forms of memory may be used to store an image, including, without limitation: non-volatile memory (e.g., a CompactFlash™ card), volatile memory (e.g., dynamic random access memory (DRAM) for processing by an image recognition process), removable memory (e.g., a SmartMedia™ or CompactFlash™ card), and non-removable memory (e.g., an internal hard drive). Images may be stored in an image database, such as the one shown in FIG. 8.
- Other methods and aspects of capturing an image (e.g., using a digital camera) are known to those skilled in the art and need not be described in detail herein.
- In accordance with various embodiments of the present invention, a camera and/or server may output various different types of questions to a user. Such questions to ask a user may be determined (e.g., by a server) in many different ways, as discussed variously herein by way of example and without limitation. For instance, questions may be determined based on a condition, based on an image, and/or based on a template. A camera may determine a question to ask a user based on a variety of different factors, including images stored in the camera's memory, the state of the camera (e.g., the camera's current settings), indications by a user (e.g., responses to previous questions), and information from sensors (e.g., information captured by an image sensor).
- For example, if a user captured an image of a group of people, then the camera may use image recognition software to determine that this image corresponds to a group of people and ask the user a question about this group of people (e.g., “Who's in this picture?”). According to one embodiment, the camera may determine a question based on an image that was captured automatically (i.e., without the user pressing the shutter button on the camera).
- According to one or more embodiments of the present invention, a camera may communicate with a server in order to determine a question to output to a user. For example, the camera may transmit any of various information (e.g., images, GPS coordinates) to a computer server. The server may then determine one or more questions based on this information. The server may then transmit an indication of at least one question to the camera, and the camera may output the question to a user.
- Various types of information that may be collected by a camera are described herein. Some examples of information that a camera may transmit to a server include, without limitation: one or more images captured by the camera, indications by a user (e.g., responses to questions, usage of controls), a state of the camera (e.g., current mode, images stored in memory), and information from sensors (e.g., location, orientation, sound and audio).
- A server may determine a question to output in accordance with one or more of the exemplary processes discussed herein. It is worthwhile to note that the computer server may have significantly greater processing power, memory, power consumption (e.g., no batteries), and physical size (e.g., not portable) than the camera. This may allow the computer server to perform computations and analysis that are more complex or extensive than could those that could be performed quickly using the camera's processor. For example, the computer server could run complicated image analysis and pattern matching algorithms to determine an appropriate question to ask a user.
- After a question is determined, a computer server may transmit an indication of the question to the camera. Examples of indications of questions include, without limitation, a question and a question identifier. For example, the computer server may transmit an audio clip or text messages corresponding to the question. The question may be compressed (e.g., as an MP3) to reduce the bandwidth necessary to transmit the question. In another example, the camera may store a database of questions, with each question in the database being identified by a question identifier. In order to indicate a question to the camera, the computer server may transmit a question identifier corresponding to the question. The camera may then retrieve the question from the database.
- After receiving an indication of a question from the computer server, the camera may output this question to a user as discussed variously herein.
- According to some embodiments, a camera may determine a question to ask a user based on a condition. This type of condition may also be referred to as a determination condition to differentiate it from other conditions described elsewhere in this disclosure.
- Some examples of determining a question based on a condition are discussed herein, without limitation. For instance, if the batteries in the camera are running low (i.e., a condition), then the camera may ask the user, “How many more pictures are you planning on taking?” In one example, if a captured image includes an image of a football player (i.e., a condition), the user is to be asked, “Are you taking pictures of a football game?” In another example, if a captured image includes an image of a candle (i.e., a condition), the user may be asked, “Are you taking pictures of a birthday party?” In one example, if a captured image includes an image of two or more people (i.e., a condition), then the question, “Who are the people in the photo?” is to be output to the user.
- As discussed variously herein, in accordance with some embodiments of the present invention, a camera or server may determine a question to ask a user based on one or more images that have been captured. That is, an image may be captured and then the camera may determine a question to ask a user based on this image.
- As discussed herein, determining a question based on an image may include processing the image using image recognition software. A wide variety of image recognition programs are known to those skilled in the art and need not be described in detail herein.
- In one example of determining a question based on an image, a user captures an image using the camera. The camera may determine, based on an analysis of the image, that this image shows a person sitting under a tree. Based on this determination, the camera may ask the user a question such as, “Who is the person sitting under the tree in this picture?” The user's response to this question (e.g., “Alice”) may be used, for example, to meta-tag the image.
- In another example, the camera may automatically capture a plurality of images of an exciting series of plays during a basketball game. Based on these images, the camera may ask the user, “Are you interested in pictures of any of the following players (check all that apply)?” The camera may then save or delete some or all of the captured images based on the user's response to this question.
- In another example of determining a question based on an image, a user may capture an image of a child with a present. Based on this image, the camera may ask the user, “Are you taking pictures of a birthday party?” If the user indicates that he is indeed taking pictures of a birthday party, the camera may adjust the shutter speed to be at least {fraction (1/125)} sec, so as to capture the children who are moving quickly.
- The camera may determine a question based on one or more properties of an image, including, without limitation: exposure (e.g., including brightness, contrast, hue, saturation), framing (e.g., organization of subjects within the image, background), focus (e.g., sharpness, depth of field), digitization (e.g., resolution, quantization, compression), meta-information associated with an image (e.g., camera settings, identities of people in an image, a rating provided by a user), subject(s) (e.g., people, animals, inanimate objects), scenery (e.g., background), and motion relating to an image (e.g., movement of a subject, movement of the camera).
- The camera may determine a question based on exposure of an image. For example, the camera may determine that the background of an image is brighter than the foreground of the image. Based on this, the camera may ask a user, “Would you like to use a fill flash to brighten your subject?” In another example, the camera may determine that an image is too bright. Based on this, the camera may ask a user, “Are you trying to take a picture of a sunrise or sunset?”
- The camera may determine a question based on framing of an image. For example, the camera may determine that an image includes two objects, one in the foreground and one in the background. Based on this, the camera may ask a user, “Which object do you want to focus on, the one in the foreground or the one in the background?” If the objects have been identified (e.g., by the camera using an image recognition program), the different objects may be named in the question.
- In an example of how camera may determine a question based on focus of an image, the camera may determine that a portion of an image is blurred (e.g., as if by movement). Based on this, the camera may ask a user, “Are you taking pictures of a sporting event?” Based on the user's response, the camera may then adjust a setting on the camera (e.g., increase the shutter speed to at least {fraction (1/250)} sec) as described herein.
- The camera may determine a question based on meta-information associated with an image (e.g., camera settings, identities of people in an image). In one example, the camera recognizes that an image has been meta-tagged as being taken at 7:06 p.m. Based on this information, the camera may ask a user, “Is the sun about to go down?” or “How long will it be until the sun goes down?” In another example, an image may include a meta-tag that indicates that it shows Alice and Bob in a canoe. Based on this tag, the camera may ask a user, “Are you taking pictures at a lake or river?” In another example, an image may include a meta-tag that indicates the preferred cropping or scale for the image. Based on this, the camera may ask a user, “When your subject is off-center, do you want the background to be in focus?” In still another example, meta-data associated with an image may include a rating of the quality of the image (e.g., a rating provided by the user or determined by the camera). The camera may determine a question based on this rating (e.g., “This image appears to be overexposed. Are you trying to create an artistic effect?”).
- As discussed herein, the use of image recognition software may allow for one or more subjects of a captured image to be determined. Examples of subjects include, without limitation: people, animals, buildings, vehicles, trees, the sky, a ceiling, a landscape, etc. Determining a question may comprise determining a type of scenery (e.g., natural landscape) in the image. Scenery may include one or more subjects, which may or may not be identified individually. Further, according to some embodiments, one or more questions may be determined based on whether the image matches a template.
- In one example of determining a question based on a subject in an image, the camera may identify a candle in an image. Based on this determination, the camera may ask a user, “Are you taking pictures at a birthday party?” In another example, the camera may identify a large body of water in an image. Based on this, the camera may ask a user, “Are you on a boat?” Based on a determination that an image includes a building, for example, the camera may ask a user, “Are you outside?” If it is determined that an image includes a mountain, for example, the camera may ask a user, “What mountain is this?”
- It will be readily understood that at least one subject of an image may be a person. In some embodiments, determining a question may include one or more of the following steps: determining that an image includes at least one person, identifying at least one person in an image, and determining one or more characteristics of at least one person in image. One or more of the above steps may be performed by a server and/or camera.
- In one example of determining a question based on a subject in an image, the camera may identify a person in an image (e.g., based on information received from a server). Based on this identification, the camera may ask a user, “Is this a picture of Alice?” In another example, the camera may ask a user, “Do you already have any pictures of this person?” A camera may ask a user, “What color skin does Bob have?” The user's answer to this question may be useful in determining how to set exposure settings on the camera when taking a picture of Bob. In yet another example, the subject of an image may be identified as a football player, which may indicate that a user is capturing images of a football game. Accordingly, the camera may ask the user, “Are you taking pictures of a football game?”
- The camera may determine a question based on motion relating to an image (e.g., based on blurring of an image or comparison of a plurality of images). For example, the camera may determine a question based on motion of a subject in an image. For instance, if a subject in an image is moving quickly, the camera may ask a user, “Are you taking sports pictures?” In another example, the camera may determine that the ground plane in an image is moving slightly (e.g., shimmering like water). Based on this, the camera may ask a user, “Are you taking pictures of water (e.g., the ocean, a fountain, a creek)?” An imaging device may determine that it is moving (e.g., using a GPS sensor). Based on this determination, a digital camera may ask a user, “Are you in a vehicle (e.g., a car, a boat, an airplane)?”
- Various embodiments of the present invention allow for a question to be determined based on a plurality of images. For example, the camera may capture two images in close succession (e.g., 1/10 of a second apart). The camera may compare these images and may determine that a subject (e.g., a person, an animal) is moving. Based on this determination, the camera may ask a user, “Are you taking pictures of a sporting event?” In some embodiments related to multiple images, a camera may determine a question based on at least one difference among a plurality of images. For example, the camera may capture a plurality of images in bright light conditions (e.g., outdoors on a sunny day). Then the camera may capture an image in low light conditions (e.g., so little light that a flash is necessary). Based on this, the camera may ask a user, “Did you just go inside a building?”
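- As a non-limiting sketch of the bright-to-dark example above, the Python below compares the average luminance of consecutive captures; the 100-level drop threshold and the toy 2x2 images are assumptions for illustration only.

def mean_luminance(pixels):
    # Average brightness of an image, from 0 (black) to 255 (white).
    values = [value for row in pixels for value in row]
    return sum(values) / len(values)

def question_from_brightness_change(previous_pixels, current_pixels):
    drop = mean_luminance(previous_pixels) - mean_luminance(current_pixels)
    if drop > 100:  # e.g., moving from a sunny exterior to a dim interior
        return "Did you just go inside a building?"
    return None

outdoor = [[220, 210], [230, 225]]  # toy 2x2 "images" of pixel intensities
indoor = [[40, 35], [50, 45]]
print(question_from_brightness_change(outdoor, indoor))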
- The camera may also be configured so as to determine a question based on at least one similarity among a plurality of images. For example, the camera may determine that two images have the same person wearing a red shirt in them. Based on this, the camera may ask a user, “Who is the person in the red shirt?” In another example, the camera may determine that a first image is of a tiger and a second image is of a polar bear. Since a tiger and a polar bear are both animals, the camera may ask a user, “Are you at the zoo?”
- A question may refer to an image (and thereby be based on the image). Examples include, without limitation:
- (i) “Is this picture bright enough?”
- (ii) “Who is the person wearing the yellow jacket in this image?”
- (iii) “Do you mind if I delete this image? You have two others just like it.”
- (iv) “Where was this picture taken?” <show the picture>
- (v) “Please highlight Alice's face in this image.”
- According to at least one embodiment of the present invention, one way of determining a question based on an image is to determine if an image matches a template. For example, a camera and/or server may store a plurality of templates. Each template may correspond to a different type of image, category of image, or one or more properties of an image. After an image is captured, this image may be compared to one or more of the templates to see if there is a match. If the image matches a template, then an appropriate question may be determined (e.g., to verify that the image does in fact match the template and/or to verify information useful in determining a setting on the camera).
- Some examples of templates include, without limitation:
- (i) An “indoors” template. If an image matches with the indoors template, then the camera may determine that there is a significant probability that the user is capturing images inside a building. The camera may then determine an appropriate question to ask the user (e.g., “Are you indoors?”).
- (ii) A “sandy beach” template. If an image matches with the “sandy beach” template, then the image may have been captured on a sandy beach (e.g., in bright sunlight, where water may be nearby). The camera may then determine an appropriate question to ask the user (e.g., “Are you at the beach?”).
- (iii) A “football player” template. If an image matches the “football player” template, then the image may include a football player. The camera may then determine to ask the user, “Are you taking pictures of a football game?” Note that in a preferred embodiment a football player template may match with any picture of a football player, whether the player is facing the camera, facing away from the camera, or lying on the ground. Alternatively, there may be multiple “football player” templates, each corresponding to a football player in a different position, under different lighting, etc.
- (iv) A “candle” template. If an image matches the “candle” template, then the image may include a candle or other point light source (e.g., light bulb, flashlight). Based on this, the camera may determine a question to ask a user (e.g., “Are you taking pictures at a birthday party?”).
- (v) A “Bob Jones” template. If an image matches the “Bob Jones” template, then the image may include a picture of Bob Jones (e.g., who may be a friend of the user of the camera). Based on this, the camera may determine a question to ask a user (e.g., “What is Bob doing in this picture?”).
- (vi) A “fluorescent light bulb” template. If an image matches with this template, then the scene in the image may have been illuminated with a fluorescent light bulb. The camera may ask a user an appropriate question based on the image and the determined template.
- In accordance with various embodiments of the present invention, a camera and/or a server may perform one or more of the following steps in matching an image to a template:
- (i) determining an image
- (ii) storing one or more templates (e.g., in a template database)
- (iii) determining that the image matches at least one template
- (iv) determining a correspondence between the image and at least one template
- (v) determining a question based on the image and the at least one template
- (vi) determining a correlation between the image and at least one template
- (vii) determining a degree to which the image matches at least one template
- Note that the matching of an image to a template may be a condition. That is, if an image matches a template, then a condition may be true and a question may be determined based on this condition, as described herein.
- It will be understood that an image may match with multiple templates. For example, a picture of Bob Jones on a beach may match with both the “Bob Jones” template and the “sandy beach” template. In this circumstance, a question may be determined based on one or both of the templates.
- Also, it may be possible for an image to partially match a template. For example, matching an image to a template may include determining how much the image matches the template. Some examples of partial matches include, without limitation (a brief sketch follows these examples):
- (i) An image of a light bulb may only be a 65% match for the “candle” template.
- (ii) An image may match a first template to a first degree and a second template to a second degree. For example, an image of Steve Jones (who looks similar to his brother Tom Jones) may be a 95% match with the “Steve Jones” template and an 80% match with the “Tom Jones” template.
- (iii) An image that was taken indoors may be a 5% match for the “sandy beach” template.
- (iv) An image may only be considered a match for a template if the amount that the image matches the template is greater than a threshold value (e.g., 95% certainty).
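- The template-matching and partial-matching behavior described above might be sketched as follows in Python; the template names, canned scores, and the 0.95 threshold are illustrative assumptions, with real image recognition replaced by a stub.

TEMPLATES = {
    "sandy beach": "Are you at the beach?",
    "candle": "Are you taking pictures at a birthday party?",
    "football player": "Are you taking pictures of a football game?",
}

def match_score(image, template_name):
    # Stand-in for image recognition; returns a 0.0..1.0 degree of match.
    canned = {"sandy beach": 0.97, "candle": 0.65, "football player": 0.05}
    return canned[template_name]

def questions_for_image(image, threshold=0.95):
    # Keep only templates whose degree of match meets the threshold.
    questions = []
    for name, question in TEMPLATES.items():
        if match_score(image, name) >= threshold:
            questions.append(question)
    return questions

print(questions_for_image(image=None))  # ['Are you at the beach?']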
- According to some embodiments, the camera may determine and/or create a template based on a user's response to a question. For example, the camera may create an “Alice Jones” template based on an indication by a user that an image is of Alice Jones.
- After a question is determined, a camera may output the question to a user. Various different ways that a question may be output to a user of a camera are discussed herein.
- A question may be output to a user of a camera in a variety of different ways. For example, a question may be output using one or more of a variety of different output devices. Examples of output devices that may output questions to a user include, without limitation:
- (i) An LCD screen. For example, the camera may include a color LCD screen on its back. This LCD screen may display questions to a user (e.g., as text).
- (ii) An audio speaker. For example, the camera may use an audio speaker to output an audio clip of a question to a user.
- (iii) An LED screen. For example, the camera may include an LED screen that displays questions to users as scrolling text.
- (iv) A heads-up viewfinder display. For example, a question may be displayed on the viewfinder of the camera so that a user can view the question while composing a shot (i.e., while the user is preparing to capture an image).
- (v) A printer. For example, the camera may include a thermal printer that may print out a list of questions for a user to answer.
- Other examples of output devices that may be used to output a question are discussed herein.
- A question may be represented in one or more of a variety of different media formats, including text, audio, images, video, and any combination thereof. Examples of questions being output as text include a question displayed as text on an LCD screen on the back of the camera and a question displayed as text overlaid on the camera's viewfinder. In another example, a text question may repeatedly scroll across a header bar on the camera's LCD screen. It will be understood that various visual cues may be used to draw a user's attention to a message that is output in text, including different fonts, font sizes, colors, text boxes, and backgrounds.
- Examples of questions being output as audio include an audio recording of a question. In another example, speech synthesis software may be used to generate an audio representation of a question. In another example, a “BEEP” sound may be output when a question is displayed on a video screen. Examples of outputting questions using images and video include, without limitation, displaying a sequence of images (e.g., a movie) to a camera user using a video screen. In another example, a video of an animated cartoon character may indicate a question to a user.
- Note that a message may be presented in a plurality of ways. For example, a question may include both a text component and an audio component (e.g., the camera may beep and then display a question on an LCD screen). In a second example, the camera may display an image on its color LCD screen and then play an audio recording of a question.
- Note that a question may be phrased in the first person. For example, a question may use the word “I” to refer to the camera or the word “we” to refer to the camera and the user together. Examples include, without limitation:
- (i) “Are we at the beach?”
- (ii) “It seems to me that we're indoors right now. Is this correct?”
- (iii) “Am I underwater?”
- (iv) “How many more pictures should I plan on taking today?”
- (v) “Should I use the red-eye reduction flash?”
- A question may be output in different languages (e.g., depending on who is using the camera). For example, if the current user of the camera speaks English, then a question may be output to the user in English. However, if the current user of the camera speaks Chinese, then the question may be output to the user in Chinese.
- According to some embodiments of the present invention, a question may be output by a presenter (e.g., a character that presents the question to a user). Examples of presenters include, without limitation:
- (i) A speaker. For example, the camera may store two recordings of a question: one with a female speaker and one with a male speaker.
- (ii) An animated character in a video message. For example, an avatar, virtual assistant, or other on-screen character may be displayed to a user in conjunction with a question. For example, an animated rabbit may be displayed on the camera's LCD screen and “talk” to a user, thereby outputting one or more questions to the user. Indications from the rabbit may be provided as text (e.g., displayed using a speech bubble as a partition) and/or audio (e.g., an audio recording may be played, allowing the rabbit to “speak” to the camera user.)
- (iii) An actor. For example, a video of an actor presenting a question may be displayed to a user on the camera's LCD screen. The camera may store a database of video clips representing different questions.
- (iv) A celebrity. For example, an audio or video recording of a celebrity (e.g., William Shatner) reciting a question may be output to a user.
- It is anticipated that some types of camera users may pay more attention to representations of a message that include certain presenters. For example, a user may pay extra attention to a message that is presented by his favorite celebrity.
- In order to accommodate a variety of different formats and languages for the same question, the camera may store one or more representations of a question in memory. For example, for a given question (e.g., “Are you at a ski resort?”), the camera may store the following representations: a text version of the question in English, an audio version of the question in English, a text version of the question in Spanish, and an audio version of the question in Spanish.
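- A minimal sketch of such storage, assuming a lookup keyed by question identifier, language, and media format; the identifiers, file names, and Spanish rendering below are illustrative assumptions.

REPRESENTATIONS = {
    # (question id, language, format) -> stored representation
    ("ski_resort", "en", "text"): "Are you at a ski resort?",
    ("ski_resort", "en", "audio"): "ski_resort_en.wav",
    ("ski_resort", "es", "text"): "¿Estás en una estación de esquí?",
    ("ski_resort", "es", "audio"): "ski_resort_es.wav",
}

def representation(question_id, language, media_format):
    return REPRESENTATIONS[(question_id, language, media_format)]

print(representation("ski_resort", "es", "text"))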
- As discussed variously herein, a question may be output to a user at a variety of different times. For example, the camera may delay outputting a question until an appropriate time. In another example, the camera may output a question based on a condition.
- In addition to outputting a question, the camera may output other information that may be helpful to the user. Examples of additional information may include, without limitation: at least one image that relates to the question, potential answers to the question (e.g., for a multiple choice question), a reason for asking the question, current settings on the camera, a default answer to the question, and a category for the question.
- As discussed herein, the camera may output a question to a user that relates to at least one image. To assist the user in answering the question, the camera may display at least one image to the user. For example, the camera may display a picture of a person on the beach to a user and ask the user, “Is this a picture of Alice or Bob?” In another example, the camera may display a set of four pictures to a user and ask the user, “All of these pictures are of Alice, right?”
- In another example, the camera may display a plurality of pictures to a user and ask the user, “Pick your favorite picture from this group.” A camera may display an image to a user and ask the user, “Is this picture underexposed?” or may highlight a portion of an image and ask a user, “Is this the subject of the image?” In yet another example, the camera may automatically crop an image and ask a user, “I'm going to automatically crop this image. Is this a good way to crop the image?”
- Indicating potential answers to a question may be helpful in describing to a user how he should answer a question, indicating acceptable answers to a question, and/or reducing the time or effort required for a user to answer a question. Examples of potential answers to questions include, without limitation:
- (i) “Yes”/“No”/“I'm not sure”
- (ii) “Alice”/“Bob”/“both Alice and Bob”/“neither Alice nor Bob”
- (iii) “tungsten”/“fluorescent”/“I'm not indoors”/“I don't know”
- (iv) “overexposed”/“underexposed”/“just right”
- Indicating at least one reason for asking a question may be helpful in explaining the purpose of a question to a user, or in explaining to a user why it would be beneficial for him to answer a question. Some examples of the camera indicating one or more reasons for asking a question include:
- (i) “I thought I saw a football player in that last photo you took. Are you taking pictures of a sporting event?”
- (ii) “Based on the last picture you took, it looks to me like you're on a beach. Are you taking pictures on a beach?”
- (iii) “It looks like you're taking a lot of pictures outdoors with white backgrounds. Are you at a ski resort?”
- (iv) “The camera seems to be rocking back and forth. Are you on a boat?”
- (v) “It looks like there was a reflection in the last picture you took. Are you trying to take pictures through a glass window?”
- (vi) “How many more pictures are you planning on taking? If you continue storing your pictures at high resolution, you only have space for 12 more pictures.”
- Indicating at least one setting on the camera may help the user to understand the context of a question, or to make a decision on how to respond to a question. In one example, the camera indicates: “The flash is currently off. Are we indoors?” In another example, the camera displays: “Pictures are currently being captured at 1600×1200 resolution, meaning that I have enough memory left to hold 15 more pictures at this resolution. How many more pictures are you planning on taking?” Providing such additional information may be beneficial to some types of users.
- Some embodiments of the present invention provide for a default or predetermined answer to a question output to a user. In some cases, the default answer may be indicated to the user. Some examples of output including default answers to questions include:
- (i) “We're indoors, right? If you don't answer in 5 seconds, I'll assume that we're indoors . . . ”
- (ii) “Let me know if any of this information is incorrect: You're taking pictures at a birthday party. There are 8 children at the birthday party. So far you've taken pictures of 3 children: Alice, Bob, and Carly.”
- (iii) “The girl wearing the blue sweater is Alice, right? Press the ‘no’ button if this is incorrect.”
- Questions may be categorized, allowing the camera to output an indication of at least one category corresponding to a question. Categorizing a question may be helpful to some users, for example, if there are a plurality of questions that may be output to a user and the user would like to sort these questions. Questions may be categorized based on a variety of different factors, including, without limitation:
- (i) topic (e.g., “Lighting Questions,” “Focus Questions,” “Meta-Tagging Questions,” “Situational Questions,” “Questions about Future Plans,” “Questions About Past Images”)
- (ii) image (e.g., “Questions About Image #1,” “Questions About The Last 8 Images Captured,” “Questions about Images Captured Yesterday”)
- (iii) priority (e.g., “High-Priority Questions,” “Questions to Answer in the Next 5 Minutes,” “Questions to Answer During Your Free Time,” “Questions to Answer Before Capturing More Images”)
- (iv) type of response (e.g., “Yes/No Questions,” “Free Answer Questions,” “Multiple Choice Questions”)
- (v) potential actions based on a user's response (e.g., “Meta-Tagging Questions,” “Settings Questions,” “Image Storage Questions”)
- As discussed herein, the camera may determine a question to output to a user (e.g., based on a determination condition). Alternatively, or in addition, the camera may determine when and/or how to output a question to a user based on one or more conditions. For example, a question may be output as an audio recording while a user is looking through the viewfinder of the camera. In another example, the camera may delay outputting a question to a user until there is a pause in the user's activities.
- The camera thus may output a question based on a condition. This condition may also be referred to as an output condition to differentiate it from other types of conditions (e.g., determination conditions) described elsewhere in this disclosure.
- According to some examples, without limitation, a question may be output: when a condition occurs, when a condition is true, when a condition becomes true, in response to a condition, in response to a condition occurring, in response to a condition being true, because of a condition, because a condition occurred, because a condition is true, according to a condition, at substantially the same time that a condition occurs, and/or at substantially the same time that a condition becomes true.
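- As a non-limiting sketch, an output condition might be evaluated as a Boolean expression over camera-state factors; the factor names and thresholds below are assumptions for illustration.

def ready_to_output(state):
    # True when outputting a question seems unobtrusive.
    user_idle = state["seconds_since_last_activity"] > 60
    quiet = state["background_noise_db"] < 40
    not_shooting = not state["shutter_half_pressed"]
    return (user_idle or quiet) and not_shooting

state = {
    "seconds_since_last_activity": 75,
    "background_noise_db": 55,
    "shutter_half_pressed": False,
}
print(ready_to_output(state))  # True: the user is idle and not composing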
- Note that conditions may be useful in enabling a variety of different functions. For example, as discussed herein, a condition may be used in determining what question(s) to output and/or for determining an order in which to output a plurality of questions.
- In another example, a condition may be useful for determining when to output a question (e.g., determining an appropriate time to output a question). According to some embodiments, output of a question may be delayed until a condition occurs. For example, it may be annoying to output a question to a user when the user is busy taking photographs at a birthday party or busy talking with a friend. Therefore, outputting a question to the user may be delayed until an appropriate time.
- In some embodiments, as described herein, a condition may be used in determining how to output a question. For example, a condition may be used to determine whether a question should be output in text on the camera's LCD display or as audio through the camera's audio speaker. In a second example, the camera may select which personality should be used in outputting a question to a user.
- In light of the present disclosure, one skilled in the art will understand that conditions for outputting a question may be similar to conditions for determining a question. For example, it should be clear to the reader that an output condition may be a Boolean expression and/or may be based on one or more factors.
- Many types of factors are discussed herein as being potentially useful in making one or more of various types of determinations. For example, any of the factors described herein as potentially useful in determining a question to ask a user (e.g., images, indications by a user, movement of the camera, time-related factors, state of the camera, information from sensors, characteristics of a user, information from a database) may also be used for determining when and/or how to output a question to a user. Still other factors will be readily apparent to those skilled in the art in light of the present disclosure.
- Some moments may be particularly appropriate for outputting a question to a user. Some general examples of appropriate times to output a question include, without limitation: when a user is composing a shot (e.g., is about to capture an image), when a user is inactive, when a user indicates that he is interested in receiving at least one question, when a user is viewing an image, when a user is answering one or more questions, when the camera's resources are running low, and when a user starts to capture images of a scene. Some of these general examples are discussed in further detail herein.
- According to some embodiments, a question may be output when it is determined (e.g., by the camera) that a user is composing a shot. Some exemplary scenarios include, without limitation:
- (i) Operation of a control. For example, if the user presses the shutter button halfway down, then this may indicate that he is composing a shot. This may be an appropriate time to ask him a question about the shot. In a second example, the camera may output a question to a user when the user switches the camera from manual exposure mode to auto-exposure mode.
- (ii) Use of the optical viewfinder on the camera. For example, the camera may include a sensor that determines when the user is looking through the optical viewfinder on the camera.
- (iii) Use of the digital viewfinder on the camera. For example, if a user turns on the digital viewfinder on his camera, this may be a sign that he is about to start capturing images.
- (iv) When the camera is being held steadily (e.g., in a horizontal position). If a user is holding the camera steady in a horizontal position, for example, it may be likely that he is about to capture an image.
- (v) Holding of the camera in two hands. For example, the camera may include one or more touch sensors (e.g., heat, continuity, electric field, pressure) that may determine when a user places both of his hands on the camera. This may be considered an indication that the user is composing a shot.
- Some factors may be related to an indication by a user. For example, the camera may include a button or menu option labeled “Ask Me a Question.” Whenever a user has free time, he may press this button to answer any questions that the camera has. An indication that the user has pressed the button may trigger the camera to provide one or more questions. In another example, whenever the camera has a question to ask a user, it may output an indication of this (e.g., by beeping once or illuminating an LED). The user may then respond to this indication at his leisure (e.g., once he finishes capturing a sequence of action photos) by providing an indication (e.g., pressing a button on the camera) when he is ready to answer the question.
- In some embodiments, a user may provide an indication of what question he would like to answer. For example, a user may select a question or question topic from a list of questions or question topics. In a second example, a user may use the camera to view images that he has already captured. Some images may have questions associated with them, and to indicate this, these images may be highlighted by the camera, for example, with green borders. To indicate what question he would like to answer, the user may select (e.g., using a touch screen, using a dial) one or more of the highlighted images.
- A user may provide an indication of how a question should be output. For example, the camera may normally output questions in an audio format. A user who is operating the camera in an opera house, however, may prefer to avoid disturbing other audience members. Therefore, the user may operate a control on the camera to indicate that he would prefer that the camera output the question in text form (e.g., via the camera's LCD display).
- Some factors relating to inactivity by a user may be used in determining how and/or when to output a question; a brief sketch of one such factor follows this list. Such factors may include, without limitation:
- (i) Duration since activity by a user. For example, the camera may include a timer that monitors the period of time that has elapsed since a user performed an activity (e.g., operated a control on the camera). If a threshold amount of time has elapsed (e.g., sixty seconds), then the camera may determine that a question should be output to the user.
- (ii) Measurements relating to images captured by a user. For example, the camera may monitor a duration since an image has been captured (e.g., sixty-five seconds) or an average rate of capturing images (e.g., one picture every thirty-two seconds).
- (iii) Lack of sound. For example, the camera may include a microphone that monitors the level of audible background noise around the camera. If the level of background noise falls below a threshold level, the camera may determine that this is an appropriate time to output a question to a user.
- (iv) Movement of the user and/or the camera. For example, the camera may include one or more motion sensors that may be helpful in determining whether the user is currently composing a shot, moving to a new location, or otherwise engaged in an activity. If the camera determines that the user is not moving the camera, a question may be output to the user.
- (v) Power down. For example, if the camera determines (or a user indicates) that the camera should enter power-save mode, then the camera may determine that this is an appropriate time to output a question to a user. In a second example, the camera may output an indication that one or more questions are ready any time the user turns the camera on or off.
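- A minimal sketch of the inactivity timer of factor (i) above, assuming a monotonic clock and a sixty-second threshold (both illustrative):

import time

class InactivityMonitor:
    def __init__(self, threshold_seconds=60):
        self.threshold = threshold_seconds
        self.last_activity = time.monotonic()

    def note_activity(self):
        # Call whenever the user operates a control or captures an image.
        self.last_activity = time.monotonic()

    def user_seems_idle(self):
        return time.monotonic() - self.last_activity >= self.threshold

monitor = InactivityMonitor(threshold_seconds=60)
monitor.note_activity()
print(monitor.user_seems_idle())  # False immediately after activity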
- Examples of factors relating to the camera outputting an image include, without limitation:
- (i) Viewing an image using the camera. For example, the camera may include a color LCD display that allows a user to view images that are stored in the camera's memory (e.g., images that he has already captured using the camera). If the user uses the LCD display to view one or more images, this may indicate that the user is no longer busy with other activities and it may be an appropriate time to output a question to the user. For example, the camera may output any questions relating to an image when the user views that image on the camera.
- (ii) Printing an image. For example, the camera may include a printer or be connected to a printer. When a user operates the camera to print one or more images, the camera may output one or more questions to the user (e.g., questions relating to the one or more images).
- (iii) Transferring an image to another device. For example, the camera may transfer images to one or more various other devices (e.g., a desktop computer using a USB cable, an iPod™ portable wallet, a television set for viewing, a color inkjet printer for printing, a removable flash memory card for storage). One or more questions may be output to a user before, during, and/or after the transfer of an image to another device.
- In some embodiments, it may be determined that it is appropriate to ask a question when the user is answering one or more other questions. For example, a plurality of questions may be output to a user simultaneously or in close succession. In this way, a user may conveniently answer all or most of the camera's questions at one time, rather than individually as the camera determines questions to ask the user.
- In another example, a second question may be output to a user based on a user's response to a first question. For instance, the camera may ask a user, “Where are you in this picture?” If the user responds, “on a beach,” then the camera may ask the user a second, related question: “Are these two pictures taken on the same beach?”
- A determination that the camera's resources are running low may be used in determining when or how to provide a question. For example, if a camera's batteries are running low, the camera may output the question, “The batteries are running low. Do you have any more batteries?” In another example, if a memory resource (e.g., a flash memory card) approaches or falls below a predetermined threshold of available memory (e.g., ten MB), the camera may output the question, “You have only 10 MB of memory left. How many more images are you planning on capturing?”
- It may be desirable to output one or more questions when a user is starting to capture images of a scene. Some exemplary related factors and scenarios include, without limitation:
- (i) A user turns the camera on. For example, each time a user turns the camera on, this may be a sign that the user is about to capture images of a new scene. Based on this, the camera may ask the user a question (e.g., “Are you indoors or outdoors?”).
- (ii) A user captures an image. For example, the camera may output a question to a user immediately after the user captures an image of the scene (e.g., in the anticipation that the user will capture additional images of the scene).
- (iii) A user presses the shutter button. For example, the camera may output a question to a user when the user presses the shutter button on the camera halfway (e.g., to focus the camera's lens on a subject). This may indicate that the user is about to capture an image of the subject.
- (iv) A user moves to a new location. For example, the camera may include a GPS sensor or other location sensor that allows it to determine when a user moves the camera to a new location. Since this may be a sign that the user is now capturing images of a new scene, a question may be output to the user (e.g., “Are you still taking pictures of Alice?”).
- (v) A user operates a control. For example, the camera may output a question to a user when the user adjusts a setting on the camera, since this may be an indication that some aspect of the camera's settings needs to be adjusted (e.g., because the user is capturing images of a different subject).
- The camera may store an output condition database such as the one shown in FIG. 11. The output condition database shown in FIG. 11 specifies how one or more questions should be output to a user. For example, if the camera is currently in manual mode, the camera will output a question to a user by beeping and displaying the question as text along the bottom of the camera's viewfinder. This exemplary version of the output condition database may be used by the camera to output a question based on a current mode of the camera and possibly one or more other factors. For example, according to the output condition database shown in FIG. 11, the camera is currently in “Manual” mode, so questions may be output to a user when the user looks through the camera's viewfinder. An alternate version of the output condition database might be used to output a question based on other factors.
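- A non-limiting sketch of such a database follows, with rows condensed from the FIG. 11 discussion; the exact when/how pairings here are illustrative assumptions.

OUTPUT_CONDITIONS = {
    # camera mode: (when to output a question, how to output it)
    "Manual": ("the user looks through the viewfinder",
               "a beep, then text along the bottom of the viewfinder"),
    "Sports": ("the camera is held still for a period of time",
               "text on the LCD screen"),
    "Do Not Disturb": ("never", "no output"),
}

def output_rule(mode):
    when, how = OUTPUT_CONDITIONS[mode]
    return "In {} mode, output questions as {} when {}.".format(mode, how, when)

print(output_rule("Manual"))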
- There may be times when a user would prefer not to have a question output to him (e.g., when he is busy with another activity, or when outputting a question would be disturbing to the user or other people). In circumstances like these, output of a question may be suppressed (e.g., by the camera) based on one or more conditions. Suppressing a question may include, without limitation: preventing a question from being output, not outputting a question, canceling output of a question, and delaying output of the question.
- It will be readily understood that a question may be determined, as discussed variously herein, but that its output is suppressed, delayed, or cancelled. Also, suppression of a question does not necessarily mean that no questions are output at all. Suppression may include suppressing one or more questions (or types of questions) while one or more other questions are output. For example, two questions may be identified at about the same time for output to a user, with one being output immediately and output of the second question being delayed until a more appropriate time.
- Conditions for suppressing questions may be similar to those described above for outputting questions. For example, a question may be suppressed based on an indication from a user or because a time limit expires.
- In some embodiments, a question may be suppressed because it comes at an inappropriate time. For example, it may be determined that a user may currently be busy with another activity. In another example, if a user is busy adjusting the camera's settings, then the camera may delay outputting a question until the user finishes adjusting the camera's settings and captures the image that he was busy composing.
- According to some embodiments, one or more questions may be suppressed because of inappropriate content. For example, a determined question may be a duplicate of a recently-asked question. Since the user has already answered the question, it may be bothersome to ask the user the same question again. In another example, the answer to (or purpose of) a question may no longer be relevant. For instance, while a user is indoors, the camera may determine a question to ask the user: “What type of light bulb does this room have, tungsten or fluorescent?” However, the camera may have delayed outputting this question, because the user is in the middle of a conversation with a friend, for example. If the user then moves outside before the question is output, that question is no longer relevant.
- A user may indicate that one or more messages (or types of messages) should be suppressed. For example, a user who is capturing images at a golf tournament may indicate that no audio message should be output (e.g., so that the user does not disturb the golfers).
- Suppressing a question may include removing the question from an output queue or other list of questions to be output to a user.
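- A minimal sketch of suppression as queue removal, assuming a simple first-in-first-out queue of pending questions (the queue itself is an assumption of the sketch):

from collections import deque

question_queue = deque([
    "What type of light bulb does this room have, tungsten or fluorescent?",
    "Are you at the beach?",
])

def suppress(queue, question):
    # Remove a pending question, e.g., because it is no longer relevant.
    try:
        queue.remove(question)
    except ValueError:
        pass  # already output, or never queued

# The user moved outdoors, so the lighting question no longer applies.
suppress(question_queue,
         "What type of light bulb does this room have, tungsten or fluorescent?")
print(list(question_queue))  # ['Are you at the beach?']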
- The output condition database shown in FIG. 11 shows an example of delaying output of a question if the camera is in “Sports” mode. In this exemplary mode, the camera may delay outputting a question to a user until the camera is held still for a period of time. Another example of FIG. 11 relates to canceling output of a question. For example, the camera may refrain from outputting any questions when it is in “Do Not Disturb” mode.
- According to some embodiments of the present invention, a camera may output an indication of a question before outputting the question itself. For instance, rather than outputting a determined question immediately, the camera may output an indication that the question is ready to be output. In some embodiments, the camera will then wait for a user to indicate that he is ready for the question itself to be output. For example, the camera may beep when it determines a question and then wait for the user to respond to this beep before outputting the question. In a second example, an LED on the camera may flash whenever the camera has a question ready to output to a user.
- According to some embodiments, outputting a question to a user may include one or more of the following steps: outputting an indication that a question is ready, receiving a response to the indication from the user, and outputting the question based on the response. Examples of each of these steps are provided below.
- Some examples of outputting an indication that a question is ready include, without limitation: outputting an indication that a question has been determined, outputting an indication that a question has been queued for output, outputting an indication that a question has been retrieved from memory, outputting an indication that a question is ready to be output, outputting a request for the user's attention, and outputting a request for the user's attention regarding a question. Some of these examples are discussed in more detail herein.
- It will be readily understood that there are many ways of indicating to a user that a question is ready, such as by use of the various output devices discussed herein. For example, outputting an indication may comprise illuminating an LED on a camera. A user may understand that whenever this LED is illuminated, the camera has a question queued to ask the user. In another example, an audible “BEEP” or bell sound may be output; a user may understand that whenever this beep sounds, the camera has a question queued to ask the user. A message may also be displayed on an LCD screen. For example, an LCD screen on the back of the camera may display the text, “I've got a question. Press the ‘Ask Me a Question’ button to have this question output to you.”
- In still another example, a portion of an image may be highlighted (e.g., in the camera's viewfinder). For instance, if the camera has a question about a particular subject in an image, the camera may highlight that subject in red when the image is displayed on the camera's LCD viewfinder. Different types of visual indicators may be used to alert a user that at least one question is pending. In one example, when a user views an image using the camera, the camera may place a green border around the image to indicate that there is a question associated with the image. In another example, a red question mark may be overlaid on the corner of a displayed image to indicate that the camera has a question related to the image.
- It will be readily understood that various methods described herein for outputting a question may also be used to output an indication that a question is ready. For example, an indication that a question is ready may include a presenter (e.g., an animated character, a celebrity voice).
- The camera may output an indication that a question is ready based on one or more conditions. Note that the various output conditions described herein for outputting a question may also be used to control the output of an indication that a question is ready.
- According to some embodiments, a user may respond at his leisure to an indication that a question is ready. For example, at 2:12 p.m. the camera may beep to indicate that it has a question to ask a user. The user may be busy with some other activity at this time (e.g., capturing an important sequence of images of a sporting event) and so ignores this beep until 2:17 p.m., when he is free to pay attention to the question that the camera outputs. To indicate that he would like to answer a question, the user may operate a control on the camera (e.g., press a button).
- Receiving a response to an indication that a question is ready may include, without limitation: receiving an indication from a user, receiving an indication that a user would like to view a question, and receiving an indication that a user is inactive. A user may provide a response by operating a control or other input device on the camera. Various types of input devices are discussed herein, and others may be readily apparent to one skilled in the art in light of the present disclosure.
- A user's response may include an indication of which question should be output. For example, a user may indicate that he is ready to answer questions about lighting, scenes, and future plans, but not about meta-tagging. In a second example, a user may select a question to answer from a list of questions that have been determined.
- A user's response may include an indication of how to output a question. For example, a user may prefer to have a question output to him with both audio (e.g., through a speaker on the camera) and text (e.g., through an LCD display on the camera).
- Some types of users may prefer to have the camera output an indication that a question is ready before outputting the question. Other users may prefer to have the camera determine the best time(s) to output a question. For example, allowing the user to control when and how a question is output may be preferred by users because of the control and simplicity such a system offers. For instance, a user may wish to have control over the outputting of questions that he finds annoying.
- The output condition database in FIG. 11 shows a number of examples of how the camera may output an indication that a question is ready before outputting the question itself. According to one example, when the camera is in “Output Upon Request” mode, the camera will beep when a question is ready to be output and then wait until a user presses the camera's “Ask Me a Question” button before outputting a question. In another example, when the camera is in “Silent” mode, the camera will cause an LED to blink to indicate that a question is ready.
- According to various embodiments of the present invention, a user may indicate a response to a question that is output to him. For example, the camera may output a question to a user, “Are we on a beach?” and the user may reply “No.”
- A user may operate one or more controls or other input devices of the camera to indicate a response to a question. Various types of input devices and controls that the camera may include are described herein, and other types may be readily apparent to those skilled in the art in light of the present disclosure. For example, a user may use one or more buttons on the camera to select a response from a plurality of responses (e.g., an answer to a multiple-choice question). For instance, a user may select a response from the list of choices: “sunny,” “partly cloudy,” “light rain,” “heavy rain,” “light snow,” “heavy snow.” In another example, a user may press a button on the camera to indicate “Yes” in response to a question of whether he is capturing images of a sporting event. In another example, the camera may include a microphone that allows a user to respond to a question verbally (e.g., a user may indicate the weather outside is sunny by saying the word “Sunny”). In some embodiments, a user may use a stylus or other device to spell out a response on a touch screen on the camera. For example, a user may use the Graffiti™ alphabet to spell out a textual response to a question.
- Some embodiments of the present invention allow for a user to speak a response to a question. Such a response may be recorded using a microphone on the camera. The camera may then process the response using voice recognition software. For example, a user may indicate that he is at a “birthday party” and the weather is “raining.”
- In another example, a user may use any of a plurality of input devices (e.g., buttons) on the camera to highlight a portion of an image displayed (e.g., on the camera's color LCD display). For example, the camera may ask a user to indicate where a subject's face is in an image. Based on the user's response, the camera and/or a server may determine that this area of the image is properly exposed.
- It will be readily understood that a user's response to a question may take a variety of different forms (e.g., depending on the type of question). Some examples of forms of answers include, without limitation:
- (i) Yes/No (e.g., an answer to the question “Is this a picture of Alice?”).
- (ii) Open-ended. For example, a user may respond to the question by speaking freely (e.g., by speaking “Alice” in response to a question about who is in a displayed image).
- (iii) A selection from a plurality of choices. For example, a user may respond to the question, “Who is in this picture?” by selecting one of the offered choices: “a) Alice b) Bob c) both d) neither.”
- (iv) Graphical response. For example, a user may highlight a portion of an image or point to an object in an image displayed on an LCD display on a camera.
- The camera may also receive or otherwise determine a default response from a user. For example, if a user does not respond to a question in a certain period of time, the camera may assume that the user answered the question in a certain way (e.g., in accordance with a default answer, as discussed herein). For example, the camera may ask a user the question “You're at a ski resort, aren't you?” If the user does not respond to this question within ten seconds, then the camera may assume that the user answered “Yes” to this question.
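- The timeout-plus-default behavior above might be sketched as follows; the polling stub and the shortened timeout are assumptions so the example runs on its own.

import time

def ask_with_default(question, default="Yes", timeout_seconds=10,
                     poll_for_answer=lambda: None):
    # poll_for_answer stands in for reading the camera's controls.
    print(question)
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        answer = poll_for_answer()
        if answer is not None:
            return answer
        time.sleep(0.05)
    return default  # assume the default answer, as described above

answer = ask_with_default("You're at a ski resort, aren't you?",
                          timeout_seconds=0.2)
print(answer)  # 'Yes' after the (shortened) timeout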
- According to some embodiments, a camera and/or server may verify a response from a user. Verifying a response may include outputting an indication of the response that the user provided, or asking the same question or a similar question. The camera may verify a response to ensure that it did not misunderstand the user's response. For example, a user may respond “Yeah” or “Nah” to a Yes/No question, making it difficult for the camera to use voice recognition software to determine the user's response. In order to verify the user's response, the camera may output an indication of the user's response by displaying the camera's best guess as to the response (e.g., “Yes”) on the camera's LCD display. In some embodiments, the camera may verify a response to ensure that a user did not make a mistake in responding to a question. For example, the camera may output a second question to verify that a user did not accidentally press the wrong button on the camera when indicating his response (e.g., “Are you sure that this room has fluorescent light bulbs?”).
- The camera may verify a response to confirm that the user understands the ramifications of his response. For example, a user may indicate that he only plans to capture five more images on the camera's current memory card. Based on this, the camera may output a warning or reminder: “Are you sure that you only want to capture 5 more images? If you capture 5 more images at high resolution, then the camera will be out of memory and will not be able to store any more images.” According to some embodiments, the camera may verify a response by displaying an image to a user. For example, a user may respond to a question by indicating that he is at a beach. Based on this, the camera may display an image to a user along with the message, “This is what the picture would look like in Beach Mode. Is this correct?” The user may then respond to the verification by indicating whether the displayed image is correct.
- In some cases a user may not respond to a question. For example, a user may ignore a question that is output by the camera. For instance, the camera may output a question when a user is busy with another activity (e.g., capturing an important sequence of action shots at a sporting event). Instead of responding to the question, the user may ignore the question. In another example, a user may not know that the camera output a question. For instance, a user may not be looking at the camera's LCD screen when a question is displayed on the LCD screen. Since the user does not know that the camera has output a question, he will not know to respond to the question.
- Determining that a user has not responded to a question may include determining that a period of time has elapsed since the question was output. For example, if a user does not respond within twenty seconds of when a question is output, then the camera may determine that the user has not responded. The camera may then perform an action (e.g., output the question again). In some embodiments, determining that a user has not responded to a question may comprise determining that a condition has occurred (e.g., a user presses a button on the camera, the camera is held motionless for a period of time, a user provides a response to a different question). For example, the camera may stop displaying a question when a user captures an image.
- If a user does not respond to a question output by the camera, then the camera may take one or more of the following actions:
- (i) Continue outputting the question. For example, the camera may display a question on an LCD screen until the user responds to the question.
- (ii) Stop outputting the question. For example, the camera may display a question on an LCD screen until either (a) a user responds to the question, or (b) thirty seconds have elapsed.
- (iii) Output the question again. For example, the camera may output an audio recording of a question. If a user does not respond, then the camera may output the question again (perhaps in a different form).
- (iv) Assume a default response to the question. For example, the camera may output a question to a user, “You're taking a picture of a sunset, aren't you?” If the user does not respond, the camera may assume that the answer to the question is “Yes” (e.g., based on a determined default answer) and perform an appropriate action based on this default response.
- (v) Output a different question. For example, the camera may ask a user a first question, “Are you at the beach?” If the user does not respond to the question, then the camera may ask the user a second question (e.g., based on the same image), “Are you at a ski resort?”
- It will be understood that sometimes a user may provide a response that does not answer a question. For example, a user may indicate that he does not want to answer a question. Accordingly, a user may press a “Cancel Question” button on the camera. In a second example, a user may use a control on the camera to put the camera in “Do Not Disturb” mode, thereby canceling the current question and any future questions. According to some embodiments, a user may indicate that he would prefer to answer a question at a later time. For example, the camera may have a “snooze button” that allows a user to indicate that the camera should stop outputting a question and then output the question again when a condition occurs (e.g., a period of time has elapsed, an output condition occurs).
- In at least one embodiment, a user may indicate whether a question is inappropriate or unhelpful. For example, the camera may have a “thumbs down” button or a “stupid question” button that a user may press when the camera outputs a question that the user determines is not worth answering. This may be particularly useful if the camera tends to make mistakes when determining questions to output to users. For example, a user may capture a plurality of images indoors and then move outdoors to capture more images. Based on the plurality of images captured indoors, the camera may ask the user, “What kind of lighting does this room have?” Since the user is currently outdoors, this question is inappropriate, so the user may press the “thumbs down” button to indicate that the camera should discard the question about lighting as being irrelevant to the current situation. Similarly, the camera may have a “reset questions” button that allows a user to indicate that the camera should restart its line of questioning.
- In accordance with one or more embodiments of the present invention, if a user does not respond to a question or a user provides a response that does not answer the question, then the camera may output the question again. The second output of the question may be similar to or different from the first output of the question. A question may be output a second time based on a different output condition. For example, a question may be output when a user presses the shutter button halfway down (an output condition). If a user does not respond to this output of the question, then the camera may output the question again in fifteen seconds (a second output condition).
- In some embodiments, a question may be output a second time based on the same output condition. For example, a question may be output thirty seconds after a user operates the camera (an output condition). If the user does not respond to this first output of the question, then the camera may output the question a second time in response to a second occurrence of the output condition. That is, the camera may wait until the user operates the camera again, and then output the question thirty seconds after the user stops operating the camera.
- Of course, a question may be output in the same manner as it was previously output. For example, a question may be output a first time as an audio prompt. If a user does not respond to the question, then the camera may repeat the audio prompt. In another embodiment, a question may be output in a manner that is different from the one that was previously used for the question. For example, a question may be output a first time as text displayed on the camera's LCD screen. If a user does not respond, then the camera may output the question as holographic text overlaid on the camera's optical viewfinder.
- Some embodiments allow for a question to be output with or without additional information. For example, a question may be output a first time as “Are you taking a picture of a sunset?” If a user does not respond to this first output of the question, then the camera may output the question again, this time providing additional information: “Are you taking a picture of a sunset? If so, then let me know now—otherwise your picture will be underexposed.”
- The camera may store an indication of a user's response to a question. For example, the camera may store an indication of a response in a response database such as the one shown in FIGS. 12A and 12B.
- Storing an indication of a user's response to a question may be helpful in a variety of different circumstances, including some exemplary scenarios described herein. Other uses of one or more stored indications of a user's response(s) will be readily apparent to those skilled in the art in light of the present disclosure.
- A stored indication of a response may be useful in performing an action based on multiple responses. For example, the camera may ask a user a plurality of questions and receive a plurality of responses from the user. The camera may then perform an action (e.g., adjust a setting, meta-tag an image, guide a user in operating the camera) based on the plurality of responses. For example, a user may indicate in a first response that he is capturing pictures at a ski resort, and this first response may be stored in a database (such as the response database). Later, in a second response, the user may indicate that the weather is cloudy. Based on these two responses, the camera may adjust the settings on the camera to appropriate values for capturing images at a ski resort during cloudy weather.
- Indications of responses may be beneficial in determining future questions to ask a user. For example, the camera may ask a user a first question (e.g., “Are you indoors or outdoors?”). The user may then respond to this question (e.g., “Indoors”) and the camera may store this response. Based on the stored response, the camera may ask the user a second question (e.g., “What kind of lightbulbs does this room have?”).
- Storing an indication of a user's response may assist a computing device (e.g., a camera, a server) in avoiding repeating questions or asking unnecessary questions. For example, the camera may avoid asking a user the same question twice in close succession by checking to see if the user has already answered the question recently. If the user has answered the question recently, then the camera may assume that the user's answer is unchanged. For example, the camera may ask a user if he is on a beach and store the user's response (e.g., “Yes”). Ten minutes later, the camera may refrain from again asking the user if he is on a beach and instead assume that the answer from ten minutes before is still valid and that the user is still on the beach. In at least one embodiment, as discussed further herein, information provided by a user may expire after a certain period of time or based on some other condition. For example, a camera may store an indication of a user's response in an expiring information database, such as the one depicted in FIG. 14.
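- A minimal sketch of such a response store with expiring entries might look as follows; the thirty-minute default lifetime and the method names are illustrative assumptions only:
```python
import time

class ResponseStore:
    """Sketch: remember recent answers so questions are not repeated."""

    def __init__(self, default_ttl_s=30 * 60):
        self.default_ttl_s = default_ttl_s
        self._answers = {}  # question_id -> (answer, expires_at)

    def record(self, question_id, answer, ttl_s=None):
        ttl = ttl_s if ttl_s is not None else self.default_ttl_s
        self._answers[question_id] = (answer, time.monotonic() + ttl)

    def recent_answer(self, question_id):
        # Returns the stored answer if it has not expired, else None.
        entry = self._answers.get(question_id)
        if entry is None:
            return None
        answer, expires_at = entry
        if time.monotonic() > expires_at:
            del self._answers[question_id]  # expired information is discarded
            return None
        return answer

# Usage: ask "Are we on a beach?" only if no unexpired answer exists.
store = ResponseStore()
store.record("QUES-BEACH", "Yes")
if store.recent_answer("QUES-BEACH") is None:
    pass  # output the question; otherwise assume the earlier answer still holds
```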
- A camera and/or a server may perform various actions based on a response from a user, including one or more of: meta-tagging an image, adjusting a setting, guiding a user in operating a camera, outputting a second question, and determining a template.
- A computer server may assist the camera in processing a user's response to a question. Examples include, without limitation:
- (i) The camera may receive a user's response to a question and transmit an indication of this response to the computer server. The computer server may then process this response (e.g., using voice recognition software) to determine an appropriate action based on the response (e.g., adjusting a setting on the camera). An indication of the action may then be transmitted to the camera and performed by the camera.
- (ii) The computer server may transmit instructions to the camera describing how to process a response by a user. For example, in addition to transmitting an indication of a question to the camera (as described above), the computer server may transmit one or more sets of instructions describing how to process a user's potential responses to the question. For example, the computer server may indicate that if the user responds “Yes” to a question, then the camera should put the flash in slow-synchro mode; if the user responds “No” to the question, then the camera should ask the user whether he is outdoors. Note that instructions may be transmitted to a camera in a variety of different forms, including a computer program (e.g., in C or Java), a script (e.g., in JavaScript or VBScript), or machine code (e.g., x86 assembly).
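- Purely by way of illustration, such server-supplied instructions might be modeled as a response-to-action mapping, sketched below in Python. The dictionary encoding and the camera.set/camera.ask interface are assumptions made for the sketch, not a prescribed transmission format:
```python
# Hypothetical instruction set transmitted by the server with a question,
# keyed by the user's possible responses (per the flash example above).
handlers = {
    "Yes": {"action": "adjust_setting",
            "setting": "flash_mode", "value": "slow_synchro"},
    "No":  {"action": "ask_question", "question": "Are you outdoors?"},
}

def process_response(response, handlers, camera):
    """Dispatch on the user's response per the server-supplied instructions."""
    instruction = handlers.get(response)
    if instruction is None:
        return  # unrecognized response; the camera might re-output the question
    if instruction["action"] == "adjust_setting":
        camera.set(instruction["setting"], instruction["value"])
    elif instruction["action"] == "ask_question":
        camera.ask(instruction["question"])
```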
- Meta-tagging may be used herein to refer generally to a process of associating supplementary information with an image (e.g., that is captured by a camera or by some other imaging device). The supplementary information associated with an image may be referred to as meta-data, meta-information, or a meta-tag. Some examples of meta-data include, without limitation:
- (i) A time and date when an image was captured (e.g., Oct. 10, 2002)
- (ii) A location where an image was captured (e.g., latitude and longitude coordinates obtained from a GPS sensor, an indication of a city, state, park, or other region provided by a user, an altitude determined using an altimeter). For example, a user may indicate that an image was captured in the SoHo area of New York City.
- (iii) An orientation of the camera when an image was captured (e.g., determined using a tilt sensor or an electronic compass)
- (iv) One or more subjects of an image (e.g., people, objects, locations, animals, etc.). Note that a subject may be uniquely identified (e.g., “Alice Jones,” “Grand Canyon”) and/or categorized (e.g., “a squirrel,” “national park”).
- (v) A scene in an image (e.g., a rainbow next to a waterfall, a group of friends at a restaurant, a baby and a dog, a family portrait, a reflection in a mirror)
- (vi) Motion relating to an image (e.g., movement of a subject, movement of the camera)
- (vii) The environment in which an image was captured (e.g., weather conditions, sunlight, altitude, temperature)
- (viii) Lighting (e.g., daylight, a tungsten light bulb, a fluorescent light bulb, flash intensity, locations of light sources)
- (ix) One or more settings on the camera (e.g., aperture, shutter speed, flash, mode). For example, meta-data associated with an image may indicate that the image was captured at f/2.8 with a CCD sensitivity of 100 ISO and a shutter speed of 1/250 sec.
- (x) Information about how an image was captured. For example, an image may be meta-tagged as being part of an auto-bracketed group.
- (xi) Information about a user (e.g., the user's name, preferences, priorities)
- (xii) Preferred cropping or scale. For example, meta-data associated with an image may indicate what portion of the image should be printed and/or how large the image should be printed.
- (xiii) A category for an image. For example, an image may be categorized based on its intended usage (e.g., part of a slide show), based on its subject (e.g., images of Alice), and/or based on how or when it was captured (e.g., captured during a ski trip on Dec. 7, 2002).
- (xiv) A user's intentions (or other notes from a user). For example, a user may indicate, “I'm trying to get a picture of the baby with its eyes open” or, “I want to capture the reflection of the mountain in the water of the lake” or, “Getting the exposure right for the subject's face is most important; I don't care whether the background is in focus.”
- (xv) An audio clip. For example, the camera may record a ten-second audio clip when capturing an image and store this audio clip as meta-data associated with the image.
- (xvi) Acceptable to delete. For example, in his response to a question, a user may indicate to the camera that it is acceptable to delete one or more images from the camera's memory in order to make room for images that may be captured in the future. Based on this response, the camera may meta-tag one or more images for “deletion,” meaning that these images may be deleted if the camera begins to run out of memory.
- (xvii) Protection. For example, the camera may meta-tag one or more images as being “protected,” meaning that these images should not be deleted or altered in any way. The “protected” meta-tag may be helpful in ensuring that the user or the camera does not inadvertently delete one of the protected images.
- (xviii) A rating. For example, the camera may determine a rating of an image and store this rating with the image. A rating may be an indication of the quality of the image and may be based on a variety of different factors, including: exposure, sharpness, composition, subject, and indications from a user. Ratings may be helpful in allowing the camera to sort images.
- Note that meta-data that is associated with an image may be determined based on one or more responses indicated by a user. For example, a user may indicate in a first response that he is in Maui. In a second response, the user may indicate that he is at the beach. Based on these two responses, the camera may meta-tag an image as being taken “On the beach in Maui.”
- According to various embodiments of the present invention, a camera may meta-tag an image based on a user's response to a question. For example, a server may transmit to a camera a signal indicating that a recorded image is most likely of Alice (e.g., based on an image recognition program). The camera may then ask a user to verify that the image is of Alice. If the user indicates that the image is of Alice, then the camera may meta-tag the image as “Subject of Image: Alice.” In another example, if a user indicates that an image shows “Alice and Bob in Yosemite,” then the camera may meta-tag the image as “Subjects: Alice, Bob//Location: Yosemite National Park.”
- In another example, an image database may be used by a server 110 in performing an image recognition process on a captured image. According to some embodiments of the present invention, if the recognition process matches a stored image (e.g., “YOSEMITE-08”) to a new image, the server 110 may suggest some or all of the meta-data 830 associated with the stored image to a user (e.g., by transmitting an indication of the meta-data 830 to the camera 130). The user may then conveniently agree (e.g., by pressing an “Ok” button) to have the suggested meta-data associated with the new image. In this manner, a user may avoid some of the tedium of creating meta-tags.
- Further, any new images may be stored in the image database and thus may be made available to an image recognition process. In this way, an image recognition process and/or a process for meta-tagging images may be refined or customized in accordance with the stored meta-information associated with a particular user's images. For example, a first user may have captured an image of a particular scene, associated meta-data including the description “Grand Canyon” with the image, and stored the image on his personal computer. A second user may have captured and stored a very similar (or identical) image, but associated with the image (e.g., as meta-data 830) the description “Arizona, Grand Canyon, March 1999.” If the first user transmits a second image similar to the stored image to his personal computer (e.g., from the camera 130 via communications network 120) for image recognition, the computer may identify the same scene or subject based on the stored image and suggest “Grand Canyon” to the first user. The second user's server 110, however, might suggest, for example, one or more of “Arizona,” “Grand Canyon,” and “March 1999” for the same second image. Thus, an image database may be useful in accordance with some embodiments of the present invention for generating and/or suggesting personalized meta-information for a particular user (or group of users).
- A plurality of images may be meta-tagged based on at least one response from a user. For example, a user may indicate that he is at the beach. Based on this, all images captured by the camera may be meta-tagged as being captured “At the Beach.” In another example, a user may indicate in a response to a question that Alice is the only blonde woman of whom he has captured any images today. Based on this, the camera may automatically meta-tag all images of blonde women taken today as being images of Alice.
- It will be readily understood by those skilled in the art that a variety of different forms of meta-data are possible, including, without limitation: text (e.g., a current date, a GPS location, the name of a subject, a current lighting condition), audio (e.g., an audio recording of a user's response to a question), images (e.g., a user's response may be a highlighted portion of an image), binary or other machine-readable formats (e.g., 100 bytes of information at the start of an image file), and any combination thereof.
- Meta-data may also be stored in a variety of different ways. For example, meta-data may be stored in a file that is separate from an image file to which it pertains. For example, a “BOB23.TXT” file may store meta-data that pertains to a “BOB23.JPG” image that is stored by the camera. In some embodiments, meta-data may be stored in an image file. For example, the start of an image file may include a plurality of meta-tags that provide information based on a user's responses to one or more questions. In one or more embodiments, a single meta-data file may store information for a plurality of images. For example, the camera may store a response database that includes meta-data for a plurality of images (see below for further details). According to some embodiments of the present invention, a camera may store an audio clip of the user's response to a question and associate this audio clip with an image as meta-data. In another exemplary embodiment, a camera may set the file name of an image based on a user's response to a question. For instance, if a user indicates that an image is of Alice, then the camera may store this image with the filename “ALICE-01.JPG.”
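- The following sketch illustrates the separate-file approach mentioned above; the use of JSON as the serialization format is an assumption made for the sketch, as the embodiments above do not prescribe one:
```python
import json

def write_sidecar_metadata(image_filename, metadata):
    """Store meta-data in a file separate from the image it describes,
    following the BOB23.JPG -> BOB23.TXT naming convention above."""
    sidecar = image_filename.rsplit(".", 1)[0] + ".TXT"
    with open(sidecar, "w") as f:
        json.dump(metadata, f, indent=2)

# Usage: meta-tags derived from a user's responses to questions.
write_sidecar_metadata("BOB23.JPG", {
    "subjects": ["Alice", "Bob"],
    "location": "Yosemite National Park",
})
```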
- A wide variety of methods of meta-tagging an image and of different types of meta-data are known to those skilled in the art and need not be described in further detail herein. For additional reference, the reader may refer to U.S. Pat. No. 6,408,301 to Patton, et al., entitled “Interactive image storage, indexing and retrieval system”; U.S. Pat. No. 5,633,678 to Parulski et al., entitled “Electronic still camera for capturing and categorizing images”; and U.S. patent application No. 20010012062 by Anderson et al., entitled “System and method for automatic analysis and categorization of images in an electronic imaging device.”
- The response database shown in FIG. 7 and the image database shown in FIG. 8 depict a few examples of a camera meta-tagging an image based on a user's response to a question. For instance, the image “WEDDING-02” was meta-tagged as including “Alice” and “Bob” (e.g., based on a user's response to question “QUES-123478-03”). The exemplary images “BEACHTRIP-05” and “BEACHTRIP-06” were meta-tagged as images of a beach (e.g., based on a user's response to “QUES-123478-06”).
- Referring to FIG. 17, a flowchart illustrates a process 1700 that is consistent with one or more embodiments of the present invention. The process 1700 is a method for determining a question based on information determined by an image recognition process performed by a server. For illustrative purposes only, the process 1700 is described as being performed by a camera 130 in communication with a server 110. Of course, the process 1700 may be performed by any type of imaging device 210 in communication with a computing device 220.
- In step 1705, the camera 130 captures an image. In step 1710, the camera 130 transmits the image to the server 110 for image recognition processing. For example, as discussed herein, the server 110 may compare the captured image to a database of images stored for the user. In step 1715, the camera receives information determined by the server 110 based on the image recognition process. For example, the server 110 may have matched the captured image to a stored image, retrieved the meta-information associated with the stored image, and forwarded the meta-information to the camera 130. In another example, the server 110 may have been unable to identify a match and may have transmitted a signal directing the camera 130 to ask the user whether he would like to apply the same camera settings in the future to any similar images.
- In step 1720, the camera 130 determines a question based on the information from the server. For example, the camera may generate a question asking if the user would like to associate meta-information received from the server 110 with the newly captured image. The question is output to the user in step 1725. In step 1730, the camera 130 receives a response from the user, and the camera performs an action based on the response (step 1735). For example, the camera may associate meta-data with the captured image based on the response.
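- A compact sketch of the process 1700 might read as follows; all method names are assumed interfaces, not part of the embodiments described above:
```python
def process_1700(camera, server):
    """Sketch of the FIG. 17 flow under assumed camera/server interfaces."""
    image = camera.capture_image()            # step 1705
    server.send_image(image)                  # step 1710
    info = server.recognition_result()        # step 1715
    question = camera.question_from(info)     # step 1720
    camera.output(question)                   # step 1725
    response = camera.await_response()        # step 1730
    camera.perform_action(response, image)    # step 1735, e.g., meta-tag the image
```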
- Referring to FIG. 18, a flowchart illustrates a process 1800 that is consistent with one or more embodiments of the present invention. The process 1800 is a method for determining meta-information. For illustrative purposes only, the process 1800 is described as being performed by a server 110 in communication with a camera 130. Of course, the process 1800 may be performed by any type of computing device 220 in communication with an imaging device 210.
- In step 1805, the server 110 receives an image captured by a user of a camera 130. In step 1810, the server 110 determines at least one of a plurality of images meta-tagged by the user. For example, the server 110 may access the user's personal image database that contains images previously meta-tagged by the user. In step 1815, the server 110 determines meta-information to suggest to the user based on the captured image and the at least one image meta-tagged by the user. For example, using an image recognition program, the server 110 identifies one or more matches for the captured image in the user's database of images and retrieves some or all of the meta-information associated with those matching images.
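- Steps 1805 through 1815 might be sketched as follows, under assumed server interfaces:
```python
def suggest_metadata(server, user_id, captured_image):
    """Sketch of steps 1805-1815; all method names are assumed interfaces."""
    library = server.images_tagged_by(user_id)              # step 1810
    matches = server.find_matches(captured_image, library)  # step 1815 (recognition)
    # Merge meta-information from the matching images into one suggestion,
    # e.g., {"location": "Grand Canyon", "date": "March 1999"}.
    suggestion = {}
    for match in matches:
        suggestion.update(server.metadata_for(match))
    return suggestion  # transmitted to the camera in step 1820
```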
- In step 1820, the server 110 transmits an indication of the meta-information to be suggested to the user. In an optional step 1830, the server 110 receives an indication from the camera 130 of meta-information associated with the captured image by the user. For example, the camera 130 may transmit a signal indicating that the user has accepted the suggested meta-information, or, alternatively, may transmit a signal indicating other meta-information the user has decided to associate with the image (e.g., the user may have rejected all or some of the suggested information and may have provided other supplemental information).
- According to various embodiments of the present invention, a camera may automatically adjust one or more of its settings based on a response from a user and/or based on a signal from a server. For example, if a user indicates in his response to a question that the weather is sunny, then the camera may adjust the aperture on the camera to be f/5.6 and the shutter speed to be 1/250 sec. Various different types of settings on the camera are described in detail herein, and others will be readily understood by those skilled in the art. Some examples of settings on the camera include, without limitation: exposure settings, lens settings, digitization settings, flash settings, multi-frame settings, power settings, output settings, function settings, and modes. Adjusting a setting may include, without limitation, one or more of: turning a setting on or off, increasing the value of a setting, decreasing the value of a setting, modifying a setting, changing a setting, revising a setting, and setting the camera to capture an image in a particular manner.
- The following are only some examples of how one or more settings may be adjusted based on a user's response to a question in accordance with one or more embodiments of the present invention. Each exemplary scenario comprises an exemplary question asked by a camera, a response from a user, and action(s) performed by the camera:
- (i) Question asked by camera: “Is this photo too dark?”
- Response from user: “Yes”
- Action: Increase the aperture on the camera.
- (ii) Question asked by camera: “Are you taking pictures of a sporting event?”
- Response from user: “Yes”
- Action: Put the camera in burst mode to take 3 pictures every time the shutter button is pressed, and increase the shutter speed so that it is faster than 1/250 sec.
- (iii) Question asked by camera: “What are the current weather conditions?”
- Response from user: “Cloudy and Overcast”
- Action: Adjust white balance (color temperature) on camera to 6000 K.
- (iv) Question asked by camera: “How far away is the wall behind your subject?”
- Response from user: “About 10 ft”
- Action: Adjust flash timing and shutter speed for slow synchro.
- (v) Question asked by camera: “Which is more important to you, taking more pictures or getting higher resolution pictures?”
- Response from user: “Taking more pictures”
- Action: Set JPG compression to “medium quality” and resolution to “regular.”
- (vi) Question asked by camera: “Do you want the building behind your subject to be in focus?”
- Response from user: “Yes”
- Action: Set aperture on camera to f/8 (or less). Adjust shutter speed or film speed accordingly.
- (vii) Question asked by camera: “Are we at a ski resort?”
- Response from user: “Yes”
- Action: Adjust white balance and exposure metering for subjects on bright white backgrounds.
- (viii) Question asked by camera: “Are you taking a group photo?”
- Response from user: “Yes”
- Action: Crop the image to include everybody in the group.
- (ix) Question asked by camera: “Are you on a boat?”
- Response from user: “Yes”
- Action: Adjust image stabilization setting to “high.”
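- By way of illustration, several of the scenarios above could be encoded as a lookup from (question, response) pairs to setting adjustments, as in the Python sketch below; the camera method names are assumptions:
```python
# Illustrative lookup built from scenarios (i), (ii), (iii), and (ix) above.
ADJUSTMENTS = {
    ("Is this photo too dark?", "Yes"):
        lambda cam: cam.widen_aperture(),
    ("Are you taking pictures of a sporting event?", "Yes"):
        lambda cam: (cam.set_burst(3), cam.set_min_shutter_speed(1 / 250)),
    ("What are the current weather conditions?", "Cloudy and Overcast"):
        lambda cam: cam.set_white_balance_kelvin(6000),
    ("Are you on a boat?", "Yes"):
        lambda cam: cam.set_image_stabilization("high"),
}

def apply_adjustment(camera, question, response):
    action = ADJUSTMENTS.get((question, response))
    if action is not None:
        action(camera)
```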
- According to some embodiments, a camera may adjust a setting for one or more images. For example, the camera may output a question to a user, “Is this a group photo?” If the user responds “Yes” to the question, then the camera may adjust one or more settings and enable the user to capture one image based on these settings. After capturing the image of the group of people, the camera may revert to its original settings, for example, or determine one or more new settings for capturing images in the future. In some embodiments, settings may be adjusted for a plurality of images. For example, the camera may output a question to a user, “Are we at the beach?” If the user responds “Yes” to the question, then the camera may put the camera in “Beach” mode for the remainder of the user's image-capturing session. The camera may remain in “Beach” mode until the user turns the camera off or until the user begins capturing images of a different scene.
- An adjustment to a setting may persist until a condition occurs. Various examples of conditions are described herein, and others may be readily apparent to one skilled in the art in light of the present disclosure.
- A camera may not immediately adjust a setting based on a user's response to a question. For example, a camera may ask the user a plurality of questions and then adjust at least one setting based on the user's responses to the plurality of questions. In a second example, the camera may not adjust a setting based on a first question until after a user has answered a second, related question.
- According to some embodiments, the camera may indicate to a user an adjustment made to a setting. For example, a user may respond to a question by indicating that he is at the beach. Based on this, the camera may increase the color saturation setting on the camera by 5%. In addition, the camera may output a message to the user, “Increasing color saturation 5%.”
- Indicating an adjustment to a setting may be helpful for a variety of reasons, such as by assuring the user that the camera is in fact making use of his responses to questions. For example, even if a user does not understand what adjustment the camera is making, he may find it comforting to be informed that the camera is making use of his responses to questions. If a user were to feel that his responses to questions were being ignored by the camera, then he might ignore future questions that are output by the camera.
- Indication of an adjustment may be helpful to the user in verifying that the camera has not misunderstood or misinterpreted a user's response to a question. For example, a user may respond to a question by indicating that he is at a football game. Based on this indication and the current time of day (e.g., 2 p.m.) the camera may assume that the game is being illuminated by sunlight. However, in fact, this may not be the case (e.g., the football game may be played in a domed stadium). When the camera indicates that it is “Adjusting the camera for sports during daylight conditions,” the user may notice this mistake and correct the camera by indicating that the football game is in fact illuminated by halogen light bulbs. Informing the user of any adjustments that are made to the camera may also help the user in composing a shot or in making further adjustments to the camera's settings. For example, informing the user that the “flash brightness has been set for subjects 10-12 feet away” may be helpful to a user if the user decides to move or recompose an image.
- Instead of automatically making an adjustment, a camera may ask for a user's permission before making an adjustment to a setting. If the user indicates that it is acceptable to adjust the setting on the camera, then the camera may adjust the setting. If the user indicates that he would rather not adjust the setting on the camera, then the camera may not adjust the setting. For example, based on a user's response, the camera may determine that the camera's flash should be turned on. Before turning on the flash, the camera may output a message to the user, “I'm about to turn on the flash. Is this okay?” If the user responds “Yes,” then the camera may turn the flash on. In another example, the camera may determine that a user is capturing an image of a sunset based on the user's responses to one or more questions. Based on this, the camera may output a question to the user, “Would you like to put the camera into Sunset Mode? Sunset Mode is specially designed to make sure that pictures of sunsets are exposed correctly.” The user may then press a “Yes” button on the camera to indicate that he would like to put the camera into “Sunset” mode.
- Asking for a user's permission to adjust a setting on the camera may be similar to providing advice to a user about adjusting a setting. Various ways of providing advice to a user based on the user's response to a question are discussed herein.
- The camera may implement one or more rules based on a user's response to a question. A rule may be a guideline or other indication that may be used to determine a setting on the camera. Implementing a rule may include one or more of: storing an indication of a rule in memory, automatically adjusting a setting of the camera based on a rule, and restricting operation of the camera based on a rule.
- In one example of implementing a rule, a user may respond to a question by indicating that he is capturing images of a child's birthday party. Based on this, the camera may store a rule that requires the camera to maintain a shutter speed of at least 1/125 sec (because children at a birthday party tend to move quickly), except when the camera determines that an image includes a birthday cake with candles, in which case the camera should set the aperture to be as large as possible and not use a flash. An indication of a rule may be stored, for example, in a rules database (not shown) or a settings database such as the one depicted in FIG. 7.
- In another example of use of a rule, a rule may be a required relationship between one or more settings. For example, based on a user's indication that he is taking pictures at the beach, the camera may ensure that the subject of an image is always correctly exposed, even if the background of the image is overexposed. In a related example, the camera may use an automatic neutral density feature to automatically vary the exposure of the subject relative to the exposure of the background. In another example, a user may respond to a question by indicating that he is capturing images at a zoo. Based on this, the camera may implement a rule that, if the user is outdoors, the camera's aperture should be smaller than f/8 (to ensure good depth of field). If the user is indoors, a rule may establish that the camera should increase the CCD sensitivity as much as possible and never use a flash (to avoid frightening the animals).
- According to some embodiments, a rule may indicate how a setting on the camera should be adjusted. For example, based on an indication from a user that Alice is standing in front of a tree, the camera may implement a rule to shift the hue of an image by +5% anytime the camera is used to capture an image of Alice wearing her green jacket (e.g., to avoid having Alice's green jacket blend into the background). In another example, a user may respond to a question by indicating that he is at a ski resort. Based on this, the camera may implement a rule that until 5 p.m. that day, all images captured by the camera should be meta-tagged as being “skiing/snowboarding” images. In yet another example, the camera may automatically adjust the white balance setting to 7000 K based on an indication by a user.
- According to one or more embodiments, a rule may indicate how one or more images of a subject should be captured. For example, the camera may store a rule that all images of Alice should be taken from the left side, since Alice has a birthmark on her right arm that she prefers to have hidden in images of her. Based on the rule, the camera may prevent the capturing of an image of Alice's right side and/or may prompt the user to verify that he wishes to take a picture of Alice's right side.
- According to some embodiments, a rule may prevent the camera from performing one or more operations, such as using a flash while the user is capturing images of a sporting event.
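- One illustrative way to represent such rules is as a predicate over the camera's state paired with an effect, as in the following sketch; the camera interface and the scene_contains check are assumptions made for the sketch:
```python
class Rule:
    """Sketch: a stored rule pairs a predicate over camera state with an effect."""

    def __init__(self, name, applies, enforce):
        self.name = name
        self.applies = applies  # callable(camera) -> bool
        self.enforce = enforce  # callable(camera) -> None

# The birthday-party example above, expressed as two rules.
rules = [
    Rule("fast shutter for children",
         applies=lambda cam: not cam.scene_contains("birthday cake with candles"),
         enforce=lambda cam: cam.set_min_shutter_speed(1 / 125)),
    Rule("cake with candles",
         applies=lambda cam: cam.scene_contains("birthday cake with candles"),
         enforce=lambda cam: (cam.maximize_aperture(), cam.disable_flash())),
]

def enforce_rules(camera, rules):
    for rule in rules:
        if rule.applies(camera):
            rule.enforce(camera)
```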
- The exemplary response database shown in FIGS. 12A and 12B shows a few examples of how a camera may adjust a setting based on a user's response to a question. For example, the camera adjusted its settings to “Fluorescent Light” mode based on a user responding “Fluorescent” to question “QUES-123478-02.” In another exemplary adjustment, the camera adjusted the white balance setting to “5200 K” based on a user responding “Sunny” to question “QUES-123478-05.” In another exemplary adjustment based on a response, the camera adjusted the image size setting to “1600×1200” and the image compression setting to “Fine” based on a user responding “15” to question “QUES-123478-07.”
- In accordance with various embodiments of the present invention, a camera may guide a user in operating the camera based on one or more responses from the user. Guiding a user may include, without limitation, one or more of: recommending that a user adjust a setting, prompting a user to adjust a setting, guiding a user in composing a shot, and outputting a message that guides a user in operating the camera.
- Recommending an adjustment to a setting may include, without limitation, one or more of: outputting an indication that an adjustment to a setting is recommended, outputting a message describing an adjustment to a setting, outputting an indication of a setting to be adjusted, and outputting an indication of a value of a setting (e.g., a current value, a recommended value). Some different types of settings on the camera are described in detail herein, as are some exemplary types of adjustments that may be made to settings.
- According to some embodiments, the camera may guide a user by recommending an adjustment to at least one setting based on at least one response from the user. For example, the camera may recommend that a user increase his shutter speed to at least 1/250 of a second when taking sports pictures. The response database in FIG. 7 also shows an example of outputting a recommendation of a setting to a user based on the user's response to a question. Based on the user's response “Yes” to “QUES-123478-09” at 4:11 p.m. on Aug. 17, 2002, the camera advised the user to adjust the camera's shutter speed to be faster than 1/250 of a second.
- As discussed herein, in accordance with some embodiments, the camera may actually change a setting on the camera based on a user's response to a question. In contrast, recommending that a user adjust a setting may include simply outputting a message describing a potential adjustment to a setting, leaving it up to the user to actually adjust the setting. For example, a camera may output a message to a user, “I suggest that you use a smaller aperture to ensure that both the foreground and the background of your photo are in focus. An aperture of f/8 or smaller would be good for this photo.” The user may or may not make the suggested adjustment. In another example, the camera may output a message to a user, “If you're taking pictures of animals in the wild, then you should probably put the camera in ‘Wildlife’ mode.”
- According to some embodiments, an output device may be used to output an indication of a suggestion of a setting adjustment. For example, a warning LED in the camera's viewfinder may blink to indicate to a user that an image the user is about to capture may be underexposed (e.g., suggesting an adjustment should be made). Note that this recommendation (i.e., the blinking LED) may simply suggest that a user make an adjustment to a setting without indicating any specific adjustment to make. In at least one embodiment, a camera may output a message describing a setting that should not be used. For example, if the camera's flash is currently enabled and the user indicates that he is capturing an image of a mirror, then the camera may output a message to the user, “It is not advisable to use a flash when capturing an image of a mirror. The flash could reflect off the mirror back at the camera, causing the image to be overexposed.”
- According to some embodiments, a camera may prompt a user to adjust a setting based on at least one response from the user. Prompting a user to adjust a setting may include outputting a question to the user asking him if he would like to adjust the setting. For example, a camera may output a message to a user, “Since you're taking pictures at a ski slope, you should probably turn on the camera's Auto-Neutral Density feature.” Note that in this example, it is left to the user to determine whether he would like to turn on the Auto-Neutral Density feature and operate the controls of the camera to enable the feature. In another example, a user may respond to a question by indicating that he is in a room with fluorescent lights. Based on this response, the camera may output a message to the user, “Would you like to put the camera in ‘Fluorescent Light’ mode? This mode is specifically designed for rooms with fluorescent lights and will help to ensure that your images are exposed correctly.” If the user responds “Yes” to this question, then the camera may be adjusted to “Fluorescent Light” mode.
- A camera may assist a user in adjusting a setting (e.g., without the camera actually performing the adjustment of the setting). For example, a camera may output a message to a user, “Since you're on a sunny beach, you should probably put me in ‘Sunny Beach’ mode. Press ‘Ok’ to put the camera in ‘Sunny Beach’ mode.” Note that in this example, the camera has adjusted its controls to simplify for the user the process of putting the camera in “Sunny Beach” mode. For instance, instead of selecting “Sunny Beach” mode (e.g., from a menu of modes on the camera), all the user has to do is press the “OK” button on the camera's touch screen.
- In another example, a camera may output a message: “You may want to adjust the white balance setting on the camera based on the color of light emitted by the light bulb in this room. Press the ‘up’ and ‘down’ buttons to adjust the white balance.” In this example, the camera has simplified the process of adjusting the white balance on the camera by automatically enabling the “up” and “down” buttons on the camera to control the white balance.
- According to some embodiments of the present invention, a camera and/or a server may guide a user in composing a shot based on at least one response from the user. Various types of software and/or hardware useful in assisting a user in composing a shot are known to those skilled in the art, including systems described in U.S. Pat. No. 5,831,670 to Suzuki, entitled “Camera capable of issuing composition information”; U.S. Pat. No. 5,266,985 to Takagi, entitled “Camera with optimum composition determinator”; U.S. Pat. No. 6,094,215 to Sundahl et al., entitled “Method of determining relative camera orientation position to create 3-D visual images”; and U.S. Pat. No. 5,687,408 to Park, entitled “Camera and method for displaying picture composition point guides.”
- According to some embodiments of the present invention, a camera may determine an optimum framing for a scene (e.g., with a subject slightly off center and an interesting tree in the background). Based on this determined framing, the camera may provide instructions to a user on how to aim the camera to obtain this framing. For example, the camera may output audio instructions to the user, such as, “Pan the camera a little more to the left . . . Okay, that's good. Now zoom in a little bit . . . Whoops, that's too much . . . Okay, that's good. Now you're set to take the picture.”
- In some embodiments of the present invention, a camera may include a mechanism that allows the camera to aim itself. For example, a user of the camera may be asked to hold the camera steady, and then the camera may adjust a mirror, lens, light sensor, and/or other optical device(s) so as to capture an image at a certain angle from the camera. For example, the camera may rotate a mirror five degrees to the left to capture an image that is to the left of where a user is aiming the camera.
- According to one or more embodiments, the camera may be configured so as to be manipulated remotely (e.g., by a server). For instance, a server may be able to view a representation of the camera's viewpoint over a network connection. The server may instruct a user to hold the camera steady (e.g., via the camera's LCD display) (or direct the camera to provide such an instruction), and then the server may adjust remotely the camera's mirror to obtain an optimal framing of a picture.
- The camera may provide directions to one or more subjects of an image. For example, a user may be capturing an image of a group of friends at a restaurant. Based on the user's response to a question and/or based on image recognition (e.g., performed by the camera and/or a server), the camera may provide directions relating to the group, such as, without limitation: “Everybody needs to get closer together”; “Tell Alice to take a step back”; and “Bobby is giving rabbit ears to Alice.” Similarly, a camera may output directions addressed to the group rather than the user (e.g., using an audio speaker).
- In another example of providing composition assistance to a user, a camera's viewfinder may display a blinking arrow pointing to the left to indicate to a user that he should pan left to capture the best possible image of a particular scene. In still another example, a user may indicate that he is capturing an image through a glass window and would like to use a flash. Based on this, the camera may provide instructions to the user on how to compose the shot so as to avoid having the flash reflect off of the glass window.
- As discussed variously herein, one or more questions may be determined based on a user's response to a previous question. For example, the camera may ask a user a first question: “Are you indoors or outdoors?” If the user responds “Indoors” to this question, then the camera may store this response and ask the user a second question based on the response: “What kind of lightbulbs does this room have?”
- Some additional examples of determining a second question based on a first question are provided below. Each exemplary scenario describes at least one question output, a response by a user, and a subsequent question determined (e.g., by a camera, by a server) based on the response to at least one previous question:
- (i) First question output by camera: “Are we at the beach?”
- User's response: “Yes”
- Second question determined: “Are you taking pictures of the water?”
- (ii) First question output by camera: “Are you taking a picture of a birthday cake or candles?”
- User's response: “Yes”
- Second question determined: “How many candles are there on the cake?”
- (iii) First question output by camera: “Who are you taking a picture of?”
- User's response: “Alice, Bob, and Carl”
- Second question determined: “Is Bob currently wearing the same shirt as he was in this picture?<display picture of Bob>”
- (iv) First question output by camera: “Are we at the beach?”
- User's response: “No.”
- Second question output by camera: “Are we at a ski resort?”
- User's response: “No”
- Third question determined: “Are you trying to capture the silhouette of an object?”
- According to some embodiments, an entire series of questions may be output based on a user's response to a question. For example, in response to a user indicating that he is indoors, the camera may ask the user a number of questions about the lights of the room in order to determine what kind of lights there are, where the lights are located, and what sort of lighting effect the user is hoping to achieve. The user's responses to these questions may then be used to determine one or more settings for the camera, as discussed herein.
- In one example, a computing device may use a decision tree to determine one or more questions to ask a user. For example, a camera may ask a user a first question. If the user gives a first response to the first question (e.g., “Yes”), then the camera may ask the user a second question (e.g., the question from the “Yes” branch of the decision tree). If the user gives a second response to the first question (e.g., “No”), then the camera may ask the user a third question (e.g., the question from the “No” branch of the decision tree). This process may repeat until the camera determines enough information to perform one or more actions (e.g., adjust a setting, guide a user in operating the camera).
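- A minimal decision-tree sketch consistent with the example above follows; the node encoding and the action names are illustrative assumptions:
```python
# Each node is a question with branches keyed by response; leaves are actions.
TREE = {
    "question": "Are you indoors or outdoors?",
    "branches": {
        "Indoors": {
            "question": "What kind of lightbulbs does this room have?",
            "branches": {
                "Fluorescent": {"action": "set_fluorescent_light_mode"},
                "Tungsten": {"action": "set_tungsten_white_balance"},
            },
        },
        "Outdoors": {"action": "meter_for_daylight"},
    },
}

def walk_tree(node, ask):
    """Ask questions until a leaf is reached; `ask` returns the user's response."""
    while "action" not in node:
        node = node["branches"].get(ask(node["question"]))
        if node is None:
            return None  # unrecognized response; a real camera might re-ask
    return node["action"]
```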
- As discussed variously herein, a user's response to a question may be a factor in determining a question to ask the user. For instance, a determination condition may be satisfied based on a user's response to one or more questions. For example, one determination condition may be: (location=“outdoors”) AND (image_recognition(beach_template)). The information that a user is outdoors may have been received, for example, from a user as a response to a question. In another example, a determination condition is: (location=“zoo”) AND (subject_of_image=“animal”). The information that a user is at the zoo may have been determined based on a response to a first question, and the information that the user is capturing an image of an animal may be determined based on a response to a second question.
- Note that the determination condition database shown in FIG. 10 includes a number of examples of the camera that describe determining a question based on a user's response to a previous question. In one example, if the camera is indoors (e.g., determined by asking “QUES-123478-01”) and the flash is turned off, then the camera may determine to output question “QUES-123478-02” (i.e., “What kind of lightbulbs does this room have?”) to a user. In another example, if a user answered “Alice” to “QUES-123478-03,” then the camera may output “QUES-123478-04” (i.e., “Please use the cursor to point to Alice in this picture.”).
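- The determination conditions above might be expressed as simple predicate functions, as in this sketch; image_recognition is a placeholder standing in for the camera's actual recognition routine:
```python
def image_recognition(image, template_name):
    # Placeholder standing in for the camera's actual recognition routine.
    return False

def beach_condition(state):
    """(location = "outdoors") AND (image_recognition(beach_template))"""
    return (state.get("location") == "outdoors"
            and image_recognition(state.get("image"), "beach_template"))

def zoo_condition(state):
    """(location = "zoo") AND (subject_of_image = "animal"); both facts may
    come from the user's responses to earlier questions."""
    return (state.get("location") == "zoo"
            and state.get("subject_of_image") == "animal")
```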
- Some embodiments of the present invention may be advantageous in that by asking a user a plurality of questions (e.g., a series of related questions), a computing device (e.g., of a camera, a server) may determine enough information about a scene to perform one or more other actions (e.g., adjusting a setting, meta-tagging an image, guiding a user in operating the camera).
- As described herein, a camera, server or other computing device may use one or more templates to perform image recognition on captured images. For example, the camera may store a “beach template” that may be used to determine whether an image includes (and thus may have been captured on) a beach. As discussed herein, a wide variety of other templates are possible.
- The camera may use information provided by a user (e.g., a user's response to a question) to determine a template. The template may then be stored and used for processing images and/or for asking questions in the future.
- For example, a camera may output a question, “Who is the subject of this image?” and a user may respond: “Alice.” Based on the user's response and the image, the camera may create a template suitable for recognizing images of Alice (e.g., an “Alice template”). At a later time, the camera may use the “Alice template” to determine that Alice is in an image. A question may then be asked based on this determination (e.g., “Who is standing next to Alice in this picture?”).
- In another example, a camera may display a plurality of images to a user and ask the user, “Were all of these images captured in a gymnasium?” If the user responds “Yes” to the question, then the camera may create a “gymnasium template” based on similarities among the plurality of images (e.g., the color of the fluorescent lighting, the color of the wood floor, etc.). If the user later returns to the gymnasium to capture more images, the camera may recognize that it is in the gymnasium and ask the user a question based on this (e.g., “You're in a gymnasium, aren't you?”).
- In another example related to a template, a camera or server may store a “group photo template” that may be used for recognizing images of groups of people and for adjusting the settings of the camera so as to best capture images of groups. However, some group photos may not match the group photo template. For example, an image of a group of people in which people are lying down may not match the group photo template. Based on this, the camera may output a question to a user “Is this a group photo?” If the user responds “Yes,” then the camera may determine a new group photo template and use this new group photo template to replace the old group photo template. Optionally, two templates may be retained (e.g., one being for group photos where the subjects are lying down). In the future, the camera may recognize images of people lying down as group photos as well and ask appropriate questions based on these photos.
- It will be readily understood that a template may also be determined based on a variety of other factors (i.e., factors other than a user's response to a question). According to some embodiments, a template may be determined based on at least one image. For example, the camera may capture a plurality of images at a ski resort and determine a “ski resort template” based on these images (e.g., based on similarities between the images). This “ski resort template” may be used to recognize images in which people are shown skiing or snowboarding on snow. Note that snow provides a bright white background for such images, which may be helpful in distinguishing images of people at a ski resort, for example, from images of people engaged in other activities.
- Some embodiments provide for determining a template based on other indications by a user. For example, a user may use buttons on the back of the camera to select a plurality of images that are stored in the camera and may indicate that the camera should determine a template based on these images. For instance, the user may select a plurality of images captured at a dance party and ask the camera to create a “dance party template” based on the selected images.
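- As an illustrative sketch only, a template might be approximated by a coarse color-histogram signature averaged over example images; this particular technique is an assumption chosen for brevity and is not the recognition method of any embodiment described above:
```python
import numpy as np

def color_histogram(image, bins=8):
    """Coarse RGB histogram of an H x W x 3 image array, used as a signature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    flat = hist.ravel()
    return flat / flat.sum()

def build_template(images, label):
    # e.g., build_template(selected_images, "dance party")
    signatures = [color_histogram(img) for img in images]
    return {"label": label, "signature": np.mean(signatures, axis=0)}

def matches(template, image, threshold=0.9):
    a, b = template["signature"], color_histogram(image)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```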
- As discussed herein, in accordance with various embodiments of the present invention a camera or other imaging device may transfer one or more images to a second device (e.g., a server). According to some embodiments, a camera may determine whether to transmit one or more images to a second electronic device. For example, the camera may determine whether it is running low on memory and therefore should free up some memory by transmitting one or more images to a second electronic device and then deleting them. Such a determination may be based on a variety of factors, including, without limitation:
- (i) an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second device that is free)
- (ii) an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second device)
- (iii) factors relating to capturing images
- (iv) a user's preferences
- (v) factors relating to images stored in memory
- Similarly, according to some embodiments a camera may determine which images to transmit to a second device. For example, the camera may free up some memory by transmitting images of Alice to a second electronic device, but keep all images of Bob stored in the camera's secondary memory for viewing using the camera. Such a determination may be based on a variety of factors, including, without limitation:
- (i) the quality of at least one image (e.g., as measured by a rating)
- (ii) the compressibility of at least one image
- (iii) image content (e.g., the subject of an image)
- (iv) a user's preferences
- (v) meta-data associated with at least one image (e.g., time, location, subject, associated images)
- (vi) an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second electronic device that is free)
- (vii) an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second electronic device)
- Because the bandwidth of a connection between the camera and a second device may be limited, the camera may compress one or more images when transmitting them to a second device. In addition, the camera may determine whether to compress an image when transmitting it to a second device. For example, low quality images may be compressed before being transmitted to a second device, whereas high quality images may be transmitted at full resolution to the second device. Similarly, the camera may determine how much to compress one or more images when transmitting the one or more images to a second device. Determining whether to compress an image (and/or how much to compress the image) may be based on a variety of factors, including, without limitation:
- (i) an amount of available memory (e.g., an amount of memory on the camera that is free, an amount of memory on the second electronic device that is free)
- (ii) an amount of bandwidth (e.g., an amount of bandwidth available for transmitting images to the second electronic device)
- (iii) the quality of at least one image (e.g., as measured by a rating)
- (iv) the compressibility of at least one image
- (v) image content (e.g., the subject of an image)
- (vi) a user's preferences (e.g., indications by a user)
- (vii) meta-data associated with at least one image (e.g., time, location, subject, associated images)
- (viii) other images
- Note that in some embodiments a camera may delete or compress an image after transmitting it to a second electronic device, thereby saving memory. Since a copy of the image may be stored on the second device (e.g., in a server database), there may be no danger of losing or degrading the image by deleting or compressing it on the camera. Of course, in some circumstances it may not be desirable to delete or compress an image after transmitting the image to a second device. For example, a camera may transmit an image to a second electronic device in order to create a backup copy of the image.
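- A sketch of such a transmission decision follows; the rating scale, the 32 MB low-memory threshold, and the image record fields are illustrative assumptions:
```python
def plan_upload(images, free_bytes, min_free_bytes=32 << 20):
    """Sketch: when memory runs low, offload the lowest-rated images first,
    compressing low-quality images and sending high-quality ones untouched."""
    if free_bytes >= min_free_bytes:
        return []  # enough memory is free; nothing needs to be transmitted
    plan = []
    for img in sorted(images, key=lambda i: i["rating"]):
        plan.append({"name": img["name"],
                     "compress": img["rating"] < 3})  # assumed 1-5 rating scale
        free_bytes += img["size"]
        if free_bytes >= min_free_bytes:
            break
    return plan
```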
- Capturing an image manually may include receiving an indication from a user that an image should be captured. Some examples of receiving an indication from a user include, without limitation: a user pressing a shutter button on a camera, thereby manually capturing an image; a user setting a self-timer on a camera, thereby indicating that the camera should capture an image in fifteen seconds; a user holding down the shutter button on a camera, indicating that the camera should capture a series of images (e.g., when taking pictures of a sporting event); a user putting a camera in a burst mode, in which the camera captures a sequence of three images each time the user presses the shutter button; and a user putting a camera into an auto-bracketing mode, in which the camera captures a series of images using different exposure settings each time the user presses the shutter button on the camera.
- As discussed herein, a camera or other imaging device may capture an image automatically and then determine a question to ask a user based on that image. For example, the image database in FIG. 8 depicts an example of an image “BEACHTRIP-04” that was captured automatically by a camera. It will be readily understood that capturing an image manually may involve receiving an indication from a user that an image should be captured (e.g., by the user pressing a shutter button). In contrast, automatically capturing an image may not involve receiving any such indication. For example, the camera may capture an image automatically without the user ever pressing the shutter button on the camera. In contrast to capturing an image manually, therefore, capturing an image automatically may include, without limitation, one or more of the following:
- (i) capturing an image without a user pressing the shutter button on the camera
- (ii) capturing an image without an indication from a user
- (iii) capturing an image without a direct indication from the user
- (iv) capturing an image without receiving an input from the user
- (v) capturing an image without receiving an indication that the user would like to capture the image
- (vi) capturing an image without the user's knowledge
- (vii) not providing an indication to a user that an image has been captured
- (viii) capturing an image without accessing information provided by the user
- (ix) capturing an image based on a condition
- (x) capturing an image based on a condition that was not set by a user
- (xi) capturing an image while the camera is being held by a user
- (xii) capturing an image independently of the user pressing the shutter button on the camera
- (xiii) determining whether to capture an image
- (xiv) determining whether to capture an image automatically
- According to some embodiments of the present invention, a user may or may not be aware that an image has been captured automatically. For example, the user's camera may not beep, display an image that has been captured, or provide any other indication that it has captured an image, as is typically done by digital cameras that capture images manually. Automatically capturing an image quietly and inconspicuously may help to prevent the camera from distracting a user who is in the midst of composing a shot. For example, a user may find it annoying or distracting to have the camera automatically flash or beep when he is about to capture an important image. In a second example, capturing images without a user's knowledge may allow the camera to give the user a pleasant surprise at the end of the day when the user reviews his captured images and finds that the camera captured sixty-eight images automatically in addition to the nineteen images that he captured manually. In another example, a user may manually capture a plurality of images at a birthday party, but miss capturing an image of the birthday boy opening one of the gifts. Fortunately, the camera may have automatically captured one or more images of this special event without the user's knowledge.
- In accordance with at least one embodiment of the present invention, a camera may capture an image automatically while a user is composing a shot. For example, a user may aim the camera at a subject and begin to compose a shot (e.g., adjusting the zoom on the camera, etc.). While the user is still composing the shot (i.e., before the user presses the shutter button on the camera to capture an image), the camera may capture one or more images automatically. For example, the camera may capture images of scenes that the user aims the camera at, even if the user does not press the shutter button on the camera.
- In some embodiments, one or more images may be captured based on a condition. For convenience, such a condition may be referred to herein as a capture condition. Capture conditions may be useful in triggering or enabling a variety of different functions, including, without limitation: determining when to capture an image, determining what image to capture, and determining how to capture an image.
- Conditions and the performance of one or more actions based on a condition are discussed variously herein. Accordingly, it will be understood that capturing an image based on a condition may include, without limitation, capturing an image when a condition occurs, in response to a condition, when a condition is true, etc. Also, it will be readily understood in light of discussions herein with respect to conditions, that a capture condition may comprise a Boolean expression and/or may be based on one or more factors. Various examples of factors upon which a condition may be based are discussed herein.
- In one example of how an image may be captured automatically based on a capture condition, whenever the shutter button on the camera is depressed halfway (i.e., a capture condition), a camera may automatically capture an image and store this image in RAM for further processing. In another example, a camera may include an orientation sensor that determines when the camera is being aimed horizontally and has not moved in the last two seconds. Based on this determination, the camera may capture an image, since a user of the camera may be composing a shot and the captured image may be useful in determining a question to ask the user about the shot. According to another example, a camera may include a microphone. If this microphone senses an increase in the noise level, then this may be a sign that an event is occurring. Based on the increase in noise level, the camera may capture an image, which may be useful in determining the situation and asking the user a question.
- Note that automatically capturing one or more images based on a capture condition may be particularly helpful for the camera in determining one or more questions to ask a user. For example, whenever a user raises the camera to a horizontal position and holds it steady, the camera may capture an image. This image may then be used to determine an appropriate question to ask the user (e.g., a question relating to the image that the user is about to capture). Various exemplary ways of determining a question based on an image that has been captured are discussed herein, and may be applied in accordance with some embodiments with respect to images captured automatically.
- An image that is captured based on a capture condition may be stored in memory temporarily or permanently. For example, a camera may automatically delete an image that is captured automatically after the camera has determined and output a question based on this image. For instance, the camera may automatically capture one or more images while a user is composing a shot. These images may be stored in memory temporarily and used for determining one or more questions to ask the user. These questions may be output to the user while he is composing the shot. The user's responses to these questions may then be used to adjust one or more settings on the camera, as discussed herein. Finally, the user may finish composing the shot and capture an image (e.g., based on the adjusted settings). Afterwards, the automatically captured images may be deleted from memory to free up space.
- Alternatively, an imaging device may capture images semi-continuously (e.g., like a video camera), and a capture condition may be used to select an image for further processing. For example, a camera may continuously capture images and store them temporarily in memory (e.g., in a circular buffer of thirty images). Then, based on a capture condition (e.g., a user pressing the shutter button halfway), the camera may select one of the previously captured images and determine a question to ask the user based on this image.
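- A minimal sketch of such a semi-continuous capture scheme follows, using Python's collections.deque as the circular buffer; the frame values and the question helper are hypothetical placeholders for the analysis described herein.

```python
from collections import deque

# Ring buffer holding the thirty most recent frames; the oldest frame is
# discarded automatically whenever a new one arrives.
frame_buffer = deque(maxlen=30)

def on_new_frame(frame):
    """Invoked for every frame the device captures semi-continuously."""
    frame_buffer.append(frame)

def determine_question(image):
    # Placeholder for the image-analysis step; a real system might run
    # scene classification and map the result to a question.
    return "Are you photographing a landscape?"

def on_shutter_half_press():
    """Capture condition occurred: select the newest buffered frame and
    determine a question from it."""
    if frame_buffer:
        return determine_question(frame_buffer[-1])
    return None

for i in range(45):
    on_new_frame(f"frame-{i}")   # only frames 15-44 remain buffered
print(on_shutter_half_press())
```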
- It will be readily understood that an image that is captured automatically may be meta-tagged. For example, an image that is captured automatically may be meta-tagged to indicate that it can later be deleted (e.g., if the camera starts to run out of memory).
- Referring to FIG. 19, a flowchart illustrates a process 1900 that is consistent with one or more embodiments of the present invention. The process 1900 is a method for automatically capturing an image based on a capture condition. For illustrative purposes only, the process 1900 is described as being performed by a camera 130. Of course, the process 1900 may be performed by any type of imaging device 210.
- In step 1905, the camera 130 automatically captures an image based on a capture condition. In step 1910, the camera 130 determines a question based on the image. In step 1915, the camera outputs the question based on an output condition. Various types of output conditions are discussed herein. In step 1920, the camera 130 receives a response to the question, and in step 1925, the camera 130 adjusts a setting based on the response.
- As described variously herein, a user may provide information by responding to a question that is output by a camera. This information provided by the user, as well as other information (e.g., information from sensors, information from analyzing images), may be used by the camera to determine one or more actions to perform (e.g., adjusting settings on the camera, guiding a user in operating the camera).
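- This capture-question-response-adjust loop (the process 1900 above) can be sketched end to end. In the illustrative Python outline below, every method of CameraStub is a hypothetical stand-in for functionality described herein, not an interface defined by this disclosure.

```python
class CameraStub:
    """Minimal stand-in so the sketch runs; every method name here is a
    hypothetical placeholder."""
    def capture_automatically(self):
        return "raw-image-bytes"                           # step 1905
    def determine_question(self, image):
        return "Are you indoors or outdoors?"              # step 1910
    def output_question(self, question):
        print(question)                                    # step 1915
    def await_response(self):
        return "outdoors"                                  # step 1920
    def adjust_setting(self, response):
        print(f"adjusting white balance for: {response}")  # step 1925

def process_1900(camera):
    image = camera.capture_automatically()
    question = camera.determine_question(image)
    camera.output_question(question)
    response = camera.await_response()
    camera.adjust_setting(response)

process_1900(CameraStub())
```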
- According to some embodiments of the invention, information (e.g., acquired by a camera) may expire. For example, a user may respond to a question by indicating that he is “at the beach.” This response may be stored in the response database (e.g., such as the one shown in FIG. 7) and an action may be performed based on the response (e.g., the camera may be adjusted to “Sunny Beach” mode). However, at some point the information that the user is “at the beach” will no longer be valid. For example, the user may go to a restaurant to eat lunch, or go home after visiting the beach all day. In another example, information about the weather outside being sunny may expire at the end of the day when the sun goes down. In response to information expiring, the camera may perform an appropriate action such as adjusting a setting on the camera or outputting an additional question to a user. In accordance with at least one embodiment of the present invention, a server or other computing device may determine that information has expired. In some embodiments, one or more actions may be performed based on the expiration of information.
- Information may expire for a variety of different reasons. The information may no longer be correct, for example. For instance, information that the user is at the zoo is no longer correct if the user has left the zoo. In another example, information may no longer be applicable. For instance, information about how to crop an image of a group of people may no longer be applicable if a user is not capturing an image of a group of people. In some cases, more recent information may be available. For example, two hours ago the weather outside may have been cloudy and raining, but now the weather is sunny. In some instances, information may be time-sensitive and/or may be updated periodically, according to a schedule, from time to time, or at any time.
- According to at least one embodiment, information should be verified before being used again. For example, a user may indicate that he is interested in capturing an image with a slow shutter speed and maximum depth of field. However, these settings may or may not be appropriate for a new scene that the user is capturing. Thus, if it is determined that a new scene is to be captured, for example, it may be appropriate to verify whether the user is still interested in using the same settings (i.e., determine whether that information has expired, is still valid, and/or is still applicable).
- In order to account for the possibility that information provided by a user has a limited lifespan, a computing device may determine when information provided by a user expires and/or perform an action based on the information's expiration. For example, the camera may ask the user another question and/or may adjust a setting based on the expiration of the information.
- Different pieces of information may expire at different times (e.g., independently of each other). For example, information about the subject(s) of one or more images that a user is currently capturing (e.g., the identities of people in a group photo) may expire when the user ceases to aim the camera at the group of people. In another example, information about the location of the camera may expire when the camera is moved more than one hundred feet from its original location. Information about the current weather conditions may expire after two hours, for example. In some embodiments, information about a user of the camera may expire when the camera is turned off.
- Alternatively, or in addition, different pieces of information may expire at the same time. For example, all information about capturing images of Alice may expire if a user is now capturing an image of Bob. In another example, all information about a scene that a user was capturing may expire if the user turns the camera off for more than thirty minutes. In another example, all information about a user of the camera may expire if a user presses the “Reset User Information” button on the camera.
- A computing device may determine when one or more pieces of information expire based on a condition. This condition may also be referred to as an expiration condition to differentiate it from other types of conditions described elsewhere in this disclosure. Conditions are discussed in further detail variously herein. One skilled in the art will understand in light of the present disclosure that some conditions for expiring information may be similar to conditions for determining a question. For example, it will be readily understood that a condition may be a Boolean expression and/or may be based on one or more factors. Any of the factors described herein (e.g., images, indications by a user, movement of the camera, time-related factors, state of the camera, information from sensors, characteristics of a user, information from a database) may also be useful in determining when information expires. Some additional examples of factors are provided below.
- According to one example of determining that information has expired based on a condition, if a user turns the camera off (i.e., a condition), then any information about a scene that the user was capturing may be deemed to be expired. The next time the user turns the camera on, for example, the camera may ignore the expired information or, alternatively, ask a user a question to verify that the expired information is still relevant. In another example, if more than two hours have passed since a user provided a response to a question (i.e., a condition), then the camera may determine that the response has expired and perform an action (e.g., ask the question again). In another example, if a user has moved more than one thousand feet from his original location (i.e., a condition), then the camera may determine that information relating to his original location is no longer applicable.
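- Expressed as code, an expiration condition is again a Boolean expression over stored and current state. The sketch below assumes a hypothetical response record (a dict holding a response, its timestamp, and the camera location when it was given) and a simplified planar distance in feet; a real camera would compute distance from GPS fixes.

```python
import math
import time

def distance_feet(a, b):
    # Simplified planar distance between two (x, y) positions in feet.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def response_expired(record, now, camera_location, camera_powered_off):
    """Boolean expiration condition combining the three examples above."""
    turned_off = camera_powered_off                      # camera switched off
    too_old = now - record["timestamp"] > 2 * 60 * 60    # older than two hours
    moved_away = distance_feet(camera_location, record["location"]) > 1000.0
    return turned_off or too_old or moved_away

record = {"response": "at the zoo",
          "timestamp": time.time() - 3 * 60 * 60,
          "location": (0.0, 0.0)}
print(response_expired(record, time.time(), (50.0, 20.0), False))  # True: too old
```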
- Information may expire (or not expire) based on one or more indications by a user. For example, a user may respond to a question by indicating that information about a scene is or is not expired. For instance, the camera may ask a user, “Are we still at the beach?” In some exemplary embodiments, information that affects a setting on the camera may expire based on a user adjusting the setting on the camera. For example, information about the lighting in a room may cause the camera to adjust its white balance setting. If the user later adjusts (e.g., using a control) the white balance setting on the camera to “Sunny,” then this may indicate that the user is no longer indoors and that the information about the lighting of the room is no longer relevant.
- In another example, a user may press a “Reset Scene Information” button on the camera to indicate that information relating to a past scene is expired (e.g., meaning that the camera should disregard the information relating to the past scene). In still another example, a user may use the voice command “Same Scene” to indicate that information about a previous scene has not expired (e.g., even if the camera would otherwise have considered the information to be expired).
- According to some embodiments, information related to an image may expire when or after an image is captured (e.g., the information about the scene may only be applicable to that image). In at least one embodiment, information about a current scene that the user is capturing may expire when the camera is turned off.
- As mentioned above, some information may expire (or not expire) based on one or more images. For example, a computing device may use a face recognition program to analyze an image and to determine that an image is an image of Bob. Based on this, the camera may determine, for example, that information about capturing images of Alice is expired. In another example, a user may have indicated that he is at the beach. However, thirty minutes later, the camera may determine that one or more images captured recently do not match any of the “beach templates” stored by the camera. Based on this, the camera may determine that the information that the user is on the beach may have expired.
- Some types of information may expire (or not) based on the state of the camera. For example, a camera may keep track of how many images have been captured since a piece of information was received. After a threshold number of images (e.g., ten images) have been captured, the information may expire. In another example, information may expire whenever the camera is turned off. Note that the camera may be turned off based on an indication by a user (e.g., the user presses the power button on the camera) and/or based on other conditions (e.g., the camera may automatically turn itself off after five minutes of inactivity).
- According to some embodiments, information may expire when the camera's batteries are replaced, when the camera is plugged into a wall outlet to recharge, or when images are downloaded from the camera (e.g., for storage on a personal computer).
- Information may expire based on a user. For example, a camera may store information about its current user (e.g., the user's identity, the user's preferences and habits when capturing images, a list of images that have already been captured by the user). If the camera is later given to a new user, information about the previous user may expire, since it is not applicable to the new user. For example, Alice and Bob may share a camera. When Alice is capturing images using the camera, the camera may adjust one or more settings based on Alice's user preferences. If Alice then hands the camera to Bob, the information about Alice's user preferences may expire and be replaced with information about Bob's user preferences.
- According to some embodiments of the present invention, information may expire or not expire based on one or more of a variety of time-related factors. Some examples of time-related factors are described herein without limitation. Information may expire, for example, after a duration of time. For instance, information that a user provides about a scene may expire after thirty minutes unless it is reaffirmed by the user (e.g., by indicating the information is still valid, by providing additional information about the scene). In another example, information may expire at a specific time. For instance, information about whether the sky is sunny, partially cloudy, or overcast may expire at 6:34 p.m. (e.g., when the sun goes down). According to at least one embodiment, information may expire based on a condition existing for a duration of time. For example, information about the lighting in a room may expire if the camera's light meter reads bright (outdoors) light for more than thirty seconds.
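- The duration-based case with user reaffirmation might be modeled as a time-to-live that resets each time the user reaffirms the information. The ExpiringFact class, its thirty-minute TTL, and its method names below are illustrative assumptions only.

```python
import time

class ExpiringFact:
    """A piece of scene information with a thirty-minute lifespan that is
    renewed whenever the user reaffirms it."""
    TTL_SECONDS = 30 * 60

    def __init__(self, value):
        self.value = value
        self.last_affirmed = time.time()

    def reaffirm(self):
        # e.g., the user indicates the information is still valid
        self.last_affirmed = time.time()

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.last_affirmed > self.TTL_SECONDS

beach = ExpiringFact("user is at the beach")
print(beach.expired())                           # False for thirty minutes
print(beach.expired(now=time.time() + 31 * 60))  # True once the TTL lapses
```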
- Examples of information expiring or not expiring based on information from sensors include, without limitation: determining a location, determining an orientation of a camera, and determining information about light. For example, a camera may use a GPS device to determine how far it has been moved from a location where a user provided a response to a question. If the camera has been moved more than a threshold distance from the location where the user provided the response, then the information provided by the response may be determined to be expired. In another example, a camera may use an orientation sensor to determine when a user is aiming the camera at a scene. If the user stops aiming the camera at the scene for longer than ten seconds, for instance, then the camera may determine that the user is no longer capturing images of the scene and determine that information about the scene is expired. According to some embodiments, an imaging device may use a light sensor to determine the color of light that is shining on the camera. If the color of light shining on the camera is 5200K (daylight), then the camera may determine that information indicating the camera is under fluorescent light bulbs (4000K) is expired.
- According to one or more embodiments of the present invention, information expiring or not expiring may be based on one or more characteristics of a user. For example, a user may be in the habit of turning his camera off anytime he does not anticipate capturing an image in the next minute. Based on this, the camera may adjust its conditions for expiring information so that information about a scene does not expire unless the user turns the camera off for an extended period of time (e.g., fifteen minutes). In another example, a user may prefer that he not be asked the same question twice in close succession (e.g., within ten minutes). Based on this, the camera may prolong the time that it takes for a piece of information to expire (e.g., to more than ten minutes). In this way, the camera may effectively postpone asking the user a second question relating to the information in accordance with the user's preference.
- In some embodiments of the present invention, information may expire or not expire based on information from one or more databases. In one example, a first piece of information may be based on the validity of one or more second pieces of information. If the second pieces of information expire, then the first piece of information may also expire. For example, a camera may store two pieces of information: a) the camera is currently indoors, and b) the room has fluorescent lighting. If the first piece of information (i.e., the camera is indoors) expires because the user moves outdoors, then this may cause the second piece of information (i.e., that the room has fluorescent lighting) to expire also, since it is unlikely that there is also fluorescent lighting outdoors. However, it will be recognized that the inverse may not be true. That is, the expiration of the information that the room has fluorescent lighting may not mean that the information that the camera is indoors has also expired. For example, a user of the camera may have just moved to another room.
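- The dependency between two such pieces of information can be captured as a small directed graph in which expiry cascades downstream but not upstream. The dict schema and fact names below are assumptions for illustration.

```python
# Each key names a stored piece of information; `depends_on` lists the facts
# it becomes invalid without.
facts = {
    "camera_is_indoors":    {"valid": True, "depends_on": []},
    "room_has_fluorescent": {"valid": True, "depends_on": ["camera_is_indoors"]},
}

def expire(fact_name):
    """Expire a fact and cascade to every fact that depends on it. Note the
    one-way direction: expiring 'room_has_fluorescent' alone would not touch
    'camera_is_indoors'."""
    facts[fact_name]["valid"] = False
    for name, fact in facts.items():
        if fact["valid"] and fact_name in fact["depends_on"]:
            expire(name)

expire("camera_is_indoors")
print(facts["room_has_fluorescent"]["valid"])  # False: expired by cascade
```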
- In another example, an imaging device (e.g., a PDA with an integrated camera) may determine that information has expired based on a change to an image template stored in a database. For example, the imaging device may determine a revised image template for Alice (e.g., a revised “Alice template”) because Alice has put on a blue sweater over her green tank top. Based on this, the imaging device may determine that information about the subject of the image (e.g., a girl in a green tank top) is expired.
- In yet another example, a camera may store information about when the sun comes up or goes down. When the sun goes down, the camera may then expire any information about the current weather conditions (e.g., sunny, cloudy, etc.). In another example, a camera may receive weather reports via a radio link (and optionally may store an indication of the received information). For instance, the camera may receive an updated weather report indicating that the weather outside is no longer sunny and is now raining. Based on this updated report, the camera may determine that information indicating that light is shining on subjects sitting outside should be expired.
- When a piece of information expires (e.g., as determined using an expiration condition), an imaging device and/or computing device may perform one or more of a variety of different actions, including, without limitation: ceasing to perform an action (e.g., an action that was performed based on the information), outputting a question, adjusting a setting, meta-tagging an image, guiding a user in operating the camera, storing information, and any combination thereof.
- If a piece of information expires, then this may mean that an action that a camera was performing based on the information is no longer appropriate. Therefore, in accordance with some embodiments, a camera may cease to perform the action (and may optionally perform a second action instead). Examples of ceasing to perform an action based on expiring information include, without limitation: readjusting a setting, ceasing to meta-tag images, and ceasing to guide a user in operating a camera.
- As discussed variously herein, an imaging device such as a camera may adjust a setting (e.g., the mode of the camera) based on a user's response to a question. According to various embodiments of the present invention, when the user's response to the question expires (e.g., based on an expiration condition), the camera may adjust the setting again (e.g., returning the setting to its original value or adjusting to a new value). In a more detailed example, the camera may adjust itself to “Ski Slope” mode based on a user's indication that he is on a ski slope. When the ski slope closes (e.g., at 4:00 p.m.), the camera may determine that the user's indication that he is on a ski slope has expired. Accordingly, the camera may readjust itself to cancel “Ski Slope” mode and put the camera in “Default” mode instead.
- According to some embodiments, a camera may cease to meta-tag images when information expires. In some embodiments, a camera may cease to meta-tag images with information that has expired. For example, the camera may meta-tag one or more images based on a user's response to a question, as described herein. If the information upon which the meta-data is based expires, then the camera may cease to associate that meta-data with images. For example, the camera may receive information from a user that the user is capturing images of a group of people: Alice, Bob, Carly, and Dave. This information may be used to meta-tag the captured images. However, when the user puts his camera down for twenty seconds, the information about the group of people may expire (e.g., based on an expiration condition). Accordingly, future images captured by the camera will not be tagged as images of Alice, Bob, Carly, and Dave.
- Expiration of information may cause an imaging device to cease guiding a user in operating the camera. For example, a camera may guide a user in operating the camera based on the user's response to a question, as described herein. For instance, the camera may guide a user in adjusting the shutter speed of the camera based on a user's indication that he is capturing images of wildlife. If an image recognition program running on the camera (or on a server in communication with the camera, as discussed herein) later determines that the user is about to capture an image of a person, then the camera may cease to provide instructions to the user about how to capture images of wildlife (e.g., because the wildlife-related information has effectively expired).
- According to at least one embodiment of the present invention, a camera may output a question to a user based on information expiring. For example, in response to a piece of information expiring, the camera may ask a user a question relating to the information that expired. The user's response to this question may be helpful in replacing the information that expired and/or in guiding the camera in performing additional actions to assist the user. The following, without limitation, are some exemplary scenarios related to outputting a question to a user based on information expiring:
- (i) A determination condition may be based on information expiring. For example, the camera may determine to output the question, “Are you indoors or outdoors?” based on the determination condition: expired(indoors_or_outdoors_response). A minimal sketch of this scenario appears after this list.
- (ii) Information about a scene may expire when a user stops aiming a camera at the scene. The camera may then remain idle, for example, until the user begins to aim the camera at a new scene, at which point the camera may determine that a) the information about the old scene has expired and b) the camera does not have any information about the new scene. Based on this, the camera may determine and output an appropriate question to the user.
- (iii) Information about the lighting of a scene (e.g., daylight, tungsten light bulb, fluorescent light bulb) may expire whenever a light sensor on the camera determines that the light color or intensity of a scene has changed dramatically. Accordingly, the camera may output the question, “What type of lighting does this scene have?” whenever the light sensor causes information to expire.
- (iv) Information about whether a scene includes running water may expire (e.g., based on image processing). Based on this expiration, the camera may output a question to the user, “Does this image include water?” Other types of situations in which it may be desirable to output a question based on information expiring may be readily apparent to those skilled in the art in light of the present disclosure.
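- Scenario (i) above reduces to checking a stored response's expiry before re-asking. A minimal sketch follows, assuming a hypothetical response database keyed by question identifier with a per-entry expiry timestamp.

```python
import time

# Hypothetical response database: question identifier -> (value, expiry time).
responses = {
    "indoors_or_outdoors_response": ("indoors", time.time() - 60),  # lapsed
}

def expired(key):
    """Mirrors the determination condition expired(indoors_or_outdoors_response)."""
    record = responses.get(key)
    return record is None or time.time() > record[1]

if expired("indoors_or_outdoors_response"):
    print("Are you indoors or outdoors?")  # re-ask; the old answer has expired
```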
- As discussed herein, according to some embodiments of the present invention, a camera may adjust one or more settings based on information expiring. Some examples, without limitation, are provided below, and various other exemplary ways in which a setting may be adjusted are discussed herein.
- In one example, information about the current lighting conditions may expire. Based on this, the camera may adjust its settings to auto-exposure and auto-white balance. In another example, information about who is the current user of the camera may expire. Based on this, the camera may revert to its default user preferences. In another example, information about what object in the field of view (e.g., the foreground, the background) a user would like to focus on may expire. Based on this expiration, the camera may adjust its focus settings (e.g., to five-region, center-weighted auto-focus).
- In yet another example, information about a user being on a boat may expire. Based on this, the camera may adjust its digital stabilization setting to “regular.” Information about the weather outside being sunny may expire because the current time of day is after sunset, for example. Based on this, the camera may assume that it is indoors or nighttime and turn its flash on.
- Various exemplary ways in which a camera, for example, may meta-tag an image are discussed herein. According to at least one embodiment of the present invention, a device such as a camera or server may meta-tag one or more images based on information expiring. For example, information about the subject of an image may expire. Based on this, a camera may meta-tag an image as “Subject: Unknown.” According to some embodiments, a user can later review the image and provide meta-data about the subject of the image. In another example, information about a location of the camera may expire. After determining the location information has expired, the camera may omit location information when meta-tagging an image (or may not meta-tag an image at all).
- Various exemplary ways in which a camera, for example, may guide a user in operating the camera are discussed herein. In accordance with one or more embodiments of the present invention, a camera may guide a user in operating the camera based on information expiring. For instance, information about whether the subject of an image is in the shade or the sun may expire. Based on this, the camera may output a message to guide a user: “If your subject is in the shade, you may want to adjust the white balance setting on the camera to 7000K or use a flash. If your subject is in the sun, you may want to adjust the white balance setting on the camera to 5200K and make sure that your subject is facing towards the sun.” In another example, information about the subject of one or more images may expire. In response, the camera may output a message to a user: “You can meta-tag your images with information about their subjects by pressing the ‘Meta-Tag’ button and saying the name of the subject.”
- The camera may store information based on information expiring. That is, according to some embodiments of the present invention, the camera may store a first piece of information based on the expiration of a second piece of information. For example, information about the camera being underwater may expire based on a conductivity sensor on the body of the camera. Based on information received via the conductivity sensor, the camera may store an indication that it is not underwater.
- If an expiration condition occurs, then related information may be determined to be expired (and optionally the camera may perform one or more actions), as described herein. As also discussed herein, information may expire based on other information expiring. For example, if a camera is turned off for more than sixty minutes, then the information that the camera is outdoors may expire. Based on this, the information that the weather is sunny and that the user is on a beach may also expire. A single condition might cause multiple pieces of information to expire. For example, the information that an image includes a body of water and the piece of information that the user prefers to have no reflections may both expire if an image does not match a water template or if a user presses the “Reset Image Preferences” button on the camera.
- Different actions may be performed based on what causes information to expire. For example, if information indicating that the subject of an image is Alice expires because a user is no longer aiming the camera at her or because thirty seconds have elapsed, then the camera may stop meta-tagging images as “Subject: Alice.” However, if that information expires because an image recognition program (e.g., executed by the camera, executed by a server in communication with the camera) does not recognize the subject of an image as being Alice, then the camera may stop meta-tagging the image and ask: “Who are you photographing?” Thus, an action (e.g., determining and/or outputting a question) may be performed based on information expiring and/or based on the particular circumstance(s) that caused the information to expire.
- As discussed herein, a camera or other device (e.g., a server) may store an expiring information database that stores information about conditions that may cause information to expire. One example of an expiring information database is shown in FIG. 14. Note that the example of the expiring information database shown in FIG. 14 may store at least one expiration condition for each piece of information that is stored by the camera.
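- A minimal sketch of such an expiring information database follows, pairing each stored piece of information with an expiration-condition callable. The row schema and context fields are assumptions loosely modeled on FIG. 14, not a definitive implementation.

```python
# One row per stored piece of information, pairing the value with the
# expiration condition that retires it.
expiring_info_db = [
    {"info": "weather is sunny",
     "condition": lambda ctx: ctx["time_of_day"] > ctx["sunset"]},
    {"info": "subject is Alice",
     "condition": lambda ctx: not ctx["aiming_at_subject"]},
]

def sweep(ctx):
    """Return the rows whose expiration conditions have occurred."""
    return [row for row in expiring_info_db if row["condition"](ctx)]

ctx = {"time_of_day": 19.5, "sunset": 18.6, "aiming_at_subject": True}
for row in sweep(ctx):
    print("expired:", row["info"])   # "weather is sunny"
```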
- Referring to FIG. 20, a flowchart illustrates a process 2000 that is consistent with one or more embodiments of the present invention. The process 2000 is a method for performing an action based on information expiring. For illustrative purposes only, the process 2000 is described as being performed by a camera 130. Of course, the process 2000 may be performed by any type of imaging device 210 and/or computing device 220.
- In step 2005, the camera 130 receives information related to use of the camera 130. For example, the camera determines or otherwise receives (e.g., from a sensor, from a server 110, from a user) any of the various types of information described herein. For example, the camera 130 receives an indication that it is raining, a user preference, a signal that the memory is low, etc. In step 2010, the camera 130 determines an expiration condition for the information. For example, the camera 130 determines that the piece of information should expire after thirty minutes.
- In step 2015, the camera 130 stores an indication of the information and an indication of the expiration condition (e.g., in an expiration condition database). In step 2020, the camera 130 determines if the information has expired (e.g., based on the expiration condition). If the information has not expired, the camera 130 performs a first action in step 2025. If the information has expired, the camera 130 performs a second action based on the information expiring (e.g., a corresponding action indicated in an expiration condition database).
- In accordance with various embodiments of the present invention, a camera may output a question to a party other than a user of the camera. For example, a camera may output a question to a human subject (i.e., a person) of one or more images captured (or to be captured) by the camera.
- Questions may be output to subjects of images for a variety of different purposes. For example, a question may be output to verify that an image was captured properly. For instance, a camera may be used to capture an image of a group of people (e.g., Alice, Bob, and Carly). Immediately after capturing the image of the group of people, the camera may output a question to the group: “Did anybody blink?” If one or more people in the group answer “Yes” to the question, then the camera may capture one or more additional images of the group, in the hope of capturing at least one image of the group in which nobody is blinking.
- According to another example, a preference of a subject may be determined. For example, the camera may output a question to a subject of an image: “Do you want this to be a close-up shot from the waist up, or a full-body shot that includes your feet?” The camera may adjust one or more settings (e.g., a zoom setting) based on the subject's response to this question. Performing one or more actions based on a subject's preferences (e.g., adjusting a setting, meta-tagging an image) may be particularly appropriate for subjects who have strong feelings about how images of them should be captured.
- In some embodiments, an imaging device may assist a subject in posing. For example, a camera may output a question to a subject of an image: “It looks like there's a piece of paper sticking to your shoe. Do you want to remove this before the photo is taken?” Based on the subject's response, the camera may then pause to allow the subject to remove the piece of paper from his shoe.
- It will be readily understood that a question may be output to a subject using an output device. Many types of output devices are discussed herein, and others will be readily apparent to those skilled in the art in light of the present disclosure. Some examples include, without limitation, an audio speaker, a video monitor, and a wireless handheld device.
- For example, a camera may include an audio speaker to play an audio recording of a question loud enough for a subject of an image to hear the question. In another example, the camera may use a HyperSonic Sound® directional sound system by American Technology Corporation to output a question to a subject that may not be heard by other subjects of the image, the user, or bystanders. In another example, a camera may include an LCD display or other type of video monitor that faces (or may be configurable to face) a subject of the camera (e.g., away from a user of the camera). The camera may use this LCD, for instance, to display a text question to the subject.
- In yet another example of an output device, a subject of a camera may carry a wireless handheld device (e.g., a remote control, a cell phone, a PDA) that communicates with the camera (e.g., using an infrared or radio communications link). The camera may output a question to the subject by transmitting the question to the wireless handheld device. The wireless handheld device may then display the question to the subject (e.g., using an audio speaker, LCD display screen, or other output means). Other embodiments operable for outputting a question to a subject of an image may be similar to those described herein for outputting a question to a user of the camera.
- According to some embodiments, a subject may respond to a question using an input device, such as, without limitation, a microphone, an image sensor, or a wireless handheld device. For example, a subject of an image may respond to a question by speaking the answer aloud. The camera may use a microphone and voice recognition software to record and determine the subject's response to the question. In another example, a subject of an image may respond to a question by making an appropriate hand signal to the camera (e.g., thumbs-up for “Yes,” thumbs-down for “No”). The camera may use an image sensor to capture one or more images of the subject making the hand signal and then process the images using an image recognition program to determine the subject's response to the question.
- In another example, a subject of an image may respond to a question using a wireless handheld device (e.g., a remote control, a cell phone, a PDA) operable to communicate with a camera. For example, a subject of an image may press a button on his PDA or speak into a microphone on his cell phone to provide a response to a question. The PDA or cell phone may then transmit an indication of the response to the camera (e.g., via a communication network).
- A variety of exemplary actions that may be performed based on a user's response to a question are discussed herein (e.g., adjusting a setting, meta-tagging an image, outputting a second question). Other actions are also possible. Additional actions that may be performed by the camera based on a user's response to a question include automatically capturing an image and managing images stored in memory.
- As discussed herein, an imaging device may be configured to capture an image automatically. According to some embodiments, a camera may automatically capture one or more images based on a user's response to a question. For example, if the camera asks a user, “Are we at a football game?” and the user responds, “Yes,” then the camera may automatically capture one or more images whenever the players on the football field are moving. Various exemplary processes for automatically capturing an image (e.g., based on a condition) are discussed herein.
- Automatically capturing one or more images based on a user's response to a question may comprise one or more of, without limitation: determining whether the camera should automatically capture one or more images based on a user's response to a question; determining what images the camera should automatically capture based on a user's response to a question; and determining how the camera should treat the one or more automatically-captured images (e.g., compressing them) based on a user's response to a question.
- One way for a user's response to a question to affect a process of automatically capturing images is for the camera to adjust a setting relating to automatically capturing images. One example of a setting on the camera that may relate to automatically capturing images is a condition for automatically capturing images. For example, the camera may automatically capture an image when a condition is true. A user's response to a question may be a factor that affects a condition. In another example, a threshold value for determining whether to store an image may relate to the automatic capturing of images. For example, the camera may capture an image and then determine a rating for the image based on the quality of the image. If the rating of the image is higher than a threshold value, then the camera may store the automatically-captured image. If the rating is worse than the threshold value, then the automatically-captured image may be compressed or deleted. In another example, a parameter that affects how much an automatically-captured image is compressed may be adjusted. For example, the camera may automatically compress an automatically-captured image based on a compression setting. For instance, images with greater compression settings may be compressed more and images with lesser compression settings may be compressed less.
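- The threshold logic for automatically captured images might look as follows. The 0.0-1.0 rating scale, the threshold values, the half-threshold cutoff for deletion, and the compression stand-in are all illustrative assumptions.

```python
QUALITY_THRESHOLD = 0.6  # assumed 0.0-1.0 rating scale; value is illustrative

def compress(image, setting):
    # Stand-in: a greater compression setting compresses the image more.
    return f"{image} compressed at level {setting}"

def handle_auto_capture(image, rating, compression_setting):
    """Store the image if its quality rating clears the threshold; compress
    it if it falls moderately short; delete it otherwise."""
    if rating >= QUALITY_THRESHOLD:
        return ("store", image)
    if rating >= QUALITY_THRESHOLD / 2:
        return ("compress", compress(image, compression_setting))
    return ("delete", None)

print(handle_auto_capture("frame-41", 0.4, 7))  # ('compress', ...)
```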
- According to one or more embodiments, a camera may manage one or more images stored in memory based on a user's response to a question. Managing images stored in memory may include, without limitation, one or more of:
- (i) Uploading an image using a network. For example, a camera may include a radio modem, cell phone, or other wireless network connection that allows the camera to transmit images to a second electronic device (e.g., a computer server, a laptop computer, a cell phone). Uploading an image using a network may be particularly useful in sharing images with other people (e.g., friends of a user) or freeing up memory on the camera (e.g., since an image may then be stored on the second electronic device rather than on the camera).
- (ii) Modifying one or more images. For example, a camera may use image editing software to modify one or more images that are stored in memory. Examples of modifications that may be made to images include cropping, removing red-eye, color balancing, removing shadows, removing objects from the foreground or background, adding or removing meta-data, and combining images into a panorama.
- (iii) Compressing or deleting one or more images. For example, as discussed herein, a camera may automatically compress or delete one or more images stored in memory in order to make room for additional images that may be captured by the camera.
- The following, without limitation, are some exemplary scenarios related to managing images stored in memory based on a user's response. Each scenario includes a question (e.g., asked by a camera), a response from a user, and an action:
- (i) Question: “How many more images are you planning on capturing?”
- Response from a user: “20-30”
- Action: Use a wireless network connection (e.g., 3G cellular network) to upload images from the camera to a central computer. Then delete the images that have been uploaded, thereby freeing up space in the camera's memory for the 20-30 additional images that the user plans on capturing.
- (ii) Question: “Are you capturing an image of a group of people?”
- Response from a user: “Yes”
- Action: Process the captured image to remove shadows that fall across people's faces and red-eye that may have resulted from using a flash.
- (iii) Question: “Which images are more important to you: a) images of Alice, or b) images of Bob?”
- Response from user: “a) images of Alice”
- Action: Sort the images in memory into images of Alice and images of Bob. Re-compress all the images of Alice using a JPEG compression setting of 80%, thereby reducing the file sizes of these images and freeing up memory space in the camera. Do not perform any additional compression on the images of Bob that are stored in memory.
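- Scenario (iii)'s recompression step could be realized with an image library such as Pillow, assuming the images have already been sorted by subject into a hypothetical index; Pillow's save() accepts a JPEG quality parameter, so a “compression setting of 80%” maps naturally to quality=80. This is a sketch, not a definitive implementation.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def recompress_subject_images(paths_by_subject, subject, quality=80):
    """Re-save every image of the named subject as JPEG at the given quality,
    leaving other subjects' images untouched. `paths_by_subject` is a
    hypothetical index mapping subject names to lists of file paths."""
    for path in paths_by_subject.get(subject, []):
        with Image.open(path) as img:
            img.convert("RGB").save(path, "JPEG", quality=quality)

# Example (paths are placeholders):
# recompress_subject_images({"Alice": ["alice_01.jpg", "alice_02.jpg"]}, "Alice")
```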
- One or more embodiments of the present invention may enable a camera or other imaging device to more easily determine the scene to be captured in an image and a user's intentions in capturing the image. Such determinations may enable some types of users to adjust the settings on their cameras more easily, making capturing images a simpler and more enjoyable process. In addition, some embodiments of the invention may allow a user to capture better images, even if he does not have a detailed knowledge of photography.
- Although the methods and apparatus of the present invention have been described herein in terms of particular embodiments, those skilled in the art will recognize that various embodiments of the present invention may be practiced with modification and alteration without departing from the teachings disclosed herein.
Claims (10)
1. A camera comprising:
means for recording an image;
means for transmitting the image to a remote computer for determining at least one question based on the image;
means for receiving from the remote computer an indication of at least one question; and
means for outputting to a user a representation of the at least one question.
2. A method comprising:
recording an image;
transmitting the image to a remote computer for determining at least one question based on the image;
receiving from the remote computer an indication of at least one question; and
outputting to a user a representation of the at least one question.
3. A camera comprising:
means for capturing an image;
means for transmitting the image to a remote computer for determining at least one meta-tag based on the image and a database of images, each respective image of the database of images having at least one associated meta-tag;
means for receiving an indication of a meta-tag from the remote computer; and
means for receiving an instruction from a user to associate the meta-tag with the image.
4. A camera comprising:
means for recording an image;
means for transmitting wirelessly at least a portion of the image to a remote computer;
means for receiving from the remote computer an indication of a question to provide to a user, the question including an indication of at least one meta-tag determined by the remote computer;
means for transmitting the question to the user;
means for receiving a response to the question from the user; and
means for associating at least one of the at least one meta-tag with the image based on the response.
5. A camera comprising:
means for receiving a meta-tag from a remote computer; and
means for associating the meta-tag with at least one image.
6. A method comprising:
receiving, at a camera, a meta-tag from a remote computer; and
associating, by the camera, the meta-tag with at least one image.
7. A camera comprising:
means for receiving a first meta-tag and a second meta-tag from a remote computer; and
means for receiving from a user a request to associate at least one of the first meta-tag and the second meta-tag with at least one image.
8. A camera comprising:
means for transmitting to a user an indication of a first meta-tag received from a remote computer and an indication of a second meta-tag received from the remote computer;
means for receiving an indication of a selection by the user of at least one of the first meta-tag and the second meta-tag; and
means for modifying meta-data of at least one image based on the selection.
9. A device comprising:
means for recording a first digital image;
means for transmitting the first digital image to a remote computer for comparison with at least one second digital image;
means for receiving from the remote computer at least one image category based on the comparison; and
means for associating the at least one image category with the first digital image.
10. A method comprising:
outputting a question to a user;
receiving a response to the question from the user;
storing an indication of the response;
determining an expiration condition;
associating the expiration condition with the response;
determining if the expiration condition has occurred; and
adjusting at least one camera setting if the expiration condition has occurred.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/740,242 US20040174434A1 (en) | 2002-12-18 | 2003-12-18 | Systems and methods for suggesting meta-information to a camera user |
US12/472,149 US8558921B2 (en) | 2002-12-18 | 2009-05-26 | Systems and methods for suggesting meta-information to a camera user |
US13/956,348 US20130314566A1 (en) | 2002-12-18 | 2013-07-31 | Systems and methods for suggesting information for a photo to a user associated with the photo |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US43447502P | 2002-12-18 | 2002-12-18 | |
US10/740,242 US20040174434A1 (en) | 2002-12-18 | 2003-12-18 | Systems and methods for suggesting meta-information to a camera user |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/472,149 Continuation US8558921B2 (en) | 2002-12-18 | 2009-05-26 | Systems and methods for suggesting meta-information to a camera user |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040174434A1 true US20040174434A1 (en) | 2004-09-09 |
Family
ID=32930365
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/740,242 Abandoned US20040174434A1 (en) | 2002-12-18 | 2003-12-18 | Systems and methods for suggesting meta-information to a camera user |
US12/472,149 Expired - Fee Related US8558921B2 (en) | 2002-12-18 | 2009-05-26 | Systems and methods for suggesting meta-information to a camera user |
US13/956,348 Abandoned US20130314566A1 (en) | 2002-12-18 | 2013-07-31 | Systems and methods for suggesting information for a photo to a user associated with the photo |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/472,149 Expired - Fee Related US8558921B2 (en) | 2002-12-18 | 2009-05-26 | Systems and methods for suggesting meta-information to a camera user |
US13/956,348 Abandoned US20130314566A1 (en) | 2002-12-18 | 2013-07-31 | Systems and methods for suggesting information for a photo to a user associated with the photo |
Country Status (1)
Country | Link |
---|---|
US (3) | US20040174434A1 (en) |
Cited By (365)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040263639A1 (en) * | 2003-06-26 | 2004-12-30 | Vladimir Sadovsky | System and method for intelligent image acquisition |
US20050030388A1 (en) * | 2003-08-06 | 2005-02-10 | Stavely Donald J. | System and method for improving image capture ability |
US20050122405A1 (en) * | 2003-12-09 | 2005-06-09 | Voss James S. | Digital cameras and methods using GPS/time-based and/or location data to provide scene selection, and dynamic illumination and exposure adjustment |
US20050172147A1 (en) * | 2004-02-04 | 2005-08-04 | Eric Edwards | Methods and apparatuses for identifying opportunities to capture content |
US20050168623A1 (en) * | 2004-01-30 | 2005-08-04 | Stavely Donald J. | Digital image production method and apparatus |
US20050174430A1 (en) * | 2004-02-09 | 2005-08-11 | Anderson Michael E. | Method for organizing photographic images in a computer for locating, viewing and purchase |
US20050227792A1 (en) * | 2004-03-18 | 2005-10-13 | Hbl Ltd. | Virtual golf training and gaming system and method |
US20050254443A1 (en) * | 2004-05-14 | 2005-11-17 | Campbell Alexander G | Method and system for displaying data |
US20050278379A1 (en) * | 2004-06-10 | 2005-12-15 | Canon Kabushiki Kaisha | Image retrieval device and image retrieval method |
US20050280538A1 (en) * | 2004-06-22 | 2005-12-22 | Omron Corporation | Tag communication apparatus, control method for tag communication apparatus, computer readable medium for tag communication control and tag communication control system |
US20060044398A1 (en) * | 2004-08-31 | 2006-03-02 | Foong Annie P | Digital image classification system |
US20060059202A1 (en) * | 2004-09-14 | 2006-03-16 | Canon Kabushiki Kaisha | Image capture device |
US20060075344A1 (en) * | 2004-09-30 | 2006-04-06 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Providing assistance |
US20060080286A1 (en) * | 2004-08-31 | 2006-04-13 | Flashpoint Technology, Inc. | System and method for storing and accessing images based on position data associated therewith |
US20060076398A1 (en) * | 2004-09-30 | 2006-04-13 | Searete Llc | Obtaining user assistance |
US20060081695A1 (en) * | 2004-09-30 | 2006-04-20 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware. | Enhanced user assistance |
US20060088092A1 (en) * | 2004-10-21 | 2006-04-27 | Wen-Hsiung Chen | Method and apparatus of controlling a plurality of video surveillance cameras |
US20060090132A1 (en) * | 2004-10-26 | 2006-04-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced user assistance |
US20060086781A1 (en) * | 2004-10-27 | 2006-04-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced contextual user assistance |
US20060092294A1 (en) * | 2004-11-04 | 2006-05-04 | Lg Electronics Inc. | Mobile terminal and operating method thereof |
US20060116979A1 (en) * | 2004-12-01 | 2006-06-01 | Jung Edward K | Enhanced user assistance |
US20060117001A1 (en) * | 2004-12-01 | 2006-06-01 | Jung Edward K | Enhanced user assistance |
US20060170807A1 (en) * | 2005-02-01 | 2006-08-03 | Casio Computer Co., Ltd. | Image pick-up apparatus and computer program for such apparatus |
US20060173816A1 (en) * | 2004-09-30 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced user assistance |
US20060170956A1 (en) * | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image devices |
US20060190428A1 (en) * | 2005-01-21 | 2006-08-24 | Searete Llc A Limited Liability Corporation Of The State Of Delware | User assistance |
US20060190595A1 (en) * | 2005-02-21 | 2006-08-24 | Samsung Electronics Co., Ltd. | Device and method for processing data resource changing events in a mobile terminal |
US20060206817A1 (en) * | 2005-02-28 | 2006-09-14 | Jung Edward K | User assistance for a condition |
US20060221190A1 (en) * | 2005-03-24 | 2006-10-05 | Lifebits, Inc. | Techniques for transmitting personal data and metadata among computing devices |
US20060274154A1 (en) * | 2005-06-02 | 2006-12-07 | Searete, Lcc, A Limited Liability Corporation Of The State Of Delaware | Data storage usage protocol |
US20060274949A1 (en) * | 2005-06-02 | 2006-12-07 | Eastman Kodak Company | Using photographer identity to classify images |
US20070038529A1 (en) * | 2004-09-30 | 2007-02-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US20070096024A1 (en) * | 2005-10-27 | 2007-05-03 | Hiroaki Furuya | Image-capturing apparatus |
US20070102525A1 (en) * | 2005-11-10 | 2007-05-10 | Research In Motion Limited | System and method for activating an electronic device |
US20070109417A1 (en) * | 2005-11-16 | 2007-05-17 | Per Hyttfors | Methods, devices and computer program products for remote control of an image capturing device |
US20070118509A1 (en) * | 2005-11-18 | 2007-05-24 | Flashpoint Technology, Inc. | Collaborative service for suggesting media keywords based on location data |
US20070118508A1 (en) * | 2005-11-18 | 2007-05-24 | Flashpoint Technology, Inc. | System and method for tagging images based on positional information |
US20070124330A1 (en) * | 2005-11-17 | 2007-05-31 | Lydia Glass | Methods of rendering information services and related devices |
US7228010B1 (en) * | 2003-02-27 | 2007-06-05 | At&T Corp. | Systems, methods and devices for determining and assigning descriptive filenames to digital images |
US20070165273A1 (en) * | 2006-01-18 | 2007-07-19 | Pfu Limited | Image reading apparatus and computer program product |
US20070164988A1 (en) * | 2006-01-18 | 2007-07-19 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
US20070196033A1 (en) * | 2006-02-21 | 2007-08-23 | Microsoft Corporation | Searching and indexing of photos based on ink annotations |
US20070200862A1 (en) * | 2005-12-26 | 2007-08-30 | Hiroaki Uchiyama | Imaging device, location information recording method, and computer program product |
US20070200912A1 (en) * | 2006-02-13 | 2007-08-30 | Premier Image Technology Corporation | Method and device for enhancing accuracy of voice control with image characteristic |
US20070204125A1 (en) * | 2006-02-24 | 2007-08-30 | Michael Hardy | System and method for managing applications on a computing device having limited storage space |
US20070223912A1 (en) * | 2006-03-27 | 2007-09-27 | Fujifilm Corporation | Photographing method and photographing apparatus |
US20070236581A1 (en) * | 2006-01-23 | 2007-10-11 | Hiroaki Uchiyama | Imaging device, method of recording location information, and computer program product |
US20070242138A1 (en) * | 2006-04-13 | 2007-10-18 | Manico Joseph A | Camera user input based image value index |
US20070252674A1 (en) * | 2004-06-30 | 2007-11-01 | Joakim Nelson | Face Image Correction |
US20070268392A1 (en) * | 2004-12-31 | 2007-11-22 | Joonas Paalasmaa | Provision Of Target Specific Information |
EP1865426A2 (en) * | 2006-06-09 | 2007-12-12 | Sony Corporation | Information processing apparatus, information processing method, and computer program |
US20070298813A1 (en) * | 2006-06-21 | 2007-12-27 | Singh Munindar P | System and method for providing a descriptor for a location to a recipient |
US20070300271A1 (en) * | 2006-06-23 | 2007-12-27 | Geoffrey Benjamin Allen | Dynamic triggering of media signal capture |
US20070298812A1 (en) * | 2006-06-21 | 2007-12-27 | Singh Munindar P | System and method for naming a location based on user-specific information |
US20080005070A1 (en) * | 2006-06-28 | 2008-01-03 | Bellsouth Intellectual Property Corporation | Non-Repetitive Web Searching |
US20080018748A1 (en) * | 2006-07-19 | 2008-01-24 | Sami Niemi | Method in relation to acquiring digital images |
US20080037826A1 (en) * | 2006-08-08 | 2008-02-14 | Scenera Research, Llc | Method and system for photo planning and tracking |
US20080043275A1 (en) * | 2006-08-16 | 2008-02-21 | Charles Morton | Method to Provide Print Functions to The User Via A Camera Interface |
US20080071761A1 (en) * | 2006-08-31 | 2008-03-20 | Singh Munindar P | System and method for identifying a location of interest to be named by a user |
US20080095434A1 (en) * | 2004-09-07 | 2008-04-24 | Chisato Funayama | Imaging System, Imaging Condition Setting Method, Terminal and Server Used for the Same |
US20080112591A1 (en) * | 2006-11-14 | 2008-05-15 | Lctank Llc | Apparatus and method for finding a misplaced object using a database and instructions generated by a portable device |
WO2008059422A1 (en) * | 2006-11-14 | 2008-05-22 | Koninklijke Philips Electronics N.V. | Method and apparatus for identifying an object captured by a digital image |
US20080126388A1 (en) * | 2006-11-08 | 2008-05-29 | Yahoo! Inc. | Customizable connections between media and meta-data via feeds |
US20080147726A1 (en) * | 2006-10-13 | 2008-06-19 | Paul Jin Hwang | System and method for automatic selection of digital photo album cover |
US20080159584A1 (en) * | 2006-03-22 | 2008-07-03 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method |
US20080182665A1 (en) * | 2007-01-30 | 2008-07-31 | Microsoft Corporation | Video game to web site upload utility |
US20080229198A1 (en) * | 2004-09-30 | 2008-09-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Electronically providing user assistance |
EP1984850A1 (en) * | 2006-02-14 | 2008-10-29 | Olaworks, Inc. | Method and system for tagging digital data |
US20080282177A1 (en) * | 2007-05-09 | 2008-11-13 | Brown Michael S | User interface for editing photo tags |
US20080297409A1 (en) * | 2007-05-29 | 2008-12-04 | Research In Motion Limited | System and method for selecting a geographic location to associate with an object |
EP2051506A1 (en) * | 2007-10-17 | 2009-04-22 | Fujifilm Corporation | Imaging device and imaging control method |
US20090111375A1 (en) * | 2007-10-24 | 2009-04-30 | Itookthisonmyphone.Com, Inc. | Automatic wireless photo upload for camera phone |
US20090115865A1 (en) * | 2007-11-06 | 2009-05-07 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US20090128502A1 (en) * | 2007-11-19 | 2009-05-21 | Cct Tech Advanced Products Limited | Image display with cordless phone |
US20090138560A1 (en) * | 2007-11-28 | 2009-05-28 | James Joseph Stahl Jr | Method and Apparatus for Automated Record Creation Using Information Objects, Such as Images, Transmitted Over a Communications Network to Inventory Databases and Other Data-Collection Programs |
US20090150147A1 (en) * | 2007-12-11 | 2009-06-11 | Jacoby Keith A | Recording audio metadata for stored images |
US20090164439A1 (en) * | 2007-12-19 | 2009-06-25 | Nevins David C | Apparatus, system, and method for organizing information by time and place |
US20090193021A1 (en) * | 2008-01-29 | 2009-07-30 | Gupta Vikram M | Camera system and method for picture sharing based on camera perspective |
US20090192998A1 (en) * | 2008-01-22 | 2009-07-30 | Avinci Media, Lc | System and method for deduced meta tags for electronic media |
WO2009114036A1 (en) * | 2008-03-14 | 2009-09-17 | Sony Ericsson Mobile Communications Ab | A method and apparatus of annotating digital images with data |
US20090244323A1 (en) * | 2008-03-28 | 2009-10-01 | Fuji Xerox Co., Ltd. | System and method for exposing video-taking heuristics at point of capture |
US20090254562A1 (en) * | 2005-09-02 | 2009-10-08 | Thomson Licensing | Automatic Metadata Extraction and Metadata Controlled Production Process |
US7639943B1 (en) * | 2005-11-15 | 2009-12-29 | Kalajan Kevin E | Computer-implemented system and method for automated image uploading and sharing from camera-enabled mobile devices |
WO2009156561A1 (en) * | 2008-06-27 | 2009-12-30 | Nokia Corporation | Method, apparatus and computer program product for providing image modification |
US20090327336A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Guided content metadata tagging for an online content repository |
US20100002102A1 (en) * | 2008-07-01 | 2010-01-07 | Sony Corporation | System and method for efficiently performing image processing operations |
US20100046842A1 (en) * | 2008-08-19 | 2010-02-25 | Conwell William Y | Methods and Systems for Content Processing |
US20100054601A1 (en) * | 2008-08-28 | 2010-03-04 | Microsoft Corporation | Image Tagging User Interface |
WO2010024584A2 (en) * | 2008-08-27 | 2010-03-04 | 키위플주식회사 | Object recognition system, wireless internet system having same, and object-based wireless communication service method using same |
US20100070845A1 (en) * | 2008-09-17 | 2010-03-18 | International Business Machines Corporation | Shared web 2.0 annotations linked to content segments of web documents |
US20100074328A1 (en) * | 2006-12-19 | 2010-03-25 | Koninklijke Philips Electronics N.V. | Method and system for encoding an image signal, encoded image signal, method and system for decoding an image signal |
US7693705B1 (en) * | 2005-02-16 | 2010-04-06 | Patrick William Jamieson | Process for improving the quality of documents using semantic analysis |
US7710452B1 (en) | 2005-03-16 | 2010-05-04 | Eric Lindberg | Remote video monitoring of non-urban outdoor sites |
US20100146390A1 (en) * | 2004-09-30 | 2010-06-10 | Searete Llc, A Limited Liability Corporation | Obtaining user assistance |
US20100165076A1 (en) * | 2007-03-07 | 2010-07-01 | Jean-Marie Vau | Process for automatically determining a probability of image capture with a terminal using contextual data |
US20100194927A1 (en) * | 2007-02-26 | 2010-08-05 | Syuji Nose | Image taking apparatus |
US7782365B2 (en) | 2005-06-02 | 2010-08-24 | Searete Llc | Enhanced video/still image correlation |
US20100218095A1 (en) * | 2004-09-30 | 2010-08-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Obtaining user assistance |
US20100223162A1 (en) * | 2004-09-30 | 2010-09-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US20100223065A1 (en) * | 2004-09-30 | 2010-09-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US7800582B1 (en) * | 2004-04-21 | 2010-09-21 | Weather Central, Inc. | Scene launcher system and method for weather report presentations and the like |
US20100271490A1 (en) * | 2005-05-04 | 2010-10-28 | Searete LLC, a limited liability corporation of the State of Delaware | Regional proximity for shared image device(s) |
US20100309011A1 (en) * | 2004-09-30 | 2010-12-09 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Obtaining user assistance |
US20100332553A1 (en) * | 2009-06-24 | 2010-12-30 | Samsung Electronics Co., Ltd. | Method and apparatus for updating composition database by using composition pattern of user, and digital photographing apparatus |
US7876357B2 (en) | 2005-01-31 | 2011-01-25 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US20110025715A1 (en) * | 2009-08-03 | 2011-02-03 | Ricoh Company, Ltd. | Appropriately scaled map display with superposed information |
US20110032432A1 (en) * | 2009-08-05 | 2011-02-10 | Lee Jang-Won | Digital image signal processing method, medium for recording the method, and digital image signal processing apparatus |
US20110032408A1 (en) * | 2005-09-29 | 2011-02-10 | Canon Kabushiki Kaisha | Image display apparatus and image display method |
US7895275B1 (en) | 2006-09-28 | 2011-02-22 | Qurio Holdings, Inc. | System and method providing quality based peer review and distribution of digital content |
US20110045871A1 (en) * | 2006-09-01 | 2011-02-24 | Research In Motion Limited | Method and apparatus for controlling a display in an electronic device |
US20110069179A1 (en) * | 2009-09-24 | 2011-03-24 | Microsoft Corporation | Network coordinated event capture and image storage |
US20110078766A1 (en) * | 2006-05-16 | 2011-03-31 | Yahoo! Inc | System and method for bookmarking and tagging a content item |
US20110077852A1 (en) * | 2009-09-25 | 2011-03-31 | Mythreyi Ragavan | User-defined marked locations for use in conjunction with a personal navigation device |
US7920169B2 (en) | 2005-01-31 | 2011-04-05 | Invention Science Fund I, Llc | Proximity of shared image devices |
US20110081952A1 (en) * | 2009-10-01 | 2011-04-07 | Song Yoo-Mee | Mobile terminal and tag editing method thereof |
US7922086B2 (en) | 2004-09-30 | 2011-04-12 | The Invention Science Fund I, Llc | Obtaining user assistance |
US20110096175A1 (en) * | 2009-10-27 | 2011-04-28 | Hon Hai Precision Industry Co., Ltd. | Method for adjusting a focal length of a camera |
US20110131497A1 (en) * | 2009-12-02 | 2011-06-02 | T-Mobile Usa, Inc. | Image-Derived User Interface Enhancements |
WO2011084092A1 (en) * | 2010-01-08 | 2011-07-14 | Telefonaktiebolaget L M Ericsson (Publ) | A method and apparatus for social tagging of media files |
US20110205397A1 (en) * | 2010-02-24 | 2011-08-25 | John Christopher Hahn | Portable imaging device having display with improved visibility under adverse conditions |
US20110216179A1 (en) * | 2010-02-24 | 2011-09-08 | Orang Dialameh | Augmented Reality Panorama Supporting Visually Impaired Individuals |
US20110228074A1 (en) * | 2010-03-22 | 2011-09-22 | Parulski Kenneth A | Underwater camera with pressure sensor |
US20110228075A1 (en) * | 2010-03-22 | 2011-09-22 | Madden Thomas E | Digital camera with underwater capture mode |
WO2011136993A1 (en) * | 2010-04-29 | 2011-11-03 | Eastman Kodak Company | Indoor/outdoor scene detection using gps |
US8090159B2 (en) | 2006-11-14 | 2012-01-03 | TrackThings LLC | Apparatus and method for identifying a name corresponding to a face or voice using a database |
US20120051668A1 (en) * | 2010-09-01 | 2012-03-01 | Apple Inc. | Consolidating Information Relating to Duplicate Images |
EP2432209A1 (en) * | 2010-09-15 | 2012-03-21 | Samsung Electronics Co., Ltd. | Apparatus and method for managing image data and metadata |
US8144944B2 (en) | 2007-08-14 | 2012-03-27 | Olympus Corporation | Image sharing system and method |
US20120154617A1 (en) * | 2010-12-20 | 2012-06-21 | Samsung Electronics Co., Ltd. | Image capturing device |
US20120188402A1 (en) * | 2008-08-18 | 2012-07-26 | Apple Inc. | Apparatus and method for compensating for variations in digital cameras |
US20120194699A1 (en) * | 2005-03-25 | 2012-08-02 | Nikon Corporation | Illumination device, imaging device, and imaging system |
WO2012125383A1 (en) * | 2011-03-17 | 2012-09-20 | Eastman Kodak Company | Digital camera user interface which adapts to environmental conditions |
CN102714558A (en) * | 2009-10-30 | 2012-10-03 | 三星电子株式会社 | Image providing system and method |
US20120257065A1 (en) * | 2011-04-08 | 2012-10-11 | Qualcomm Incorporated | Systems and methods to calibrate a multi camera device |
US20120266066A1 (en) * | 2011-04-18 | 2012-10-18 | Ting-Yee Liao | Image display device providing subject-dependent feedback |
US20120294539A1 (en) * | 2010-01-29 | 2012-11-22 | Kiwiple Co., Ltd. | Object identification system and method of identifying an object using the same |
US20120300069A1 (en) * | 2006-09-27 | 2012-11-29 | Sony Corporation | Imaging apparatus and imaging method |
US8341219B1 (en) * | 2006-03-07 | 2012-12-25 | Adobe Systems Incorporated | Sharing data based on tagging |
US8340453B1 (en) * | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8350946B2 (en) | 2005-01-31 | 2013-01-08 | The Invention Science Fund I, Llc | Viewfinder for shared image device |
US20130018579A1 (en) * | 2007-08-23 | 2013-01-17 | International Business Machines Corporation | Pictorial navigation |
US8368773B1 (en) | 2008-08-29 | 2013-02-05 | Adobe Systems Incorporated | Metadata-driven method and apparatus for automatically aligning distorted images |
US20130050394A1 (en) * | 2011-08-23 | 2013-02-28 | Samsung Electronics Co. Ltd. | Apparatus and method for providing panoramic view during video telephony and video messaging |
US20130055088A1 (en) * | 2011-08-29 | 2013-02-28 | Ting-Yee Liao | Display device providing feedback based on image classification |
US8391640B1 (en) | 2008-08-29 | 2013-03-05 | Adobe Systems Incorporated | Method and apparatus for aligning and unwarping distorted images |
WO2013036181A1 (en) * | 2011-09-08 | 2013-03-14 | Telefonaktiebolaget L M Ericsson (Publ) | Assigning tags to media files |
CN103119926A (en) * | 2010-09-22 | 2013-05-22 | Nec卡西欧移动通信株式会社 | Image pick-up device, image transfer method and program |
US20130129254A1 (en) * | 2011-11-17 | 2013-05-23 | Thermoteknix Systems Limited | Apparatus for projecting secondary information into an optical system |
EP2466584A3 (en) * | 2010-12-15 | 2013-06-26 | Canon Kabushiki Kaisha | Collaborative image capture |
US20130162869A1 (en) * | 2003-08-05 | 2013-06-27 | DigitalOptics Corporation Europe Limited | Detecting Red Eye Filter and Apparatus Using Meta-Data |
US20130194422A1 (en) * | 2011-04-18 | 2013-08-01 | Guangzhou Jinghua Optical & Electronics Co., Ltd. | 360-Degree Automatic Tracking Hunting Camera And Operating Method Thereof |
US20130195375A1 (en) * | 2008-08-28 | 2013-08-01 | Microsoft Corporation | Tagging images with labels |
US20130201297A1 (en) * | 2012-02-07 | 2013-08-08 | Alcatel-Lucent Usa Inc. | Lensless compressive image acquisition |
US20130201343A1 (en) * | 2012-02-07 | 2013-08-08 | Hong Jiang | Lenseless compressive image acquisition |
US8606383B2 (en) | 2005-01-31 | 2013-12-10 | The Invention Science Fund I, Llc | Audio sharing |
US8615778B1 (en) | 2006-09-28 | 2013-12-24 | Qurio Holdings, Inc. | Personalized broadcast system |
US20130346068A1 (en) * | 2012-06-25 | 2013-12-26 | Apple Inc. | Voice-Based Image Tagging and Searching |
US8644702B1 (en) | 2005-12-28 | 2014-02-04 | Xi Processing L.L.C. | Computer-implemented system and method for notifying users upon the occurrence of an event |
EP2704420A1 (en) * | 2012-08-28 | 2014-03-05 | Google Inc. | System and method for capturing videos with a mobile device |
US8681225B2 (en) | 2005-06-02 | 2014-03-25 | Royce A. Levien | Storage access technique for captured data |
WO2014064690A1 (en) * | 2012-10-23 | 2014-05-01 | Sivan Ishay | Real time assessment of picture quality |
US8724007B2 (en) | 2008-08-29 | 2014-05-13 | Adobe Systems Incorporated | Metadata-driven method and apparatus for multi-image processing |
US20140172906A1 (en) * | 2012-12-19 | 2014-06-19 | Shivani A. Sud | Time-shifting image service |
US20140247368A1 (en) * | 2013-03-04 | 2014-09-04 | Colby Labs, Llc | Ready click camera control |
US20140258827A1 (en) * | 2013-03-07 | 2014-09-11 | Ricoh Co., Ltd. | Form Filling Based on Classification and Identification of Multimedia Data |
US20140282087A1 (en) * | 2013-03-12 | 2014-09-18 | Peter Cioni | System and Methods for Facilitating the Development and Management of Creative Assets |
US8842190B2 (en) | 2008-08-29 | 2014-09-23 | Adobe Systems Incorporated | Method and apparatus for determining sensor format factors from image metadata |
US8896708B2 (en) * | 2008-10-31 | 2014-11-25 | Adobe Systems Incorporated | Systems and methods for determining, storing, and using metadata for video media content |
US8902320B2 (en) | 2005-01-31 | 2014-12-02 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US20140354840A1 (en) * | 2006-02-16 | 2014-12-04 | Canon Kabushiki Kaisha | Image transmission apparatus, image transmission method, program, and storage medium |
US20150042843A1 (en) * | 2013-08-09 | 2015-02-12 | Broadcom Corporation | Systems and methods for improving images |
EP2698980A3 (en) * | 2012-08-17 | 2015-02-25 | Samsung Electronics Co., Ltd. | Camera device and methods for aiding users in use thereof |
US20150061932A1 (en) * | 2008-01-28 | 2015-03-05 | Blackberry Limited | Gps pre-acquisition for geotagging digital photos |
US20150085145A1 (en) * | 2013-09-20 | 2015-03-26 | Nvidia Corporation | Multiple image capture and processing |
US20150085159A1 (en) * | 2013-09-20 | 2015-03-26 | Nvidia Corporation | Multiple image capture and processing |
US8994852B2 (en) | 2007-08-23 | 2015-03-31 | Sony Corporation | Image-capturing apparatus and image-capturing method |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US20150186672A1 (en) * | 2013-12-31 | 2015-07-02 | Yahoo! Inc. | Photo privacy |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US20150207994A1 (en) * | 2014-01-17 | 2015-07-23 | Htc Corporation | Controlling method for electronic apparatus with one switch button |
US9093121B2 (en) | 2006-02-28 | 2015-07-28 | The Invention Science Fund I, Llc | Data management of an audio data stream |
US9124729B2 (en) | 2005-01-31 | 2015-09-01 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US20150247739A1 (en) * | 2008-03-06 | 2015-09-03 | Texas Instruments Incorporated | Processes for more accurately calibrating and operating e-compass for tilt error, circuits, and systems |
US20150339906A1 (en) * | 2014-05-23 | 2015-11-26 | Avermedia Technologies, Inc. | Monitoring system, apparatus and method thereof |
US9208548B1 (en) * | 2013-05-06 | 2015-12-08 | Amazon Technologies, Inc. | Automatic image enhancement |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US20160105617A1 (en) * | 2014-07-07 | 2016-04-14 | Google Inc. | Method and System for Performing Client-Side Zooming of a Remote Video Feed |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US9319578B2 (en) | 2012-10-24 | 2016-04-19 | Alcatel Lucent | Resolution and focus enhancement |
US9325781B2 (en) | 2005-01-31 | 2016-04-26 | Invention Science Fund I, Llc | Audio sharing |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9344736B2 (en) | 2010-09-30 | 2016-05-17 | Alcatel Lucent | Systems and methods for compressive sense imaging |
US20160148249A1 (en) * | 2014-11-26 | 2016-05-26 | Adobe Systems Incorporated | Content Creation, Deployment Collaboration, and Tracking Exposure |
US20160227108A1 (en) * | 2015-02-02 | 2016-08-04 | Olympus Corporation | Imaging apparatus |
US9412007B2 (en) | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US9413948B2 (en) * | 2014-04-11 | 2016-08-09 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Systems and methods for recommending image capture settings based on a geographic location |
US20160231890A1 (en) * | 2011-07-26 | 2016-08-11 | Sony Corporation | Information processing apparatus and phrase output method for determining phrases based on an image |
US9451200B2 (en) | 2005-06-02 | 2016-09-20 | Invention Science Fund I, Llc | Storage access technique for captured data |
US20160286132A1 (en) * | 2015-03-24 | 2016-09-29 | Samsung Electronics Co., Ltd. | Electronic device and method for photographing |
US9471668B1 (en) * | 2016-01-21 | 2016-10-18 | International Business Machines Corporation | Question-answering system |
US20160309054A1 (en) * | 2015-04-14 | 2016-10-20 | Apple Inc. | Asynchronously Requesting Information From A Camera Device |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US20160323483A1 (en) * | 2015-04-28 | 2016-11-03 | Invent.ly LLC | Automatically generating notes and annotating multimedia content specific to a video production |
US9489717B2 (en) | 2005-01-31 | 2016-11-08 | Invention Science Fund I, Llc | Shared image device |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9544498B2 (en) | 2010-09-20 | 2017-01-10 | Mobile Imaging In Sweden Ab | Method for forming images |
US20170019591A1 (en) * | 2014-03-28 | 2017-01-19 | Fujitsu Limited | Photographing assisting system, photographing apparatus, information processing apparatus and photographing assisting method |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
WO2017069670A1 (en) * | 2015-10-23 | 2017-04-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Providing camera settings from at least one image/video hosting service |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US20170140219A1 (en) * | 2004-04-12 | 2017-05-18 | Google Inc. | Adding Value to a Rendered Document |
US20170147549A1 (en) * | 2014-02-24 | 2017-05-25 | Invent.ly LLC | Automatically generating notes and classifying multimedia content specific to a video production |
US9667859B1 (en) * | 2015-12-28 | 2017-05-30 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US20170187910A1 (en) * | 2015-12-28 | 2017-06-29 | Amasing Apps USA LLC | Method, apparatus, and computer-readable medium for embedding options in an image prior to storage |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US20170255654A1 (en) * | 2011-04-18 | 2017-09-07 | Monument Peak Ventures, Llc | Image display device providing individualized feedback |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9792012B2 (en) | 2009-10-01 | 2017-10-17 | Mobile Imaging In Sweden Ab | Method relating to digital images |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9836054B1 (en) | 2016-02-16 | 2017-12-05 | Gopro, Inc. | Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
EP3172619A4 (en) * | 2014-07-23 | 2018-01-24 | eBay Inc. | Use of camera metadata for recommendations |
US9886161B2 (en) | 2014-07-07 | 2018-02-06 | Google Llc | Method and system for motion vector-based video monitoring and event categorization |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9892760B1 (en) | 2015-10-22 | 2018-02-13 | Gopro, Inc. | Apparatus and methods for embedding metadata into video stream |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9910341B2 (en) | 2005-01-31 | 2018-03-06 | The Invention Science Fund I, Llc | Shared image device designation |
US9911237B1 (en) * | 2016-03-17 | 2018-03-06 | A9.Com, Inc. | Image processing techniques for self-captured images |
US9922387B1 (en) | 2016-01-19 | 2018-03-20 | Gopro, Inc. | Storage of metadata and images |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9942511B2 (en) | 2005-10-31 | 2018-04-10 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
CN107948460A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method and device, computer equipment, computer-readable recording medium |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9967457B1 (en) | 2016-01-22 | 2018-05-08 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9973647B2 (en) | 2016-06-17 | 2018-05-15 | Microsoft Technology Licensing, Llc. | Suggesting image files for deletion based on image file parameters |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9973792B1 (en) | 2016-10-27 | 2018-05-15 | Gopro, Inc. | Systems and methods for presenting visual information during presentation of a video segment |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127783B2 (en) | 2014-07-07 | 2018-11-13 | Google Llc | Method and device for processing motion events |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10140827B2 (en) | 2014-07-07 | 2018-11-27 | Google Llc | Method and system for processing motion event notifications |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10187607B1 (en) | 2017-04-04 | 2019-01-22 | Gopro, Inc. | Systems and methods for using a variable capture frame rate for video capture |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192415B2 (en) | 2016-07-11 | 2019-01-29 | Google Llc | Methods and systems for providing intelligent alerts for events |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10277834B2 (en) * | 2017-01-10 | 2019-04-30 | International Business Machines Corporation | Suggestion of visual effects based on detected sound patterns |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10282562B1 (en) | 2015-02-24 | 2019-05-07 | ImageKeeper LLC | Secure digital data collection |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10325082B2 (en) * | 2016-02-03 | 2019-06-18 | Ricoh Company, Ltd. | Information processing apparatus, information processing system, authentication method, and recording medium |
US10339474B2 (en) | 2014-05-06 | 2019-07-02 | Modern Geographia, Llc | Real-time carpooling coordinating system and methods |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10372750B2 (en) * | 2016-02-12 | 2019-08-06 | Canon Kabushiki Kaisha | Information processing apparatus, method, program and storage medium |
US10380429B2 (en) | 2016-07-11 | 2019-08-13 | Google Llc | Methods and systems for person detection in a video feed |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10432728B2 (en) * | 2017-05-17 | 2019-10-01 | Google Llc | Automatic image sharing with designated users over a communication network |
US10432874B2 (en) | 2016-11-01 | 2019-10-01 | Snap Inc. | Systems and methods for fast video capture and sensor adjustment |
WO2019193582A1 (en) * | 2018-04-05 | 2019-10-10 | SurveyMe Limited | Methods and systems for gathering and display of responses to surveys and providing and redeeming rewards |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10445799B2 (en) | 2004-09-30 | 2019-10-15 | Uber Technologies, Inc. | Supply-chain side assistance |
US10458801B2 (en) | 2014-05-06 | 2019-10-29 | Uber Technologies, Inc. | Systems and methods for travel planning that calls for at least one transportation vehicle unit |
US10476827B2 (en) | 2015-09-28 | 2019-11-12 | Google Llc | Sharing images and image albums over a communication network |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10542161B2 (en) * | 2018-02-26 | 2020-01-21 | Kyocera Corporation | Electronic device, control method, and recording medium |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US20200125837A1 (en) * | 2005-10-26 | 2020-04-23 | Cortica Ltd. | System and method for generating a facial representation |
US10657468B2 (en) | 2014-05-06 | 2020-05-19 | Uber Technologies, Inc. | System and methods for verifying that one or more directives that direct transport of a second end user does not conflict with one or more obligations to transport a first end user |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10664688B2 (en) | 2017-09-20 | 2020-05-26 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10681199B2 (en) | 2006-03-24 | 2020-06-09 | Uber Technologies, Inc. | Wireless device with an aggregate user interface for controlling other devices |
US10685257B2 (en) | 2017-05-30 | 2020-06-16 | Google Llc | Systems and methods of person recognition in video streams |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
USD893508S1 (en) | 2014-10-07 | 2020-08-18 | Google Llc | Display screen or portion thereof with graphical user interface |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10762795B2 (en) * | 2016-02-08 | 2020-09-01 | Skydio, Inc. | Unmanned aerial vehicle privacy controls |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US20200380976A1 (en) * | 2018-01-26 | 2020-12-03 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
US10957171B2 (en) | 2016-07-11 | 2021-03-23 | Google Llc | Methods and systems for providing event alerts |
US20210104240A1 (en) * | 2018-09-27 | 2021-04-08 | Panasonic Intellectual Property Management Co., Ltd. | Description support device and description support method |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11016938B2 (en) | 2010-09-01 | 2021-05-25 | Apple Inc. | Consolidating information relating to duplicate images |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US20210182333A1 (en) * | 2013-09-05 | 2021-06-17 | Ebay Inc. | Correlating image annotations with foreground features |
US11076123B2 (en) * | 2018-12-07 | 2021-07-27 | Renesas Electronics Corporation | Photographing control device, photographing system and photographing control method |
US11082701B2 (en) | 2016-05-27 | 2021-08-03 | Google Llc | Methods and devices for dynamic adaptation of encoding bitrate for video streaming |
US11100434B2 (en) | 2014-05-06 | 2021-08-24 | Uber Technologies, Inc. | Real-time carpooling coordinating system and methods |
US20210350785A1 (en) * | 2014-11-11 | 2021-11-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Systems and methods for selecting a voice to use during a communication with a user |
US20210374326A1 (en) * | 2020-02-14 | 2021-12-02 | Capital One Services, Llc | System and Method for Establishing an Interactive Communication Session |
US11212416B2 (en) | 2018-07-06 | 2021-12-28 | ImageKeeper LLC | Secure digital media capture and analysis |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11242143B2 (en) | 2016-06-13 | 2022-02-08 | Skydio, Inc. | Unmanned aerial vehicle beyond visual line of sight control |
US11250679B2 (en) | 2014-07-07 | 2022-02-15 | Google Llc | Systems and methods for categorizing motion events |
US11336793B2 (en) * | 2020-03-10 | 2022-05-17 | Seiko Epson Corporation | Scanning system for generating scan data for vocal output, non-transitory computer-readable storage medium storing program for generating scan data for vocal output, and method for generating scan data for vocal output in scanning system |
US11356643B2 (en) | 2017-09-20 | 2022-06-07 | Google Llc | Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment |
US11468198B2 (en) | 2020-04-01 | 2022-10-11 | ImageKeeper LLC | Secure digital media authentication and analysis |
US11481854B1 (en) | 2015-02-23 | 2022-10-25 | ImageKeeper LLC | Property measurement with automated document production |
US11553105B2 (en) | 2020-08-31 | 2023-01-10 | ImageKeeper, LLC | Secure document certification and execution system |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11599259B2 (en) | 2015-06-14 | 2023-03-07 | Google Llc | Methods and systems for presenting alert event indicators |
US11609481B2 (en) * | 2019-10-24 | 2023-03-21 | Canon Kabushiki Kaisha | Imaging apparatus that, during an operation of a shutter button, updates a specified position based on a line-of-sight position at a timing desired by a user, and method for controlling the same |
US11645349B2 (en) * | 2020-09-16 | 2023-05-09 | Adobe Inc. | Generating location based photo discovery suggestions |
US11783010B2 (en) | 2017-05-30 | 2023-10-10 | Google Llc | Systems and methods of person recognition in video streams |
US11893795B2 (en) | 2019-12-09 | 2024-02-06 | Google Llc | Interacting with visitors of a connected home environment |
CN117714846A (en) * | 2023-07-12 | 2024-03-15 | 荣耀终端有限公司 | Control method and device for camera |
Families Citing this family (90)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8150617B2 (en) * | 2004-10-25 | 2012-04-03 | A9.Com, Inc. | System and method for displaying location-specific images on a mobile device |
WO2007038766A2 (en) * | 2005-09-28 | 2007-04-05 | Ontela, Inc. | System for secure data transfer between electronic devices with a wide range of capabilities over multiple communications media |
US9049243B2 (en) * | 2005-09-28 | 2015-06-02 | Photobucket Corporation | System and method for allowing a user to opt for automatic or selectively sending of media |
US9009265B2 (en) * | 2005-09-28 | 2015-04-14 | Photobucket Corporation | System and method for automatic transfer of data from one device to another |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
JP4645498B2 (en) * | 2006-03-27 | 2011-03-09 | ソニー株式会社 | Information processing apparatus and method, and program |
US8135684B2 (en) * | 2006-04-13 | 2012-03-13 | Eastman Kodak Company | Value index from incomplete data |
US8982181B2 (en) * | 2006-06-13 | 2015-03-17 | Newbery Revocable Trust Indenture | Digital stereo photographic system |
US9424270B1 (en) | 2006-09-28 | 2016-08-23 | Photobucket Corporation | System and method for managing media files |
EP1965344B1 (en) * | 2007-02-27 | 2017-06-28 | Accenture Global Services Limited | Remote object recognition |
GB2447672B (en) | 2007-03-21 | 2011-12-14 | Ford Global Tech Llc | Vehicle manoeuvring aids |
US20080303922A1 (en) * | 2007-06-08 | 2008-12-11 | Imran Chaudhri | Image capture |
US8253806B2 (en) * | 2007-12-17 | 2012-08-28 | Canon Kabushiki Kaisha | Image sharing system, image managing server, and control method and program thereof |
US10462409B2 (en) * | 2007-12-28 | 2019-10-29 | Google Technology Holdings LLC | Method for collecting media associated with a mobile device |
US8314838B2 (en) * | 2007-12-28 | 2012-11-20 | Motorola Mobility Llc | System and method for collecting media associated with a mobile device |
JP2009199586A (en) * | 2008-01-23 | 2009-09-03 | Canon Inc | Information processing apparatus and control method thereof |
US8600118B2 (en) | 2009-06-30 | 2013-12-03 | Non Typical, Inc. | System for predicting game animal movement and managing game animal images |
JP2011034415A (en) * | 2009-08-03 | 2011-02-17 | Masahide Tanaka | Monitoring apparatus |
KR101626005B1 (en) * | 2009-10-13 | 2016-06-13 | 삼성전자주식회사 | A digital image processing apparatus for recognizing fireworks, method thereof, and computer-readable storage medium thereof |
US20110145258A1 (en) * | 2009-12-11 | 2011-06-16 | Nokia Corporation | Method and apparatus for tagging media items |
US20110314401A1 (en) | 2010-06-22 | 2011-12-22 | Thermoteknix Systems Ltd. | User-Profile Systems and Methods for Imaging Devices and Imaging Devices Incorporating Same |
US9908050B2 (en) * | 2010-07-28 | 2018-03-06 | Disney Enterprises, Inc. | System and method for image recognized content creation |
KR101692399B1 (en) * | 2010-10-14 | 2017-01-03 | 삼성전자주식회사 | Digital image processing apparatus and digital image processing method |
KR101641513B1 (en) * | 2010-10-22 | 2016-07-21 | 엘지전자 주식회사 | Image photographing apparatus of mobile terminal and method thereof |
US9484046B2 (en) * | 2010-11-04 | 2016-11-01 | Digimarc Corporation | Smartphone-based methods and systems |
US8908013B2 (en) * | 2011-01-20 | 2014-12-09 | Canon Kabushiki Kaisha | Systems and methods for collaborative image capturing |
AU2011200696B2 (en) * | 2011-02-17 | 2014-03-06 | Canon Kabushiki Kaisha | Method, apparatus and system for rating images |
US9723274B2 (en) * | 2011-04-19 | 2017-08-01 | Ford Global Technologies, Llc | System and method for adjusting an image capture setting |
US9926008B2 (en) | 2011-04-19 | 2018-03-27 | Ford Global Technologies, Llc | Trailer backup assist system with waypoint selection |
US9374562B2 (en) | 2011-04-19 | 2016-06-21 | Ford Global Technologies, Llc | System and method for calculating a horizontal camera to target distance |
US10196088B2 (en) | 2011-04-19 | 2019-02-05 | Ford Global Technologies, Llc | Target monitoring system and method |
US9555832B2 (en) | 2011-04-19 | 2017-01-31 | Ford Global Technologies, Llc | Display system utilizing vehicle and trailer dynamics |
US9854209B2 (en) | 2011-04-19 | 2017-12-26 | Ford Global Technologies, Llc | Display system utilizing vehicle and trailer dynamics |
US9683848B2 (en) | 2011-04-19 | 2017-06-20 | Ford Global Technologies, Llc | System for determining hitch angle |
US8935259B2 (en) | 2011-06-20 | 2015-01-13 | Google Inc | Text suggestions for images |
US20150077512A1 (en) * | 2011-06-30 | 2015-03-19 | Google Inc. | Substantially Real-Time Feedback in Mobile Imaging Operations |
JP5921101B2 (en) * | 2011-07-08 | 2016-05-24 | キヤノン株式会社 | Information processing apparatus, control method, and program |
US8659667B2 (en) * | 2011-08-29 | 2014-02-25 | Panasonic Corporation | Recipe based real-time assistance for digital image capture and other consumer electronics devices |
US9208392B2 (en) | 2011-09-20 | 2015-12-08 | Qualcomm Incorporated | Methods and apparatus for progressive pattern matching in a mobile environment |
US20130093899A1 (en) * | 2011-10-18 | 2013-04-18 | Nokia Corporation | Method and apparatus for media content extraction |
JP2013105346A (en) * | 2011-11-14 | 2013-05-30 | Sony Corp | Information presentation device, information presentation method, information presentation system, information registration device, information registration method, information registration system, and program |
WO2013085512A1 (en) * | 2011-12-07 | 2013-06-13 | Intel Corporation | Guided image capture |
KR101812585B1 (en) * | 2012-01-02 | 2017-12-27 | 삼성전자주식회사 | Method for providing User Interface and image photographing apparatus thereof |
JP5890692B2 (en) * | 2012-01-13 | 2016-03-22 | キヤノン株式会社 | Imaging apparatus, control method, and program |
US8965045B2 (en) * | 2012-02-22 | 2015-02-24 | Nokia Corporation | Image capture |
JP5279930B1 (en) * | 2012-03-27 | 2013-09-04 | 株式会社東芝 | Server, electronic device, server control method, server control program |
US8661004B2 (en) * | 2012-05-21 | 2014-02-25 | International Business Machines Corporation | Representing incomplete and uncertain information in graph data |
US9179056B2 (en) * | 2012-07-16 | 2015-11-03 | Htc Corporation | Image capturing systems with context control and related methods |
KR20140075997A (en) * | 2012-12-12 | 2014-06-20 | 엘지전자 주식회사 | Mobile terminal and method for controlling of the same |
US9154709B2 (en) | 2012-12-21 | 2015-10-06 | Google Inc. | Recommending transformations for photography |
US10564815B2 (en) | 2013-04-12 | 2020-02-18 | Nant Holdings Ip, Llc | Virtual teller systems and methods |
US20140347500A1 (en) * | 2013-05-22 | 2014-11-27 | Synchronoss Technologies, Inc. | Apparatus and method of document tagging by pattern matching |
US9426357B1 (en) * | 2013-08-28 | 2016-08-23 | Ambarella, Inc. | System and/or method to reduce a time to a target image capture in a camera |
US9464886B2 (en) | 2013-11-21 | 2016-10-11 | Ford Global Technologies, Llc | Luminescent hitch angle detection component |
US9464887B2 (en) | 2013-11-21 | 2016-10-11 | Ford Global Technologies, Llc | Illuminated hitch angle detection component |
US20150286873A1 (en) * | 2014-04-03 | 2015-10-08 | Bruce L. Davis | Smartphone-based methods and systems |
US9311639B2 (en) | 2014-02-11 | 2016-04-12 | Digimarc Corporation | Methods, apparatus and arrangements for device to device communication |
US9296421B2 (en) | 2014-03-06 | 2016-03-29 | Ford Global Technologies, Llc | Vehicle target identification using human gesture recognition |
CN106030614A (en) | 2014-04-22 | 2016-10-12 | 史內普艾德有限公司 | System and method for controlling a camera based on processing an image captured by other camera |
US9667824B2 (en) | 2014-05-27 | 2017-05-30 | Tribune Broadcasting Company, Llc | Use of location lulls to facilitate identifying and recording video capture location |
US9648230B2 (en) | 2014-05-27 | 2017-05-09 | Tribune Broadcasting Company, Llc | Use of wireless connection loss to facilitate identifying and recording video capture location |
US10049477B1 (en) | 2014-06-27 | 2018-08-14 | Google Llc | Computer-assisted text and visual styling for images |
CN104092946B (en) * | 2014-07-24 | 2018-09-04 | 北京智谷睿拓技术服务有限公司 | Image-pickup method and image collecting device |
US10112537B2 (en) | 2014-09-03 | 2018-10-30 | Ford Global Technologies, Llc | Trailer angle detection target fade warning |
CN105574006A (en) * | 2014-10-10 | 2016-05-11 | 阿里巴巴集团控股有限公司 | Method and device for establishing photographing template database and providing photographing recommendation information |
US9607242B2 (en) | 2015-01-16 | 2017-03-28 | Ford Global Technologies, Llc | Target monitoring system with lens cleaning device |
US9967408B2 (en) * | 2015-03-26 | 2018-05-08 | Canon Kabushiki Kaisha | Information setting apparatus, information management apparatus, information generation apparatus, and method and program for controlling the same |
EP3289430B1 (en) | 2015-04-27 | 2019-10-23 | Snap-Aid Patents Ltd. | Estimating and using relative head pose and camera field-of-view |
US10582125B1 (en) * | 2015-06-01 | 2020-03-03 | Amazon Technologies, Inc. | Panoramic image generation from video |
WO2016207875A1 (en) | 2015-06-22 | 2016-12-29 | Photomyne Ltd. | System and method for detecting objects in an image |
US9769367B2 (en) | 2015-08-07 | 2017-09-19 | Google Inc. | Speech and computer vision-based control |
JP6713185B2 (en) * | 2015-10-15 | 2020-06-24 | 株式会社日立ハイテク | Inspection apparatus and inspection method using template matching |
US9836060B2 (en) | 2015-10-28 | 2017-12-05 | Ford Global Technologies, Llc | Trailer backup assist system with target management |
US10225511B1 (en) | 2015-12-30 | 2019-03-05 | Google Llc | Low power framework for controlling image sensor mode in a mobile image capture device |
US9838641B1 (en) | 2015-12-30 | 2017-12-05 | Google Llc | Low power framework for processing, compressing, and transmitting images at a mobile image capture device |
US9836819B1 (en) | 2015-12-30 | 2017-12-05 | Google Llc | Systems and methods for selective retention and editing of images captured by mobile image capture device |
US10732809B2 (en) | 2015-12-30 | 2020-08-04 | Google Llc | Systems and methods for selective retention and editing of images captured by mobile image capture device |
EP3405903A4 (en) | 2016-01-19 | 2019-07-10 | Regwez, Inc. | Masking restrictive access control system |
US10380993B1 (en) | 2016-01-22 | 2019-08-13 | United Services Automobile Association (Usaa) | Voice commands for the visually impaired to move a camera relative to a document |
FR3063558A1 (en) * | 2017-03-02 | 2018-09-07 | Stmicroelectronics (Rousset) Sas | METHOD FOR CONTROLLING THE REAL-TIME DETECTION OF A SCENE BY A WIRELESS COMMUNICATION APPARATUS AND APPARATUS THEREFOR |
JP6748292B2 (en) * | 2017-03-31 | 2020-08-26 | 本田技研工業株式会社 | Album generating apparatus, album generating system, and album generating method |
US10710585B2 (en) | 2017-09-01 | 2020-07-14 | Ford Global Technologies, Llc | Trailer backup assist system with predictive hitch angle functionality |
US10425578B1 (en) * | 2017-09-07 | 2019-09-24 | Amazon Technologies, Inc. | Image capturing assistant |
US10425593B2 (en) * | 2017-10-19 | 2019-09-24 | Paypal, Inc. | Digital image filtering and post-capture processing using user specific data |
WO2019205069A1 (en) * | 2018-04-27 | 2019-10-31 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for updating 3d model of building |
US10897442B2 (en) | 2018-05-18 | 2021-01-19 | International Business Machines Corporation | Social media integration for events |
JP6631746B1 (en) * | 2018-09-28 | 2020-01-15 | ダイキン工業株式会社 | Cluster classification device, environment generation device, and environment generation system |
US10785406B2 (en) * | 2019-02-01 | 2020-09-22 | Qualcomm Incorporated | Photography assistance for mobile devices |
US10950076B1 (en) * | 2020-02-29 | 2021-03-16 | Hall Labs Llc | Garage access unit |
US11877050B2 (en) * | 2022-01-20 | 2024-01-16 | Qualcomm Incorporated | User interface for image capture |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4346697B2 (en) | 1996-05-10 | 2009-10-21 | キヤノン株式会社 | Imaging device |
US6784924B2 (en) | 1997-02-20 | 2004-08-31 | Eastman Kodak Company | Network configuration file for automatically transmitting images from an electronic still camera |
JP3535693B2 (en) | 1997-04-30 | 2004-06-07 | キヤノン株式会社 | Portable electronic device, image processing method, imaging device, and computer-readable recording medium |
US7158172B2 (en) | 2000-06-20 | 2007-01-02 | Fuji Photo Film Co., Ltd. | Digital camera with an automatic image transmission function |
US20020028679A1 (en) | 2000-09-07 | 2002-03-07 | Eric Edwards | Data transmission based on available wireless bandwidth |
JP4143305B2 (en) * | 2001-01-30 | 2008-09-03 | 日本電気株式会社 | Robot device, verification environment determination method, and verification environment determination program |
JP4092879B2 (en) | 2001-02-05 | 2008-05-28 | 富士フイルム株式会社 | Authentication method for mobile devices |
US7177448B1 (en) * | 2001-04-12 | 2007-02-13 | Ipix Corporation | System and method for selecting and transmitting images of interest to a user |
JP2003023593A (en) | 2001-07-10 | 2003-01-24 | Olympus Optical Co Ltd | Electronic imaging camera |
US6639649B2 (en) * | 2001-08-06 | 2003-10-28 | Eastman Kodak Company | Synchronization of music and images in a camera with audio capabilities |
US7068309B2 (en) | 2001-10-09 | 2006-06-27 | Microsoft Corp. | Image exchange with image annotation |
US20030189643A1 (en) * | 2002-04-04 | 2003-10-09 | Angelica Quintana | Digital camera capable of sending files via online messenger |
US7143114B2 (en) * | 2002-04-18 | 2006-11-28 | Hewlett-Packard Development Company, L.P. | Automatic renaming of files during file management |
US7843495B2 (en) * | 2002-07-10 | 2010-11-30 | Hewlett-Packard Development Company, L.P. | Face recognition in a digital imaging system accessing a database of people |
EP2063627A4 (en) | 2006-08-23 | 2016-02-24 | Nikon Corp | Electronic camera and image transfer method used in electronic camera |
JP2011182381A (en) | 2010-02-08 | 2011-09-15 | Ricoh Co Ltd | Image processing device and image processing method |
KR101761613B1 (en) | 2010-10-04 | 2017-07-26 | 엘지전자 주식회사 | Mobile terminal and Method for transmitting image thereof |
- 2003-12-18: US application 10/740,242 filed; published as US20040174434A1 (status: Abandoned)
- 2009-05-26: US application 12/472,149 filed; issued as US8558921B2 (status: Expired - Fee Related)
- 2013-07-31: US application 13/956,348 filed; published as US20130314566A1 (status: Abandoned)
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4951079A (en) * | 1988-01-28 | 1990-08-21 | Konica Corp. | Voice-recognition camera |
US5266985A (en) * | 1990-07-16 | 1993-11-30 | Nikon Corporation | Camera with optimum composition determinator |
US5831670A (en) * | 1993-03-31 | 1998-11-03 | Nikon Corporation | Camera capable of issuing composition information |
US5541656A (en) * | 1994-07-29 | 1996-07-30 | Logitech, Inc. | Digital camera with separate function and option icons and control switches |
US5687408A (en) * | 1995-07-05 | 1997-11-11 | Samsung Aerospace Industries, Ltd. | Camera and method for displaying picture composition point guides |
US5633678A (en) * | 1995-12-20 | 1997-05-27 | Eastman Kodak Company | Electronic still camera for capturing and categorizing images |
US6128013A (en) * | 1997-10-30 | 2000-10-03 | Eastman Kodak Company | User interface for an image capture device |
US20020030745A1 (en) * | 1997-11-24 | 2002-03-14 | Squilla John R. | Photographic system for enabling interactive communication between a camera and an attraction site |
US6094215A (en) * | 1998-01-06 | 2000-07-25 | Intel Corporation | Method of determining relative camera orientation position to create 3-D visual images |
US20010012062A1 (en) * | 1998-07-23 | 2001-08-09 | Eric C. Anderson | System and method for automatic analysis and categorization of images in an electronic imaging device |
US6608650B1 (en) * | 1998-12-01 | 2003-08-19 | Flashpoint Technology, Inc. | Interactive assistant process for aiding a user in camera setup and operation |
US6408301B1 (en) * | 1999-02-23 | 2002-06-18 | Eastman Kodak Company | Interactive image storage, indexing and retrieval system |
US7317485B1 (en) * | 1999-03-15 | 2008-01-08 | Fujifilm Corporation | Digital still camera with composition advising function, and method of controlling operation of same |
US20010030695A1 (en) * | 1999-06-02 | 2001-10-18 | Prabhu Girish V. | Customizing a digital camera for a plurality of users |
US7019778B1 (en) * | 1999-06-02 | 2006-03-28 | Eastman Kodak Company | Customizing a digital camera |
US6301440B1 (en) * | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
US6629104B1 (en) * | 2000-11-22 | 2003-09-30 | Eastman Kodak Company | Method for adding personalized metadata to a collection of digital images |
Cited By (651)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8648938B2 (en) * | 1997-10-09 | 2014-02-11 | DigitalOptics Corporation Europe Limited | Detecting red eye filter and apparatus using meta-data |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US7460738B1 (en) | 2003-02-27 | 2008-12-02 | At&T Intellectual Property Ii, L.P. | Systems, methods and devices for determining and assigning descriptive filenames to digital images |
US7228010B1 (en) * | 2003-02-27 | 2007-06-05 | At&T Corp. | Systems, methods and devices for determining and assigning descriptive filenames to digital images |
US20040263639A1 (en) * | 2003-06-26 | 2004-12-30 | Vladimir Sadovsky | System and method for intelligent image acquisition |
US20130188077A1 (en) * | 2003-08-05 | 2013-07-25 | DigitalOptics Corporation Europe Limited | Detecting Red Eye Filter and Apparatus Using Meta-Data |
US20130162869A1 (en) * | 2003-08-05 | 2013-06-27 | DigitalOptics Corporation Europe Limited | Detecting Red Eye Filter and Apparatus Using Meta-Data |
US9025054B2 (en) * | 2003-08-05 | 2015-05-05 | Fotonation Limited | Detecting red eye filter and apparatus using meta-data |
US8957993B2 (en) * | 2003-08-05 | 2015-02-17 | FotoNation | Detecting red eye filter and apparatus using meta-data |
US9412007B2 (en) | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US20050030388A1 (en) * | 2003-08-06 | 2005-02-10 | Stavely Donald J. | System and method for improving image capture ability |
US20050122405A1 (en) * | 2003-12-09 | 2005-06-09 | Voss James S. | Digital cameras and methods using GPS/time-based and/or location data to provide scene selection, and dynamic illumination and exposure adjustment |
US20050168623A1 (en) * | 2004-01-30 | 2005-08-04 | Stavely Donald J. | Digital image production method and apparatus |
US8804028B2 (en) * | 2004-01-30 | 2014-08-12 | Hewlett-Packard Development Company, L.P. | Digital image production method and apparatus |
US20110069183A1 (en) * | 2004-02-04 | 2011-03-24 | Sony Corporation | Methods and apparatuses for identifying opportunities to capture content |
US7898572B2 (en) * | 2004-02-04 | 2011-03-01 | Sony Corporation | Methods and apparatuses for identifying opportunities to capture content |
US20050172147A1 (en) * | 2004-02-04 | 2005-08-04 | Eric Edwards | Methods and apparatuses for identifying opportunities to capture content |
US20050174430A1 (en) * | 2004-02-09 | 2005-08-11 | Anderson Michael E. | Method for organizing photographic images in a computer for locating, viewing and purchase |
US20050227792A1 (en) * | 2004-03-18 | 2005-10-13 | Hbl Ltd. | Virtual golf training and gaming system and method |
US9811728B2 (en) * | 2004-04-12 | 2017-11-07 | Google Inc. | Adding value to a rendered document |
US20170140219A1 (en) * | 2004-04-12 | 2017-05-18 | Google Inc. | Adding Value to a Rendered Document |
US7800582B1 (en) * | 2004-04-21 | 2010-09-21 | Weather Central, Inc. | Scene launcher system and method for weather report presentations and the like |
US9356712B2 (en) | 2004-05-14 | 2016-05-31 | Vibes Media Llc | Method and system for displaying data |
US20050254443A1 (en) * | 2004-05-14 | 2005-11-17 | Campbell Alexander G | Method and system for displaying data |
US20050278379A1 (en) * | 2004-06-10 | 2005-12-15 | Canon Kabushiki Kaisha | Image retrieval device and image retrieval method |
US7573418B2 (en) * | 2004-06-22 | 2009-08-11 | Omron Corporation | Tag communication apparatus, control method for tag communication apparatus, computer readable medium for tag communication control and tag communication control system |
US20050280538A1 (en) * | 2004-06-22 | 2005-12-22 | Omron Corporation | Tag communication apparatus, control method for tag communication apparatus, computer readable medium for tag communication control and tag communication control system |
US20070252674A1 (en) * | 2004-06-30 | 2007-11-01 | Joakim Nelson | Face Image Correction |
US8208010B2 (en) * | 2004-06-30 | 2012-06-26 | Sony Ericsson Mobile Communications Ab | Face image correction using multiple camera angles |
US20060080286A1 (en) * | 2004-08-31 | 2006-04-13 | Flashpoint Technology, Inc. | System and method for storing and accessing images based on position data associated therewith |
US20060044398A1 (en) * | 2004-08-31 | 2006-03-02 | Foong Annie P | Digital image classification system |
US20080095434A1 (en) * | 2004-09-07 | 2008-04-24 | Chisato Funayama | Imaging System, Imaging Condition Setting Method, Terminal and Server Used for the Same |
US8094963B2 (en) * | 2004-09-07 | 2012-01-10 | Nec Corporation | Imaging system, imaging condition setting method, terminal and server used for the same |
US7620701B2 (en) * | 2004-09-14 | 2009-11-17 | Canon Kabushiki Kaisha | Image capture device |
US20060059202A1 (en) * | 2004-09-14 | 2006-03-16 | Canon Kabushiki Kaisha | Image capture device |
US8762839B2 (en) | 2004-09-30 | 2014-06-24 | The Invention Science Fund I, Llc | Supply-chain side assistance |
US9038899B2 (en) | 2004-09-30 | 2015-05-26 | The Invention Science Fund I, Llc | Obtaining user assistance |
US9747579B2 (en) | 2004-09-30 | 2017-08-29 | The Invention Science Fund I, Llc | Enhanced user assistance |
US20100146390A1 (en) * | 2004-09-30 | 2010-06-10 | Searete Llc, A Limited Liability Corporation | Obtaining user assistance |
US20100218095A1 (en) * | 2004-09-30 | 2010-08-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Obtaining user assistance |
US20100223162A1 (en) * | 2004-09-30 | 2010-09-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US20060075344A1 (en) * | 2004-09-30 | 2006-04-06 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Providing assistance |
US20070038529A1 (en) * | 2004-09-30 | 2007-02-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US20100223065A1 (en) * | 2004-09-30 | 2010-09-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Supply-chain side assistance |
US10687166B2 (en) | 2004-09-30 | 2020-06-16 | Uber Technologies, Inc. | Obtaining user assistance |
US10872365B2 (en) | 2004-09-30 | 2020-12-22 | Uber Technologies, Inc. | Supply-chain side assistance |
US20060076398A1 (en) * | 2004-09-30 | 2006-04-13 | Searete Llc | Obtaining user assistance |
US8704675B2 (en) | 2004-09-30 | 2014-04-22 | The Invention Science Fund I, Llc | Obtaining user assistance |
US7922086B2 (en) | 2004-09-30 | 2011-04-12 | The Invention Science Fund I, Llc | Obtaining user assistance |
US9098826B2 (en) | 2004-09-30 | 2015-08-04 | The Invention Science Fund I, Llc | Enhanced user assistance |
US20060081695A1 (en) * | 2004-09-30 | 2006-04-20 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware. | Enhanced user assistance |
US20100309011A1 (en) * | 2004-09-30 | 2010-12-09 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Obtaining user assistance |
US20060173816A1 (en) * | 2004-09-30 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced user assistance |
US8282003B2 (en) | 2004-09-30 | 2012-10-09 | The Invention Science Fund I, Llc | Supply-chain side assistance |
US20080229198A1 (en) * | 2004-09-30 | 2008-09-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Electronically providing user assistance |
US10445799B2 (en) | 2004-09-30 | 2019-10-15 | Uber Technologies, Inc. | Supply-chain side assistance |
US7649938B2 (en) * | 2004-10-21 | 2010-01-19 | Cisco Technology, Inc. | Method and apparatus of controlling a plurality of video surveillance cameras |
US20060088092A1 (en) * | 2004-10-21 | 2006-04-27 | Wen-Hsiung Chen | Method and apparatus of controlling a plurality of video surveillance cameras |
US20060090132A1 (en) * | 2004-10-26 | 2006-04-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced user assistance |
US8341522B2 (en) * | 2004-10-27 | 2012-12-25 | The Invention Science Fund I, Llc | Enhanced contextual user assistance |
US20060086781A1 (en) * | 2004-10-27 | 2006-04-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced contextual user assistance |
WO2006049413A1 (en) * | 2004-11-04 | 2006-05-11 | Lg Electronics Inc. | Mobile terminal and operating method thereof |
US7986350B2 (en) | 2004-11-04 | 2011-07-26 | Lg Electronics Inc. | Mobile terminal and operating method thereof |
US20060092294A1 (en) * | 2004-11-04 | 2006-05-04 | Lg Electronics Inc. | Mobile terminal and operating method thereof |
US20060116979A1 (en) * | 2004-12-01 | 2006-06-01 | Jung Edward K | Enhanced user assistance |
US20060117001A1 (en) * | 2004-12-01 | 2006-06-01 | Jung Edward K | Enhanced user assistance |
US10514816B2 (en) | 2004-12-01 | 2019-12-24 | Uber Technologies, Inc. | Enhanced user assistance |
US20070268392A1 (en) * | 2004-12-31 | 2007-11-22 | Joonas Paalasmaa | Provision Of Target Specific Information |
US9451219B2 (en) * | 2004-12-31 | 2016-09-20 | Nokia Technologies Oy | Provision of target specific information |
US9596414B2 (en) * | 2004-12-31 | 2017-03-14 | Nokia Technologies Oy | Provision of target specific information |
US9307577B2 (en) | 2005-01-21 | 2016-04-05 | The Invention Science Fund I, Llc | User assistance |
US20060190428A1 (en) * | 2005-01-21 | 2006-08-24 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | User assistance |
US9489717B2 (en) | 2005-01-31 | 2016-11-08 | Invention Science Fund I, Llc | Shared image device |
US7876357B2 (en) | 2005-01-31 | 2011-01-25 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US20060170956A1 (en) * | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image devices |
US8606383B2 (en) | 2005-01-31 | 2013-12-10 | The Invention Science Fund I, Llc | Audio sharing |
US8350946B2 (en) | 2005-01-31 | 2013-01-08 | The Invention Science Fund I, Llc | Viewfinder for shared image device |
US9325781B2 (en) | 2005-01-31 | 2016-04-26 | Invention Science Fund I, Llc | Audio sharing |
US9910341B2 (en) | 2005-01-31 | 2018-03-06 | The Invention Science Fund I, Llc | Shared image device designation |
US7920169B2 (en) | 2005-01-31 | 2011-04-05 | Invention Science Fund I, Llc | Proximity of shared image devices |
US9124729B2 (en) | 2005-01-31 | 2015-09-01 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US9019383B2 (en) | 2005-01-31 | 2015-04-28 | The Invention Science Fund I, Llc | Shared image devices |
US8902320B2 (en) | 2005-01-31 | 2014-12-02 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US8988537B2 (en) | 2005-01-31 | 2015-03-24 | The Invention Science Fund I, Llc | Shared image devices |
US7643080B2 (en) * | 2005-02-01 | 2010-01-05 | Casio Computer Co., Ltd. | Image pick-up apparatus and computer program for such apparatus |
US20060170807A1 (en) * | 2005-02-01 | 2006-08-03 | Casio Computer Co., Ltd. | Image pick-up apparatus and computer program for such apparatus |
US7693705B1 (en) * | 2005-02-16 | 2010-04-06 | Patrick William Jamieson | Process for improving the quality of documents using semantic analysis |
US8081960B2 (en) * | 2005-02-21 | 2011-12-20 | Samsung Electronics Co., Ltd. | Device and method for processing data resource changing events in a mobile terminal |
US20060190595A1 (en) * | 2005-02-21 | 2006-08-24 | Samsung Electronics Co., Ltd. | Device and method for processing data resource changing events in a mobile terminal |
US20060206817A1 (en) * | 2005-02-28 | 2006-09-14 | Jung Edward K | User assistance for a condition |
US7710452B1 (en) | 2005-03-16 | 2010-05-04 | Eric Lindberg | Remote video monitoring of non-urban outdoor sites |
US7653302B2 (en) * | 2005-03-24 | 2010-01-26 | Syabas Technology Inc. | Techniques for transmitting personal data and metadata among computing devices |
US20060221190A1 (en) * | 2005-03-24 | 2006-10-05 | Lifebits, Inc. | Techniques for transmitting personal data and metadata among computing devices |
US20120194699A1 (en) * | 2005-03-25 | 2012-08-02 | Nikon Corporation | Illumination device, imaging device, and imaging system |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US9819490B2 (en) * | 2005-05-04 | 2017-11-14 | Invention Science Fund I, Llc | Regional proximity for shared image device(s) |
US20100271490A1 (en) * | 2005-05-04 | 2010-10-28 | Assignment For Published Patent Application, Searete LLC, a limited liability corporation of the State of Delaware | Regional proximity for shared image device(s) |
US7782365B2 (en) | 2005-06-02 | 2010-08-24 | Searete Llc | Enhanced video/still image correlation |
US9041826B2 (en) | 2005-06-02 | 2015-05-26 | The Invention Science Fund I, Llc | Capturing selected image objects |
US20090046933A1 (en) * | 2005-06-02 | 2009-02-19 | Gallagher Andrew C | Using photographer identity to classify images |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US20060274949A1 (en) * | 2005-06-02 | 2006-12-07 | Eastman Kodak Company | Using photographer identity to classify images |
US20060274154A1 (en) * | 2005-06-02 | 2006-12-07 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Data storage usage protocol |
US8681225B2 (en) | 2005-06-02 | 2014-03-25 | Royce A. Levien | Storage access technique for captured data |
US7574054B2 (en) | 2005-06-02 | 2009-08-11 | Eastman Kodak Company | Using photographer identity to classify images |
US9967424B2 (en) * | 2005-06-02 | 2018-05-08 | Invention Science Fund I, Llc | Data storage usage protocol |
US9451200B2 (en) | 2005-06-02 | 2016-09-20 | Invention Science Fund I, Llc | Storage access technique for captured data |
US20090254562A1 (en) * | 2005-09-02 | 2009-10-08 | Thomson Licensing | Automatic Metadata Extraction and Metadata Controlled Production Process |
US20110173196A1 (en) * | 2005-09-02 | 2011-07-14 | Thomson Licensing Inc. | Automatic metadata extraction and metadata controlled production process |
US9420231B2 (en) | 2005-09-02 | 2016-08-16 | Gvbb Holdings S.A.R.L. | Automatic metadata extraction and metadata controlled production process |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US20110032408A1 (en) * | 2005-09-29 | 2011-02-10 | Canon Kabushiki Kaisha | Image display apparatus and image display method |
US8022961B2 (en) * | 2005-09-29 | 2011-09-20 | Canon Kabushiki Kaisha | Image display apparatus and image display method |
USRE45451E1 (en) * | 2005-09-29 | 2015-04-07 | Canon Kabushiki Kaisha | Image display apparatus and image display method |
US20200125837A1 (en) * | 2005-10-26 | 2020-04-23 | Cortica Ltd. | System and method for generating a facial representation |
US20070096024A1 (en) * | 2005-10-27 | 2007-05-03 | Hiroaki Furuya | Image-capturing apparatus |
US9942511B2 (en) | 2005-10-31 | 2018-04-10 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US20070102525A1 (en) * | 2005-11-10 | 2007-05-10 | Research In Motion Limited | System and method for activating an electronic device |
US20100009650A1 (en) * | 2005-11-10 | 2010-01-14 | Research In Motion Limited | System and method for activating an electronic device |
US7606552B2 (en) * | 2005-11-10 | 2009-10-20 | Research In Motion Limited | System and method for activating an electronic device |
US8041328B2 (en) * | 2005-11-10 | 2011-10-18 | Research In Motion Limited | System and method for activating an electronic device |
US8787865B2 (en) * | 2005-11-10 | 2014-07-22 | Blackberry Limited | System and method for activating an electronic device |
US8244200B2 (en) * | 2005-11-10 | 2012-08-14 | Research In Motion Limited | System, circuit and method for activating an electronic device |
US20100029242A1 (en) * | 2005-11-10 | 2010-02-04 | Research In Motion Limited | System and method for activating an electronic device |
US20100003944A1 (en) * | 2005-11-10 | 2010-01-07 | Research In Motion Limited | System, circuit and method for activating an electronic device |
US7639943B1 (en) * | 2005-11-15 | 2009-12-29 | Kalajan Kevin E | Computer-implemented system and method for automated image uploading and sharing from camera-enabled mobile devices |
WO2007057373A1 (en) * | 2005-11-16 | 2007-05-24 | Sony Ericsson Mobile Communications Ab | Remote control of an image capturing device |
US20070109417A1 (en) * | 2005-11-16 | 2007-05-17 | Per Hyttfors | Methods, devices and computer program products for remote control of an image capturing device |
US20070124330A1 (en) * | 2005-11-17 | 2007-05-31 | Lydia Glass | Methods of rendering information services and related devices |
US7822746B2 (en) | 2005-11-18 | 2010-10-26 | Qurio Holdings, Inc. | System and method for tagging images based on positional information |
US20110040779A1 (en) * | 2005-11-18 | 2011-02-17 | Qurio Holdings, Inc. | System and method for tagging images based on positional information |
US8359314B2 (en) * | 2005-11-18 | 2013-01-22 | Qurio Holdings, Inc. | System and method for tagging images based on positional information |
US8001124B2 (en) * | 2005-11-18 | 2011-08-16 | Qurio Holdings | System and method for tagging images based on positional information |
US20110314016A1 (en) * | 2005-11-18 | 2011-12-22 | Qurio Holdings, Inc. | System and method for tagging images based on positional information |
US20070118509A1 (en) * | 2005-11-18 | 2007-05-24 | Flashpoint Technology, Inc. | Collaborative service for suggesting media keywords based on location data |
US20070118508A1 (en) * | 2005-11-18 | 2007-05-24 | Flashpoint Technology, Inc. | System and method for tagging images based on positional information |
US7539411B2 (en) * | 2005-12-26 | 2009-05-26 | Ricoh Company, Ltd. | Imaging device, location information recording method, and computer program product |
US20070200862A1 (en) * | 2005-12-26 | 2007-08-30 | Hiroaki Uchiyama | Imaging device, location information recording method, and computer program product |
US20140176796A1 (en) * | 2005-12-28 | 2014-06-26 | XI Processing L.L.C | Computer-implemented system and method for notifying users upon the occurrence of an event |
US8644702B1 (en) | 2005-12-28 | 2014-02-04 | Xi Processing L.L.C. | Computer-implemented system and method for notifying users upon the occurrence of an event |
US9173009B2 (en) * | 2005-12-28 | 2015-10-27 | Gula Consulting Limited Liability Company | Computer-implemented system and method for notifying users upon the occurrence of an event |
US9385984B2 (en) | 2005-12-28 | 2016-07-05 | Gula Consulting Limited Liability Company | Computer-implemented system and method for notifying users upon the occurrence of an event |
US9667581B2 (en) | 2005-12-28 | 2017-05-30 | Gula Consulting Limited Liability Company | Computer-implemented system and method for notifying users upon the occurrence of an event |
US20070164988A1 (en) * | 2006-01-18 | 2007-07-19 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
US7817104B2 (en) * | 2006-01-18 | 2010-10-19 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
US20070165273A1 (en) * | 2006-01-18 | 2007-07-19 | Pfu Limited | Image reading apparatus and computer program product |
US7916328B2 (en) * | 2006-01-18 | 2011-03-29 | Pfu Limited | Image reading apparatus and computer program product |
US7843496B2 (en) * | 2006-01-23 | 2010-11-30 | Ricoh Company, Ltd. | Imaging device, method of recording location information, and computer program product |
US20070236581A1 (en) * | 2006-01-23 | 2007-10-11 | Hiroaki Uchiyama | Imaging device, method of recording location information, and computer program product |
US7792678B2 (en) * | 2006-02-13 | 2010-09-07 | Hon Hai Precision Industry Co., Ltd. | Method and device for enhancing accuracy of voice control with image characteristic |
US20070200912A1 (en) * | 2006-02-13 | 2007-08-30 | Premier Image Technology Corporation | Method and device for enhancing accuracy of voice control with image characteristic |
EP1984850A4 (en) * | 2006-02-14 | 2010-05-05 | Olaworks Inc | Method and system for tagging digital data |
EP1984850A1 (en) * | 2006-02-14 | 2008-10-29 | Olaworks, Inc. | Method and system for tagging digital data |
US20140354840A1 (en) * | 2006-02-16 | 2014-12-04 | Canon Kabushiki Kaisha | Image transmission apparatus, image transmission method, program, and storage medium |
US10038843B2 (en) * | 2006-02-16 | 2018-07-31 | Canon Kabushiki Kaisha | Image transmission apparatus, image transmission method, program, and storage medium |
US7970763B2 (en) * | 2006-02-21 | 2011-06-28 | Microsoft Corporation | Searching and indexing of photos based on ink annotations |
US20070196033A1 (en) * | 2006-02-21 | 2007-08-23 | Microsoft Corporation | Searching and indexing of photos based on ink annotations |
US20070204125A1 (en) * | 2006-02-24 | 2007-08-30 | Michael Hardy | System and method for managing applications on a computing device having limited storage space |
US9093121B2 (en) | 2006-02-28 | 2015-07-28 | The Invention Science Fund I, Llc | Data management of an audio data stream |
US8341219B1 (en) * | 2006-03-07 | 2012-12-25 | Adobe Systems Incorporated | Sharing data based on tagging |
US20080159584A1 (en) * | 2006-03-22 | 2008-07-03 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method |
US11012552B2 (en) | 2006-03-24 | 2021-05-18 | Uber Technologies, Inc. | Wireless device with an aggregate user interface for controlling other devices |
US10681199B2 (en) | 2006-03-24 | 2020-06-09 | Uber Technologies, Inc. | Wireless device with an aggregate user interface for controlling other devices |
US20070223912A1 (en) * | 2006-03-27 | 2007-09-27 | Fujifilm Corporation | Photographing method and photographing apparatus |
US8073319B2 (en) * | 2006-03-27 | 2011-12-06 | Fujifilm Corporation | Photographing method and photographing apparatus based on face detection and photography conditions |
US20070242138A1 (en) * | 2006-04-13 | 2007-10-18 | Manico Joseph A | Camera user input based image value index |
WO2007120456A1 (en) * | 2006-04-13 | 2007-10-25 | Eastman Kodak Company | Camera user input based image value index |
US8330830B2 (en) | 2006-04-13 | 2012-12-11 | Eastman Kodak Company | Camera user input based image value index |
US8589782B2 (en) * | 2006-05-16 | 2013-11-19 | Yahoo! Inc. | System and method for bookmarking and tagging a content item |
US20110078766A1 (en) * | 2006-05-16 | 2011-03-31 | Yahoo! Inc | System and method for bookmarking and tagging a content item |
EP1865426A3 (en) * | 2006-06-09 | 2012-05-02 | Sony Corporation | Information processing apparatus, information processing method, and computer program |
EP1865426A2 (en) * | 2006-06-09 | 2007-12-12 | Sony Corporation | Information processing apparatus, information processing method, and computer program |
US20070298812A1 (en) * | 2006-06-21 | 2007-12-27 | Singh Munindar P | System and method for naming a location based on user-specific information |
US8099086B2 (en) | 2006-06-21 | 2012-01-17 | Ektimisi Semiotics Holdings, Llc | System and method for providing a descriptor for a location to a recipient |
US9992629B2 (en) | 2006-06-21 | 2018-06-05 | Scenera Mobile Technologies, Llc | System and method for providing a descriptor for a location to a recipient |
US9538324B2 (en) | 2006-06-21 | 2017-01-03 | Scenera Mobile Technologies, Llc | System and method for providing a descriptor for a location to a recipient |
US9846045B2 (en) | 2006-06-21 | 2017-12-19 | Scenera Mobile Technologies, Llc | System and method for naming a location based on user-specific information |
US8737969B2 (en) | 2006-06-21 | 2014-05-27 | Scenera Mobile Technologies, Llc | System and method for providing a descriptor for a location to a recipient |
US9055109B2 (en) | 2006-06-21 | 2015-06-09 | Scenera Mobile Technologies, Llc | System and method for providing a descriptor for a location to a recipient |
US20070298813A1 (en) * | 2006-06-21 | 2007-12-27 | Singh Munindar P | System and method for providing a descriptor for a location to a recipient |
US9338240B2 (en) | 2006-06-21 | 2016-05-10 | Scenera Mobile Technologies, Llc | System and method for naming a location based on user-specific information |
US8750892B2 (en) | 2006-06-21 | 2014-06-10 | Scenera Mobile Technologies, Llc | System and method for naming a location based on user-specific information |
US20070300271A1 (en) * | 2006-06-23 | 2007-12-27 | Geoffrey Benjamin Allen | Dynamic triggering of media signal capture |
US20080005070A1 (en) * | 2006-06-28 | 2008-01-03 | Bellsouth Intellectual Property Corporation | Non-Repetitive Web Searching |
US20080018748A1 (en) * | 2006-07-19 | 2008-01-24 | Sami Niemi | Method in relation to acquiring digital images |
US20110050960A1 (en) * | 2006-07-19 | 2011-03-03 | Scalado Ab | Method in relation to acquiring digital images |
US7920161B2 (en) * | 2006-07-19 | 2011-04-05 | Scalado Ab | Method for forming combined digital images |
US20080037826A1 (en) * | 2006-08-08 | 2008-02-14 | Scenera Research, Llc | Method and system for photo planning and tracking |
US7853100B2 (en) * | 2006-08-08 | 2010-12-14 | Fotomedia Technologies, Llc | Method and system for photo planning and tracking |
US20110052097A1 (en) * | 2006-08-08 | 2011-03-03 | Robert Sundstrom | Method And System For Photo Planning And Tracking |
US20080043275A1 (en) * | 2006-08-16 | 2008-02-21 | Charles Morton | Method to Provide Print Functions to The User Via A Camera Interface |
US8935244B2 (en) | 2006-08-31 | 2015-01-13 | Scenera Mobile Technologies, Llc | System and method for identifying a location of interest to be named by a user |
US9635511B2 (en) | 2006-08-31 | 2017-04-25 | Scenera Mobile Technologies, Llc | System and method for identifying a location of interest to be named by a user |
US8554765B2 (en) | 2006-08-31 | 2013-10-08 | Ektimisi Semiotics Holdings, Llc | System and method for identifying a location of interest to be named by a user |
US8407213B2 (en) | 2006-08-31 | 2013-03-26 | Ektimisi Semiotics Holdings, Llc | System and method for identifying a location of interest to be named by a user |
US20080071761A1 (en) * | 2006-08-31 | 2008-03-20 | Singh Munindar P | System and method for identifying a location of interest to be named by a user |
US8487868B2 (en) * | 2006-09-01 | 2013-07-16 | Research In Motion Limited | Method and apparatus for controlling a display in an electronic device |
US20110045871A1 (en) * | 2006-09-01 | 2011-02-24 | Research In Motion Limited | Method and apparatus for controlling a display in an electronic device |
US9179057B2 (en) * | 2006-09-27 | 2015-11-03 | Sony Corporation | Imaging apparatus and imaging method that acquire environment information and information of a scene being recorded |
US20120300069A1 (en) * | 2006-09-27 | 2012-11-29 | Sony Corporation | Imaging apparatus and imaging method |
US8990850B2 (en) | 2006-09-28 | 2015-03-24 | Qurio Holdings, Inc. | Personalized broadcast system |
US20110125861A1 (en) * | 2006-09-28 | 2011-05-26 | Qurio Holdings, Inc. | System and method providing peer review and distribution of digital content |
US8060574B2 (en) * | 2006-09-28 | 2011-11-15 | Qurio Holdings, Inc. | System and method providing quality based peer review and distribution of digital content |
US7895275B1 (en) | 2006-09-28 | 2011-02-22 | Qurio Holdings, Inc. | System and method providing quality based peer review and distribution of digital content |
US8615778B1 (en) | 2006-09-28 | 2013-12-24 | Qurio Holdings, Inc. | Personalized broadcast system |
WO2008048398A3 (en) * | 2006-10-13 | 2008-07-03 | Sony Corp | System and method for automatic selection of digital photo album cover |
US8176065B2 (en) | 2006-10-13 | 2012-05-08 | Sony Corporation | System and method for automatic selection of digital photo album cover |
US20080147726A1 (en) * | 2006-10-13 | 2008-06-19 | Paul Jin Hwang | System and method for automatic selection of digital photo album cover |
US8015189B2 (en) * | 2006-11-08 | 2011-09-06 | Yahoo! Inc. | Customizable connections between media and meta-data via feeds |
US20080126388A1 (en) * | 2006-11-08 | 2008-05-29 | Yahoo! Inc. | Customizable connections between media and meta-data via feeds |
US8212668B2 (en) | 2006-11-14 | 2012-07-03 | TrackThings LLC | Apparatus and method for finding a misplaced object using a database and instructions generated by a portable device |
US20100002941A1 (en) * | 2006-11-14 | 2010-01-07 | Koninklijke Philips Electronics N.V. | Method and apparatus for identifying an object captured by a digital image |
US7986230B2 (en) * | 2006-11-14 | 2011-07-26 | TrackThings LLC | Apparatus and method for finding a misplaced object using a database and instructions generated by a portable device |
US20080112591A1 (en) * | 2006-11-14 | 2008-05-15 | Lctank Llc | Apparatus and method for finding a misplaced object using a database and instructions generated by a portable device |
US8090159B2 (en) | 2006-11-14 | 2012-01-03 | TrackThings LLC | Apparatus and method for identifying a name corresponding to a face or voice using a database |
US20110222729A1 (en) * | 2006-11-14 | 2011-09-15 | TrackThings LLC | Apparatus and method for finding a misplaced object using a database and instructions generated by a portable device |
WO2008059422A1 (en) * | 2006-11-14 | 2008-05-22 | Koninklijke Philips Electronics N.V. | Method and apparatus for identifying an object captured by a digital image |
US20100074328A1 (en) * | 2006-12-19 | 2010-03-25 | Koninklijke Philips Electronics N.V. | Method and system for encoding an image signal, encoded image signal, method and system for decoding an image signal |
US20080182665A1 (en) * | 2007-01-30 | 2008-07-31 | Microsoft Corporation | Video game to web site upload utility |
US8346073B2 (en) * | 2007-02-26 | 2013-01-01 | Fujifilm Corporation | Image taking apparatus |
US8254771B2 (en) * | 2007-02-26 | 2012-08-28 | Fujifilm Corporation | Image taking apparatus for group photographing |
US20100194927A1 (en) * | 2007-02-26 | 2010-08-05 | Syuji Nose | Image taking apparatus |
US20100195994A1 (en) * | 2007-02-26 | 2010-08-05 | Syuji Nose | Image taking apparatus |
US8339469B2 (en) * | 2007-03-07 | 2012-12-25 | Eastman Kodak Company | Process for automatically determining a probability of image capture with a terminal using contextual data |
US20100165076A1 (en) * | 2007-03-07 | 2010-07-01 | Jean-Marie Vau | Process for automatically determining a probability of image capture with a terminal using contextual data |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US20080282177A1 (en) * | 2007-05-09 | 2008-11-13 | Brown Michael S | User interface for editing photo tags |
US8667384B2 (en) | 2007-05-09 | 2014-03-04 | Blackberry Limited | User interface for editing photo tags |
US9183229B2 (en) * | 2007-05-29 | 2015-11-10 | Blackberry Limited | System and method for selecting a geographic location to associate with an object |
US20080297409A1 (en) * | 2007-05-29 | 2008-12-04 | Research In Motion Limited | System and method for selecting a geographic location to associate with an object |
US20140321774A1 (en) * | 2007-05-29 | 2014-10-30 | Blackberry Limited | System And Method For Selecting A Geographic Location To Associate With An Object |
US8803980B2 (en) * | 2007-05-29 | 2014-08-12 | Blackberry Limited | System and method for selecting a geographic location to associate with an object |
US8144944B2 (en) | 2007-08-14 | 2012-03-27 | Olympus Corporation | Image sharing system and method |
US8983773B2 (en) * | 2007-08-23 | 2015-03-17 | International Business Machines Corporation | Pictorial navigation |
US8994852B2 (en) | 2007-08-23 | 2015-03-31 | Sony Corporation | Image-capturing apparatus and image-capturing method |
US20130018579A1 (en) * | 2007-08-23 | 2013-01-17 | International Business Machines Corporation | Pictorial navigation |
EP2051506A1 (en) * | 2007-10-17 | 2009-04-22 | Fujifilm Corporation | Imaging device and imaging control method |
US20090102940A1 (en) * | 2007-10-17 | 2009-04-23 | Akihiro Uchida | Imaging device and imaging control method |
US8111315B2 (en) | 2007-10-17 | 2012-02-07 | Fujifilm Corporation | Imaging device and imaging control method that detects and displays composition information |
US20090111375A1 (en) * | 2007-10-24 | 2009-04-30 | Itookthisonmyphone.Com, Inc. | Automatic wireless photo upload for camera phone |
US9866743B2 (en) | 2007-11-06 | 2018-01-09 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US8890966B2 (en) | 2007-11-06 | 2014-11-18 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US8488012B2 (en) * | 2007-11-06 | 2013-07-16 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US9497371B2 (en) | 2007-11-06 | 2016-11-15 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US20090115865A1 (en) * | 2007-11-06 | 2009-05-07 | Sony Corporation | Automatic image-capturing apparatus, automatic image-capturing control method, image display system, image display method, display control apparatus, and display control method |
US20090128502A1 (en) * | 2007-11-19 | 2009-05-21 | Cct Tech Advanced Products Limited | Image display with cordless phone |
EP2073542A2 (en) | 2007-11-19 | 2009-06-24 | CCT Tech Advanced Products Limited | Digital picture frame with cordless phone |
EP2073542A3 (en) * | 2007-11-19 | 2010-12-01 | CCT Tech Advanced Products Limited | Digital picture frame with cordless phone |
US20090138560A1 (en) * | 2007-11-28 | 2009-05-28 | James Joseph Stahl Jr | Method and Apparatus for Automated Record Creation Using Information Objects, Such as Images, Transmitted Over a Communications Network to Inventory Databases and Other Data-Collection Programs |
US20090150147A1 (en) * | 2007-12-11 | 2009-06-11 | Jacoby Keith A | Recording audio metadata for stored images |
US8385588B2 (en) | 2007-12-11 | 2013-02-26 | Eastman Kodak Company | Recording audio metadata for stored images |
WO2009075754A1 (en) * | 2007-12-11 | 2009-06-18 | Eastman Kodak Company | Recording audio metadata for stored images |
US8994731B2 (en) | 2007-12-19 | 2015-03-31 | Temporal Llc | Apparatus, system, and method for organizing information by time and place |
US20090164439A1 (en) * | 2007-12-19 | 2009-06-25 | Nevins David C | Apparatus, system, and method for organizing information by time and place |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US20090192998A1 (en) * | 2008-01-22 | 2009-07-30 | Avinci Media, Lc | System and method for deduced meta tags for electronic media |
US20150061932A1 (en) * | 2008-01-28 | 2015-03-05 | Blackberry Limited | Gps pre-acquisition for geotagging digital photos |
US9766342B2 (en) * | 2008-01-28 | 2017-09-19 | Blackberry Limited | GPS pre-acquisition for geotagging digital photos |
US20090193021A1 (en) * | 2008-01-29 | 2009-07-30 | Gupta Vikram M | Camera system and method for picture sharing based on camera perspective |
US20150247739A1 (en) * | 2008-03-06 | 2015-09-03 | Texas Instruments Incorporated | Processes for more accurately calibrating and operating e-compass for tilt error, circuits, and systems |
WO2009114036A1 (en) * | 2008-03-14 | 2009-09-17 | Sony Ericsson Mobile Communications Ab | A method and apparatus of annotating digital images with data |
US20090232417A1 (en) * | 2008-03-14 | 2009-09-17 | Sony Ericsson Mobile Communications Ab | Method and Apparatus of Annotating Digital Images with Data |
US8300117B2 (en) * | 2008-03-28 | 2012-10-30 | Fuji Xerox Co., Ltd. | System and method for exposing video-taking heuristics at point of capture |
US20090244323A1 (en) * | 2008-03-28 | 2009-10-01 | Fuji Xerox Co., Ltd. | System and method for exposing video-taking heuristics at point of capture |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US8126912B2 (en) | 2008-06-27 | 2012-02-28 | Microsoft Corporation | Guided content metadata tagging for an online content repository |
WO2009156561A1 (en) * | 2008-06-27 | 2009-12-30 | Nokia Corporation | Method, apparatus and computer program product for providing image modification |
US20090327336A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Guided content metadata tagging for an online content repository |
US8768070B2 (en) | 2008-06-27 | 2014-07-01 | Nokia Corporation | Method, apparatus and computer program product for providing image modification |
US20090324103A1 (en) * | 2008-06-27 | 2009-12-31 | Natasha Gelfand | Method, apparatus and computer program product for providing image modification |
US20100002102A1 (en) * | 2008-07-01 | 2010-01-07 | Sony Corporation | System and method for efficiently performing image processing operations |
US8624989B2 (en) * | 2008-07-01 | 2014-01-07 | Sony Corporation | System and method for remotely performing image processing operations with a network server device |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US8743213B2 (en) * | 2008-08-18 | 2014-06-03 | Apple Inc. | Apparatus and method for compensating for variations in digital cameras |
US20120188402A1 (en) * | 2008-08-18 | 2012-07-26 | Apple Inc. | Apparatus and method for compensating for variations in digital cameras |
US8520979B2 (en) * | 2008-08-19 | 2013-08-27 | Digimarc Corporation | Methods and systems for content processing |
US20100046842A1 (en) * | 2008-08-19 | 2010-02-25 | Conwell William Y | Methods and Systems for Content Processing |
US20110161365A1 (en) * | 2008-08-27 | 2011-06-30 | Eiu-Hyun Shin | Object identification system, wireless internet system having the same and method servicing a wireless communication based on an object using the same |
US8433722B2 (en) | 2008-08-27 | 2013-04-30 | Kiwiple Co., Ltd. | Object identification system, wireless internet system having the same and method servicing a wireless communication based on an object using the same |
WO2010024584A2 (en) * | 2008-08-27 | 2010-03-04 | 키위플주식회사 | Object recognition system, wireless internet system having same, and object-based wireless communication service method using same |
WO2010024584A3 (en) * | 2008-08-27 | 2010-06-17 | 키위플주식회사 | Object recognition system, wireless internet system having same, and object-based wireless communication service method using same |
US20100054601A1 (en) * | 2008-08-28 | 2010-03-04 | Microsoft Corporation | Image Tagging User Interface |
US8867779B2 (en) | 2008-08-28 | 2014-10-21 | Microsoft Corporation | Image tagging user interface |
US9020183B2 (en) * | 2008-08-28 | 2015-04-28 | Microsoft Technology Licensing, Llc | Tagging images with labels |
US20130195375A1 (en) * | 2008-08-28 | 2013-08-01 | Microsoft Corporation | Tagging images with labels |
US8675988B2 (en) | 2008-08-29 | 2014-03-18 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8340453B1 (en) * | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8724007B2 (en) | 2008-08-29 | 2014-05-13 | Adobe Systems Incorporated | Metadata-driven method and apparatus for multi-image processing |
US8830347B2 (en) | 2008-08-29 | 2014-09-09 | Adobe Systems Incorporated | Metadata based alignment of distorted images |
US10068317B2 (en) | 2008-08-29 | 2018-09-04 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8368773B1 (en) | 2008-08-29 | 2013-02-05 | Adobe Systems Incorporated | Metadata-driven method and apparatus for automatically aligning distorted images |
US8391640B1 (en) | 2008-08-29 | 2013-03-05 | Adobe Systems Incorporated | Method and apparatus for aligning and unwarping distorted images |
US8842190B2 (en) | 2008-08-29 | 2014-09-23 | Adobe Systems Incorporated | Method and apparatus for determining sensor format factors from image metadata |
US20100070845A1 (en) * | 2008-09-17 | 2010-03-18 | International Business Machines Corporation | Shared web 2.0 annotations linked to content segments of web documents |
US8896708B2 (en) * | 2008-10-31 | 2014-11-25 | Adobe Systems Incorporated | Systems and methods for determining, storing, and using metadata for video media content |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US20100332553A1 (en) * | 2009-06-24 | 2010-12-30 | Samsung Electronics Co., Ltd. | Method and apparatus for updating composition database by using composition pattern of user, and digital photographing apparatus |
US8856192B2 (en) * | 2009-06-24 | 2014-10-07 | Samsung Electronics Co., Ltd. | Method and apparatus for updating composition database by using composition pattern of user, and digital photographing apparatus |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US20110025715A1 (en) * | 2009-08-03 | 2011-02-03 | Ricoh Company, Ltd., | Appropriately scaled map display with superposed information |
US8269797B2 (en) | 2009-08-03 | 2012-09-18 | Ricoh Company, Ltd. | Appropriately scaled map display with superposed information |
US20110032432A1 (en) * | 2009-08-05 | 2011-02-10 | Lee Jang-Won | Digital image signal processing method, medium for recording the method, and digital image signal processing apparatus |
US8687126B2 (en) * | 2009-08-05 | 2014-04-01 | Samsung Electronics Co., Ltd. | Digital image signal processing method, medium for recording the method, and digital image signal processing apparatus |
US9426359B2 (en) | 2009-08-05 | 2016-08-23 | Samsung Electronics Co., Ltd. | Digital image signal processing method, medium for recording the method, and digital image signal processing apparatus |
US20110069179A1 (en) * | 2009-09-24 | 2011-03-24 | Microsoft Corporation | Network coordinated event capture and image storage |
US20110077852A1 (en) * | 2009-09-25 | 2011-03-31 | Mythreyi Ragavan | User-defined marked locations for use in conjunction with a personal navigation device |
EP2323370A1 (en) * | 2009-10-01 | 2011-05-18 | LG Electronics Inc. | Mobile terminal and image metadata editing method thereof |
US9792012B2 (en) | 2009-10-01 | 2017-10-17 | Mobile Imaging In Sweden Ab | Method relating to digital images |
US20110081952A1 (en) * | 2009-10-01 | 2011-04-07 | Song Yoo-Mee | Mobile terminal and tag editing method thereof |
CN102035935A (en) * | 2009-10-01 | 2011-04-27 | Lg电子株式会社 | Mobile terminal and image metadata editing method thereof |
CN104104823A (en) * | 2009-10-01 | 2014-10-15 | Lg电子株式会社 | Mobile terminal and tag editing method thereof |
US8724004B2 (en) | 2009-10-01 | 2014-05-13 | Lg Electronics Inc. | Mobile terminal and tag editing method thereof |
US8390732B2 (en) * | 2009-10-27 | 2013-03-05 | Hon Hai Precision Industry Co., Ltd. | Method for adjusting a focal length of a camera |
US20110096175A1 (en) * | 2009-10-27 | 2011-04-28 | Hon Hai Precision Industry Co., Ltd. | Method for adjusting a focal length of a camera |
CN102714558A (en) * | 2009-10-30 | 2012-10-03 | 三星电子株式会社 | Image providing system and method |
US9003290B2 (en) * | 2009-12-02 | 2015-04-07 | T-Mobile Usa, Inc. | Image-derived user interface enhancements |
US20110131497A1 (en) * | 2009-12-02 | 2011-06-02 | T-Mobile Usa, Inc. | Image-Derived User Interface Enhancements |
WO2011084092A1 (en) * | 2010-01-08 | 2011-07-14 | Telefonaktiebolaget L M Ericsson (Publ) | A method and apparatus for social tagging of media files |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US20120294539A1 (en) * | 2010-01-29 | 2012-11-22 | Kiwiple Co., Ltd. | Object identification system and method of identifying an object using the same |
US20110205397A1 (en) * | 2010-02-24 | 2011-08-25 | John Christopher Hahn | Portable imaging device having display with improved visibility under adverse conditions |
US10535279B2 (en) | 2010-02-24 | 2020-01-14 | Nant Holdings Ip, Llc | Augmented reality panorama supporting visually impaired individuals |
US8605141B2 (en) | 2010-02-24 | 2013-12-10 | Nant Holdings Ip, Llc | Augmented reality panorama supporting visually impaired individuals |
US20110216179A1 (en) * | 2010-02-24 | 2011-09-08 | Orang Dialameh | Augmented Reality Panorama Supporting Visually Impaired Individuals |
US11348480B2 (en) | 2010-02-24 | 2022-05-31 | Nant Holdings Ip, Llc | Augmented reality panorama systems and methods |
US9526658B2 (en) | 2010-02-24 | 2016-12-27 | Nant Holdings Ip, Llc | Augmented reality panorama supporting visually impaired individuals |
US12048669B2 (en) | 2010-02-24 | 2024-07-30 | Nant Holdings Ip, Llc | Augmented reality panorama systems and methods |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
CN102870044A (en) * | 2010-03-22 | 2013-01-09 | 伊斯曼柯达公司 | Underwater camera with pressure sensor |
US20110228075A1 (en) * | 2010-03-22 | 2011-09-22 | Madden Thomas E | Digital camera with underwater capture mode |
US20110228074A1 (en) * | 2010-03-22 | 2011-09-22 | Parulski Kenneth A | Underwater camera with pressure sensor |
WO2011119336A1 (en) * | 2010-03-22 | 2011-09-29 | Eastman Kodak Company | Underwater camera with pressure sensor |
WO2011119319A1 (en) * | 2010-03-22 | 2011-09-29 | Eastman Kodak Company | Digital camera with underwater capture mode |
CN102822738A (en) * | 2010-03-22 | 2012-12-12 | 伊斯曼柯达公司 | Digital camera with underwater capture mode |
CN102859990A (en) * | 2010-04-29 | 2013-01-02 | 伊斯曼柯达公司 | Indoor/outdoor scene detection using GPS |
JP2013526215A (en) * | 2010-04-29 | 2013-06-20 | インテレクチュアル ベンチャーズ ファンド 83 エルエルシー | Digital camera system that detects indoor / outdoor scenes using GPS |
WO2011136993A1 (en) * | 2010-04-29 | 2011-11-03 | Eastman Kodak Company | Indoor/outdoor scene detection using gps |
US8665340B2 (en) | 2010-04-29 | 2014-03-04 | Intellectual Ventures Fund 83 Llc | Indoor/outdoor scene detection using GPS |
US9805086B2 (en) | 2010-09-01 | 2017-10-31 | Apple Inc. | Consolidating information relating to duplicate images |
US8774561B2 (en) * | 2010-09-01 | 2014-07-08 | Apple Inc. | Consolidating information relating to duplicate images |
US11016938B2 (en) | 2010-09-01 | 2021-05-25 | Apple Inc. | Consolidating information relating to duplicate images |
US20120051668A1 (en) * | 2010-09-01 | 2012-03-01 | Apple Inc. | Consolidating Information Relating to Duplicate Images |
EP2432209A1 (en) * | 2010-09-15 | 2012-03-21 | Samsung Electronics Co., Ltd. | Apparatus and method for managing image data and metadata |
US9544498B2 (en) | 2010-09-20 | 2017-01-10 | Mobile Imaging In Sweden Ab | Method for forming images |
EP2621160A1 (en) * | 2010-09-22 | 2013-07-31 | NEC CASIO Mobile Communications, Ltd. | Image pick-up device, image transfer method and program |
EP2621160A4 (en) * | 2010-09-22 | 2014-05-21 | Nec Casio Mobile Comm Ltd | Image pick-up device, image transfer method and program |
CN103119926A (en) * | 2010-09-22 | 2013-05-22 | Nec卡西欧移动通信株式会社 | Image pick-up device, image transfer method and program |
US9344736B2 (en) | 2010-09-30 | 2016-05-17 | Alcatel Lucent | Systems and methods for compressive sense imaging |
EP2466584A3 (en) * | 2010-12-15 | 2013-06-26 | Canon Kabushiki Kaisha | Collaborative image capture |
US8711228B2 (en) | 2010-12-15 | 2014-04-29 | Canon Kabushiki Kaisha | Collaborative image capture |
US20120154617A1 (en) * | 2010-12-20 | 2012-06-21 | Samsung Electronics Co., Ltd. | Image capturing device |
WO2012125383A1 (en) * | 2011-03-17 | 2012-09-20 | Eastman Kodak Company | Digital camera user interface which adapts to environmental conditions |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9313390B2 (en) * | 2011-04-08 | 2016-04-12 | Qualcomm Incorporated | Systems and methods to calibrate a multi camera device |
US20120257065A1 (en) * | 2011-04-08 | 2012-10-11 | Qualcomm Incorporated | Systems and methods to calibrate a multi camera device |
US20130194422A1 (en) * | 2011-04-18 | 2013-08-01 | Guangzhou Jinghua Optical & Electronics Co., Ltd. | 360-Degree Automatic Tracking Hunting Camera And Operating Method Thereof |
US20120266066A1 (en) * | 2011-04-18 | 2012-10-18 | Ting-Yee Liao | Image display device providing subject-dependent feedback |
US20170255654A1 (en) * | 2011-04-18 | 2017-09-07 | Monument Peak Ventures, Llc | Image display device providing individualized feedback |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US20160231890A1 (en) * | 2011-07-26 | 2016-08-11 | Sony Corporation | Information processing apparatus and phrase output method for determining phrases based on an image |
US20130050394A1 (en) * | 2011-08-23 | 2013-02-28 | Samsung Electronics Co. Ltd. | Apparatus and method for providing panoramic view during video telephony and video messaging |
US9392102B2 (en) * | 2011-08-23 | 2016-07-12 | Samsung Electronics Co., Ltd. | Apparatus and method for providing panoramic view during video telephony and video messaging |
US10289273B2 (en) | 2011-08-29 | 2019-05-14 | Monument Peak Ventures, Llc | Display device providing feedback based on image classification |
US20130055088A1 (en) * | 2011-08-29 | 2013-02-28 | Ting-Yee Liao | Display device providing feedback based on image classification |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9454280B2 (en) * | 2011-08-29 | 2016-09-27 | Intellectual Ventures Fund 83 Llc | Display device providing feedback based on image classification |
WO2013036181A1 (en) * | 2011-09-08 | 2013-03-14 | Telefonaktiebolaget L M Ericsson (Publ) | Assigning tags to media files |
US9424258B2 (en) | 2011-09-08 | 2016-08-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Assigning tags to media files |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US20130129254A1 (en) * | 2011-11-17 | 2013-05-23 | Thermoteknix Systems Limited | Apparatus for projecting secondary information into an optical system |
US20160006916A1 (en) * | 2012-02-07 | 2016-01-07 | Alcatel-Lucent Usa Inc. | Lensless compressive image acquisition |
US20130201297A1 (en) * | 2012-02-07 | 2013-08-08 | Alcatel-Lucent Usa Inc. | Lensless compressive image acquisition |
US20130201343A1 (en) * | 2012-02-07 | 2013-08-08 | Hong Jiang | Lenseless compressive image acquisition |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
WO2014004536A2 (en) * | 2012-06-25 | 2014-01-03 | Apple Inc. | Voice-based image tagging and searching |
WO2014004536A3 (en) * | 2012-06-25 | 2014-08-21 | Apple Inc. | Voice-based image tagging and searching |
US20130346068A1 (en) * | 2012-06-25 | 2013-12-26 | Apple Inc. | Voice-Based Image Tagging and Searching |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9319583B2 (en) | 2012-08-17 | 2016-04-19 | Samsung Electronics Co., Ltd. | Camera device and methods for aiding users in use thereof |
EP2698980A3 (en) * | 2012-08-17 | 2015-02-25 | Samsung Electronics Co., Ltd. | Camera device and methods for aiding users in use thereof |
EP2704420A1 (en) * | 2012-08-28 | 2014-03-05 | Google Inc. | System and method for capturing videos with a mobile device |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10009537B2 (en) | 2012-10-23 | 2018-06-26 | Snapaid Ltd. | Real time assessment of picture quality |
WO2014064690A1 (en) * | 2012-10-23 | 2014-05-01 | Sivan Ishay | Real time assessment of picture quality |
US9338348B2 (en) | 2012-10-23 | 2016-05-10 | Snapaid Ltd. | Real time assessment of picture quality |
US10659682B2 (en) | 2012-10-23 | 2020-05-19 | Snapaid Ltd. | Real time assessment of picture quality |
US11671702B2 (en) | 2012-10-23 | 2023-06-06 | Snapaid Ltd. | Real time assessment of picture quality |
US10944901B2 (en) | 2012-10-23 | 2021-03-09 | Snapaid Ltd. | Real time assessment of picture quality |
US11252325B2 (en) | 2012-10-23 | 2022-02-15 | Snapaid Ltd. | Real time assessment of picture quality |
US9661226B2 (en) | 2012-10-23 | 2017-05-23 | Snapaid Ltd. | Real time assessment of picture quality |
US9319578B2 (en) | 2012-10-24 | 2016-04-19 | Alcatel Lucent | Resolution and focus enhancement |
US20140172906A1 (en) * | 2012-12-19 | 2014-06-19 | Shivani A. Sud | Time-shifting image service |
US9607011B2 (en) * | 2012-12-19 | 2017-03-28 | Intel Corporation | Time-shifting image service |
US20140247368A1 (en) * | 2013-03-04 | 2014-09-04 | Colby Labs, Llc | Ready click camera control |
US9189468B2 (en) * | 2013-03-07 | 2015-11-17 | Ricoh Company, Ltd. | Form filling based on classification and identification of multimedia data |
US20140258827A1 (en) * | 2013-03-07 | 2014-09-11 | Ricoh Co., Ltd. | Form Filling Based on Classification and Identification of Multimedia Data |
US9942297B2 (en) * | 2013-03-12 | 2018-04-10 | Light Iron Digital, Llc | System and methods for facilitating the development and management of creative assets |
US20140282087A1 (en) * | 2013-03-12 | 2014-09-18 | Peter Cioni | System and Methods for Facilitating the Development and Management of Creative Assets |
US9208548B1 (en) * | 2013-05-06 | 2015-12-08 | Amazon Technologies, Inc. | Automatic image enhancement |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US20150042843A1 (en) * | 2013-08-09 | 2015-02-12 | Broadcom Corporation | Systems and methods for improving images |
US11657084B2 (en) * | 2013-09-05 | 2023-05-23 | Ebay Inc. | Correlating image annotations with foreground features |
US20210182333A1 (en) * | 2013-09-05 | 2021-06-17 | Ebay Inc. | Correlating image annotations with foreground features |
US20150085145A1 (en) * | 2013-09-20 | 2015-03-26 | Nvidia Corporation | Multiple image capture and processing |
US20150085159A1 (en) * | 2013-09-20 | 2015-03-26 | Nvidia Corporation | Multiple image capture and processing |
US20150186672A1 (en) * | 2013-12-31 | 2015-07-02 | Yahoo! Inc. | Photo privacy |
US9578239B2 (en) * | 2014-01-17 | 2017-02-21 | Htc Corporation | Controlling method for electronic apparatus with one switch button |
US20150207994A1 (en) * | 2014-01-17 | 2015-07-23 | Htc Corporation | Controlling method for electronic apparatus with one switch button |
US20170147549A1 (en) * | 2014-02-24 | 2017-05-25 | Invent.ly LLC | Automatically generating notes and classifying multimedia content specific to a video production |
US20170019591A1 (en) * | 2014-03-28 | 2017-01-19 | Fujitsu Limited | Photographing assisting system, photographing apparatus, information processing apparatus and photographing assisting method |
US10044930B2 (en) * | 2014-03-28 | 2018-08-07 | Fujitsu Limited | Photographing assisting system, photographing apparatus, information processing apparatus and photographing assisting method |
US9413948B2 (en) * | 2014-04-11 | 2016-08-09 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Systems and methods for recommending image capture settings based on a geographic location |
US10458801B2 (en) | 2014-05-06 | 2019-10-29 | Uber Technologies, Inc. | Systems and methods for travel planning that calls for at least one transportation vehicle unit |
US10657468B2 (en) | 2014-05-06 | 2020-05-19 | Uber Technologies, Inc. | System and methods for verifying that one or more directives that direct transport of a second end user does not conflict with one or more obligations to transport a first end user |
US11669785B2 (en) | 2014-05-06 | 2023-06-06 | Uber Technologies, Inc. | System and methods for verifying that one or more directives that direct transport of a second end user does not conflict with one or more obligations to transport a first end user |
US11100434B2 (en) | 2014-05-06 | 2021-08-24 | Uber Technologies, Inc. | Real-time carpooling coordinating system and methods |
US11466993B2 (en) | 2014-05-06 | 2022-10-11 | Uber Technologies, Inc. | Systems and methods for travel planning that calls for at least one transportation vehicle unit |
US10339474B2 (en) | 2014-05-06 | 2019-07-02 | Modern Geographia, Llc | Real-time carpooling coordinating system and methods |
US20150339906A1 (en) * | 2014-05-23 | 2015-11-26 | Avermedia Technologies, Inc. | Monitoring system, apparatus and method thereof |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10373617B2 (en) | 2014-05-30 | 2019-08-06 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10467872B2 (en) | 2014-07-07 | 2019-11-05 | Google Llc | Methods and systems for updating an event timeline with event indicators |
US9886161B2 (en) | 2014-07-07 | 2018-02-06 | Google Llc | Method and system for motion vector-based video monitoring and event categorization |
US11011035B2 (en) | 2014-07-07 | 2021-05-18 | Google Llc | Methods and systems for detecting persons in a smart home environment |
US10127783B2 (en) | 2014-07-07 | 2018-11-13 | Google Llc | Method and device for processing motion events |
US9940523B2 (en) | 2014-07-07 | 2018-04-10 | Google Llc | Video monitoring user interface for displaying motion events feed |
US10140827B2 (en) | 2014-07-07 | 2018-11-27 | Google Llc | Method and system for processing motion event notifications |
US11062580B2 (en) | 2014-07-07 | 2021-07-13 | Google Llc | Methods and systems for updating an event timeline with event indicators |
US10452921B2 (en) | 2014-07-07 | 2019-10-22 | Google Llc | Methods and systems for displaying video streams |
US10180775B2 (en) | 2014-07-07 | 2019-01-15 | Google Llc | Method and system for displaying recorded and live video feeds |
US20160105617A1 (en) * | 2014-07-07 | 2016-04-14 | Google Inc. | Method and System for Performing Client-Side Zooming of a Remote Video Feed |
US10867496B2 (en) | 2014-07-07 | 2020-12-15 | Google Llc | Methods and systems for presenting video feeds |
US10108862B2 (en) | 2014-07-07 | 2018-10-23 | Google Llc | Methods and systems for displaying live video and recorded video |
US10977918B2 (en) | 2014-07-07 | 2021-04-13 | Google Llc | Method and system for generating a smart time-lapse video clip |
US10789821B2 (en) | 2014-07-07 | 2020-09-29 | Google Llc | Methods and systems for camera-side cropping of a video feed |
US11250679B2 (en) | 2014-07-07 | 2022-02-15 | Google Llc | Systems and methods for categorizing motion events |
US10192120B2 (en) | 2014-07-07 | 2019-01-29 | Google Llc | Method and system for generating a smart time-lapse video clip |
US12002264B2 (en) * | 2014-07-23 | 2024-06-04 | Ebay Inc. | Use of camera metadata for recommendations |
EP3172619A4 (en) * | 2014-07-23 | 2018-01-24 | eBay Inc. | Use of camera metadata for recommendations |
US10248862B2 (en) | 2014-07-23 | 2019-04-02 | Ebay Inc. | Use of camera metadata for recommendations |
US11704905B2 (en) * | 2014-07-23 | 2023-07-18 | Ebay Inc. | Use of camera metadata for recommendations |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
USD893508S1 (en) | 2014-10-07 | 2020-08-18 | Google Llc | Display screen or portion thereof with graphical user interface |
US20210350785A1 (en) * | 2014-11-11 | 2021-11-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Systems and methods for selecting a voice to use during a communication with a user |
US10929812B2 (en) * | 2014-11-26 | 2021-02-23 | Adobe Inc. | Content creation, deployment collaboration, and subsequent marketing activities |
CN105654332A (en) * | 2014-11-26 | 2016-06-08 | 奥多比公司 | Content creation, deployment collaboration, and subsequent marketing activities |
US20160148249A1 (en) * | 2014-11-26 | 2016-05-26 | Adobe Systems Incorporated | Content Creation, Deployment Collaboration, and Tracking Exposure |
US20160148280A1 (en) * | 2014-11-26 | 2016-05-26 | Adobe Systems Incorporated | Content Creation, Deployment Collaboration, and Channel Dependent Content Selection |
CN105631701A (en) * | 2014-11-26 | 2016-06-01 | 奥多比公司 | Content creation, deployment collaboration, and tracking exposure |
US11087282B2 (en) * | 2014-11-26 | 2021-08-10 | Adobe Inc. | Content creation, deployment collaboration, and channel dependent content selection |
US20160148278A1 (en) * | 2014-11-26 | 2016-05-26 | Adobe Systems Incorporated | Content Creation, Deployment Collaboration, and Subsequent Marketing Activities |
US10776754B2 (en) * | 2014-11-26 | 2020-09-15 | Adobe Inc. | Content creation, deployment collaboration, and subsequent marketing activities |
US10936996B2 (en) * | 2014-11-26 | 2021-03-02 | Adobe Inc. | Content creation, deployment collaboration, activity stream, and task management |
CN105631702A (en) * | 2014-11-26 | 2016-06-01 | 奥多比公司 | Content creation, deployment collaboration, activity stream, and task management |
US11004036B2 (en) * | 2014-11-26 | 2021-05-11 | Adobe Inc. | Content creation, deployment collaboration, and tracking exposure |
CN105654356A (en) * | 2014-11-26 | 2016-06-08 | 奥多比公司 | Content creation, deployment collaboration, and channel dependent content selection |
US20160148277A1 (en) * | 2014-11-26 | 2016-05-26 | Adobe Systems Incorporated | Content Creation, Deployment Collaboration, and Subsequent Marketing Activities |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US10075632B2 (en) * | 2015-02-02 | 2018-09-11 | Olympus Corporation | Imaging apparatus |
US9843721B2 (en) * | 2015-02-02 | 2017-12-12 | Olympus Corporation | Imaging apparatus |
US10375298B2 (en) | 2015-02-02 | 2019-08-06 | Olympus Corporation | Imaging apparatus |
US20160227108A1 (en) * | 2015-02-02 | 2016-08-04 | Olympus Corporation | Imaging apparatus |
US20180048808A1 (en) * | 2015-02-02 | 2018-02-15 | Olympus Corporation | Imaging apparatus |
US11481854B1 (en) | 2015-02-23 | 2022-10-25 | ImageKeeper LLC | Property measurement with automated document production |
US12106391B2 (en) | 2015-02-23 | 2024-10-01 | ImageKeeper LLC | Property measurement with automated document production |
US11227070B2 (en) | 2015-02-24 | 2022-01-18 | ImageKeeper LLC | Secure digital data collection |
US10282562B1 (en) | 2015-02-24 | 2019-05-07 | ImageKeeper LLC | Secure digital data collection |
US11550960B2 (en) | 2015-02-24 | 2023-01-10 | ImageKeeper LLC | Secure digital data collection |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US20160286132A1 (en) * | 2015-03-24 | 2016-09-29 | Samsung Electronics Co., Ltd. | Electronic device and method for photographing |
US10009505B2 (en) * | 2015-04-14 | 2018-06-26 | Apple Inc. | Asynchronously requesting information from a camera device |
US20160309054A1 (en) * | 2015-04-14 | 2016-10-20 | Apple Inc. | Asynchronously Requesting Information From A Camera Device |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US20160323483A1 (en) * | 2015-04-28 | 2016-11-03 | Invent.ly LLC | Automatically generating notes and annotating multimedia content specific to a video production |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11599259B2 (en) | 2015-06-14 | 2023-03-07 | Google Llc | Methods and systems for presenting alert event indicators |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10476827B2 (en) | 2015-09-28 | 2019-11-12 | Google Llc | Sharing images and image albums over a communication network |
US11146520B2 (en) | 2015-09-28 | 2021-10-12 | Google Llc | Sharing images and image albums over a communication network |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10431258B2 (en) | 2015-10-22 | 2019-10-01 | Gopro, Inc. | Apparatus and methods for embedding metadata into video stream |
US9892760B1 (en) | 2015-10-22 | 2018-02-13 | Gopro, Inc. | Apparatus and methods for embedding metadata into video stream |
US10536628B2 (en) | 2015-10-23 | 2020-01-14 | Telefonaktiebolaget Lm Ericsson (Publ) | Providing camera settings from at least one image/video hosting service |
WO2017069670A1 (en) * | 2015-10-23 | 2017-04-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Providing camera settings from at least one image/video hosting service |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10194073B1 (en) | 2015-12-28 | 2019-01-29 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US9667859B1 (en) * | 2015-12-28 | 2017-05-30 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US10958837B2 (en) | 2015-12-28 | 2021-03-23 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US20170187910A1 (en) * | 2015-12-28 | 2017-06-29 | Amasing Apps USA LLC | Method, apparatus, and computer-readable medium for embedding options in an image prior to storage |
US10469748B2 (en) | 2015-12-28 | 2019-11-05 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US10678844B2 (en) | 2016-01-19 | 2020-06-09 | Gopro, Inc. | Storage of metadata and images |
US9922387B1 (en) | 2016-01-19 | 2018-03-20 | Gopro, Inc. | Storage of metadata and images |
US9471668B1 (en) * | 2016-01-21 | 2016-10-18 | International Business Machines Corporation | Question-answering system |
US9967457B1 (en) | 2016-01-22 | 2018-05-08 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US10469739B2 (en) | 2016-01-22 | 2019-11-05 | Gopro, Inc. | Systems and methods for determining preferences for capture settings of an image capturing device |
US10325082B2 (en) * | 2016-02-03 | 2019-06-18 | Ricoh Company, Ltd. | Information processing apparatus, information processing system, authentication method, and recording medium |
US10762795B2 (en) * | 2016-02-08 | 2020-09-01 | Skydio, Inc. | Unmanned aerial vehicle privacy controls |
US11361665B2 (en) * | 2016-02-08 | 2022-06-14 | Skydio, Inc. | Unmanned aerial vehicle privacy controls |
US11189180B2 (en) | 2016-02-08 | 2021-11-30 | Skydio, Inc. | Unmanned aerial vehicle visual line of sight control |
US11854413B2 (en) | 2016-02-08 | 2023-12-26 | Skydio, Inc. | Unmanned aerial vehicle visual line of sight control |
US10372750B2 (en) * | 2016-02-12 | 2019-08-06 | Canon Kabushiki Kaisha | Information processing apparatus, method, program and storage medium |
US10599145B2 (en) | 2016-02-16 | 2020-03-24 | Gopro, Inc. | Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle |
US12105509B2 (en) | 2016-02-16 | 2024-10-01 | Gopro, Inc. | Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle |
US11640169B2 (en) | 2016-02-16 | 2023-05-02 | Gopro, Inc. | Systems and methods for determining preferences for control settings of unmanned aerial vehicles |
US9836054B1 (en) | 2016-02-16 | 2017-12-05 | Gopro, Inc. | Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9911237B1 (en) * | 2016-03-17 | 2018-03-06 | A9.Com, Inc. | Image processing techniques for self-captured images |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US11082701B2 (en) | 2016-05-27 | 2021-08-03 | Google Llc | Methods and devices for dynamic adaptation of encoding bitrate for video streaming |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US11242143B2 (en) | 2016-06-13 | 2022-02-08 | Skydio, Inc. | Unmanned aerial vehicle beyond visual line of sight control |
US11897607B2 (en) | 2016-06-13 | 2024-02-13 | Skydio, Inc. | Unmanned aerial vehicle beyond visual line of sight control |
US9973647B2 (en) | 2016-06-17 | 2018-05-15 | Microsoft Technology Licensing, Llc. | Suggesting image files for deletion based on image file parameters |
US10657382B2 (en) | 2016-07-11 | 2020-05-19 | Google Llc | Methods and systems for person detection in a video feed |
US11587320B2 (en) | 2016-07-11 | 2023-02-21 | Google Llc | Methods and systems for person detection in a video feed |
US10380429B2 (en) | 2016-07-11 | 2019-08-13 | Google Llc | Methods and systems for person detection in a video feed |
US10192415B2 (en) | 2016-07-11 | 2019-01-29 | Google Llc | Methods and systems for providing intelligent alerts for events |
US10957171B2 (en) | 2016-07-11 | 2021-03-23 | Google Llc | Methods and systems for providing event alerts |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US9973792B1 (en) | 2016-10-27 | 2018-05-15 | Gopro, Inc. | Systems and methods for presenting visual information during presentation of a video segment |
US10469764B2 (en) * | 2016-11-01 | 2019-11-05 | Snap Inc. | Systems and methods for determining settings for fast video capture and sensor adjustment |
US11140336B2 (en) | 2016-11-01 | 2021-10-05 | Snap Inc. | Fast video capture and sensor adjustment |
US10432874B2 (en) | 2016-11-01 | 2019-10-01 | Snap Inc. | Systems and methods for fast video capture and sensor adjustment |
US11812160B2 (en) | 2016-11-01 | 2023-11-07 | Snap Inc. | Fast video capture and sensor adjustment |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10277834B2 (en) * | 2017-01-10 | 2019-04-30 | International Business Machines Corporation | Suggestion of visual effects based on detected sound patterns |
US10187607B1 (en) | 2017-04-04 | 2019-01-22 | Gopro, Inc. | Systems and methods for using a variable capture frame rate for video capture |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11778028B2 (en) * | 2017-05-17 | 2023-10-03 | Google Llc | Automatic image sharing with designated users over a communication network |
US20220094745A1 (en) * | 2017-05-17 | 2022-03-24 | Google Llc | Automatic image sharing with designated users over a communication network |
US10432728B2 (en) * | 2017-05-17 | 2019-10-01 | Google Llc | Automatic image sharing with designated users over a communication network |
US11212348B2 (en) * | 2017-05-17 | 2021-12-28 | Google Llc | Automatic image sharing with designated users over a communication network |
US11386285B2 (en) | 2017-05-30 | 2022-07-12 | Google Llc | Systems and methods of person recognition in video streams |
US10685257B2 (en) | 2017-05-30 | 2020-06-16 | Google Llc | Systems and methods of person recognition in video streams |
US11783010B2 (en) | 2017-05-30 | 2023-10-10 | Google Llc | Systems and methods of person recognition in video streams |
US10664688B2 (en) | 2017-09-20 | 2020-05-26 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US11356643B2 (en) | 2017-09-20 | 2022-06-07 | Google Llc | Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment |
US11710387B2 (en) | 2017-09-20 | 2023-07-25 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US12125369B2 (en) | 2017-09-20 | 2024-10-22 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
CN107948460A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method and device, computer equipment, computer-readable recording medium |
US20200380976A1 (en) * | 2018-01-26 | 2020-12-03 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
US11721333B2 (en) * | 2018-01-26 | 2023-08-08 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
US10542161B2 (en) * | 2018-02-26 | 2020-01-21 | Kyocera Corporation | Electronic device, control method, and recording medium |
WO2019193582A1 (en) * | 2018-04-05 | 2019-10-10 | SurveyMe Limited | Methods and systems for gathering and display of responses to surveys and providing and redeeming rewards |
US11212416B2 (en) | 2018-07-06 | 2021-12-28 | ImageKeeper LLC | Secure digital media capture and analysis |
US20210104240A1 (en) * | 2018-09-27 | 2021-04-08 | Panasonic Intellectual Property Management Co., Ltd. | Description support device and description support method |
US11942086B2 (en) * | 2018-09-27 | 2024-03-26 | Panasonic Intellectual Property Management Co., Ltd. | Description support device and description support method |
US11076123B2 (en) * | 2018-12-07 | 2021-07-27 | Renesas Electronics Corporation | Photographing control device, photographing system and photographing control method |
US11609481B2 (en) * | 2019-10-24 | 2023-03-21 | Canon Kabushiki Kaisha | Imaging apparatus that, during an operation of a shutter button, updates a specified position based on a line-of-sight position at a timing desired by a user, and method for controlling the same |
US11893795B2 (en) | 2019-12-09 | 2024-02-06 | Google Llc | Interacting with visitors of a connected home environment |
US20210374326A1 (en) * | 2020-02-14 | 2021-12-02 | Capital One Services, Llc | System and Method for Establishing an Interactive Communication Session |
US11336793B2 (en) * | 2020-03-10 | 2022-05-17 | Seiko Epson Corporation | Scanning system for generating scan data for vocal output, non-transitory computer-readable storage medium storing program for generating scan data for vocal output, and method for generating scan data for vocal output in scanning system |
US11468198B2 (en) | 2020-04-01 | 2022-10-11 | ImageKeeper LLC | Secure digital media authentication and analysis |
US11553105B2 (en) | 2020-08-31 | 2023-01-10 | ImageKeeper, LLC | Secure document certification and execution system |
US11838475B2 (en) | 2020-08-31 | 2023-12-05 | ImageKeeper LLC | Secure document certification and execution system |
US11645349B2 (en) * | 2020-09-16 | 2023-05-09 | Adobe Inc. | Generating location based photo discovery suggestions |
CN117714846A (en) * | 2023-07-12 | 2024-03-15 | 荣耀终端有限公司 | Control method and device for camera |
Also Published As
Publication number | Publication date |
---|---|
US20090231441A1 (en) | 2009-09-17 |
US20130314566A1 (en) | 2013-11-28 |
US8558921B2 (en) | 2013-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8558921B2 (en) | Systems and methods for suggesting meta-information to a camera user | |
US8466987B2 (en) | Automatic capture and management of images | |
JP7077376B2 (en) | Image pickup device and its control method | |
CN111866404B (en) | Video editing method and electronic equipment | |
CN111083138B (en) | Short video production system, method, electronic device and readable storage medium | |
US20090040324A1 (en) | Imaging apparatus, imaging system, and imaging method | |
CN103957356A (en) | Electronic device, and image obtaining method of electronic device | |
JP7074891B2 (en) | Shooting method and terminal device | |
CN110557560B (en) | Image pickup apparatus, control method thereof, and storage medium | |
WO2021063096A1 (en) | Video synthesis method, apparatus, electronic device, and storage medium | |
CN114019744A (en) | Image pickup apparatus and control method thereof | |
CN104486548A (en) | Information processing method and electronic equipment | |
WO2013187796A1 (en) | Method for automatically editing digital video files | |
WO2023021759A1 (en) | Information processing device and information processing method | |
JP2020137050A (en) | Imaging device, imaging method, imaging program, and learning device | |
WO2022019171A1 (en) | Information processing device, information processing method, and program | |
WO2022014143A1 (en) | Imaging system | |
McLernon | Canon PowerShot G11 Digital Field Guide | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: WALKER DIGITAL, LLC, CONNECTICUT |
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WALKER, JAY S.;JORASCH, JAMES A.;SAMMON, RUSSELL P.;REEL/FRAME:015349/0032;SIGNING DATES FROM 20040413 TO 20040518 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |