US20110279479A1 - Narrowcasting From Public Displays, and Related Methods - Google Patents
- Publication number
- US20110279479A1 (application US13/193,182)
- Authority
- US
- United States
- Prior art keywords
- data
- sign
- user
- watermark
- observer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/005—Robust watermarking, e.g. average attack or collusion attack resistant
- G06T1/0064—Geometric transform invariant watermarking, e.g. affine transform invariant
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41415—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/913—Television signal processing therefor for scrambling ; for copy protection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0051—Embedding of the watermark in the spatial domain
Definitions
- the present technology relates to electronic displays, and more particularly relates to arrangements employing portable devices (e.g., cell phones) to interact with such displays.
- a sign can present an ad for perfume if it detects a woman, and an ad for menswear if it detects a man.
- Mobile Trak, Inc. offers a SmarTrak module for roadside signage, which monitors stray local oscillator emissions from passing cars, and thereby discerns the radio stations to which they are tuned. Again, this information can be used for demographic profiling and ad targeting.
- BluScreen is an auction-based framework for presenting advertising on electronic signage.
- the system senses Bluetooth transmissions from nearby viewers who allow profile data from their cell phones to be publicly accessed. BluScreen passes this profile data to advertisers, who then bid for the opportunity to present ads to the identified viewers.
- the French institute INRIA has developed an opt-in system in which an electronic public display board senses mobile phone numbers of passersby (by Bluetooth), and sends them brief messages or content (e.g., ringtones, videos, discount vouchers).
- the content can be customized in accordance with user profile information shared from the mobile phones. See, e.g., US patent publication 20090047899.
- BlueFire offers several interactive signage technologies, using SMS messaging or Bluetooth.
- an advertiser can respond electronically with coupons, content, etc., sent to the observer's cell phone.
- a marketing campaign by Ogilvy fosters user engagement with electronic signage through use of rewards.
- a sign invites viewers to enter a contest by sending an SMS message to a specified address.
- the system responds with a question, which—if the viewer responds with the correct answer—causes the sign to present a congratulatory fireworks display, and enters the viewer in a drawing for a car.
- Digital watermarking (a form of steganography) is the science of encoding physical and electronic objects with plural-bit digital data, in such a manner that the data is essentially hidden from human perception, yet can be recovered by computer analysis.
- in the case of electronic objects (e.g., digital audio or imagery, including video), the data may be encoded as slight variations in sample values (e.g., luminance, chrominance, audio amplitude).
- if the object is represented in an orthogonal domain (also termed “non-perceptual,” e.g., MPEG, DCT, wavelet, etc.), the data may be encoded as slight variations in quantization or coefficient values.
- Watermarking can be used to imperceptibly tag content with persistent digital identifiers, and finds myriad uses. Some are in the realm of device control—e.g., conveying data signaling how a receiving device should handle the content with which the watermark is conveyed. Others encode data associating content with a store of related data. For example, a photograph published on the web may encode a watermark payload identifying a particular record in an online database. That database record, in turn, may contain a link to the photographer's web site.
- U.S. Pat. No. 6,947,571 details a number of such “connected-content” applications and techniques.
- Digital watermarking systems typically have two primary components: an encoder that embeds the watermark in a host media signal, and a decoder that detects and reads the embedded watermark from the encoded signal.
- the encoder embeds a watermark by subtly altering the host media signal.
- the payload of the watermark can be any number of bits; 32 or 128 are popular payload sizes, although greater or lesser values can be used (much greater in the case of video—if plural frames are used).
- the reading component analyzes a suspect signal to detect whether a watermark is present. (The suspect signal may be image data captured, e.g., by a cell phone camera.) If a watermark signal is detected, the reader typically proceeds to extract the encoded information from the watermark.
- One popular form of watermarking redundantly embeds the payload data across host imagery, in tiled fashion. Each tile conveys the entire payload, permitting a reader to extract the payload even if only an excerpt of the encoded image is captured.
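The tiled-redundancy idea can be sketched with a toy embedder. This is not the patent's actual watermarking algorithm (real systems hide the payload in imperceptible spread-spectrum patterns, not least-significant bits), and the tile size and payload length here are arbitrary; the point is that every tile carries the whole payload, so a tile-aligned excerpt still decodes:

```python
import numpy as np

PAYLOAD_BITS = 32

def embed_tiled(image, payload, tile=8):
    """Toy embedding: write the payload into the LSBs of the first
    PAYLOAD_BITS pixels of every tile."""
    out = image.copy()
    h, w = image.shape
    for ty in range(0, h - tile + 1, tile):
        for tx in range(0, w - tile + 1, tile):
            block = out[ty:ty+tile, tx:tx+tile].ravel()  # copy of the tile
            for i, bit in enumerate(payload):
                block[i] = (block[i] & 0xFE) | bit
            out[ty:ty+tile, tx:tx+tile] = block.reshape(tile, tile)
    return out

def decode_tiled(image, tile=8):
    """Read the payload from every complete tile and combine the reads by
    majority vote, so unreadable or missing tiles are outvoted."""
    h, w = image.shape
    votes = np.zeros(PAYLOAD_BITS)
    n = 0
    for ty in range(0, h - tile + 1, tile):
        for tx in range(0, w - tile + 1, tile):
            votes += image[ty:ty+tile, tx:tx+tile].ravel()[:PAYLOAD_BITS] & 1
            n += 1
    return (votes * 2 > n).astype(int)

rng = np.random.default_rng(0)
host = rng.integers(0, 256, (64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, PAYLOAD_BITS)
marked = embed_tiled(host, payload)
# A tile-aligned crop (an "excerpt of the encoded image") still decodes.
assert (decode_tiled(marked[8:40, 16:48]) == payload).all()
```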
- different digital watermark messages are “narrowcast” to each of plural different observers of an electronic sign.
- the location of each observer relative to the sign is determined.
- Watermarks are then geometrically designed for the different observers, in accordance with their respective viewpoints.
- the watermark tiles can be pre-distorted to compensate for distortion introduced by each observer's viewing perspective.
- the payloads of the various watermarks can be tailored in accordance with sensed demographics about the respective observers (e.g., age, gender, ethnicity). Imagery encoded with such thus-arranged watermark signals is then presented on the sign.
- a teen boy in the right-foreground of the sign's viewing area may receive one payload, and an adult man in the left-background of the sign's viewing area may receive a different payload.
- the former may be an electronic coupon entitling the teen to a dollar off a Vanilla Frappuccino drink at the Starbucks down the mall; the latter may be an electronic coupon for a free New York Times at the same store.
- different watermarks can be respectively added to and removed from the displayed sign content.
- the locations of the respective observers can be detected straightforwardly by a camera associated with the electronic sign.
- determination of location can proceed by reference to data provided from an observer's cell phone, e.g., the shape of the sign as captured by the cell phone camera, or location data provided by a GPS or other position-determining system associated with the cell phone.
- the detector in a viewer's cell phone may detect a watermark not tailored for that viewer's position.
- the preferred watermark detector outputs one or more parameters characterizing attributes of the detected watermark (e.g., rotation, scale, bit error rate, etc.).
- the detection software may be arranged to provide different responses, depending on these parameters. For example, if the scale is outside a desired range, and the bit error rate is higher than normal, the cell phone can deduce that the watermark was tailored for a different observer, and can provide a default response rather than the particular response indicated by the watermark's payload. E.g., instead of a coupon for a dollar off a Vanilla Frappuccino drink, the default response may be a coupon for fifty cents off any Starbucks purchase.
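The fallback logic just described might be sketched as follows; the threshold values and coupon identifiers are hypothetical, not taken from the patent:

```python
def choose_response(payload_response, detection, scale_range=(0.9, 1.1),
                    max_ber=0.05, default_response="coupon:50c-any-purchase"):
    """If the detected watermark's scale is outside the expected range AND
    its bit error rate is elevated, deduce that the watermark was tailored
    for a differently-located observer and fall back to a default offer."""
    scale, ber = detection["scale"], detection["bit_error_rate"]
    in_range = scale_range[0] <= scale <= scale_range[1]
    if not in_range and ber > max_ber:
        return default_response
    return payload_response

# A well-matched detection yields the payload's tailored offer...
assert choose_response("coupon:frappuccino-1usd",
                       {"scale": 1.02, "bit_error_rate": 0.01}) == "coupon:frappuccino-1usd"
# ...while a mismatched one falls back to the default.
assert choose_response("coupon:frappuccino-1usd",
                       {"scale": 1.6, "bit_error_rate": 0.12}) == "coupon:50c-any-purchase"
```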
- different responses are provided to different viewers without geometrically tailoring different watermarks. Instead, all viewers detect the same watermark data. However, due to different profile data associated with different viewers, the viewer devices respond differently.
- software on each user device may send data from the detected watermark payload to a remote server, together with data indicating the age and/or gender of the device owner.
- the remote server can return different responses, accordingly.
- the server may issue a coupon for free popcorn at the nearby movie theater.
- the server may issue a coupon for half-off a companion's theater admission.
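A minimal sketch of such a server-side lookup; the sign and coupon identifiers, and the demographic keys, are illustrative assumptions:

```python
# Hypothetical server-side table: the same watermark payload maps to
# different responses keyed on the demographic data sent alongside it.
RESPONSES = {
    ("sign-123", "teen"):  "coupon:free-popcorn",
    ("sign-123", "adult"): "coupon:half-off-companion-admission",
}

def respond(payload_id, age_group, default="info:showtimes"):
    """Return the response tailored to this demographic, else a default."""
    return RESPONSES.get((payload_id, age_group), default)

assert respond("sign-123", "teen") == "coupon:free-popcorn"
assert respond("sign-123", "adult") == "coupon:half-off-companion-admission"
assert respond("sign-123", "senior") == "info:showtimes"
```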
- each watermark payload includes a few or several bits indicating the audience demographic or context to which it is targeted (e.g., by gender, age, ethnicity, home zip code, education, political or other orientation, social network membership, etc.).
- User devices examine the different watermark signals, but take action only when a watermark corresponding to demographic data associated with a user of that device is detected (e.g., stored in a local or remote user profile dataset).
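One possible payload layout for this scheme is sketched below; the 4-bit allocation and the demographic codes are illustrative assumptions, not the patent's format:

```python
# Hypothetical layout: the low 4 bits of the payload name the target
# demographic; a device acts only when they match its user's profile.
DEMOGRAPHIC_BITS = 4
TEEN_MALE, TEEN_FEMALE, ADULT_MALE, ADULT_FEMALE = range(4)

def target_of(payload: int) -> int:
    """Extract the demographic-targeting bits from a payload."""
    return payload & ((1 << DEMOGRAPHIC_BITS) - 1)

def should_act(payload: int, user_demographic: int) -> bool:
    """True only when the watermark targets this device's user."""
    return target_of(payload) == user_demographic

offer_id = 0x2A  # upper bits carry the offer identifier
payload = (offer_id << DEMOGRAPHIC_BITS) | TEEN_MALE
assert should_act(payload, TEEN_MALE)
assert not should_act(payload, ADULT_FEMALE)
```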
- different frames of watermark data are tailored for different demographic groups of viewers in accordance with a time-multiplexed standard—synchronized to a reference clock.
- the first frame in a cycle of, e.g., 30 frames, may be targeted to teen boys.
- the second may be targeted to teen girls, etc.
- Each receiving cell phone knows the demographic of the owner and, by consulting the cell phone's time base, can identify the frame of watermark intended for such a person.
- the cycle may repeat every second, or other interval.
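The frame-selection logic might be sketched as follows, assuming a hypothetical 30 fps display, a one-second cycle, and an arbitrary slot assignment:

```python
# Hypothetical schedule: at 30 frames per second, a 30-frame cycle repeats
# every second; each demographic group is assigned one frame slot per cycle.
FPS = 30
CYCLE = 30
FRAME_SLOTS = {0: "teen-male", 1: "teen-female", 2: "adult-male"}  # etc.

def frame_slot(t_ms: int) -> int:
    """Slot within the current cycle, from a shared reference clock (ms)."""
    return (t_ms * FPS // 1000) % CYCLE

def is_my_frame(t_ms: int, my_demographic: str) -> bool:
    """A phone decodes only frames whose slot matches its owner's profile."""
    return FRAME_SLOTS.get(frame_slot(t_ms)) == my_demographic

assert frame_slot(0) == 0             # first frame of the cycle
assert frame_slot(1000) == 0          # the cycle repeats every second
assert is_my_frame(34, "teen-female") # ~33 ms in: the second frame slot
```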
- the multiplexing of different watermarks across the visual screen channel can be accomplished by using different image frequency bands to convey different watermark payloads to different viewers.
- Some embodiments of the present technology make no use of digital watermarks. Yet differently-located viewers can nonetheless obtain different responses to electronic signage.
- the locations of observers are determined, together with their respective demographics, as above.
- the sign system determines what responses are appropriate to the differently-located viewers, and stores corresponding data in an online repository (database server).
- For the teen boy in the right foreground of an electronic sign for the Gap store, the system may store a coupon for a free trial-size bottle of cologne.
- For the middle-aged woman in the center background, the stored response may be a five-dollar Gap gift certificate.
- an observer's cell phone captures an image of the sign
- data related to the captured imagery is transmitted to a computer associated with the sign.
- Analysis software (e.g., at that computer) determines the viewer's position from the size of the depicted sign and the length ratio between two of its sides (or other geometric analysis). With this information the computer retrieves the corresponding response information stored by the sign system, and returns it to the observer.
- the teen gets the cologne, the woman gets the gift certificate.
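A first-order sketch of such a geometric analysis, under a pinhole-camera model. The focal length and the known sign dimensions are assumed calibration values, and the edge-height approximation is only first-order; a real system would solve a full perspective pose:

```python
import math

F_PIX = 1000.0      # phone camera focal length in pixels (assumed)
SIGN_H_FT = 4.5     # known physical sign height (assumed)

def viewer_position(left_edge_px, right_edge_px, sign_w_ft=6.0):
    """Estimate the viewpoint from a phone photo of the sign: the apparent
    height of each vertical edge falls off with its distance, so the two
    edge heights give the range to each side of the sign; the law of
    cosines then locates the viewer relative to the sign's center line."""
    d_left = F_PIX * SIGN_H_FT / left_edge_px
    d_right = F_PIX * SIGN_H_FT / right_edge_px
    x = (d_left**2 - d_right**2) / (2 * sign_w_ft)   # + means right of center
    mid_d2 = (d_left**2 + d_right**2) / 2 - (sign_w_ft / 2)**2
    return x, math.sqrt(mid_d2)

# Edges imaged at equal height: the viewer is on the center line.
x, d = viewer_position(450.0, 450.0)
assert abs(x) < 1e-9 and 9.0 < d < 10.0
# Left edge imaged taller than the right: viewer stands left of center.
assert viewer_position(500.0, 450.0)[0] < 0
```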
- FIG. 1 is a diagram showing some of the apparatus employed in an illustrative embodiment.
- FIG. 2 shows a field of view of a camera mounted on top of an electronic sign, including two viewers, and six viewing zones.
- FIG. 3 is a perspective view of two viewers in a viewing zone of an electronic sign.
- FIG. 4 is a diagram showing that the direction to each viewer can be characterized by a horizontal azimuth angle A and a vertical elevation angle B.
- FIG. 5 is a view of an electronic sign with a displayed message.
- FIGS. 6 and 7 are views of the FIG. 5 sign, as seen by the two observers in FIGS. 2 and 3 .
- FIG. 8A is a top-down view showing, for four vertical zones A-D of a display screen, how more distant parts of the screen subtend smaller angles for a viewer.
- FIG. 8B shows how the phenomenon of FIG. 8A can be redressed, by pre-distorting information presented on the screen.
- FIG. 9 shows a display pre-distorted in two dimensions, in accordance with position of a viewer.
- FIG. 10 shows how two watermarks, with different pre-distortion, can be presented on the screen.
- FIG. 11 shows how the pre-distortion of presented watermark information can be varied, as the position of an observer varies.
- FIG. 12 shows how the size of a watermark tile can be tailored, by a watermark encoder, to target a desired observer.
- FIGS. 13A and 13B show partial screen views as captured by a cell phone.
- FIG. 14 shows a pattern by which direction and distance to a screen can be determined.
- FIG. 15 is a diagram showing an illustrative 64 bit watermark payload.
- FIG. 1 shows some of the apparatus employed in one implementation of the present technology.
- An electronic display system portion includes a display screen 10 , a camera 12 , and a computer 14 .
- the display screen may include a loudspeaker 15 , or such a speaker may be separately associated with the system.
- the computer 14 has connectivity to other devices by one or more arrangements such as internet, Bluetooth, etc.
- the computer 14 controls the information displayed on the display screen. (A single computer may be responsible for control of many screens—such as in an airport.)
- the display screen 10 is viewed by an observer carrying an imaging device, such as a cell phone (smart phone) 16 . It, too, has connectivity to other devices, such as by internet, Bluetooth, cellular (including SMS), etc.
- Also involved in certain embodiments are one or more remote computers 18 , with which the just-noted devices can communicate by internet or otherwise.
- FIGS. 2 and 3 show two observers 22 , 24 viewing the electronic sign 10 .
- a viewing area 26 in front of the sign is arbitrarily divided into six zones: left, center and right (as viewed from the sign), each with foreground and background positions.
- Observer 22 is in the left foreground, and observer 24 is in the center background.
- Camera 12 captures video of the viewing area 26 , e.g., from atop the sign 10 . From this captured image data, the computer 14 determines the position of each observer. The position may be determined in a gross sense, e.g., by classifying each viewer in one of the six viewing zones of FIG. 2 . Or more precise location data can be generated, such as by identifying the azimuth (A), elevation (B) and length of a vector 32 from the middle of the screen to the mid-point of the observer's eyes, as shown in FIG. 4 . (Distance to the viewer can be estimated by reference to the distance—in pixels—between the user's eye pupils, which is typically 2.8-3.1 inches.)
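Under a simple pinhole-camera model, these distance and direction estimates might be computed as below. The focal length, principal point, and the 2.95-inch average pupil spacing are assumptions (the text gives a typical range of 2.8-3.1 inches):

```python
import math

F_PIX = 1000.0     # camera focal length in pixels (assumed calibration)
EYE_SEP_IN = 2.95  # assumed average pupil spacing, inches

def observer_vector(eye_px_sep, eyes_mid_xy, principal_pt=(960, 540)):
    """Estimate distance (inches), azimuth A and elevation B (degrees) to
    an observer from the pixel distance between their pupils and the pixel
    position of the midpoint between their eyes."""
    distance = F_PIX * EYE_SEP_IN / eye_px_sep
    dx = eyes_mid_xy[0] - principal_pt[0]
    dy = principal_pt[1] - eyes_mid_xy[1]   # image y grows downward
    azimuth = math.degrees(math.atan2(dx, F_PIX))
    elevation = math.degrees(math.atan2(dy, F_PIX))
    return distance, azimuth, elevation

d, a, b = observer_vector(29.5, (960, 540))
assert abs(d - 100.0) < 1e-9   # 29.5 px of pupil spacing -> ~100 in away
assert a == 0.0 and b == 0.0   # centered observer: zero azimuth/elevation
```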
- the camera system 12 may be modeled, or measured, to understand the mapping between pixel positions within its field of view, and orientations to viewers. Each pixel corresponds to imagery incident on the lens from a unique direction.
- FIG. 5 shows a display that may be presented on the electronic sign 10 .
- FIGS. 6 and 7 show this same sign from the vantage points of the left foreground observer 22 , and the center background observer 24 , respectively.
- the size and shape of the display perceived by the different observers depends on their respective positions. This is made clearer by FIG. 8A .
- FIG. 8A shows a top-down view of the screen 10 , with an observer 82 positioned in front of the screen's edge.
- if the screen is regarded as having four equal-width vertical quarter-panels A-D, it will be seen that the nearest panel (D) subtends a 45 degree angle as viewed by the observer in this case.
- the other quarter-panels C, B and A subtend progressively smaller ranges of the observer's field of view. (The entire screen fills about 76 degrees of the observer's field of view, so the 45 degree apparent width of the nearest quarter-panel is larger than that of the other three quarter-panels combined.)
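The quoted angles follow from basic trigonometry. Taking the observer's distance equal to one quarter-panel width (which is what makes the nearest panel subtend 45 degrees), this sketch reproduces the figures:

```python
import math

# Observer at distance d directly in front of the screen's near edge, with
# four equal quarter-panels of width w each; take d = w = 1.
def panel_angle(k, d=1.0, w=1.0):
    """Angle (degrees) subtended by the k-th quarter-panel (k=1 nearest)."""
    return math.degrees(math.atan(k * w / d) - math.atan((k - 1) * w / d))

angles = [panel_angle(k) for k in (1, 2, 3, 4)]  # panels D, C, B, A
assert round(angles[0]) == 45       # nearest panel: 45 degrees
assert round(sum(angles)) == 76     # whole screen: about 76 degrees
assert angles[0] > sum(angles[1:])  # D exceeds C, B and A combined
```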
- if a watermark is hidden in the imagery, it will be similarly distorted as viewed by the cell phone 16 .
- tiles nearest the viewer will appear relatively larger, and tiles further away will appear relatively smaller.
- Contemporary watermark detectors such as those disclosed in U.S. Pat. No. 6,590,996, are robust to such distortion. The detector assesses the scale and rotation of each component tile, and then decodes the payload from each. The payloads from all of the decoded tiles are combined to yield output watermark data that is reliable even if data from certain tiles is unreadable.
- the watermark pattern hidden in the imagery is pre-distorted in accordance with the location of the observer so as to counteract this perspective distortion.
- FIG. 8B illustrates one form of such pre-distortion. If the screen 10 is again regarded as having four vertical panels, they are now of different widths. The furthest panel A′ is much larger than the others. The pre-distortion is arranged so that each panel subtends the same angular field of view to the observer (in this case about 19 degrees).
- this pre-distortion can be viewed as projecting the watermark from screen 10 onto a virtual screen 10 ′, relative to which the observer is on the center axis 84 .
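The equal-angle panel boundaries of FIG. 8B can be computed directly. This sketch uses the same geometry as FIG. 8A (observer at the near edge, at a distance of one quarter-panel width, screen four units wide):

```python
import math

def predistorted_boundaries(total_width, viewer_distance, n_panels=4):
    """Panel boundaries chosen so that each panel subtends an equal angle
    to an observer standing in front of the screen's near edge."""
    total_angle = math.atan(total_width / viewer_distance)
    step = total_angle / n_panels
    return [viewer_distance * math.tan(step * k) for k in range(n_panels + 1)]

b = predistorted_boundaries(total_width=4.0, viewer_distance=1.0)
widths = [b[i + 1] - b[i] for i in range(4)]  # panels D', C', B', A'
# Each panel now subtends ~19 degrees, and the far panel A' is widest:
assert round(math.degrees(math.atan(4.0)) / 4) == 19
assert widths[-1] > sum(widths[:-1])
```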
- FIG. 9 shows the result of this watermark pre-distortion, in two dimensions.
- Each rectangle in FIG. 9 shows the extent of one illustrative watermark tile. Tiles nearest the viewer are relatively smaller; those remote are relatively larger.
- the tile widths shown in FIG. 9 correspond to widths A′-D′ of FIG. 8B .
- the tile heights also vary in accordance with vertical position of the observer's perspective (here regarded to be along the vertical mid-line of the screen). Tiles near the top and bottom of the screen are thus taller than tiles along the middle.
- the watermark detector finds that each tile has substantially the same apparent scale. No longer does a portion of the screen closer to the observer present larger tiles, etc. It is as if the watermark detector is seeing the screen from a point along the central axis projecting from the screen, from a distance.
- the computer 14 can vary the distortion of the watermark pattern presented on the display screen, in accordance with changes in the detected position of the observer. So if the observer moves from one side of the screen to another, the pre-distortion of the watermark pattern can follow the observer accordingly.
- advertising or other human-perceptible imagery presented on the screen 10
- the watermark detector sees a substantially undistorted, uniform watermark pattern—regardless of observer (cell phone) location.
- the same arrangement can be extended to plural different observers.
- the electronic sign system can present several different watermark patterns on screen 10 —each targeting a different observer.
- the different patterns can be interleaved in time, or presented simultaneously.
- the use of multiple watermark patterns on the same display screen is conceptually illustrated by patterns 42 and 44 in FIG. 11 .
- the first watermark pattern 42 (depicted in fine solid lines) is an array of pre-distorted tiles identical to that of FIG. 9 .
- the second pattern 44 (depicted in bold dashed lines) is a different array of tiles, configured for a different observer. In particular, this second pattern is evidently targeted for an observer viewing from the center axis of the display, from a distance (because the tiles are all of uniform size).
- the intended observer of pattern 44 is also evidently further from the screen than the intended observer of pattern 42 (i.e., the smallest tile of watermark pattern 44 is larger than the smallest tile of watermark pattern 42 —indicating a more remote viewing perspective is intended).
- the computer 14 encodes different frames of displayed content with different watermark patterns (each determined in accordance with location of an observer).
- the applied watermark pattern can be changed on a per-frame basis, or can be held static for several frames before changing.
- Decoders in observing cell phones may decode all the watermarks, but may be programmed to disregard those that apparently target differently-located observers. This can be discerned by noting variation in the apparent scale of the component watermark tiles across the field of view: if the tiles within a frame are differently-scaled, the pattern has evidently been pre-distorted for a different observer. Only if all of the tiles in a frame have substantially uniform scale does the cell phone detector regard the pattern as targeted for that observer, and take action based thereon.
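That uniformity test might be sketched as follows; the 5% tolerance is an assumed threshold:

```python
def targets_this_observer(tile_scales, tolerance=0.05):
    """A frame's watermark is regarded as aimed at this observer only when
    every detected tile has substantially the same apparent scale; varying
    scales mean the pattern was pre-distorted for someone else."""
    lo, hi = min(tile_scales), max(tile_scales)
    return (hi - lo) / hi <= tolerance

# Uniform tiles: the pattern was pre-distorted for *our* viewpoint.
assert targets_this_observer([1.00, 1.01, 0.99, 1.02])
# Scales varying across the field of view: meant for another observer.
assert not targets_this_observer([1.40, 1.05, 0.80, 0.62])
```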
- the computer 14 computes the patterns individually (again, each based on targeted observer location), and then combines the patterns for encoding into the displayed content.
- decoders in observing cell phones are tuned relatively sharply, so they only respond to watermark tiles that have a certain apparent size. Tile patterns that are larger or smaller are disregarded—treated like part of the host image content: noise to be ignored.
- the camera's watermark decoder parameters may be tuned so that it responds only to watermark tiles having a nominal size of 200 pixels per side, ±10 pixels.
- the electronic display screen has the same aspect ratio as the camera sensor, but is 4.5 feet tall and 6 feet wide.
- the intended viewer is on the sign's center line—far enough away that the sign only fills a fourth of the camera's field of view (i.e., half in height, half in width, or 600×800 camera pixels).
- the computer 14 must size the displayed watermark tiles to be 1.5 feet on a side in order to target the intended observer. That is, for the watermark tiles to be imaged by the camera as squares that are 200 pixels on a side, three of them must span the sign vertically, and four across, as shown in FIG. 12 . (For clarity of illustration, the uniform tile grid of FIG. 12 , and of pattern 44 in FIG. 11 , are depicted only schematically.)
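The tile-size arithmetic of this example can be checked with a short sketch; the function name and interface are illustrative:

```python
def tile_size_for_observer(sign_h_ft, sign_w_ft, sign_h_px, target_tile_px=200):
    """Physical tile size that the target observer's camera images at the
    detector's nominal tile size, given how tall the sign appears (in
    pixels) from that viewpoint. Returns (tile size, rows, columns)."""
    tile_ft = target_tile_px * sign_h_ft / sign_h_px
    return tile_ft, round(sign_h_ft / tile_ft), round(sign_w_ft / tile_ft)

# Sign 4.5 ft x 6 ft, filling 600 pixels of camera height from the target
# viewpoint, detector tuned to 200-pixel tiles (the figures above).
tile_ft, rows, cols = tile_size_for_observer(4.5, 6.0, 600)
assert abs(tile_ft - 1.5) < 1e-9  # 1.5-foot tiles
assert (rows, cols) == (3, 4)     # three tiles spanning vertically, four across
```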
- displayed watermark patterns take into account the positions of targeted observers.
- the payloads of these watermarks can also be tailored to the targeted observers.
- the payloads are tailored demographically.
- the demographics may be determined from imagery captured by the camera 12 (e.g., age, ethnicity, gender).
- demographic data may be provided otherwise, such as by the individual.
- data stored in the individual's cell phone, or in the individual's Facebook profile, may be available, and may reveal information including home zip code and area code, income level, employment, education, musical and movie preferences, fashion preferences, hobbies and other interests, friends, travel destinations, etc.
- Demographics may be regarded as a type of context.
- One definition of context is “Any information that can be used to characterize the situation of an entity.
- An entity is a person, place or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.”
- Context information can be of many sorts, including the computing context (network connectivity, memory availability, CPU contention, etc.), user context (user profile, location, preferences, nearby friends, social network(s) and situation, etc.), physical context (e.g., lighting, noise level, traffic, etc.), temporal context (time of day, day, month, season, etc.), history of the above, etc.
- the position of the viewer needn't be determined by use of a camera associated with the electronic signage. Instead, data sensed by the viewer's cell phone can be used. There are a variety of approaches.
- a preliminary issue in some embodiments is identifying what screen the viewer is watching. This information allows the user's cell phone to communicate with the correct electronic sign system (or the correct control system, which may govern many individual electronic signs). Often this step can be skipped, because there may only be one screen nearby, and there is no ambiguity (or the embodiment does not require such knowledge). In other contexts, however, there may be many screens, and analysis first needs to identify which one is being viewed. (Contexts with several closely-spaced screens include trade shows and airport concourses.)
- One way to identify which screen is being watched is by reference to data indicating the position of the viewer, e.g., by latitude and longitude. If the positions of candidate screens are similarly known, the screen from which a viewer is capturing imagery may be determined by simple proximity.
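A minimal sketch of this proximity-based screen identification, assuming viewer and candidate-screen positions are available as latitude/longitude (the coordinates and screen identifiers below are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

def nearest_screen(viewer_pos, screens):
    """screens: dict of screen_id -> (lat, lon). Returns the id of the
    candidate screen closest to the viewer -- simple proximity."""
    return min(screens, key=lambda sid: haversine_m(*viewer_pos, *screens[sid]))
```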
- GPS is a familiar location sensing technology, and can be used in certain embodiments. In other embodiments GPS may not suffice, e.g., because the GPS signals do not penetrate indoors, or because the positional accuracy is not sufficient. In such cases alternative location technologies can be used.
- GPS is detailed in published patent application WO08/073347.
- screen content is used to identify the presentation being viewed.
- An image captured by the viewer's cell phone can be compared with imagery recently presented by a set of candidate screens, to find a best match.
- the candidate screens may be identified by their gross geographic location, e.g., Portland Airport, or other methods for constraining a set of possible electronic signs can be employed.
- the comparison can be based on a simple statistical metric, such as color histogram. Or it can be based on more detailed analysis—such as feature correlation between the cell phone image, and images presented on the candidate screens.
- Myriad comparison techniques are possible. Among them are those based on SIFT or image fingerprinting (both discussed below).
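The simple statistical metric mentioned above, a color histogram comparison, might be sketched as follows (pure Python for illustration; a real system would operate on full captured frames, and might use feature correlation instead):

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: pixels are (r, g, b) tuples in 0..255."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def best_matching_screen(phone_pixels, candidates):
    """candidates: screen_id -> pixels recently displayed by that screen.
    Picks the screen whose histogram is closest (L1 distance) to the
    phone-captured image."""
    ph = color_histogram(phone_pixels)
    def dist(sid):
        ch = color_histogram(candidates[sid])
        return sum(abs(a - b) for a, b in zip(ph, ch))
    return min(candidates, key=dist)
```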
- Digital watermark data encoded in the displayed imagery or video can also serve to identify the content/screen being watched.
- audio content may be used to identify the content/screen to which the viewer is being exposed, again using watermarking or comparison-based approaches (e.g., fingerprinting).
- a subliminal identifier can be emitted by the electronic sign (or associated loudspeaker) and discerned by the viewer's cell phone.
- luminance of the screen is subtly modulated to convey a binary identifier that is sensed by the phone.
- an LED or other emitter positioned along the bezel of the screen can transmit an identifying pattern. (Infrared illumination can be used, since most cell cameras have some sensitivity down into infrared.)
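The luminance-modulation idea can be illustrated with a toy model: one bit per frame, conveyed by a small brightness offset. A real system would use far subtler modulation, synchronization, and error correction; all names here are assumptions:

```python
def modulate_frames(base_luma, bits, delta=2):
    """Subtly raise/lower per-frame average luminance to convey a binary
    identifier: +delta for a 1 bit, -delta for a 0 bit."""
    return [base_luma + (delta if b else -delta) for b in bits]

def demodulate_frames(lumas, base_luma):
    """Phone side: recover the bits by comparing each frame's measured
    average luminance against the baseline."""
    return [1 if l > base_luma else 0 for l in lumas]
```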
- a remote server such as server 18 in FIG. 1 , receives position or image data from an inquiring cell phone, and determines—e.g., by comparison with reference data—which sign/content is being viewed.
- the remote server may then look-up an IP address for the corresponding computer 14 from a table or other data structure, and inform the sign system of the viewing cell phone. It may also transmit this address information to the cell phone—allowing the phone to communicate directly with the sign system. (Other communication means can alternatively be used.
- the remote server can provide the cell phone with Bluetooth, WiFi, or other data enabling the cell phone to communicate with the sign system.
- a virtual session can be established between a phone and a sign system, defining a logical association between the pair.
- the viewer's position relative to the screen can be determined.
- one technique relies on position data. If sufficient positional accuracy is available, the perspective from which an observer is viewing an electronic sign can be determined from knowledge of the observer's position and viewing orientation, together with the sign's position and orientation.
- Another approach to determining the viewer's position relative to an electronic sign is based on apparent geometry. Opposing sides of the display screen are of equal lengths, and adjacent sides are at right angles to each other. If a pinhole camera model is assumed, these same relations hold for the depiction of the screen in imagery captured by the viewer's cell phone—if viewed from along the screen's center axis (i.e., its perpendicular). If not viewed from the screen's perpendicular, one or more of these relationships will be different; the rectangle will be geometrically distorted.
- the usual geometric distortion is primarily the trapezoidal effect, also known as “keystoning.”
- the geometric distortions in a viewer-captured image can be analyzed to determine the viewing angle to the screen perpendicular. This viewing angle, in turn, can indicate the approximate position of the viewer (i.e., where the viewing angle vector intersects the likely viewing plane—the plane in which the camera resides, e.g., 5.5 feet above the floor).
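A simplified model of this keystone analysis, under a pinhole-camera assumption: when the viewer is offset horizontally, the nearer vertical edge of the screen images taller, and the height ratio of the two edges yields the viewing angle. The function and its planar-geometry approximation are illustrative only:

```python
import math

def viewing_angle_deg(h_left_px, h_right_px, sign_width_ft, approx_dist_ft):
    """Estimate the horizontal viewing angle (degrees off the screen's
    perpendicular) from keystoning. With the viewer at distance d from the
    sign center at angle theta, the near edge lies at roughly
    d - (W/2)sin(theta) and the far edge at d + (W/2)sin(theta); apparent
    edge height varies inversely with distance, so the height ratio r gives
    sin(theta) = d(r - 1) / ((W/2)(r + 1))."""
    r = h_left_px / h_right_px          # >1 means left edge nearer (viewer left of axis)
    s = approx_dist_ft * (r - 1) / ((sign_width_ft / 2) * (r + 1))
    s = max(-1.0, min(1.0, s))          # clamp against noisy measurements
    return math.degrees(math.asin(s))
```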
- Known image processing techniques can be used to find the depiction of a quadrilateral screen in a captured image.
- Edge finding techniques can be employed. So can thresholded blobs (e.g., blurring the image, and comparing resultant pixel values to an escalating threshold until a quadrilateral bright object is distinguished).
- pattern recognition methods such as using the Hough transform, can be used.
- An exemplary sign-finding methodology is detailed in Tam, “Quadrilateral signboard detection and text extraction,” Int'l Conf. on Imaging Science, Systems and Technology, pp. 708-713, 2003.
- the viewing distance may not be a concern.
- viewing distance may be estimated by judging where the viewing angle intersects the viewing plane, as noted above.
- the size of the sign can be used. This information is known to the sign system computer 14 , and can be provided to the cell phone if the cell phone processor performs a distance estimation. Or if imagery captured by the cell phone is provided to the sign system computer for analysis, the computer can factor sign-size information into its analysis to help determine distance.
- the captured image of the electronic sign may be of a scale that is not indicative of viewing distance. Data from the camera system, providing a metric indicating the degree of zoom, can be used by the relevant processor to address this issue.
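Under a pinhole-camera model, the distance estimate follows directly from the known sign size, its apparent pixel height, and the camera's zoom-scaled focal length in pixels. A hedged sketch (the function and parameter names are illustrative):

```python
def viewing_distance_ft(sign_height_ft, sign_height_px, focal_length_px, zoom=1.0):
    """Pinhole-model range estimate: distance = real size * focal length /
    apparent size. The camera's zoom factor scales the effective focal
    length, addressing the zoom ambiguity noted above."""
    return sign_height_ft * (focal_length_px * zoom) / sign_height_px

# A 4.5 foot tall sign imaged at 600 pixels by a camera with a
# 1200-pixel focal length implies a 9 foot viewing distance.
```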
- the screen rectangle is not entirely captured within the cell phone image frame, some information about the user's position can nonetheless be determined.
- the partial screen rectangle shown in FIG. 13A includes one complete edge, and two incomplete opposing edges.
- the incompletely captured opposing edges appear to converge if extended, indicating that the viewer is to the left of edge A.
- the diverging opposing edges of FIG. 13B indicate the viewer is to the right of edge A
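The converge/diverge test of FIGS. 13A and 13B can be sketched by extending the two partially captured opposing edges and comparing their separation at two horizontal positions (the edge coordinates and frame geometry here are hypothetical):

```python
def edge_trend(top_edge, bottom_edge):
    """Each edge: ((x0, y0), (x1, y1)), a partially captured top or bottom
    screen edge. Returns 'converging' if the gap between the extended edges
    narrows with increasing x, 'diverging' if it widens -- indicating on
    which side of the complete edge the viewer stands."""
    def y_at(edge, x):
        (x0, y0), (x1, y1) = edge
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    gap_near = abs(y_at(top_edge, 0) - y_at(bottom_edge, 0))
    gap_far = abs(y_at(top_edge, 100) - y_at(bottom_edge, 100))
    return "converging" if gap_far < gap_near else "diverging"
```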
- Still another way in which the observer's viewing position can be discerned from cell phone-captured image data is by reference to watermark information encoded in graphical data presented by the sign, and included in the user-captured imagery.
- Steganographically encoded watermark signals such as detailed in U.S. Pat. No. 6,590,996, commonly include an orientation signal component by which the watermark decoder can detect affine geometrical distortions introduced in the imagery since encoding, so that the encoded payload can be decoded properly despite such distortions.
- the detailed watermark system allows six degrees of image distortion to be discerned from captured imagery: rotation, scale, differential scale, shear, and translation in both x and y.
- Displayed imagery from which viewer position information can be estimated does not need to be dedicated to this purpose; any graphic can be used. In some cases, however, graphics can be provided that are especially tailored to facilitate determination of viewer position.
- image-based understanding of a scene can be aided by presenting one or more features or objects on or near the screen, for which reference information is known (e.g., size, position, angle), and by which the system can understand other features—by relation.
- a target pattern is displayed on the screen (or presented adjacent the screen) from which, e.g., viewing distance and orientation can be discerned.
- Such targets thus serve as beacons, signaling distance and orientation information to any observing camera system.
- One such target is the TRIPcode, detailed, e.g., in de Ipiña, TRIP: a Low-Cost Vision-Based Location System for Ubiquitous Computing, Personal and Ubiquitous Computing, Vol. 6, No. 3, May 2002, pp. 206-219.
- the target (shown in FIG. 14 ) encodes information including the target's radius, allowing a camera-equipped system to determine both the distance from the camera to the target, and the target's 3D pose.
- the de Ipiña arrangement allows a camera-equipped system to understand both the distance to the screen, and the screen's spatial orientation relative to the camera.
- the TRIPcode has undergone various implementations, being successively known as SpotCode, and then ShotCode (and sometimes Bango). It is now understood to be commercialized by OP3 B.V.
- the aesthetics of the depicted TRIPcode target are not generally suited for display on signage.
- the pattern can be overlaid infrequently in one frame among a series of images (e.g., once every 3 seconds, in a 30 frame-per-second display arrangement).
- the position of the target can be varied to reduce visual artifacts.
- the color needn't be black; a less conspicuous color (e.g., yellow) may be used.
- markers of other shapes can be used.
- a square marker suitable for determining the 3D position of a surface is Sony's CyberCode, detailed, e.g., in Rekimoto, CyberCode: Designing Augmented Reality Environments with Visual Tags, Proc. of Designing Augmented Reality Environments 2000, pp. 1-10.
- a variety of other reference markers can alternatively be used—depending on the requirements of a particular application.
- such information can be communicated to the sign's computer system (if same was not originally discerned by such system), and a watermark targeting that viewer's spatial location can be defined and encoded in imagery presented on the sign.
- the sign has a camera system from which it can estimate gender, age, or other attribute of viewers, it can tailor the targeted watermark payload (or the payoff associated with an arbitrary payload) in accordance with the estimated attribute(s) associated with the viewer at the discerned location.
- profile information may be provided by the viewer to the sign system computer along with the viewer-captured imagery (or with location information derived therefrom).
- a user's cell phone captures an image of part or all of the sign, and transmits same (e.g., by Bluetooth or internet TCP/IP) to the sign system computer.
- the sign system computer discerns the user's location from the geometry of the sign as depicted in the transmitted image. From its own camera, the sign system has characterized gender, age or other demographic(s) of several people at different locations in front of the sign. By matching the geometry-discerned location of the viewer who provided imagery by Bluetooth, with one of the positions in front of the sign where the sign system computer has demographically characterized viewers, the computer can infer the demographic(s) of the particular viewer from whom the Bluetooth transmission was received.
- the sign system can then Bluetooth-transmit payoff data back to that viewer—and tailor same to that particular viewer's estimated demographic(s). (Note that in this arrangement, as in some others, the payoff is sent by Bluetooth—not, e.g., encoded in a watermark presented on the sign.)
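The location-matching step, inferring which demographically characterized position corresponds to the transmitting viewer, reduces to a nearest-neighbor lookup. A minimal sketch, assuming positions are expressed in a common planar coordinate frame:

```python
def match_viewer_demographics(estimated_loc, characterized):
    """characterized: list of (location, demographics) pairs built by the
    sign's own camera; estimated_loc: position inferred from the geometry
    of the viewer-transmitted image. Infers the transmitting viewer's
    demographics by matching the nearest characterized position."""
    def sq_dist(a, b):
        return (a[0] - b[0])**2 + (a[1] - b[1])**2
    loc, demo = min(characterized, key=lambda ld: sq_dist(ld[0], estimated_loc))
    return demo
```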
- Payoffs: The type and variety of payoff that can be provided to the user's phone is virtually limitless. Electronic coupons have been noted above. Others include multimedia entertainment content (music videos, motion picture clips), and links/access credentials to online resources. A visitor to a trade show, for example, may share profile information indicating his professional occupation (e.g., RF engineer). Signage encountered at vendor booths may sense this information, and provide links showcasing the vendor's product offerings that are relevant to such a professional. The user may not act on such links while at the trade show, but may save them for later review when he returns to his office. In like fashion, other payoffs may be stored for later use.
- a user may wish to engage in a visually interactive session with content presented by an electronic sign—defining the user's own personal experience. For example, the user may want to undertake an activity that prompts one or more changes in the sign—such as by playing a game.
- Contemporary cell phones offer a variety of sensors that can be used in such interactive sessions—not just pushbuttons (virtual or physical), but also accelerometers, magnetometers, cameras, etc. Such phones can be used like game controllers (think Wii) in conjunction with electronic sign systems. Two or more users can engage in multi-player experiences—with their devices controlling aspects of the sign system, through use of the camera and/or other sensors.
- a user's phone captures an image of a sign.
- the imagery, or other data from the phone is analyzed to determine which sign (or content) is being viewed, as described earlier.
- the cell phone then exchanges information with the sign system (e.g., computer 14 ) to establish a session and control play of a game.
- the cell phone may transmit imagery captured by the phone camera—from which motion of the phone can be deduced (e.g., by tracking one or more features across several frames of image data captured by the camera, as detailed in U.S. Pat. No. 7,174,031).
- data from one or more accelerometers in the phone can be transmitted to the sign system—again indicating motion of the phone.
- the computer takes these signals as input, and controls play of the game accordingly.
- the screen may be in an airport bar, and the game may be a virtual football game—sponsored by a local professional football team (e.g., the Seattle Seahawks).
- anyone in the bar can select a team member to play (with available players identified by graphical icons on the edge of the display) through use of their cell phone.
- a user can point their phone at the icon for a desired player (e.g., positioning the camera so the player icon appears at virtual crosshairs in the center of the phone's display screen) and then push/tap a physical/virtual button to indicate a selection.
- the phone image may be relayed to the sign system, to inform it of the player's selection.
- the phone can send an identifier derived from the selected icon, e.g., a watermark or image fingerprint.
- the system provides feedback indicating that the player has been selected (graphic overlay, vibration, etc.), and once selected, reflects that state on the electronic sign. After the player has been selected, the user controls the player's movements in future plays of the virtual football game by movement of the user's cell phone.
- the user does not control an individual player. Instead, the user acts as coach—identifying which players are to be swapped into or out of the lineup.
- the computer system then simulates play based on the roster of players selected by the user.
- Another game is a virtual Lego game, or puzzle building exercise.
- One or more players can each select Lego or puzzle pieces on the digital screen (like picking players, above), and move them into place by pointing the camera to the desired location and issuing a signal (e.g., using the phone's user interface, such as a tap) to drop the piece in that place.
- the orientation at which the piece is placed can be controlled by the orientation of the user's phone when the “drop” signal is issued.
- each piece is uniquely identified by a watermark, barcode, fingerprint, or other feature recognition arrangement, to facilitate selection and control.
- a method involving an electronic sign, viewed by a first observer comprising: obtaining position information about the first observer (e.g., by reference to image data captured by a camera associated with the sign, or by a camera associated with the observer); defining a first digital watermark signal that takes into account the position information; encoding image data in accordance with said first digital watermark signal; and presenting the encoded image data on the electronic sign.
- a second observer may be similarly treated, and provided a watermark signal that is the same or different than that provided to the first observer.
- Another method involves an electronic sign system viewed by plural observers, each conveying a sensor-equipped device (e.g., a cell phone equipped with a microphone and/or camera).
- This method includes establishing a first data payload for a first observer of the electronic sign; establishing a second data payload for a second observer of the electronic sign; steganographically encoding audio or visual content data with digital watermark data, where the digital watermark data conveys the first and second data payloads; and presenting the encoded content data using the electronic sign system.
- the sensor-equipped device conveyed by the first observer responds to the first data payload encoded in the presented content data but not the second data payload
- the sensor-equipped device conveyed by the second observer responds to the second data payload encoded in the presented content data but not the first data payload
- Another method involves an electronic sign system including a screen viewed by different combinations of observers at different times.
- This method includes detecting a first person observing the screen; encoding content presented by the electronic sign system with a first watermark signal corresponding to the first observer; while the first person is still observing the screen, detecting a second person newly observing the screen; encoding the content presented by the electronic sign system with a first watermark signal corresponding to the first observer, and also a second watermark signal corresponding to the second observer; when one of said persons is detected as no longer observing the sign, encoding the content presented on the electronic sign system with the watermark signal corresponding to a remaining observer, but not with the watermark signal corresponding to the person who is no longer observing the sign.
- different combinations of watermark signals are encoded in content presented on the electronic sign system, in accordance with different combinations of persons observing the screen at different times.
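The bookkeeping described, encoding different combinations of watermark signals as observers come and go, can be sketched as a small state tracker (the class and method names are illustrative; watermark ids stand in for actual signal definitions):

```python
class SignEncoder:
    """Track the set of persons currently observing the screen, and report
    which watermark signals the displayed content should carry."""
    def __init__(self):
        self.active = {}            # person_id -> watermark_id

    def person_detected(self, person_id, watermark_id):
        self.active[person_id] = watermark_id

    def person_departed(self, person_id):
        self.active.pop(person_id, None)

    def watermarks_to_encode(self):
        """The combination of signals to encode in the presented content."""
        return sorted(self.active.values())
```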
- Another method includes using a handheld device to capture image data from a display.
- a parameter of a digital watermark signal steganographically encoded in the captured image data is then determined.
- This parameter is other than payload data encoded by the watermark signal and may comprise, e.g., a geometrical parameter or an error metric.
- a decision is made as to how the device should respond to the display.
- Yet another method involves an electronic sign, viewed by a first observer, and includes: obtaining first contextual information relating to the first observer; defining a first digital watermark signal that takes into account the first contextual information; steganographically encoding first image data in accordance with the first digital watermark signal; and presenting the encoded image data on the electronic sign.
- the method may be extended to similarly treat a second observer, but with a second, different digital watermark signal.
- the same first image data is presented to both observers, but is steganographically encoded with different watermark signals in accordance with different contextual information.
- an electronic sign presents content that is viewed by plural observers.
- This method includes: using a first camera-equipped device conveyed by a first observer, viewing the presented content and capturing first image data corresponding thereto; determining first identifying data by reference to the captured first image data; using a second camera-equipped device conveyed by a second observer, viewing the same presented content and capturing second image data corresponding thereto, the second image data differing from the first due to different vantage points of the first and second observers; determining second identifying data by reference to the captured second image data; by reference to the first identifying data, together with information specific to the first device or first observer, providing a first response to the first device; and by reference to the second identifying data, together with information specific to the second device or second observer, providing a second, different, response to the second device.
- the first and second devices provide different responses to viewing of the same content presented on the electronic sign.
- the second identifying data can be the same as the first identifying data, notwithstanding that the first and second image data differ due to the observers' different vantage points.
- Yet another method includes capturing image data corresponding to an electronic sign using a camera-equipped device conveyed by the observer; determining which of plural electronic signs is being observed by a first observer, by reference to the captured image data; and exchanging data between the device and the electronic sign based, at least in part, on said determination.
- data can be transmitted from the device, such as data dependent at least in part on the camera, or motion data.
- the motion data can be generated by use of one or more accelerometers in the device, or can be generated by tracking one or more visible features across several frames of image data captured by the camera.
- Another method concerns providing demographically-targeted responses to observers of an electronic sign, based on viewing location.
- This method includes: obtaining first demographic information relating to a first observer, and second demographic information relating to a second observer; determining first response data associated with the first demographic information, and second response data associated with the second demographic information; obtaining first location data relating to the first observer, and second location data relating to the second observer; receiving image data from an observer's device; processing the received image data to estimate a location from which it was captured; and if the estimated location is the first location, returning the first response data to said device. (If the estimated location is the second location, second response data can be returned to the device.)
- a further method includes establishing an association between a camera-equipped device conveyed by an observer, and an electronic sign system; receiving data from the device, wherein the received data depends—at least in part—on image data captured by the camera; and controlling an operation of the electronic sign system, at least in part, based on the received data.
- This method can further include presenting depictions of plural game items on the electronic sign; and receiving data from the device, indicating that the observer has viewed using the camera device—and selected—a particular one of said game item depictions presented on the screen.
- a depiction of game play can be presented on the electronic sign, where such play reflects the observer's selection of the particular game item.
- the depicted game items can comprise puzzle pieces
- the method can include receiving signals from the device indicating a position, and orientation, at which a puzzle piece is to be deposited, wherein said signals depend, at least in part, on image data captured by the camera.
- a second observer can also participate, e.g., by establishing a logical association between a camera-equipped second device conveyed by the second observer, and the electronic sign; receiving data from the second device, wherein said received data depends—at least in part—on image data captured by the second device, said received data indicating that the second observer has viewed using the camera of the second device—and selected—a particular different one of said depicted puzzle pieces; and receiving signals from the second device indicating a position, and orientation, at which the different one of said depicted puzzle pieces is to be deposited, wherein said signals depend, at least in part, on image data captured by the camera of the second device.
- Selection of particular game items can proceed by use of feature recognition, digital watermark-based identification, barcode-based identification, fingerprint-based identification, etc.
- an electronic sign presents content that is viewed by plural observers.
- This method includes: by use of a first camera-equipped device conveyed by a first observer, viewing the presented content and capturing first image data corresponding thereto; processing the first image data to produce first identifying data; by use of a second camera-equipped device conveyed by a second observer, viewing the same presented content and capturing second image data corresponding thereto, the second image data differing from the first due to different vantage points of the first and second observers; processing the second image data to produce second identifying data; using a sensor associated with the electronic sign, capturing third image data depicting the first and second observers; processing the third image data to estimate demographic data associated with the first and second observers; by reference to the estimated demographic data, determining first response data for the first observer, and second, different, response data for the second observer; also processing the third image data to generate first location information corresponding to the first observer, and second location information corresponding to the second observer; receiving first or second identifying data; by reference to the generated location
- Yet another method includes, by use of a first sensor-equipped device conveyed by a user, capturing content data from an electronic sign system; by reference to a time-base, determining which of plural temporal portions of digital watermark data encoded in the captured content data corresponds, contextually, to the user; and taking an action based on a determined temporal portion of the digital watermark data.
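One way to realize the time-base determination is to divide the encoded watermark data into fixed-length temporal slots and assign each user a slot index; a minimal sketch under that assumption (the slot-assignment scheme is hypothetical):

```python
def is_my_slot(t_seconds, my_slot, num_slots, slot_len=1.0):
    """True when the current temporal portion of the watermark data, per a
    shared time-base, is the slot assigned to this user's device."""
    return int(t_seconds / slot_len) % num_slots == my_slot
```

A device assigned slot 3 of 4 would act on the payload carried during seconds 3, 7, 11, and so on, disregarding the other temporal portions.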
- Still another method includes receiving input image data having an undistorted aspect; encoding the input image data in accordance with a steganographic digital watermark pattern; and presenting the encoded image data on a display screen; wherein the steganographic digital watermark pattern has a distorted aspect relative to the input image data.
- the digital watermark pattern may be distorted in accordance with a position of an observer.
- the sign being viewed by the observer is identified by reference to location information about the observer and the sign. In others, identification is made by reference to image data captured by the observer (e.g., using robust local image descriptors, fingerprint, or watermark data).
- the scale of a watermark signal may be tailored in accordance with a viewing distance; and/or the projection of a watermark signal may be tailored in accordance with a viewing angle (e.g., the watermark signal may be pre-distorted in accordance with viewer location).
- a watermark's payload may be established in accordance with demographic information about the observer (e.g., obtained from the observer, or estimated from observation of the observer).
- the encoding of watermark data may be pre-distorted in accordance with a viewing geometry associated with the observer.
- plural data payloads may be decoded in one of said sensor-equipped devices, but only one of the decoded payloads is selected for response (e.g., because it corresponds to profile data associated with the device or its user, stored in the sensor-equipped device; such profile information may indicate gender, age, and/or home zip code data).
- Different payloads may be multiplexed, e.g., in time or frequency.
- Yet another method includes capturing imagery using a camera associated with a first system; detecting features in the captured imagery; and identifying, to a second system, augmented reality graphical data associated with the detected features, wherein the second system is different than the first.
- the first system may comprise an electronic sign system, and the second system may comprise a user's cell phone.
- the method can additionally include presenting augmented reality graphical data on the second system, wherein the presented data is tailored in accordance with one or more demographic attributes of a user of the second system.
- This technology can also be implemented using face-worn apparatus, such as augmented reality (AR) glasses.
- AR glasses include display technology by which computer information can be viewed by the user—either overlaid on the scene in front of the user, or blocking that scene.
- Virtual reality goggles are an example of such apparatus.
- Exemplary technology is detailed in patent documents U.S. Pat. No. 7,397,607 and 20050195128.
- Commercial offerings include the Vuzix iWear VR920, the Naturalpoint Trackir 5, and the ezVision X4 Video Glasses by ezGear.
- An upcoming alternative is AR contact lenses.
- Such technology is detailed, e.g., in patent document 20090189830 and in Parviz, Augmented Reality in a Contact Lens, IEEE Spectrum, September, 2009.
- Some or all such devices may communicate, e.g., wirelessly, with other computing devices (carried by the user, electronic signs, or others), and they can include self-contained processing capability. Likewise, they may incorporate other features known from existing smart phones and patent documents, including electronic compass, accelerometer, camera(s), projector(s), GPS, LIDAR laser range finding, etc.
- each includes one or more processors (e.g., of an Intel, AMD or ARM variety), one or more memories (e.g., RAM), storage (e.g., a disk or flash memory), a user interface (which may include, e.g., a keypad, a TFT LCD or OLED display screen, touch or other gesture sensors, a camera or other optical sensor, a compass sensor, a 3D magnetometer, a 3-axis accelerometer, a microphone, etc., together with software instructions for providing a graphical user interface), interconnections between these elements (e.g., buses), and an interface for communicating with other devices (which may be wireless, such as GSM, CDMA, W-CDMA, CDMA2000, TDMA, EV-DO, HSDPA, WiFi, WiMax, mesh networks, Zigbee and other 802.15 arrangements, or Bluetooth, and/or wired, such as through an Ethernet local area network, etc.).
- processors including general purpose processor instructions for a variety of programmable processors, including microprocessors, graphics processing units (GPUs, such as the nVidia Tegra APX 2600), digital signal processors (e.g., the Texas Instruments TMS320 series devices), etc. These instructions may be implemented as software, firmware, etc. These instructions can also be implemented in various forms of processor circuitry, including programmable logic devices, FPGAs (e.g., Xilinx Virtex series devices), FPOAs (e.g., PicoChip brand devices), and application specific circuits—including digital, analog and mixed analog/digital circuitry. Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Transformation of content signal data may also be distributed among different processor and memory devices.
- each device includes operating system software that provides interfaces to hardware resources and general purpose functions, and also includes application software which can be selectively invoked to perform particular tasks desired by a user.
- Known browser software, communications software, and media processing software can be adapted for many of the uses detailed herein.
- Software and hardware configuration data/instructions are commonly stored as instructions in one or more data structures conveyed by tangible media, such as magnetic or optical discs, memory cards, ROM, etc., which may be accessed across a network.
- Some embodiments may be implemented as embedded systems—a special purpose computer system in which the operating system software and the application software are indistinguishable to the user (e.g., as is commonly the case in basic cell phones).
- the functionality detailed in this specification can be implemented in operating system software, application software and/or as embedded system software.
- data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.
- Operations need not be performed exclusively by specifically-identifiable hardware. Rather, some operations can be referred out to other services (e.g., cloud computing), which attend to their execution by still further, generally anonymous, systems.
- Such distributed systems can be large scale (e.g., involving computing resources around the globe), or local (e.g., as when a portable device identifies one or more nearby mobile or other devices through Bluetooth communication, and involves one or more of them in a task.)
- content signals (e.g., image signals, audio signals, etc.) may take various physical forms. Images and video are forms of electromagnetic waves traveling through physical space and depicting physical objects; audio pressure waves traveling through a physical medium may be captured using an audio transducer (e.g., microphone) and converted to an electronic signal (digital or analog form). While these signals are typically processed in electronic and digital form to implement the components and processes described above, they may also be captured, processed, transferred and stored in other physical forms, including electronic, optical, magnetic and electromagnetic wave forms.
- the content signals are transformed in various ways and for various purposes during processing, producing various data structure representations of the signals and related information.
- the data structure signals in memory are transformed for manipulation during searching, sorting, reading, writing and retrieval.
- the signals are also transformed for capture, transfer, storage, and output via display or audio transducer (e.g., speakers).
- Implementations of the present technology can make use of user interfaces employing touchscreen technology. Such user interfaces (as well as other aspects of the Apple iPhone) are detailed in published patent application 20080174570.
- Touchscreen interfaces are a form of gesture interface.
- Another form of gesture interface that can be used in embodiments of the present technology operates by sensing movement of a smart phone—by tracking movement of features within captured imagery. Further information on such gestural interfaces is detailed in Digimarc's U.S. Pat. No. 6,947,571. Gestural techniques can be employed whenever user input is to be provided to the system.
- the detailed functionality must be activated by user instruction (e.g., by launching an app).
- the cell phone device may be configured to run in a media-foraging mode—always processing ambient audio and imagery, to discern stimulus relevant to the user and respond accordingly.
- Sensor information may be referred to the cloud for analysis. In some arrangements this is done in lieu of local device processing (or after certain local device processing has been done). Sometimes, however, such data can be passed to the cloud and processed both there and in the local device simultaneously.
- the cost of cloud processing is usually small, so the primary cost may be one of bandwidth. If bandwidth is available, there may be little reason not to send data to the cloud, even if it is also processed locally. In some cases the local device may return results faster; in others the cloud may win the race. By using both, simultaneously, the user is assured of the speediest possible results.
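The race between local and cloud processing described above can be sketched with a small concurrency example. The function names and delays are illustrative assumptions (the cloud path here just simulates a network round-trip); the point is that submitting both and taking the first completed result gives the speediest response:

```python
# Sketch: race on-device processing against a (simulated) cloud service and
# use whichever result arrives first. All names and timings are illustrative.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def process_locally(data):
    time.sleep(0.05)          # simulate on-device watermark/feature decoding
    return ("local", data.upper())

def process_in_cloud(data):
    time.sleep(0.20)          # simulate network round-trip plus server work
    return ("cloud", data.upper())

def fastest_result(data):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(process_locally, data),
                   pool.submit(process_in_cloud, data)}
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

source, result = fastest_result("sensor frame")
print(source, result)   # with these delays, the local path wins
```

In practice the winner depends on bandwidth and device capability, which is exactly why running both paths simultaneously is attractive when bandwidth is cheap.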
- advertising may be presented on the electronic signage. Measurements noting the length of viewer engagement with different signs, and number of commercial impressions, can be logged, and corresponding census-based reports can be issued to advertisers by audience survey companies. This information can be compiled by software in the phone, or by software associated with the sign. Knowing demographic information about the viewer allows targeted advertising to be presented. If a communication session is established, follow-up information can be sent using the same information channel. Advertising may also be presented on the user's cell phone, and similarly measured.
- Content fingerprinting seeks to distill content (e.g., a graphic, a video, a song, etc.) down to an essentially unique identifier, or set of identifiers. Many fingerprinting techniques are known. Examples of image/video fingerprinting are detailed in patent publications U.S. Pat. Nos. 7,020,304 (Digimarc), 7,486,827 (Seiko-Epson), 5,893,095 (Virage), 20070253594 (Vobile), 20080317278 (Thomson), and 20020044659 (NEC).
- Examples of audio fingerprinting are detailed in patent publications 20070250716, 20070174059 and 20080300011 (Digimarc), 20080276265, 20070274537 and 20050232411 (Nielsen), 20070124756 (Google), U.S. Pat. Nos. 6,834,308 (Audible Magic), 7,516,074 (Auditude), and 6,990,453 and 7,359,889 (both Shazam).
- Scale Invariant Feature Transform may be regarded as a form of image fingerprinting. Unlike some others, it can identify visual information despite affine and perspective transformation. SIFT is further detailed in certain of the earlier cited applications (e.g., US20100048242) as well as in patent documents U.S. Pat. No. 6,711,293 and WO07/130,688.
- While SIFT is perhaps the best-known technique for generating robust local scene descriptors, there are others, which may be more or less suitable—depending on the application.
- GLOH (c.f., Mikolajczyk et al, “Performance Evaluation of Local Descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 27, No. 10, pp. 1615-1630, 2005)
- SURF (c.f., Bay et al, “SURF: Speeded Up Robust Features,” Eur. Conf. on Computer Vision (1), pp. 404-417, 2006)
- Chen et al, “Efficient Extraction of Robust Image Features on Mobile Devices,” Proc. of the 6th IEEE and ACM Int.
- position data about the observer can be determined by means such as GPS, or by the technology detailed in published patent application WO08/073,347.
- the same technology can be used to identify the location of electronic signs. From such information, the fact that a particular observer is viewing a particular sign can be inferred.
- the system of WO08/073,347 can also be used to generate highly accurate time information, e.g., on which time-based systems can rely.
- imagery captured by the cell phone is sent to the sign system
- metadata accompanying the imagery commonly identifies the make and model of the cell phone.
- This information can be stored by the sign system and used for various purposes. One is simply to demographically classify the user (e.g., a user with a Blackberry is more likely a business person, whereas a person with a Motorola Rival is more likely a teen). Another is to determine information about the phone's camera system (e.g., aperture, resolution, etc.). Watermark or other information presented on the electronic sign can then be tailored in accordance with the camera particulars (e.g., the size of the watermarking tile)—a type of “informed embedding.”
- the sign may nonetheless estimate something about the user's cell phone camera, by reference to the user's estimated age, gender and/or ethnicity.
- Stored reference data can indicate the popularity of different phone (camera) models with different demographic groups.
- the peak demographic for the Apple iPhone is reported to be the 35-54 year old age group, owning about 36% of these devices, whereas 13-17 year olds only own about 5% of these devices. Men are much more likely than women to own Android phones. Update cycles for phones also vary with demographics.
- a 15 year old boy is likely to be carrying a cell phone that is less than a year old, whereas a 50 year old woman is more likely to be carrying a cell phone that is at least two years old. Older phones have lower resolution cameras. Etc. Thus, by estimating the viewer's age and gender, an informed guess may be made about the cell phone camera that the user may be carrying. Again, the display on the sign can be tailored accordingly (e.g., by setting watermarking parameters in accordance with estimated camera resolution).
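The "informed embedding" idea above can be sketched as a simple lookup: estimate the viewer's likely camera resolution from demographics, then size the watermark tile accordingly. The demographic-to-resolution table and the scaling rule are illustrative assumptions, not figures from the specification:

```python
# Hypothetical sketch of informed embedding: pick a watermark tile size from
# an estimated camera resolution, itself looked up by estimated viewer
# demographics. All numbers below are illustrative assumptions.
LIKELY_CAMERA_MEGAPIXELS = {
    ("male", "13-17"): 8.0,    # newer phones, higher-resolution cameras
    ("female", "35-54"): 3.2,  # older phones on average
}

def tile_size_for(gender, age_band, default_mp=5.0):
    mp = LIKELY_CAMERA_MEGAPIXELS.get((gender, age_band), default_mp)
    # A sharper camera can resolve smaller tiles; scale tile size inversely,
    # with a floor so tiles never shrink below a detectable minimum.
    return max(32, int(256 / mp))

print(tile_size_for("male", "13-17"))    # small tiles for a likely-new camera
print(tile_size_for("female", "35-54"))  # larger tiles for a likely-older one
```

A real embedder would also weigh viewing distance and screen geometry, but the demographic lookup captures the estimation step the text describes.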
- AR (augmented reality) is familiar, e.g., from iPhone/Android applications such as UrbanSpoon, Layar, Bionic Eye, Wikitude, Tonchidot, and Google Goggles, the details of which are familiar to the artisan.
- Exemplary AR systems are detailed in patent documents US20100045869, US20090322671 and US20090244097. Briefly, such arrangements sense visual features in captured imagery, and present additional information on a viewing screen—commonly as an overlay on the originally-captured imagery. In the present context, the information displayed on electronic signage can be used as the visual features.
- the overlay can be presented on the user's phone, and be customized to the user, e.g., by context (including viewing location and/or demographics).
- Information can be exchanged between the phone and the sign system via watermark data encoded in imagery displayed on the electronic sign.
- Other arrangements can also be employed, such as IP, Bluetooth, etc., once a logical association has been established between a particular cell phone and a particular sign/content.
- the user's cell phone 16, or the camera 12 of the electronic sign system, captures imagery from which features are sensed. Associated displays/information may then be presented on the display screen 10 of the electronic sign system. Such information may be presented on the sign as an overlay on the captured imagery containing the sensed features, or separately.
- While certain operations are described as taking place in computer 14, cell phone 16, or remote server(s) 18, etc., the location of the various operations is flexible. Operations can take place on any appropriate computer device (or distributed among plural devices), and data relayed as necessary.
- the technology is not limited to flat displays but is also applicable with curved displays.
- Face-finding algorithms are well known (e.g., as employed in many popular consumer cameras) and can be employed to identify the faces of observers, and locate their eyes.
- the distance between an observer's eyes (e.g., in pixels, in imagery captured by camera 12) can be used in the various embodiments to estimate the observer's distance from the camera (and thus from the display screen).
- a sample watermark payload protocol is shown in FIG. 15 . It includes 8 bits to identify the protocol (so the cell phone watermark decoder system knows how to interpret the rest of the payload), and 4 bits to indicate the demographic audience to which it is targeted (e.g., men between the ages of 30 and 55).
- the “immediate response data” that follows is literal auxiliary data that can be used by the cell phone without reference to a remote database. For example, it conveys text or information that the cell phone—or another system—can use immediately, such as indexing a small store of payoff data loaded into a cell phone data store, to present different coupons for different merchants.
- the remaining 20 bits of data serve to index a remote database where corresponding information (e.g., re coupons or other payoffs) is stored.
- Other data fields such as one indicating an age-appropriateness rating, can additionally, or alternatively, be employed.
- the protocol may be extensible, e.g., by a flag bit indicating that a following payload conveys additional data.
- the payload of FIG. 15 is simply illustrative. In any particular implementation, a different payload will likely be used—depending on the particular application requirements.
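The field layout of the FIG. 15 payload can be sketched with plain bit packing. The text fixes the 8-bit protocol ID, the 4-bit demographic code, and the 20-bit database index; treating the "immediate response data" as the remaining 32 bits of a 64-bit payload is an assumption made here for illustration:

```python
# Sketch of packing/unpacking the illustrative 64-bit payload of FIG. 15:
# 8-bit protocol ID | 4-bit demographic code | 32-bit immediate response
# data (assumed width) | 20-bit remote-database index.
def pack_payload(protocol, demographic, immediate, db_index):
    assert protocol < (1 << 8) and demographic < (1 << 4)
    assert immediate < (1 << 32) and db_index < (1 << 20)
    return (protocol << 56) | (demographic << 52) | (immediate << 20) | db_index

def unpack_payload(payload):
    return {
        "protocol":    (payload >> 56) & 0xFF,
        "demographic": (payload >> 52) & 0xF,
        "immediate":   (payload >> 20) & 0xFFFFFFFF,
        "db_index":    payload & 0xFFFFF,
    }

p = pack_payload(protocol=1, demographic=5, immediate=0xCAFE, db_index=777)
print(unpack_payload(p))
```

As the text notes, any particular implementation would likely use a different layout; the point is that the decoder reads the protocol field first, then knows how to interpret the rest.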
- Camera systems and associated software from Quividi and/or TruMedia can be used for camera 12, to identify observers and classify them demographically.
Abstract
A user with a cell phone interacts, in a personalized session, with an electronic sign system. In some embodiments, the user's location relative to the sign is discerned from camera imagery—either imagery captured by the cell phone (i.e., of the sign), or captured by the sign system (i.e., of the user). Demographic information about the user can be estimated from imagery captured by the sign system, or can be accessed from stored profile data associated with the user. The sign system can transmit payoffs (e.g., digital coupons or other response data) to viewers—customized per user demographics. In some arrangements, the payoff data is represented by digital watermark data encoded in the signage content. The encoding can take into account the user's location relative to the sign—allowing geometrical targeting of different payoffs to differently-located viewers. Other embodiments allow a user to engage an electronic sign system for interactive game play, using the cell phone as a controller.
Description
- This application is a division of application Ser. No. 12/716,908, filed Mar. 3, 2010, which claims priority to provisional application 61/157,153, filed Mar. 3, 2009. The disclosures of these applications are incorporated-by-reference herein.
- The present technology relates to electronic displays, and more particularly relates to arrangements employing portable devices (e.g., cell phones) to interact with such displays.
- The present technology relates to that detailed in the assignee's copending application Ser. Nos. 12/271,772, filed Nov. 14, 2008 (published as US20100119208); 12/484,115, filed Jun. 12, 2009 (published as US20100048242); 12/490,980, filed Jun. 24, 2009 (published as US20100205628); PCT/US09/54358, filed Aug. 19, 2009 (published as WO2010022185); PCT/US2010/021836, filed Jan. 22, 2010 (published as WO2010093510); and 12/712,176, filed Feb. 24, 2010 (published as US20110098056).
- The principles and teachings from the just-noted work are intended to be applied in the context of the presently-detailed arrangements, and vice versa.
- Electronic display screens are becoming prevalent in public places, and are widely used for advertising. Some display systems try to heighten viewer engagement by interactivity of various sorts.
- Frederik Pohl's 1952 science fiction novel The Space Merchants foreshadowed interactive electronic advertising. A character complains that every time he turned to look out the window of an airplane, “wham: a . . . . Taunton ad for some crummy product opaques the window and one of their nagging, stupid jingles drills into your ear.”
- Fifty years later, in the movie Minority Report, Tom Cruise tries to unobtrusively walk through a mall, only to be repeatedly identified and hailed by name, by electronic billboards.
- Published patent application WO 2007/120686 by Quividi discloses electronic billboards equipped with camera systems that sense viewers and estimate their ages and genders. Ads can be targeted in accordance with the sensed data, and audience measurement information can be compiled.
- TruMedia markets related automated audience measurement technology, used in connection with electronic billboards and store displays. A sign can present an ad for perfume if it detects a woman, and an ad for menswear if it detects a man.
- Mobile Trak, Inc. offers a SmarTrak module for roadside signage, which monitors stray local oscillator emissions from passing cars, and thereby discerns the radio stations to which they are tuned. Again, this information can be used for demographic profiling and ad targeting.
- BluScreen is an auction-based framework for presenting advertising on electronic signage. The system senses Bluetooth transmissions from nearby viewers who allow profile data from their cell phones to be publicly accessed. BluScreen passes this profile data to advertisers, who then bid for the opportunity to present ads to the identified viewers.
- The French institute INRIA has developed an opt-in system in which an electronic public display board senses mobile phone numbers of passersby (by Bluetooth), and sends them brief messages or content (e.g., ringtones, videos, discount vouchers). The content can be customized in accordance with user profile information shared from the mobile phones. See, e.g., US patent publication 20090047899.
- BlueFire offers several interactive signage technologies, using SMS messaging or Bluetooth. One invites observers to vote in a poll, e.g., who will win this weekend's game? Once the observer is thus-engaged, an advertiser can respond electronically with coupons, content, etc., sent to the observer's cell phone.
- A marketing campaign by Ogilvy fosters user engagement with electronic signage through use of rewards. A sign invites viewers to enter a contest by sending an SMS message to a specified address. The system responds with a question, which—if the viewer responds with the correct answer—causes the sign to present a congratulatory fireworks display, and enters the viewer in a drawing for a car.
- Certain embodiments of the present technology employ digital watermarking. Digital watermarking (a form of steganography) is the science of encoding physical and electronic objects with plural-bit digital data, in such a manner that the data is essentially hidden from human perception, yet can be recovered by computer analysis. In electronic objects (e.g., digital audio or imagery—including video), the data may be encoded as slight variations in sample values (e.g., luminance, chrominance, audio amplitude). Or, if the object is represented in a so-called orthogonal domain (also termed “non-perceptual,” e.g., MPEG, DCT, wavelet, etc.), the data may be encoded as slight variations in quantization or coefficient values. The present assignee's U.S. Pat. Nos. 6,122,403, 6,590,996, 6,912,295 and 7,027,614, and application Ser. No. 12/337,029 (filed Dec. 17, 2008, published as US20100150434) are illustrative of certain watermarking technologies.
- Watermarking can be used to imperceptibly tag content with persistent digital identifiers, and finds myriad uses. Some are in the realm of device control—e.g., conveying data signaling how a receiving device should handle the content with which the watermark is conveyed. Others encode data associating content with a store of related data. For example, a photograph published on the web may encode a watermark payload identifying a particular record in an online database. That database record, in turn, may contain a link to the photographer's web site. U.S. Pat. No. 6,947,571 details a number of such “connected-content” applications and techniques.
- Digital watermarking systems typically have two primary components: an encoder that embeds the watermark in a host media signal, and a decoder that detects and reads the embedded watermark from the encoded signal. The encoder embeds a watermark by subtly altering the host media signal. The payload of the watermark can be any number of bits; 32 or 128 are popular payload sizes, although greater or lesser values can be used (much greater in the case of video—if plural frames are used). The reading component analyzes a suspect signal to detect whether a watermark is present. (The suspect signal may be image data captured, e.g., by a cell phone camera.) If a watermark signal is detected, the reader typically proceeds to extract the encoded information from the watermark.
- One popular form of watermarking redundantly embeds the payload data across host imagery, in tiled fashion. Each tile conveys the entire payload, permitting a reader to extract the payload even if only an excerpt of the encoded image is captured.
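The tiled redundancy described above can be illustrated with a toy embedder. Real watermarks spread the payload spectrally and imperceptibly; plain least-significant-bit substitution is used here only as a short, visible stand-in, and the tile-aligned reader is an assumption (real readers search for tile alignment):

```python
import numpy as np

# Toy sketch of tiled redundant embedding: the same 64-bit payload is
# written into every 8x8 tile of the host image (as LSB substitution), so
# the payload can be recovered from any excerpt containing one whole tile.
TILE = 8  # an 8x8 tile carries a 64-bit payload

def embed(host, payload_bits):
    out = host.copy()
    bits = np.array(payload_bits, dtype=np.uint8).reshape(TILE, TILE)
    h, w = host.shape
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            block = out[y:y+TILE, x:x+TILE]
            out[y:y+TILE, x:x+TILE] = (block & 0xFE) | bits  # set LSBs
    return out

def read(excerpt):
    # assumes the excerpt is tile-aligned; real readers locate the tile grid
    return (excerpt[:TILE, :TILE] & 1).flatten().tolist()

payload = list(np.random.randint(0, 2, TILE * TILE))
host = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed(host, payload)
crop = marked[24:24+TILE, 16:16+TILE]          # an aligned excerpt
print(read(crop) == list(map(int, payload)))   # True: one tile suffices
```

This is the property the specification relies on: a cell phone camera that captures only part of the sign can still recover the full payload.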
- In accordance with one aspect of the present technology, different digital watermark messages are “narrowcast” to each of plural different observers of an electronic sign. In one arrangement, the location of each observer relative to the sign is determined. Watermarks are then geometrically designed for the different observers, in accordance with their respective viewpoints. For example, the watermark tiles can be pre-distorted to compensate for distortion introduced by each observer's viewing perspective. The payloads of the various watermarks can be tailored in accordance with sensed demographics about the respective observers (e.g., age, gender, ethnicity). Imagery encoded with such thus-arranged watermark signals is then presented on the sign.
- Due to the different geometries of the different watermarks, different observers detect different watermark payloads. Thus, a teen boy in the right-foreground of the sign's viewing area may receive one payload, and an adult man in the left-background of the sign's viewing area may receive a different payload. The former may be an electronic coupon entitling the teen to a dollar off a Vanilla Frappuccino drink at the Starbucks down the mall; the latter may be an electronic coupon for a free New York Times at the same store. As different people enter and leave the viewing area, different watermarks can be respectively added to and removed from the displayed sign content.
- The locations of the respective observers can be detected straightforwardly by a camera associated with the electronic sign. In other embodiments, determination of location can proceed by reference to data provided from an observer's cell phone, e.g., the shape of the sign as captured by the cell phone camera, or location data provided by a GPS or other position-determining system associated with the cell phone.
- Current watermark detectors excel at recovering watermarks even from severely distorted content. Accordingly, the detector in a viewer's cell phone may detect a watermark not tailored for that viewer's position. The preferred watermark detector outputs one or more parameters characterizing attributes of the detected watermark (e.g., rotation, scale, bit error rate, etc.). The detection software may be arranged to provide different responses, depending on these parameters. For example, if the scale is outside a desired range, and the bit error rate is higher than normal, the cell phone can deduce that the watermark was tailored for a different observer, and can provide a default response rather than the particular response indicated by the watermark's payload. E.g., instead of a coupon for a dollar off a Vanilla Frappuccino drink, the default response may be a coupon for fifty cents off any Starbucks purchase.
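The default-versus-targeted decision described above reduces to thresholding the detector's output parameters. The thresholds and payoff strings below are illustrative assumptions, not values from the specification:

```python
# Sketch of the response-selection logic: if the detected watermark's scale
# and bit error rate suggest it was geometrically tailored for a different
# viewing position, return a default payoff instead of the targeted one.
def choose_response(scale, bit_error_rate,
                    targeted="$1 off a Vanilla Frappuccino",
                    default="50 cents off any Starbucks purchase"):
    scale_ok = 0.8 <= scale <= 1.25     # within the expected viewing range
    errors_ok = bit_error_rate <= 0.05  # decoded cleanly
    return targeted if (scale_ok and errors_ok) else default

print(choose_response(scale=1.02, bit_error_rate=0.01))  # targeted payoff
print(choose_response(scale=1.60, bit_error_rate=0.12))  # default payoff
```

Other detector outputs (rotation, translation, correlation strength) could feed the same decision in an actual implementation.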
- In other embodiments, different responses are provided to different viewers without geometrically tailoring different watermarks. Instead, all viewers detect the same watermark data. However, due to different profile data associated with different viewers, the viewer devices respond differently.
- For example, software on each user device may send data from the detected watermark payload to a remote server, together with data indicating the age and/or gender of the device owner. The remote server can return different responses, accordingly. To the teen boy, the server may issue a coupon for free popcorn at the nearby movie theater. To the adult man, the server may issue a coupon for half-off a companion's theater admission.
- In a related example, different watermarks are successively presented in different frames of a video presentation on the display screen. Each watermark payload includes a few or several bits indicating the audience demographic or context to which it is targeted (e.g., by gender, age, ethnicity, home zip code, education, political or other orientation, social network membership, etc.). User devices examine the different watermark signals, but take action only when a watermark corresponding to demographic data associated with a user of that device is detected (e.g., stored in a local or remote user profile dataset).
- In still a further arrangement, different frames of watermark data are tailored for different demographic groups of viewers in accordance with a time-multiplexed standard—synchronized to a reference clock. The first frame in a cycle of, e.g., 30 frames, may be targeted to teen boys. The second may be targeted to teen girls, etc. Each receiving cell phone knows the demographic of the owner and, by consulting the cell phone's time base, can identify the frame of watermark intended for such a person. The cycle may repeat every second, or other interval.
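The frame-slot selection in this time-multiplexed scheme can be sketched directly. The slot assignments and 30 fps rate are illustrative assumptions; the text specifies only a 30-frame cycle synchronized to a reference clock:

```python
# Sketch of time-multiplexed targeting: each demographic group owns one
# frame slot in a 30-frame cycle. A phone consults its time base to decide
# which displayed frame carries the watermark intended for its owner.
FRAMES_PER_CYCLE = 30
SLOT_FOR = {"teen_boy": 0, "teen_girl": 1, "adult_man": 2}  # assumed slots

def is_my_frame(demographic, reference_time, fps=30):
    frame_index = int(reference_time * fps) % FRAMES_PER_CYCLE
    return frame_index == SLOT_FOR[demographic]

# A teen girl's phone acts only on slot 1 of each one-second cycle:
print(is_my_frame("teen_girl", reference_time=1.05))  # True (frame 31 -> slot 1)
```

Accurate shared time (e.g., from the WO08/073,347 technology mentioned earlier) is what lets the phone and sign agree on frame indices.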
- In another arrangement, the multiplexing of different watermarks across the visual screen channel can be accomplished by using different image frequency bands to convey different watermark payloads to different viewers.
- Some embodiments of the present technology make no use of digital watermarks. Yet differently-located viewers can nonetheless obtain different responses to electronic signage.
- In one such arrangement, the locations of observers are determined, together with their respective demographics, as above. The sign system then determines what responses are appropriate to the differently-located viewers, and stores corresponding data in an online repository (database server). For the teen boy in the right foreground of an electronic sign for the Gap store, the system may store a coupon for a free trial size bottle of cologne. For the middle aged woman in the center background, the stored response may be a five dollar Gap gift certificate.
- When an observer's cell phone captures an image of the sign, data related to the captured imagery is transmitted to a computer associated with the sign. Analysis software, e.g., at that computer, determines—from the size of the depicted sign, and the length ratio between two of its sides (or other geometrical analysis), the viewer's position. With this information the computer retrieves corresponding response information stored by the sign, and returns it back to the observer. The teen gets the cologne, the woman gets the gift certificate.
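The geometrical analysis sketched above (side-length ratio for bearing, apparent size for distance) can be illustrated with a simple pinhole model. The ratio thresholds, focal length, and sign height are illustrative assumptions:

```python
# Sketch of inferring a viewer's position from the sign's appearance in the
# phone's captured image: the ratio of the two vertical edges indicates
# which side the viewer is nearer, and apparent size indicates distance.
def viewer_zone(left_edge_px, right_edge_px):
    ratio = left_edge_px / right_edge_px
    if ratio > 1.1:
        return "left"    # left edge appears taller: viewer is to the left
    if ratio < 0.9:
        return "right"
    return "center"

def distance_m(sign_height_m, edge_px, focal_px):
    # pinhole model: apparent height (px) = focal_px * true height / distance
    return focal_px * sign_height_m / edge_px

print(viewer_zone(480, 400))                 # left
print(round(distance_m(2.0, 400, 1200), 2))  # 6.0 meters away
```

With zone and distance in hand, the sign system can look up the payoff it stored for that location (the cologne for the right-foreground teen, the gift certificate for the center-background woman).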
- The foregoing and other features and advantages of the present technology will be more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
- FIG. 1 is a diagram showing some of the apparatus employed in an illustrative embodiment.
- FIG. 2 shows a field of view of a camera mounted on top of an electronic sign, including two viewers, and six viewing zones.
- FIG. 3 is a perspective view of two viewers in a viewing zone of an electronic sign.
- FIG. 4 is a diagram showing that the direction to each viewer can be characterized by a horizontal azimuth angle A and a vertical elevation angle B.
- FIG. 5 is a view of an electronic sign with a displayed message.
- FIGS. 6 and 7 are views of the FIG. 5 sign, as seen by the two observers in FIGS. 2 and 3.
- FIG. 8A is a top-down view showing, for four vertical zones A-D of a display screen, how more distant parts of the screen subtend smaller angles for a viewer.
- FIG. 8B shows how the phenomenon of FIG. 8A can be redressed, by pre-distorting information presented on the screen.
- FIG. 9 shows a display pre-distorted in two dimensions, in accordance with position of a viewer.
- FIG. 10 shows how two watermarks, with different pre-distortion, can be presented on the screen.
- FIG. 11 shows how the pre-distortion of presented watermark information can be varied, as the position of an observer varies.
- FIG. 12 shows how the size of a watermark tile can be tailored, by a watermark encoder, to target a desired observer.
- FIGS. 13A and 13B show partial screen views as captured by a cell phone.
- FIG. 14 shows a pattern by which direction and distance to a screen can be determined.
- FIG. 15 is a diagram showing an illustrative 64 bit watermark payload.
- FIG. 1 shows some of the apparatus employed in one implementation of the present technology. An electronic display system portion includes a display screen 10, a camera 12, and a computer 14. The display screen may include a loudspeaker 15, or such a speaker may be separately associated with the system. The computer 14 has connectivity to other devices by one or more arrangements such as internet, Bluetooth, etc. The computer 14 controls the information displayed on the display screen. (A single computer may be responsible for control of many screens—such as in an airport.)
- The display screen 10 is viewed by an observer carrying an imaging device, such as a cell phone (smart phone) 16. It, too, has connectivity to other devices, such as by internet, Bluetooth, cellular (including SMS), etc.
- Also involved in certain embodiments are one or more remote computers 18, with which the just-noted devices can communicate by internet or otherwise.
FIGS. 2 and 3 show two observers, 22 and 24, viewing an electronic sign 10. In this example a viewing area 26 in front of the sign is arbitrarily divided into six zones: left, center and right (as viewed from the sign)—each with foreground and background positions. Observer 22 is in the left foreground, and observer 24 is in the center background. -
Camera 12 captures video of the viewing area 26, e.g., from atop the sign 10. From this captured image data, the computer 14 determines the position of each observer. The position may be determined in a gross sense, e.g., by classifying each viewer in one of the six viewing zones of FIG. 2. Or more precise location data can be generated, such as by identifying the azimuth (A), elevation (B) and length of a vector 32 from the middle of the screen to the mid-point of the observer's eyes, as shown in FIG. 4. (Distance to the viewer can be estimated by reference to the distance—in pixels—between the user's eye pupils, which is typically about 2.5 inches.) - (The
camera system 12 may be modeled, or measured, to understand the mapping between pixel positions within its field of view, and orientations to viewers. Each pixel corresponds to imagery incident on the lens from a unique direction.) -
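The pupil-separation range estimate can be sketched under a pinhole-camera model. The focal length, the interpupillary figure, and the function name below are illustrative assumptions of this sketch, not values from the disclosure:

```python
import math

# Pinhole-camera range estimate: an object of known physical width W that
# spans p pixels in an image captured with focal length f (in pixel units)
# lies at roughly  distance = W * f / p.  Here the known width is the
# typical adult interpupillary distance (~2.5 inches; an assumed constant).

IPD_INCHES = 2.5  # assumed typical adult interpupillary distance

def estimate_viewer_distance(pupil_separation_px, focal_length_px):
    """Estimate camera-to-face distance, in inches."""
    if pupil_separation_px <= 0:
        raise ValueError("pupil separation must be positive")
    return IPD_INCHES * focal_length_px / pupil_separation_px

# Example: with an assumed 1000-pixel focal length, a 25-pixel pupil
# separation places the viewer about 100 inches from the camera.
d = estimate_viewer_distance(25, 1000)
```

The same relation runs in reverse: knowing the distance, the expected pixel separation can be predicted, which is useful for sanity-checking detections.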
FIG. 5 shows a display that may be presented on the electronic sign 10. FIGS. 6 and 7 show this same sign from the vantage points of the left foreground observer 22, and the center background observer 24, respectively. The size and shape of the display perceived by the different observers depend on their respective positions. This is made clearer by FIG. 8A. -
FIG. 8A shows a top-down view of the screen 10, with an observer 82 positioned in front of the screen's edge. If the screen is regarded as having four equal-width vertical quarter-panels A-D, it will be seen that the nearest panel (D) subtends a 45 degree angle as viewed by the observer in this case. The other quarter-panels C, B and A subtend progressively smaller ranges of the observer's field of view. (The entire screen fills about 76 degrees of the observer's field of view, so the 45 degree apparent width of the nearest quarter-panel is larger than that of the other three quarter-panels combined.) - This phenomenon distorts the imagery presented on the screen, as viewed by the observer. The human eye and brain, of course, have no trouble with this distortion; it is taken for granted—ever-present in nearly everything we see.
- If a watermark is hidden in the imagery, it will be similarly distorted as viewed by the
cell phone 16. In a watermark of the tiled variety, tiles nearest the viewer will appear relatively larger, and tiles further away will appear relatively smaller. Contemporary watermark detectors, such as those disclosed in U.S. Pat. No. 6,590,996, are robust to such distortion. The detector assesses the scale and rotation of each component tile, and then decodes the payload from each. The payloads from all of the decoded tiles are combined to yield output watermark data that is reliable even if data from certain tiles is unreadable. - Notwithstanding this capability, in one implementation of the present technology the watermark pattern hidden in the imagery is pre-distorted in accordance with the location of the observer so as to counteract this perspective distortion.
FIG. 8B illustrates one form of such pre-distortion. If the screen 10 is again regarded as having four vertical panels, they are now of different widths. The furthest panel A′ is much larger than the others. The pre-distortion is arranged so that each panel subtends the same angular field of view to the observer (in this case about 19 degrees). - To a first approximation, this pre-distortion can be viewed as projecting the watermark from
screen 10 onto a virtual screen 10′, relative to which the observer is on the center axis 84. -
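The FIG. 8A/8B geometry can be checked numerically. The sketch below assumes the observer stands at a distance equal to one quarter-panel width, which reproduces the 45-degree and roughly-76-degree figures quoted above, and then solves for panel boundaries that each subtend an equal angle, as in the pre-distorted case:

```python
import math

# FIG. 8A geometry: viewer directly in front of one edge of the screen,
# at a distance equal to one quarter-panel width (an assumption chosen to
# reproduce the 45-degree figure in the text).

PANEL_W = 1.0    # width of each of the four equal panels (arbitrary units)
VIEW_DIST = 1.0  # viewer's distance from the screen plane

def subtended_angle(x_near, x_far, d):
    """Angle (degrees) subtended by the screen strip [x_near, x_far]."""
    return math.degrees(math.atan2(x_far, d) - math.atan2(x_near, d))

# Equal-width panels D, C, B, A (nearest first) subtend shrinking angles:
angles = [subtended_angle(i * PANEL_W, (i + 1) * PANEL_W, VIEW_DIST)
          for i in range(4)]   # roughly [45.0, 18.4, 8.1, 4.4]
total = sum(angles)            # ~76 degrees, matching the text

# FIG. 8B pre-distortion: choose panel boundaries so each panel subtends
# the same angle (total/4, ~19 degrees). Boundary i sits at d*tan(i*step).
step = math.radians(total / 4)
bounds = [VIEW_DIST * math.tan(i * step) for i in range(5)]
widths = [b - a for a, b in zip(bounds, bounds[1:])]  # A' far wider than D'
```

The widening of successive panel widths in `widths` is exactly the pre-distortion shown in FIG. 8B: the remote panel A′ must be several times wider than the near panel D′ to appear equal.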
FIG. 9 shows the result of this watermark pre-distortion, in two dimensions. Each rectangle in FIG. 9 shows the extent of one illustrative watermark tile. Tiles nearest the viewer are relatively smaller; those more remote are relatively larger. - The tile widths shown in
FIG. 9 correspond to widths A′-D′ of FIG. 8B. The tile heights also vary in accordance with vertical position of the observer's perspective (here regarded to be along the vertical mid-line of the screen). Tiles near the top and bottom of the screen are thus taller than tiles along the middle. - When the watermark tiles are pre-distorted in the
FIG. 9 fashion, the watermark detector finds that each tile has substantially the same apparent scale. No longer does a portion of the screen closer to the observer present larger tiles, etc. It is as if the watermark detector is seeing the screen from a point along the central axis projecting from the screen, from a distance. - As shown in
FIG. 10, the computer 14 can vary the distortion of the watermark pattern presented on the display screen, in accordance with changes in the detected position of the observer. So if the observer moves from one side of the screen to another, the pre-distortion of the watermark pattern can follow the observer accordingly. - Note that advertising, or other human-perceptible imagery presented on the
screen 10, is not pre-distorted. That is, the human viewer sees the advertising with the familiar location-dependent perspective distortion effects that we see all the time. The watermark detector, however, sees a substantially undistorted, uniform watermark pattern—regardless of observer (cell phone) location. - The same arrangement can be extended to plural different observers. The electronic sign system can present several different watermark patterns on
screen 10—each targeting a different observer. The different patterns can be interleaved in time, or presented simultaneously. - The use of multiple watermark patterns on the same display screen is conceptually illustrated by
patterns 42 and 44 in FIG. 11. The first watermark pattern 42 (depicted in fine solid lines) is an array of pre-distorted tiles identical to that of FIG. 9. The second pattern 44 (depicted in bold dashed lines) is a different array of tiles, configured for a different observer. In particular, this second pattern is evidently targeted for an observer viewing from the center axis of the display, from a distance (because the tiles are all of uniform size). The intended observer of pattern 44 is also evidently further from the screen than the intended observer of pattern 42 (i.e., the smallest tile of watermark pattern 44 is larger than the smallest tile of watermark pattern 42—indicating a more remote viewing perspective is intended). - In the case of time-sequential interleaving of different watermarks, the
computer 14 encodes different frames of displayed content with different watermark patterns (each determined in accordance with location of an observer). The applied watermark pattern can be changed on a per-frame basis, or can be held static for several frames before changing. Decoders in observing cell phones may decode all the watermarks, but may be programmed to disregard those that apparently target differently-located observers. This can be discerned by noting variation in the apparent scale of the component watermark tiles across the field of view: if the tiles within a frame are differently-scaled, the pattern has evidently been pre-distorted for a different observer. Only if all of the tiles in a frame have substantially uniform scale does the cell phone detector regard the pattern as targeted for that observer, and take action based thereon. - In the case of simultaneous display of plural watermark patterns, the
computer 14 computes the patterns individually (again, each based on targeted observer location), and then combines the patterns for encoding into the displayed content. - In this implementation, decoders in observing cell phones are tuned relatively sharply, so they only respond to watermark tiles that have a certain apparent size. Tile patterns that are larger or smaller are disregarded—treated like part of the host image content: noise to be ignored.
- To illustrate, consider a camera with an image sensor that outputs images of size 1200 by 1600 pixels. The camera's watermark decoder parameters may be tuned so that it responds only to watermark tiles having a nominal size of 200 pixels per side, +/−10 pixels.
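A minimal sketch of this sharply tuned acceptance test follows; the 200-pixel nominal size and 10-pixel tolerance are the example values above, the function name is illustrative, and the per-tile size measurements are assumed to come from a real watermark detector:

```python
# Sharply tuned decoder policy: accept a frame's watermark only when every
# detected tile is close to the nominal size (i.e., the pattern was
# pre-distorted for *this* viewer); anything else is treated as noise.

NOMINAL_TILE_PX = 200  # example nominal tile size from the text
TOLERANCE_PX = 10      # example +/- tolerance from the text

def pattern_targets_this_viewer(tile_sizes_px):
    """True if all detected tile sizes fall within nominal +/- tolerance."""
    return bool(tile_sizes_px) and all(
        abs(size - NOMINAL_TILE_PX) <= TOLERANCE_PX for size in tile_sizes_px)

assert pattern_targets_this_viewer([197, 203, 200, 205])      # uniform: ours
assert not pattern_targets_this_viewer([150, 180, 210, 240])  # other viewer
```

The same predicate serves the time-sequential case: frames whose tiles fail the test are simply skipped.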
- For the sake of simplicity, imagine the electronic display screen has the same aspect ratio as the camera sensor, but is 4.5 feet tall and 6 feet wide. Imagine, further, that the intended viewer is on the sign's center line—far enough away that the sign only fills a fourth of the camera's field of view (i.e., half in height, half in width, or 600×800 camera pixels). In this arrangement, the
computer 14 must size the displayed watermark tiles to be 1.5 feet on a side in order to target the intended observer. That is, for the watermark tiles to be imaged by the camera as squares that are 200 pixels on a side, three of them must span the sign vertically, and four across, as shown in FIG. 12. (For clarity of illustration, the uniform tile grid of FIG. 12, and of pattern 44 in FIG. 11, ignores the pre-distortion that may be applied to counteract the apparent distortion caused by the observer's perspective from the sign's center line, i.e., that tiles at the left and right edges of the sign are further away and so should be enlarged, etc.) - It will be recognized that the same narrow tuning of the watermark detector can be employed in the time-sequential interleaving of different watermark patterns—to distinguish the intended watermark pattern from patterns targeting other observers.
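The tile-sizing arithmetic in this example can be verified directly (variable names are illustrative):

```python
# Arithmetic behind FIG. 12: the 6-foot-wide sign occupies 800 of the
# camera's 1600 horizontal pixels, so one foot of sign maps to 800/6
# camera pixels. For a tile to be imaged at 200 pixels, it must be
# 200 / (800/6) = 1.5 feet on a side: four tiles across, three high.

SIGN_W_FT, SIGN_H_FT = 6.0, 4.5
SIGN_SPAN_PX_W = 800       # sign's horizontal extent on the sensor
TARGET_TILE_PX = 200       # decoder's nominal tile size

px_per_ft = SIGN_SPAN_PX_W / SIGN_W_FT   # ~133.3 camera pixels per foot
tile_ft = TARGET_TILE_PX / px_per_ft     # 1.5 feet per tile
tiles_across = SIGN_W_FT / tile_ft       # 4 tiles span the sign's width
tiles_down = SIGN_H_FT / tile_ft         # 3 tiles span the sign's height
```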
- By the arrangements just-described, displayed watermark patterns take into account the positions of targeted observers. The payloads of these watermarks can also be tailored to the targeted observers.
- In one particular arrangement the payloads are tailored demographically. The demographics may be determined from imagery captured by the camera 12 (e.g., age, ethnicity, gender). Alternatively, or in addition, demographic data may be provided otherwise, such as by the individual. For example, data stored in the individual's cell phone, or in the individual's Facebook profile, may be available, and may reveal information including home zip code and area code, income level, employment, education, musical and movie preferences, fashion preferences, hobbies and other interests, friends, travel destinations, etc.
- Demographics may be regarded as a type of context. One definition of context is “Any information that can be used to characterize the situation of an entity. An entity is a person, place or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.”
- Context information can be of many sorts, including the computing context (network connectivity, memory availability, CPU contention, etc.), user context (user profile, location, preferences, nearby friends, social network(s) and situation, etc.), physical context (e.g., lighting, noise level, traffic, etc.), temporal context (time of day, day, month, season, etc.), history of the above, etc. These and other contextual data can each be used as a basis for different watermark payloads (or, more generally, as a basis for different responses/payoffs to the user).
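As a toy illustration, contextual attributes might be mapped to payload choices with a simple rule table. Every attribute name and payload identifier here is invented for the sketch; a deployed system would presumably use a richer rules engine or learned model:

```python
# Hypothetical context -> payload selection. Keys ("time_of_day",
# "interests", "age_group") and payload IDs are invented examples.

def choose_payload(context):
    """Pick a payload/payoff identifier from a context dictionary."""
    if (context.get("time_of_day") == "morning"
            and "coffee" in context.get("interests", [])):
        return "coupon-coffee"
    if context.get("age_group") == "18-25":
        return "music-video-promo"
    return "generic-info-link"   # fallback when no rule matches

assert choose_payload({"time_of_day": "morning",
                       "interests": ["coffee"]}) == "coupon-coffee"
assert choose_payload({}) == "generic-info-link"
```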
- The position of the viewer needn't be determined by use of a camera associated with the electronic signage. Instead, data sensed by the viewer's cell phone can be used. There are a variety of approaches.
- A preliminary issue in some embodiments is identifying what screen the viewer is watching. This information allows the user's cell phone to communicate with the correct electronic sign system (or the correct control system, which may govern many individual electronic signs). Often this step can be skipped, because there may only be one screen nearby, and there is no ambiguity (or the embodiment does not require such knowledge). In other contexts, however, there may be many screens, and analysis first needs to identify which one is being viewed. (Contexts with several closely-spaced screens include trade shows and airport concourses.)
- One way to identify which screen is being watched is by reference to data indicating the position of the viewer, e.g., by latitude and longitude. If the positions of candidate screens are similarly known, the screen from which a viewer is capturing imagery may be determined by simple proximity.
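A proximity lookup of this sort might be sketched with the haversine great-circle formula; the sign identifiers and coordinates below are invented for illustration:

```python
import math

# Pick the candidate sign nearest the phone's reported latitude/longitude.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 coordinates."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_sign(viewer_lat, viewer_lon, signs):
    """signs: dict of sign_id -> (lat, lon). Returns the closest sign_id."""
    return min(signs,
               key=lambda s: haversine_m(viewer_lat, viewer_lon, *signs[s]))

signs = {"concourse-A": (45.5886, -122.5975),   # invented coordinates
         "concourse-B": (45.5899, -122.5951)}
sid = nearest_sign(45.5887, -122.5974, signs)
```

In practice the candidate set would first be narrowed by gross location (e.g., which airport), as the text notes, before this fine-grained comparison.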
- GPS is a familiar location sensing technology, and can be used in certain embodiments. In other embodiments GPS may not suffice, e.g., because the GPS signals do not penetrate indoors, or because the positional accuracy is not sufficient. In such cases alternative location technologies can be used. One is detailed in published patent application WO08/073347.
- If latitude/longitude or the like leaves ambiguity, other position data relating to the viewer can be employed, such as magnetometer and/or accelerometer data indicating the compass direction towards which the cell phone is facing, and its inclination/declination relative to horizontal. Again, if the positions of the screens are adequately characterized, this information can allow unique identification of one screen from among many.
- In other arrangements, screen content is used to identify the presentation being viewed. An image captured by the viewer's cell phone can be compared with imagery recently presented by a set of candidate screens, to find a best match. (The candidate screens may be identified by their gross geographic location, e.g., Portland Airport, or other methods for constraining a set of possible electronic signs can be employed.) The comparison can be based on a simple statistical metric, such as color histogram. Or it can be based on more detailed analysis—such as feature correlation between the cell phone image, and images presented on the candidate screens. Myriad comparison techniques are possible. Among them are those based on SIFT or image fingerprinting (both discussed below).
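The simple color-histogram metric mentioned above might be sketched as follows. To keep the sketch dependency-free, images are modeled as flat lists of 8-bit gray values rather than full RGB frames, and the function names are illustrative:

```python
# Histogram-intersection match between a phone-captured image and recently
# displayed frames from candidate screens.

def histogram(pixels, bins=16):
    """Normalized histogram of 8-bit values, so any image sizes compare."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    n = float(len(pixels))
    return [c / n for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_matching_screen(captured, candidates):
    """candidates: dict of screen_id -> pixel list. Returns best screen_id."""
    h = histogram(captured)
    return max(candidates,
               key=lambda s: intersection(h, histogram(candidates[s])))

dark = [20] * 100     # stand-in frame from one candidate screen
bright = [230] * 100  # stand-in frame from another
screen = best_matching_screen([25] * 50, {"sign-1": dark, "sign-2": bright})
```

Feature-based methods (SIFT, fingerprints) would replace `intersection` with a more discriminative similarity while keeping the same best-match structure.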
- Digital watermark data encoded in the displayed imagery or video can also serve to identify the content/screen being watched.
- (Sometimes several screens may be presenting the same visual content. In such case it may not matter whether the viewer is watching a screen in Concourse A or B, or in New York or California. Rather, what is relevant is the content being viewed.)
- Similarly, audio content may be used to identify the content/screen to which the viewer is being exposed. Again, watermarking or comparison-based approaches (e.g., fingerprinting) can be used to perform such identification.
- In other arrangements, still other screen identification techniques can be used. For example, a subliminal identifier can be emitted by the electronic sign (or associated loudspeaker) and discerned by the viewer's cell phone. In one such arrangement, luminance of the screen is subtly modulated to convey a binary identifier that is sensed by the phone. Similarly, an LED or other emitter positioned along the bezel of the screen can transmit an identifying pattern. (Infrared illumination can be used, since most cell cameras have some sensitivity down into infrared.)
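One way such luminance modulation might be sketched, assuming the phone can measure each frame's mean luminance and knows (or estimates) the unmodulated baseline; the step size and function names are assumptions of the sketch:

```python
# Convey a short binary identifier by subtle per-frame luminance nudges:
# each bit shifts the frame's mean brightness up or down by a step assumed
# too small to notice; the phone recovers bits by comparing each frame's
# mean luminance against the baseline.

STEP = 2  # luminance delta per bit, in 8-bit units (assumed imperceptible)

def modulate(base_luma, bits):
    """Return the per-frame mean luminance values carrying the given bits."""
    return [base_luma + (STEP if b else -STEP) for b in bits]

def demodulate(frame_lumas, base_luma):
    """Recover bits from measured per-frame mean luminances."""
    return [1 if luma > base_luma else 0 for luma in frame_lumas]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
frames = modulate(128, bits)
assert demodulate(frames, 128) == bits
```

A robust system would add synchronization and error correction; this shows only the modulation principle.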
- In some embodiments, a remote server, such as
server 18 in FIG. 1, receives position or image data from an inquiring cell phone, and determines—e.g., by comparison with reference data—which sign/content is being viewed. The remote server may then look up an IP address for the corresponding computer 14 from a table or other data structure, and inform the sign system of the viewing cell phone. It may also transmit this address information to the cell phone—allowing the phone to communicate directly with the sign system. (Other communication means can alternatively be used. For example, the remote server can provide the cell phone with Bluetooth, WiFi, or other data enabling the cell phone to communicate with the sign system.) By such arrangements, a virtual session can be established between a phone and a sign system, defining a logical association between the pair. - Once the screen (or content) being viewed is known, the viewer's position relative to the screen can be determined.
- Again, one technique relies on position data. If sufficient positional accuracy is available, the perspective from which an observer is viewing an electronic sign can be determined from knowledge of the observer's position and viewing orientation, together with the sign's position and orientation.
- Another approach to determining the viewer's position relative to an electronic sign is based on apparent geometry. Opposing sides of the display screen are of equal lengths, and adjacent sides are at right angles to each other. If a pinhole camera model is assumed, these same relations hold for the depiction of the screen in imagery captured by the viewer's cell phone—if viewed from along the screen's center axis (i.e., its perpendicular). If not viewed from the screen's perpendicular, one or more of these relationships will be different; the rectangle will be geometrically distorted.
- The usual geometric distortion is primarily the trapezoidal effect, also known as “keystoning.” The geometric distortions in a viewer-captured image can be analyzed to determine the viewing angle to the screen perpendicular. This viewing angle, in turn, can indicate the approximate position of the viewer (i.e., where the viewing angle vector intersects the likely viewing plane—the plane in which the camera resides, e.g., 5.5 feet above the floor).
- Known image processing techniques can be used to find the depiction of a quadrilateral screen in a captured image. Edge finding techniques can be employed. So can thresholded blobs (e.g., blurring the image, and comparing resultant pixel values to an escalating threshold until a quadrilateral bright object is distinguished). Or pattern recognition methods, such as using the Hough transform, can be used. An exemplary sign-finding methodology is detailed in Tam, “Quadrilateral signboard detection and text extraction,” Int'l Conf. on Imaging, Science, Systems and Technology, pp. 708-713, 2003.
- Once the screen is identified within the captured imagery, straightforward photogrammetric techniques can be applied to discern the viewing angle, by reference to the corner points, and/or from distortion of the displayed image contents. (An exemplary treatment of such analysis is provided in Chupeau, "In-theater piracy: finding where the pirate was," Proc. SPIE, Vol. 6819, 2008, which examines camcorded motion picture copies to determine the location in a movie auditorium from which the copy was filmed.)
- If available, information modeling the lens system of the cell phone's camera can be used in connection with the image analysis, to yield still more accurate results. However, the pinhole camera model will generally suffice.
- Depending on the particular embodiment, the viewing distance may not be a concern. (If relevant, viewing distance may be estimated by judging where the viewing angle intersects the viewing plane, as noted above.) In judging distance, the size of the sign can be used. This information is known to the
sign system computer 14, and can be provided to the cell phone if the cell phone processor performs a distance estimation. Or if imagery captured by the cell phone is provided to the sign system computer for analysis, the computer can factor sign-size information into its analysis to help determine distance. (If the cell phone camera has a zoom feature, the captured image of the electronic sign may be of a scale that is not indicative of viewing distance. Data from the camera system, providing a metric indicating the degree of zoom, can be used by the relevant processor to address this issue.) - If the screen rectangle is not entirely captured within the cell phone image frame, some information about the user's position can nonetheless be determined. Considering, for example, the partial screen rectangle shown in
FIG. 13A (one complete edge, and two incomplete opposing edges), the incompletely captured opposing edges appear to converge if extended, indicating that the viewer is to the left of edge A. In contrast, the diverging opposing edges of FIG. 13B indicate the viewer is to the right of edge A. - Still another way in which the observer's viewing position can be discerned from cell phone-captured image data is by reference to watermark information encoded in graphical data presented by the sign, and included in the user-captured imagery. Steganographically encoded watermark signals, such as detailed in U.S. Pat. No. 6,590,996, commonly include an orientation signal component by which the watermark decoder can detect affine geometrical distortions introduced in the imagery since encoding, so that the encoded payload can be decoded properly despite such distortions. In particular, the detailed watermark system allows six degrees of image distortion to be discerned from captured imagery: rotation, scale, differential scale, shear, and translation in both x and y.
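The FIG. 13A/13B convergence test can be sketched as a comparison of edge slopes. The coordinate conventions (x increasing away from the fully captured edge A, y increasing upward) and the converging-means-left mapping follow the example in the text; the function name is illustrative:

```python
# Given the captured endpoints of the two incompletely seen opposing edges,
# decide which side of edge A the viewer stands on: if the extended edges
# converge (the gap between them narrows with x), the viewer is to the
# left of edge A; if they diverge, the viewer is to the right.

def viewer_side(top_edge, bottom_edge):
    """Each edge: ((x0, y0), (x1, y1)), x increasing away from edge A,
    y increasing upward. Returns 'left' or 'right'."""
    def slope(edge):
        (x0, y0), (x1, y1) = edge
        return (y1 - y0) / (x1 - x0)
    gap_rate = slope(top_edge) - slope(bottom_edge)  # d(gap)/dx
    return "left" if gap_rate < 0 else "right"

# Converging pair (gap narrows with x): viewer left of edge A (FIG. 13A).
assert viewer_side(((0, 10), (5, 8)), ((0, 0), (5, 2))) == "left"
# Diverging pair: viewer right of edge A (FIG. 13B).
assert viewer_side(((0, 10), (5, 12)), ((0, 0), (5, -2))) == "right"
```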
- These six parameters suffice for most at-a-distance viewing scenarios, where perspective effects are modest. Close-in perspective distortion can be handled by encoding the displayed imagery with several successive (or overlaid) watermark orientation signals: one conventional, and one or more others pre-distorted with different perspective transforms. The watermark reader can indicate which of the perspective-transformed orientation signals is decoded with the lowest error rate (or highest signal-to-noise ratio), indicating the perspective transformation. Alternatively, a conventional watermark can be encoded in the content, and the decoder can apply a series of different perspective transformations to the captured imagery prior to decoding, to identify the one yielding the lowest error rate (or highest S/N ratio).
- (The use of bit errors as a metric for assessing quality of watermark decoding is detailed, e.g., in Bradley, “Comparative performance of watermarking schemes using M-ary modulation with binary schemes employing error correction coding,” SPIE, Vol. 4314, pp. 629-642, 2001, and in patent publication US20020159614, as well as in others of the cited documents. These errors are ultimately corrected by error correction schemes.)
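The transform-search strategy described above might be sketched as follows. The decoder and the reference bits are toy stand-ins, since a real system would obtain its error count from the error-correction stage of an actual watermark decoder:

```python
# Choose among candidate perspective transforms by bit-error count: each
# candidate is applied before decoding, and the candidate whose decoded
# payload shows the fewest bit errors wins.

def bit_errors(decoded_bits, reference_bits):
    """Hamming distance between two equal-length bit lists."""
    return sum(d != r for d, r in zip(decoded_bits, reference_bits))

def best_transform(candidates, decode_fn, reference_bits):
    """candidates: dict name -> transform params. Returns name with
    the lowest bit-error count after decoding under that transform."""
    return min(candidates,
               key=lambda name: bit_errors(decode_fn(candidates[name]),
                                           reference_bits))

# Toy stand-in: pretend each transform directly yields a decoded bit string.
decoded = {"identity":   [1, 0, 1, 1, 0, 1, 0, 0],
           "tilt-15deg": [1, 0, 1, 0, 0, 1, 0, 0],
           "tilt-30deg": [0, 1, 1, 0, 1, 1, 0, 0]}
reference = [1, 0, 1, 0, 0, 1, 0, 0]
winner = best_transform(decoded, lambda params: params, reference)
```

The S/N-ratio variant mentioned in the text has the same structure, with `max` over a signal-quality metric replacing `min` over errors.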
- Yet another way to estimate the observer's viewing position is by reference to apparent distortion of known imagery presented on the display screen and captured by the observer's cell phone. SIFT, robust scene descriptor schemes, and image fingerprints that are robust to geometric transformation, can be used for this purpose. As part of the matching process, synchronization parameters can be estimated, allowing the position of the viewer to be estimated.
- Displayed imagery from which viewer position information can be estimated does not need to be dedicated to this purpose; any graphic can be used. In some cases, however, graphics can be provided that are especially tailored to facilitate determination of viewer position.
- For example, image-based understanding of a scene can be aided by presenting one or more features or objects on or near the screen, for which reference information is known (e.g., size, position, angle), and by which the system can understand other features—by relation. In one particular arrangement, a target pattern is displayed on the screen (or presented adjacent the screen) from which, e.g., viewing distance and orientation can be discerned. Such targets thus serve as beacons, signaling distance and orientation information to any observing camera system. One such target is the TRIPcode, detailed, e.g., in de Ipiña, TRIP: a Low-Cost Vision-Based Location System for Ubiquitous Computing, Personal and Ubiquitous Computing, Vol. 6, No. 3, May, 2002, pp. 206-219.
- As detailed in the Ipiña paper, the target (shown in
FIG. 14) encodes information including the target's radius, allowing a camera-equipped system to determine both the distance from the camera to the target, and the target's 3D pose. By presenting the target on the electronic screen at its encoded size, the Ipiña arrangement allows a camera-equipped system to understand both the distance to the screen, and the screen's spatial orientation relative to the camera.
- The aesthetics of the depicted TRIPcode target are not generally suited for display on signage. However, the pattern can be overlaid infrequently in one frame among a series of images (e.g., once every 3 seconds, in a 30 frame-per-second display arrangement). The position of the target can be varied to reduce visual artifacts. The color needn't be black; a less conspicuous color (e.g., yellow) may be used.
- While a round target, such as the TRIPcode, is desirable for computational ease, e.g., in recognizing such shape in its different elliptical poses, markers of other shapes can be used. A square marker suitable for determining the 3D position of a surface is Sony's CyberCode and is detailed, e.g., in Rekimoto, CyberCode: Designing Augmented Reality Environments with Visual Tags, Proc. of Designing Augmented Reality Environments 2000, pp. 1-10. A variety of other reference markers can alternatively be used—depending on the requirements of a particular application.
- As before, once a viewer's location relative to the sign has been discerned, such information can be communicated to the sign's computer system (if same was not originally discerned by such system), and a watermark targeting that viewer's spatial location can be defined and encoded in imagery presented on the sign. If the sign has a camera system from which it can estimate gender, age, or other attribute of viewers, it can tailor the targeted watermark payload (or the payoff associated with an arbitrary payload) in accordance with the estimated attribute(s) associated with the viewer at the discerned location. Or, such profile information may be provided by the viewer to the sign system computer along with the viewer-captured imagery (or with location information derived therefrom).
- In another arrangement, a user's cell phone captures an image of part or all of the sign, and transmits same (e.g., by Bluetooth or internet TCP/IP) to the sign system computer. The sign system computer discerns the user's location from the geometry of the sign as depicted in the transmitted image. From its own camera, the sign system has characterized gender, age or other demographic(s) of several people at different locations in front of the sign. By matching the geometry-discerned location of the viewer who provided imagery by Bluetooth, with one of the positions in front of the sign where the sign system computer has demographically characterized viewers, the computer can infer the demographic(s) of the particular viewer from whom the Bluetooth transmission was received. The sign system can then Bluetooth-transmit payoff data back to that viewer—and tailor same to that particular viewer's estimated demographic(s). (Note that in this arrangement, as in some others, the payoff is sent by Bluetooth—not, e.g., encoded in a watermark presented on the sign.)
- The type and variety of payoff that can be provided to the user's phone is virtually limitless. Electronic coupons have been noted above. Others include multimedia entertainment content (music videos, motion picture clips), and links/access credentials to online resources. A visitor to a trade show, for example, may share profile information indicating his professional occupation (e.g., RF engineer). Signage encountered at vendor booths may sense this information, and provide links showcasing the vendor's product offerings that are relevant to such a professional. The user may not act on such links while at the trade show, but may save them for later review when he returns to his office. In like fashion, other payoffs may be stored for later use.
- In many instances, a user may wish to engage in a visually interactive session with content presented by an electronic sign—defining the user's own personal experience. For example, the user may want to undertake an activity that prompts one or more changes in the sign—such as by playing a game.
- Contemporary cell phones offer a variety of sensors that can be used in such interactive sessions—not just pushbuttons (virtual or physical), but also accelerometers, magnetometers, cameras, etc. Such phones can be used like game controllers (think Wii) in conjunction with electronic sign systems. Two or more users can engage in multi-player experiences—with their devices controlling aspects of the sign system, through use of the camera and/or other sensors.
- In one particular arrangement, a user's phone captures an image of a sign. The imagery, or other data from the phone, is analyzed to determine which sign (or content) is being viewed, as described earlier. The cell phone then exchanges information with the sign system (e.g., computer 14) to establish a session and control play of a game. For example, the cell phone may transmit imagery captured by the phone camera—from which motion of the phone can be deduced (e.g., by tracking one or more features across several frames of image data captured by the camera, as detailed in U.S. Pat. No. 7,174,031). Or, data from one or more accelerometers in the phone can be transmitted to the sign system—again indicating motion of the phone. As is conventional, the computer takes these signals as input, and controls play of the game accordingly.
- The screen may be in an airport bar, and the game may be a virtual football game—sponsored by a local professional football team (e.g., the Seattle Seahawks). Anyone in the bar can select a team member to play (with available players identified by graphical icons on the edge of the display) through use of their cell phone. For example, a user can point their phone at the icon for a desired player (e.g., positioning the camera so the player icon appears at virtual crosshairs in the center of the phone's display screen) and then push/tap a physical/virtual button to indicate a selection. The phone image may be relayed to the sign system, to inform it of the player's selection. Or the phone can send an identifier derived from the selected icon, e.g., a watermark or image fingerprint.
- The system provides feedback indicating that the player has been selected (graphic overlay, vibration, etc.), and once selected, reflects that state on the electronic sign. After the player has been selected, the user controls the player's movements in future plays of the virtual football game by movement of the user's cell phone.
- In another football game, the user does not control an individual player. Instead, the user acts as coach—identifying which players are to be swapped into or out of the lineup. The computer system then simulates play based on the roster of players selected by the user.
- Another game is a virtual Lego game, or puzzle building exercise. One or more players can each select Lego or puzzle pieces on the digital screen (like picking players, above), and move them into place by pointing the camera to the desired location and issuing a signal (e.g., using the phone's user interface, such as a tap) to drop the piece in that place. The orientation at which the piece is placed can be controlled by the orientation of the user's phone when the “drop” signal is issued. In certain embodiments, each piece is uniquely identified by a watermark, barcode, fingerprint, or other feature recognition arrangement, to facilitate selection and control.
- A few arrangements particularly contemplated by applicant include the following:
- A method involving an electronic sign, viewed by a first observer, the method comprising: obtaining position information about the first observer (e.g., by reference to image data captured by a camera associated with the sign, or by a camera associated with the observer); defining a first digital watermark signal that takes into account the position information; encoding image data in accordance with said first digital watermark signal; and presenting the encoded image data on the electronic sign.
- A second observer may be similarly treated, and provided a watermark signal that is the same or different than that provided to the first observer.
- Another method involves an electronic sign system viewed by plural observers, each conveying a sensor-equipped device (e.g., a cell phone equipped with a microphone and/or camera). This method includes establishing a first data payload for a first observer of the electronic sign; establishing a second data payload for a second observer of the electronic sign; steganographically encoding audio or visual content data with digital watermark data, where the digital watermark data conveys the first and second data payloads; and presenting the encoded content data using the electronic sign system. In this arrangement, the sensor-equipped device conveyed by the first observer responds to the first data payload encoded in the presented content data but not the second data payload, and the sensor-equipped device conveyed by the second observer responds to the second data payload encoded in the presented content data but not the first data payload.
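One way to realize this selective response is to address each payload to a particular device; the framing below (device_id, message pairs) is an assumption for illustration, not the specification's encoding:

```python
def decode_payloads(encoded_frame):
    # Stand-in for a real watermark detector: here the "frame" is simply
    # a list of (device_id, message) tuples already extracted.
    return list(encoded_frame)

def respond(encoded_frame, my_device_id):
    """A device decodes all payloads present, but acts only on the one
    addressed to it, ignoring payloads meant for other observers."""
    for device_id, message in decode_payloads(encoded_frame):
        if device_id == my_device_id:
            return message
    return None  # no payload addressed to this device
```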
- Another method involves an electronic sign system including a screen viewed by different combinations of observers at different times. This method includes detecting a first person observing the screen; encoding content presented by the electronic sign system with a first watermark signal corresponding to the first observer; while the first person is still observing the screen, detecting a second person newly observing the screen; encoding the content presented by the electronic sign system with a first watermark signal corresponding to the first observer, and also a second watermark signal corresponding to the second observer; when one of said persons is detected as no longer observing the sign, encoding the content presented on the electronic sign system with the watermark signal corresponding to a remaining observer, but not with the watermark signal corresponding to the person who is no longer observing the sign. By such arrangement, different combinations of watermark signals are encoded in content presented on the electronic sign system, in accordance with different combinations of persons observing the screen at different times.
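The arrive/leave bookkeeping above amounts to keeping the set of encoded watermark signals in sync with the set of current observers; a toy model follows (the class and naming are hypothetical):

```python
class SignEncoder:
    """Toy model of the observer-tracking logic: the watermark signals
    encoded at any moment mirror the persons currently observing."""

    def __init__(self):
        self.observers = set()

    def observer_arrived(self, person_id):
        self.observers.add(person_id)

    def observer_left(self, person_id):
        self.observers.discard(person_id)

    def watermarks_to_encode(self):
        # One watermark signal per current observer.
        return {"wm-" + p for p in self.observers}
```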
- Another method includes using a handheld device to capture image data from a display. A parameter of a digital watermark signal steganographically encoded in the captured image data is then determined. This parameter is other than payload data encoded by the watermark signal and may comprise, e.g., a geometrical parameter or an error metric. Depending on the outcome of this determination (which may include comparing the parameter against a reference), a decision is made as to how the device should respond to the display.
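A sketch of such a decision, assuming the detector reports a geometric scale and a correlation-style error metric (the parameter names and thresholds are invented for illustration):

```python
def should_respond(detected, reference, max_scale_err=0.2, max_corr_err=0.3):
    """Decide how the device should respond based on watermark geometry
    and an error metric, rather than on the payload itself."""
    # Relative deviation of the detected scale from the reference scale.
    scale_err = abs(detected["scale"] - reference["scale"]) / reference["scale"]
    # Treat low detector correlation as a high error metric.
    corr_err = 1.0 - detected["correlation"]
    return scale_err <= max_scale_err and corr_err <= max_corr_err
```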
- Yet another method involves an electronic sign, viewed by a first observer, and includes: obtaining first contextual information relating to the first observer; defining a first digital watermark signal that takes into account the first contextual information; steganographically encoding first image data in accordance with the first digital watermark signal; and presenting the encoded image data on the electronic sign. As before, the method may be extended to similarly treat a second observer, but with a second, different digital watermark signal. In such case, the same first image data is presented to both observers, but is steganographically encoded with different watermark signals in accordance with different contextual information.
- In still another method, an electronic sign presents content that is viewed by plural observers. This method includes: using a first camera-equipped device conveyed by a first observer, viewing the presented content and capturing first image data corresponding thereto; determining first identifying data by reference to the captured first image data; using a second camera-equipped device conveyed by a second observer, viewing the same presented content and capturing second image data corresponding thereto, the second image data differing from the first due to different vantage points of the first and second observers; determining second identifying data by reference to the captured second image data; by reference to the first identifying data, together with information specific to the first device or first observer, providing a first response to the first device; and by reference to the second identifying data, together with information specific to the second device or second observer, providing a second, different, response to the second device. By such arrangement, the first and second devices provide different responses to viewing of the same content presented on the electronic sign. (The second identifying data can be the same as the first identifying data, notwithstanding that the captured first image data is different than the captured second image data.)
- Yet another method includes capturing image data corresponding to an electronic sign using a camera-equipped device conveyed by the observer; determining which of plural electronic signs is being observed by a first observer, by reference to the captured image data; and exchanging data between the device and the electronic sign based, at least in part, on said determination.
- In such arrangement, data can be transmitted from the device, such as data dependent at least in part on the camera, or motion data. The motion data can be generated by use of one or more accelerometers in the device, or can be generated by tracking one or more visible features across several frames of image data captured by the camera.
- Another method concerns providing demographically-targeted responses to observers of an electronic sign, based on viewing location. This method includes: obtaining first demographic information relating to a first observer, and second demographic information relating to a second observer; determining first response data associated with the first demographic information, and second response data associated with the second demographic information; obtaining first location data relating to the first observer, and second location data relating to the second observer; receiving image data from an observer's device; processing the received image data to estimate a location from which it was captured; and if the estimated location is the first location, returning the first response data to said device. (If the estimated location is the second location, second response data can be returned to the device.)
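The location-matching step might look like the following sketch, which returns the stored response data for the observer nearest the estimated capture location (the coordinates, tolerance, and response labels are assumptions):

```python
import math

def pick_response(estimated_loc, observers, tolerance_m=1.0):
    """observers: list of dicts, each with a sensed 'location' (x, y) in
    sign-plane meters and demographically determined 'response' data."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Find the observer whose recorded location best matches the
    # location estimated from the received image data.
    best = min(observers, key=lambda o: dist(estimated_loc, o["location"]))
    if dist(estimated_loc, best["location"]) <= tolerance_m:
        return best["response"]
    return None  # capture location matches no known observer
```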
- A further method includes establishing an association between a camera-equipped device conveyed by an observer, and an electronic sign system; receiving data from the device, wherein the received data depends—at least in part—on image data captured by the camera; and controlling an operation of the electronic sign system, at least in part, based on the received data.
- This method can further include presenting depictions of plural game items on the electronic sign; and receiving data from the device, indicating that the observer has viewed using the camera device—and selected—a particular one of said game item depictions presented on the screen. A depiction of game play can be presented on the electronic sign, where such play reflects the observer's selection of the particular game item.
- The depicted game items can comprise puzzle pieces, and the method can include receiving signals from the device indicating a position, and orientation, at which a puzzle piece is to be deposited, wherein said signals depend, at least in part, on image data captured by the camera.
- A second observer can also participate, e.g., by establishing a logical association between a camera-equipped second device conveyed by the second observer, and the electronic sign; receiving data from the second device, wherein said received data depends—at least in part—on image data captured by the second device, said received data indicating that the second observer has viewed using the camera of the second device—and selected—a particular different one of said depicted puzzle pieces; and receiving signals from the second device indicating a position, and orientation, at which the different one of said depicted puzzle pieces is to be deposited, wherein said signals depend, at least in part, on image data captured by the camera of the second device.
- Selection of particular game items can proceed by use of feature recognition, digital watermark-based identification, barcode-based identification, fingerprint-based identification, etc.
- In another method, an electronic sign presents content that is viewed by plural observers. This method includes: by use of a first camera-equipped device conveyed by a first observer, viewing the presented content and capturing first image data corresponding thereto; processing the first image data to produce first identifying data; by use of a second camera-equipped device conveyed by a second observer, viewing the same presented content and capturing second image data corresponding thereto, the second image data differing from the first due to different vantage points of the first and second observers; processing the second image data to produce second identifying data; using a sensor associated with the electronic sign, capturing third image data depicting the first and second observers; processing the third image data to estimate demographic data associated with the first and second observers; by reference to the estimated demographic data, determining first response data for the first observer, and second, different, response data for the second observer; also processing the third image data to generate first location information corresponding to the first observer, and second location information corresponding to the second observer; receiving first or second identifying data; by reference to the generated location information, determining whether the received identifying data is based on image data captured by the first device or the second device; if the received identifying data is determined to have been based on image data captured by the first device, responding to said received identifying data with the first response data; and if the received identifying data is determined to have been based on image data captured by the second device, responding to said received identifying data with the second response data. 
By such arrangement, the method infers from which observer the identifying data was received, and responds with demographically-determined response data corresponding to that observer.
- Yet another method includes, by use of a first sensor-equipped device conveyed by a user, capturing content data from an electronic sign system; by reference to a time-base, determining which of plural temporal portions of digital watermark data encoded in the captured content data corresponds, contextually, to the user; and taking an action based on a determined temporal portion of the digital watermark data.
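A minimal illustration of time-multiplexed payloads, assuming the sign cycles payloads through fixed-length time slots and each user is assigned one slot (the framing is hypothetical):

```python
def slot_for_time(timestamp_s, num_slots, slot_seconds=2.0):
    """Which payload slot is active at a given time on the shared time-base."""
    return int(timestamp_s // slot_seconds) % num_slots

def payload_for_user(timestamp_s, payloads, user_slot, slot_seconds=2.0):
    """Return the active payload only if the current slot is the one
    contextually assigned to this user; otherwise ignore it."""
    active = slot_for_time(timestamp_s, len(payloads), slot_seconds)
    return payloads[active] if active == user_slot else None
```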
- Still another method includes receiving input image data having an undistorted aspect; encoding the input image data in accordance with a steganographic digital watermark pattern; and presenting the encoded image data on a display screen; wherein the steganographic digital watermark pattern has a distorted aspect relative to the input image data. (The digital watermark pattern may be distorted in accordance with a position of an observer.)
- In some of the arrangements detailed herein, the sign being viewed by the observer is identified by reference to location information about the observer and the sign. In others, identification is made by reference to image data captured by the observer (e.g., using robust local image descriptors, fingerprint, or watermark data).
- Similarly, in some of the detailed arrangements, the scale of a watermark signal may be tailored in accordance with a viewing distance; and/or the projection of a watermark signal may be tailored in accordance with a viewing angle (e.g., the watermark signal may be pre-distorted in accordance with viewer location). A watermark's payload may be established in accordance with demographic information about the observer (e.g., obtained from the observer, or estimated from observation of the observer).
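The distance and angle tailoring can be sketched with a deliberately simple model: scale the watermark tile with viewing distance, and stretch it by 1/cos(angle) to counter foreshortening at oblique viewing angles. The formula is an illustrative assumption, not the specification's geometry:

```python
import math

def predistort_tile(tile_period_px, viewing_distance_m, viewing_angle_deg,
                    nominal_distance_m=2.0):
    """Return (width, height) of an adjusted watermark tile so that, from
    the observer's vantage point, the tile appears at roughly its nominal
    scale and aspect. Simplified model for illustration only."""
    scale = viewing_distance_m / nominal_distance_m
    stretch = 1.0 / math.cos(math.radians(viewing_angle_deg))
    return (tile_period_px * scale * stretch, tile_period_px * scale)
```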
- If the content is visual (rather than audio), the encoding of watermark data may be pre-distorted in accordance with a viewing geometry associated with the observer. In some arrangements, plural data payloads may be decoded in one of said sensor-equipped devices, but only one of the decoded payloads is selected for response (e.g., because it corresponds to profile data associated with the device or its user and stored in the sensor-equipped device; such profile information may indicate gender, age, and/or home zip code data). Different payloads may be multiplexed, e.g., in time or frequency.
- Yet another method includes capturing imagery using a camera associated with a first system; detecting features in the captured imagery; and identifying, to a second system, augmented reality graphical data associated with the detected features, wherein the second system is different than the first. The first system may comprise an electronic sign system, and the second system may comprise a user's cell phone. The method can additionally include presenting augmented reality graphical data on the second system, wherein the presented data is tailored in accordance with one or more demographic attributes of a user of the second system.
- While this specification earlier noted its relation to the assignee's previous patent filings, it bears repeating. These disclosures should be read in concert and construed as a whole. Applicant intends that features in each disclosure be combined with features in the others. Thus, for example, the arrangements and details described in the present specification can be used in variant implementations of the systems and methods described in the earlier-cited patents and applications, while the arrangements and details of those documents can be used in variant implementations of the systems and methods described in the present specification. Similarly for the other noted documents. Thus, it should be understood that the methods, elements and concepts disclosed in the present application can be combined with the methods, elements and concepts detailed in those related applications. While some such arrangements have been particularly detailed in the present specification, many have not—due to the large number of permutations and combinations. However, implementation of all such combinations is straightforward to the artisan from the provided teachings.
- Having described and illustrated the principles of the technology with reference to illustrative features and examples, it will be recognized that the technology is not so limited.
- For example, while reference has been made to mobile devices such as cell phones, it will be recognized that this technology finds utility with all manner of devices. PDAs, organizers, portable music players, desktop computers, laptop computers, tablet computers, netbooks, ultraportables, wearable computers, servers, etc., can all make use of the principles detailed herein. Particularly contemplated phones include the Apple iPhone, and smart phones following Google's Android specification (e.g., the G1 phone, manufactured for T-Mobile by HTC Corp., the Motorola Droid phone, and the Google Nexus phone). The term “cell phone” should be construed to encompass all such devices, even those that are, strictly speaking, neither cellular nor telephones (e.g., the recently announced Apple iPad device).
- This technology can also be implemented using face-worn apparatus, such as augmented reality (AR) glasses. Such glasses include display technology by which computer information can be viewed by the user—either overlaid on the scene in front of the user, or blocking that scene. Virtual reality goggles are an example of such apparatus. Exemplary technology is detailed in patent documents U.S. Pat. No. 7,397,607 and 20050195128. Commercial offerings include the Vuzix iWear VR920, the Naturalpoint Trackir 5, and the ezVision X4 Video Glasses by ezGear. An upcoming alternative is AR contact lenses. Such technology is detailed, e.g., in patent document 20090189830 and in Parviz, Augmented Reality in a Contact Lens, IEEE Spectrum, September, 2009. Some or all such devices may communicate, e.g., wirelessly, with other computing devices (carried by the user, electronic signs, or others), and they can include self-contained processing capability. Likewise, they may incorporate other features known from existing smart phones and patent documents, including electronic compass, accelerometer, camera(s), projector(s), GPS, etc.
- Further out, features such as laser range finding (LIDAR) may become standard on phones (and related devices), and can be employed in conjunction with the present technology (e.g., to identify signs being viewed by the observer, and their distance).
- The design of cell phones and other computer devices referenced in this disclosure is familiar to the artisan. In general terms, each includes one or more processors (e.g., of an Intel, AMD or ARM variety), one or more memories (e.g. RAM), storage (e.g., a disk or flash memory), a user interface (which may include, e.g., a keypad, a TFT LCD or OLED display screen, touch or other gesture sensors, a camera or other optical sensor, a compass sensor, a 3D magnetometer, a 3-axis accelerometer, a microphone, etc., together with software instructions for providing a graphical user interface), interconnections between these elements (e.g., buses), and an interface for communicating with other devices (which may be wireless, such as GSM, CDMA, W-CDMA, CDMA2000, TDMA, EV-DO, HSDPA, WiFi, WiMax, mesh networks, Zigbee and other 802.15 arrangements, or Bluetooth, and/or wired, such as through an Ethernet local area network, a T-1 internet connection, etc).
- More generally, the processes and system components detailed in this specification may be implemented as instructions for computing devices, including general purpose processor instructions for a variety of programmable processors, including microprocessors, graphics processing units (GPUs, such as the nVidia Tegra APX 2600), digital signal processors (e.g., the Texas Instruments TMS320 series devices), etc. These instructions may be implemented as software, firmware, etc. These instructions can also be implemented to various forms of processor circuitry, including programmable logic devices, FPGAs (e.g., Xilinx Virtex series devices), FPOAs (e.g., PicoChip brand devices), and application specific circuits—including digital, analog and mixed analog/digital circuitry. Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Transformation of content signal data may also be distributed among different processor and memory devices.
- Software instructions for implementing the detailed functionality can be readily authored by artisans, from the descriptions provided herein, e.g., written in C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, etc. Mobile devices according to the present technology can include software modules for performing the different functions and acts. Software applications for cell phones can be distributed through different vendors' app stores (e.g., the Apple App Store, for iPhone devices).
- Commonly, each device includes operating system software that provides interfaces to hardware resources and general purpose functions, and also includes application software which can be selectively invoked to perform particular tasks desired by a user. Known browser software, communications software, and media processing software can be adapted for many of the uses detailed herein. Software and hardware configuration data/instructions are commonly stored as instructions in one or more data structures conveyed by tangible media, such as magnetic or optical discs, memory cards, ROM, etc., which may be accessed across a network. Some embodiments may be implemented as embedded systems—a special purpose computer system in which the operating system software and the application software is indistinguishable to the user (e.g., as is commonly the case in basic cell phones). The functionality detailed in this specification can be implemented in operating system software, application software and/or as embedded system software.
- Different portions of the functionality described in this specification can be implemented on different devices. For example, in a system in which a cell phone communicates with a sign system computer, different tasks can be performed exclusively by one device or the other, or execution can be distributed between the devices. Extraction of watermark data and fingerprints from imagery, and estimation of viewing angle and distance, are but a few examples of such tasks. Thus, it should be understood that description of an operation as being performed by a particular device (e.g., the sign system computer) is not limiting but exemplary; performance of the operation by another device (e.g., a cell phone, or a remote computer), or shared between devices, is also expressly contemplated. As will be understood by the artisan, the results of any operation can be sent to another unit for use in subsequent operation(s).
- In like fashion, description of data being stored on a particular device is also exemplary; data can be stored anywhere: local device, remote device, in the cloud, distributed, etc.
- Operations need not be performed exclusively by specifically-identifiable hardware. Rather, some operations can be referred out to other services (e.g., cloud computing), which attend to their execution by still further, generally anonymous, systems. Such distributed systems can be large scale (e.g., involving computing resources around the globe), or local (e.g., as when a portable device identifies one or more nearby mobile or other devices through Bluetooth communication, and involves one or more of them in a task.)
- It will be recognized that the detailed processing of content signals (e.g., image signals, audio signals, etc.) includes the transformation of these signals in various physical forms. Images and video (forms of electromagnetic waves traveling through physical space and depicting physical objects) may be captured from physical objects using cameras or other capture equipment, or generated by a computing device. Similarly, audio pressure waves traveling through a physical medium may be captured using an audio transducer (e.g., microphone) and converted to an electronic signal (digital or analog form). While these signals are typically processed in electronic and digital form to implement the components and processes described above, they may also be captured, processed, transferred and stored in other physical forms, including electronic, optical, magnetic and electromagnetic wave forms. The content signals are transformed in various ways and for various purposes during processing, producing various data structure representations of the signals and related information. In turn, the data structure signals in memory are transformed for manipulation during searching, sorting, reading, writing and retrieval. The signals are also transformed for capture, transfer, storage, and output via display or audio transducer (e.g., speakers).
- Implementations of the present technology can make use of user interfaces employing touchscreen technology. Such user interfaces (as well as other aspects of the Apple iPhone) are detailed in published patent application 20080174570.
- Touchscreen interfaces are a form of gesture interface. Another form of gesture interface that can be used in embodiments of the present technology operates by sensing movement of a smart phone—by tracking movement of features within captured imagery. Further information on such gestural interfaces is detailed in Digimarc's U.S. Pat. No. 6,947,571. Gestural techniques can be employed whenever user input is to be provided to the system.
- In some embodiments, the detailed functionality must be activated by user instruction (e.g., by launching an app). In other arrangements, the cell phone device may be configured to run in a media-foraging mode—always processing ambient audio and imagery, to discern stimulus relevant to the user and respond accordingly.
- Sensor information (or data based on sensor information) may be referred to the cloud for analysis. In some arrangements this is done in lieu of local device processing (or after certain local device processing has been done). Sometimes, however, such data can be passed to the cloud and processed both there and in the local device simultaneously. The cost of cloud processing is usually small, so the primary cost may be one of bandwidth. If bandwidth is available, there may be little reason not to send data to the cloud, even if it is also processed locally. In some cases the local device may return results faster; in others the cloud may win the race. By using both, simultaneously, the user is assured of the speediest possible results.
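The local-versus-cloud race can be expressed with two worker threads feeding one queue; whichever analysis finishes first supplies the result (the function names are placeholders):

```python
import queue
import threading
import time

def race(local_fn, cloud_fn, timeout=5.0):
    """Run the local and cloud analyses simultaneously and return
    whichever result arrives first."""
    results = queue.Queue()
    for fn in (local_fn, cloud_fn):
        # Each worker drops its result into the shared queue when done.
        threading.Thread(target=lambda f=fn: results.put(f()),
                         daemon=True).start()
    return results.get(timeout=timeout)
```

The loser's result is simply discarded; as the text notes, when bandwidth is cheap there may be little reason not to run both.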
- While this disclosure has detailed particular ordering of acts and particular combinations of elements in the illustrative embodiments, it will be recognized that other methods may re-order acts (possibly omitting some and adding others), and other combinations may omit some elements and add others, etc.
- Although disclosed as complete systems, sub-combinations of the detailed arrangements are also separately contemplated.
- Elements and teachings within the different embodiments disclosed in the present specification are also meant to be exchanged and combined.
- Reference was made to the internet in certain embodiments. In other embodiments, other networks—including private networks of computers—can be employed also, or instead.
- While this specification focused on capturing imagery from electronic signage, and providing associated payoffs to observers, many similar arrangements can be practiced with the audio from electronic signage. The perspective-based features are not readily available with audio, but other principles detailed herein can be adapted to audio-only implementations.
- In all the detailed embodiments, advertising may be presented on the electronic signage. Measurements noting the length of viewer engagement with different signs, and number of commercial impressions, can be logged, and corresponding census-based reports can be issued to advertisers by audience survey companies. This information can be compiled by software in the phone, or by software associated with the sign. Knowing demographic information about the viewer allows targeted advertising to be presented. If a communication session is established, follow-up information can be sent using the same information channel. Advertising may also be presented on the user's cell phone, and similarly measured.
- Related arrangements are detailed in published patent applications 20080208849 and 20080228733 (Digimarc), 20080165960 (TagStory), 20080162228 (Trivid), 20080178302 and 20080059211 (Attributor), 20080109369 (Google), 20080249961 (Nielsen), and 20080209502 (MovieLabs).
- Technology for encoding/decoding watermarks is detailed, e.g., in Digimarc's patents cited earlier, as well as in Nielsen's U.S. Pat. Nos. 6,968,564 and 7,006,555, and in Arbitron's U.S. Pat. Nos. 5,450,490, 5,764,763, 6,862,355, and 6,845,360.
- Content fingerprinting seeks to distill content (e.g., a graphic, a video, a song, etc.) down to an essentially unique identifier, or set of identifiers. Many fingerprinting techniques are known. Examples of image/video fingerprinting are detailed in patent publications U.S. Pat. Nos. 7,020,304 (Digimarc), 7,486,827 (Seiko-Epson), 5,893,095 (Virage), 20070253594 (Vobile), 20080317278 (Thomson), and 20020044659 (NEC). Examples of audio fingerprinting are detailed in patent publications 20070250716, 20070174059 and 20080300011 (Digimarc), 20080276265, 20070274537 and 20050232411 (Nielsen), 20070124756 (Google), U.S. Pat. Nos. 6,834,308 (Audible Magic), 7,516,074 (Auditude), and 6,990,453 and 7,359,889 (both Shazam).
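The general idea, reduced to a toy: distill content to a compact bit string, then identify a query by minimum Hamming distance against a reference database. This illustrates the concept only, and is not any cited vendor's algorithm:

```python
def hamming(a, b):
    """Number of differing bits between two integer fingerprints."""
    return bin(a ^ b).count("1")

def identify(query_fp, reference_db, max_distance=8):
    """reference_db: dict mapping a content name to its fingerprint.
    Return the nearest match, or None if nothing is close enough."""
    name, fp = min(reference_db.items(),
                   key=lambda kv: hamming(query_fp, kv[1]))
    return name if hamming(query_fp, fp) <= max_distance else None
```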
- Scale Invariant Feature Transform (SIFT) may be regarded as a form of image fingerprinting. Unlike some others, it can identify visual information despite affine and perspective transformation. SIFT is further detailed in certain of the earlier cited applications (e.g., US20100048242) as well as in patent documents U.S. Pat. No. 6,711,293 and WO07/130,688.
- While SIFT is perhaps the best-known technique for generating robust local scene descriptors, there are others, which may be more or less suitable—depending on the application. These include GLOH (cf. Mikolajczyk et al, “Performance Evaluation of Local Descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 27, No. 10, pp. 1615-1630, 2005); and SURF (cf. Bay et al, “SURF: Speeded Up Robust Features,” Eur. Conf. on Computer Vision (1), pp. 404-417, 2006); as well as Chen et al, “Efficient Extraction of Robust Image Features on Mobile Devices,” Proc. of the 6th IEEE and ACM Int. Symp. on Mixed and Augmented Reality, 2007; and Takacs et al, “Outdoors Augmented Reality on Mobile Phone Using Loxel-Based Visual Feature Organization,” ACM Int. Conf. on Multimedia Information Retrieval, October 2008. A survey of local descriptor features is provided in the first-cited Mikolajczyk paper. Nokia has done work on visual search, including published patent applications 20070106721, 20080071749, 20080071750, 20080071770, 20080071988, 20080267504, 20080267521, 20080268876, 20080270378, 20090083237, 20090083275, and 20090094289. Features and teachings detailed in these documents are suitable for combination with the technologies and arrangements detailed in the present application, and vice versa.
- While many of the embodiments make use of watermarking technology to convey data from the sign system to observing cell phones, in other embodiments other communications technologies can be used between the phone and the sign system, such as RFID, Near Field Communication, displayed barcodes, infrared, SMS messaging, etc. Image or other content fingerprinting can also be used to identify (e.g., to the cell phone) the particular display being observed. With the display thus-identified, a corresponding store of auxiliary information can be accessed, and corresponding actions can then be based on the stored information.
- As noted, position data about the observer can be determined by means such as GPS, or by the technology detailed in published patent application WO08/073,347. The same technology can be used to identify the location of electronic signs. From such information, the fact that a particular observer is viewing a particular sign can be inferred. A store of auxiliary information—detailing, e.g., a payoff to the observer—can thereby be identified and accessed, to enable the corresponding payoff. (The system of WO08/073,347 can also be used to generate highly accurate time information, e.g., on which time-based systems can rely.)
- If imagery captured by the cell phone is sent to the sign system, metadata accompanying the imagery commonly identifies the make and model of the cell phone. This information can be stored by the sign system and used for various purposes. One is simply to demographically classify the user (e.g., a user with a Blackberry is more likely a business person, whereas a person with a Motorola Rival is more likely a teen). Another is to determine information about the phone's camera system (e.g., aperture, resolution, etc.). Watermark or other information presented on the electronic sign can then be tailored in accordance with the camera particulars (e.g., the size of the watermarking tile)—a type of “informed embedding.”
- Relatedly, if no information has been received from the user by the sign system, the sign may nonetheless estimate something about the user's cell phone camera, by reference to the user's estimated age, gender and/or ethnicity. Stored reference data, for example, can indicate the popularity of different phone (camera) models with different demographic groups. E.g., the peak demographic for the Apple iPhone is reported to be the 35-54 year old age group, owning about 36% of these devices, whereas 13-17 year olds only own about 5% of these devices. Men are much more likely than women to own Android phones. Update cycles for phones also vary with demographics. A 15 year old boy is likely to be carrying a cell phone that is less than a year old, whereas a 50 year old woman is more likely to be carrying a cell phone that is at least two years old. Older phones have lower resolution cameras. Etc. Thus, by estimating the viewer's age and gender, an informed guess may be made about the cell phone camera that the user may be carrying. Again, the display on the sign can be tailored accordingly (e.g., by setting watermarking parameters in accordance with estimated camera resolution).
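The "informed embedding" described in the two preceding paragraphs can be sketched as a table lookup: an estimated demographic group maps to a likely camera resolution, which in turn selects a watermark tile size. The demographic-to-megapixel table and the tile-size thresholds below are invented placeholders; a deployment would use measured handset-popularity data (or actual make/model metadata received from the phone, when available).

```python
# Sketch of demographically informed embedding. All numbers are
# illustrative assumptions, not figures from the specification.

CAMERA_MEGAPIXELS_BY_GROUP = {
    ("M", "13-17"): 8.0,   # newer phones, higher-resolution cameras
    ("F", "13-17"): 8.0,
    ("M", "35-54"): 5.0,
    ("F", "35-54"): 3.0,   # older handsets more likely
}

def estimated_tile_pixels(gender, age_band, default_mp=5.0):
    """Pick a watermark tile size (in display pixels) from the estimated
    camera resolution: lower-resolution cameras get larger tiles so the
    encoded pattern survives capture and decoding."""
    mp = CAMERA_MEGAPIXELS_BY_GROUP.get((gender, age_band), default_mp)
    if mp >= 8.0:
        return 64
    if mp >= 5.0:
        return 96
    return 128
```

The same selection logic applies whether the resolution estimate comes from received metadata or from the demographic fallback; only the source of the estimate differs.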
- The detailed technology can also employ augmented reality (AR) techniques. AR has been popularized by iPhone/Android applications such as UrbanSpoon, Layar, Bionic Eye, Wikitude, Tonchidot, and Google Goggles, the details of which are familiar to the artisan. Exemplary AR systems are detailed in patent documents US20100045869, US20090322671 and US20090244097. Briefly, such arrangements sense visual features in captured imagery, and present additional information on a viewing screen—commonly as an overlay on the originally-captured imagery. In the present context, the information displayed on electronic signage can be used as the visual features. The overlay can be presented on the user's phone, and be customized to the user, e.g., by context (including viewing location and/or demographics). Information can be exchanged between the phone and the sign system via watermark data encoded in imagery displayed on the electronic sign. Other arrangements can also be employed, such as IP, Bluetooth, etc., once a logical association has been established between a particular cell phone and a particular sign/content.
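The logical association step at the end of the preceding paragraph might be bootstrapped as follows: the watermark conveys a sign identifier (and the identity of the content then being displayed), and the subsequent richer exchange moves to an ordinary IP or Bluetooth channel. The field names, registry class, and URL scheme below are assumptions for illustration only.

```python
# Sketch of establishing a logical phone/sign association from decoded
# watermark data, so later AR overlay data can be delivered over IP.
# Payload field names and the rendezvous URL format are hypothetical.

class AssociationRegistry:
    """Tracks which phone is logically associated with which sign/content."""

    def __init__(self):
        self._sessions = {}

    def associate(self, phone_id, payload):
        """payload: decoded watermark data, e.g.
        {"sign_id": 417, "content_id": 9, "host": "signs.example.net"}.
        Records the association and returns a rendezvous URL."""
        self._sessions[phone_id] = (payload["sign_id"], payload["content_id"])
        return "https://{host}/signs/{sign_id}/session".format(**payload)

    def overlay_target(self, phone_id):
        """Which (sign, content) pair overlay data should be customized for."""
        return self._sessions.get(phone_id)
```

Once the association exists, the watermark channel is no longer needed for that session; customization by viewing location and demographics can proceed over the higher-bandwidth link.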
- In other arrangements the user's
cell phone 16, or the camera 12 of the electronic sign system, captures imagery from which features are sensed. Associated displays/information may then be presented on the display screen 10 of the electronic sign system. Such information may be presented on the sign as an overlay on the captured imagery containing the sensed features, or separately. - Elements from the detailed arrangements can be combined with elements of the prior art—such as noted in the Background discussion—to yield additional implementations.
- While certain operations are described as taking place in
computer 14, cell phone 16, or remote server(s) 18, etc., the location of the various operations is flexible. Operations can take place on any appropriate computer device (or distributed among plural devices), and data relayed as necessary. - Although illustrated in the context of large-format public displays, it should be recognized that the same principles find application elsewhere, including with conventional laptop displays, other cell phone displays, electronic picture frames, e-books, televisions, motion picture projection screens, etc. Microsoft's “Second Light” technology, as detailed in Izadi et al, “Going Beyond the Display: A Surface Technology with an Electronically Switchable Diffuser,” Microsoft Research, 2009, can also be used in conjunction with the principles detailed herein.
- Naturally, the technology is not limited to flat displays but is also applicable with curved displays.
- Face-finding algorithms are well known (e.g., as employed in many popular consumer cameras) and can be employed to identify the faces of observers, and locate their eyes. As noted, the distance between an observer's eyes, e.g., in pixels in imagery captured by
camera 12, can be used in the various embodiments to estimate the observer's distance from the camera (and thus from the display screen). - A sample watermark payload protocol is shown in
FIG. 15. It includes 8 bits to identify the protocol (so the cell phone watermark decoder system knows how to interpret the rest of the payload), and 4 bits to indicate the demographic audience to which it is targeted (e.g., men between the ages of 30 and 55). The “immediate response data” that follows is literal auxiliary data that can be used by the cell phone without reference to a remote database. For example, it conveys text or information that the cell phone—or another system—can use immediately, such as indexing a small store of payoff data loaded into a cell phone data store, to present different coupons for different merchants. The remaining 20 bits of data serve to index a remote database where corresponding information (e.g., re coupons or other payoffs) is stored. Other data fields, such as one indicating an age-appropriateness rating, can additionally, or alternatively, be employed. The protocol may be extensible, e.g., by a flag bit indicating that a following payload conveys additional data. - The payload of
FIG. 15 is simply illustrative. In any particular implementation, a different payload will likely be used—depending on the particular application requirements. - Camera systems and associated software from Quividi and/or TruMedia can be used for
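One concrete bit layout for such a payload can be sketched as below: 8 protocol bits, 4 demographic bits, the immediate-response data, and a 20-bit database index. The specification leaves the immediate-response field width open; 16 bits is assumed here purely for concreteness, giving a 48-bit payload.

```python
# Bit-packing sketch for a FIG. 15-style watermark payload.
# The 16-bit immediate-response width is an assumption.

IR_BITS = 16  # assumed width of the "immediate response data" field

def pack_payload(protocol, demographic, immediate, db_index):
    """Pack the four fields, most significant first, into one integer."""
    assert protocol < 2 ** 8 and demographic < 2 ** 4
    assert immediate < 2 ** IR_BITS and db_index < 2 ** 20
    bits = protocol
    bits = (bits << 4) | demographic
    bits = (bits << IR_BITS) | immediate
    bits = (bits << 20) | db_index
    return bits

def unpack_payload(bits):
    """Recover (protocol, demographic, immediate, db_index) from packed bits."""
    db_index = bits & (2 ** 20 - 1)
    bits >>= 20
    immediate = bits & (2 ** IR_BITS - 1)
    bits >>= IR_BITS
    demographic = bits & 0xF
    protocol = bits >> 4
    return protocol, demographic, immediate, db_index
```

A decoder would first read the 8 protocol bits and only then interpret the remaining fields, which is what allows different implementations to use different field widths.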
camera 12, to identify observers and classify them demographically.
- In the interest of conciseness, the myriad variations and combinations of the described technology are not cataloged in this document. Applicant recognizes and intends that the concepts of this specification can be combined, substituted and interchanged—both among and between themselves, as well as with those known from the cited prior art. Moreover, it will be recognized that the detailed technology can be included with other technologies—current and upcoming—to advantageous effect.
- To provide a comprehensive disclosure without unduly lengthening this specification, applicant incorporates-by-reference the documents and patent disclosures referenced above. (Such documents are incorporated in their entireties, even if cited above in connection with specific of their teachings.) These references disclose technologies and teachings that can be incorporated into the arrangements detailed herein, and into which the technologies and teachings detailed herein can be incorporated.
Claims (6)
1-20. (canceled)
21. A method comprising the acts:
receiving input image data having an undistorted aspect;
encoding the input image data in accordance with a steganographic digital watermark pattern; and
presenting the encoded image data on a display screen;
wherein the steganographic digital watermark pattern has a distorted aspect relative to the input image data.
22. A method comprising the acts:
capturing imagery using a camera associated with a first system;
detecting features in the captured imagery; and
identifying, to a second system, augmented reality graphical data associated with the detected features, wherein the second system is different than the first.
23. The method of claim 22 in which the first system comprises an electronic sign system, and the second system comprises a user's cell phone.
24. The method of claim 22 that further includes presenting augmented reality graphical data on the second system, wherein the presented data is tailored in accordance with one or more demographic attributes of a user of the second system.
25. An article of manufacture including a computer-readable medium having instructions stored thereon that, if executed by a computing device, cause the computing device to perform operations comprising:
receive imagery captured using a camera associated with a first system;
detect features in the captured imagery; and
identify, to a second system, augmented reality graphical data associated with the detected features, wherein the second system is different than the first.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/193,182 US20110279479A1 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting From Public Displays, and Related Methods |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15715309P | 2009-03-03 | 2009-03-03 | |
US12/716,908 US8412577B2 (en) | 2009-03-03 | 2010-03-03 | Narrowcasting from public displays, and related methods |
US13/193,182 US20110279479A1 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting From Public Displays, and Related Methods |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/716,908 Division US8412577B2 (en) | 2009-03-03 | 2010-03-03 | Narrowcasting from public displays, and related methods |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110279479A1 true US20110279479A1 (en) | 2011-11-17 |
Family
ID=42679071
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/716,908 Expired - Fee Related US8412577B2 (en) | 2009-03-03 | 2010-03-03 | Narrowcasting from public displays, and related methods |
US13/193,182 Abandoned US20110279479A1 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting From Public Displays, and Related Methods |
US13/193,141 Active 2030-07-02 US9524584B2 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting from public displays, and related methods |
US13/193,157 Expired - Fee Related US9460560B2 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting from public displays, and related methods |
US13/792,793 Abandoned US20130286046A1 (en) | 2009-03-03 | 2013-03-11 | Narrowcasting from public displays, and related methods |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/716,908 Expired - Fee Related US8412577B2 (en) | 2009-03-03 | 2010-03-03 | Narrowcasting from public displays, and related methods |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/193,141 Active 2030-07-02 US9524584B2 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting from public displays, and related methods |
US13/193,157 Expired - Fee Related US9460560B2 (en) | 2009-03-03 | 2011-07-28 | Narrowcasting from public displays, and related methods |
US13/792,793 Abandoned US20130286046A1 (en) | 2009-03-03 | 2013-03-11 | Narrowcasting from public displays, and related methods |
Country Status (6)
Country | Link |
---|---|
US (5) | US8412577B2 (en) |
EP (1) | EP2404443A4 (en) |
JP (1) | JP5742057B2 (en) |
KR (1) | KR20110128322A (en) |
CA (1) | CA2754061A1 (en) |
WO (1) | WO2010102040A1 (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120072463A1 (en) * | 2010-09-16 | 2012-03-22 | Madhav Moganti | Method and apparatus for managing content tagging and tagged content |
US20130147836A1 (en) * | 2011-12-07 | 2013-06-13 | Sheridan Martin Small | Making static printed content dynamic with virtual data |
US8533192B2 (en) | 2010-09-16 | 2013-09-10 | Alcatel Lucent | Content capture device and methods for automatically tagging content |
US8655881B2 (en) | 2010-09-16 | 2014-02-18 | Alcatel Lucent | Method and apparatus for automatically tagging content |
WO2014040189A1 (en) * | 2012-09-13 | 2014-03-20 | Ati Technologies Ulc | Method and apparatus for controlling presentation of multimedia content |
US8840250B1 (en) * | 2012-01-11 | 2014-09-23 | Rawles Llc | Projection screen qualification and selection |
US9165381B2 (en) | 2012-05-31 | 2015-10-20 | Microsoft Technology Licensing, Llc | Augmented books in a mixed reality environment |
US9183807B2 (en) | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Displaying virtual data as printed content |
US9229231B2 (en) | 2011-12-07 | 2016-01-05 | Microsoft Technology Licensing, Llc | Updating printed content with personalized virtual data |
US20160016222A1 (en) * | 2013-03-01 | 2016-01-21 | Novpress Gmbh Pressen Und Presswerkzeuge & Co. Kg | Handheld Pressing Device |
US9332522B2 (en) | 2014-05-20 | 2016-05-03 | Disney Enterprises, Inc. | Audiolocation system combining use of audio fingerprinting and audio watermarking |
US9723293B1 (en) | 2011-06-21 | 2017-08-01 | Amazon Technologies, Inc. | Identifying projection surfaces in augmented reality environments |
US20180332317A1 (en) * | 2017-05-09 | 2018-11-15 | Lytro, Inc. | Adaptive control for immersive experience delivery |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US20190172091A1 (en) * | 2017-12-04 | 2019-06-06 | At&T Intellectual Property I, L.P. | Apparatus and methods for adaptive signage |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
Families Citing this family (145)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7644282B2 (en) | 1998-05-28 | 2010-01-05 | Verance Corporation | Pre-processed information embedding system |
US6737957B1 (en) | 2000-02-16 | 2004-05-18 | Verance Corporation | Remote control signaling using audio watermarks |
JP2006504986A (en) | 2002-10-15 | 2006-02-09 | ベランス・コーポレイション | Media monitoring, management and information system |
US20060239501A1 (en) | 2005-04-26 | 2006-10-26 | Verance Corporation | Security enhancements of digital watermarks for multi-media content |
US8020004B2 (en) | 2005-07-01 | 2011-09-13 | Verance Corporation | Forensic marking using a common customization function |
US8781967B2 (en) | 2005-07-07 | 2014-07-15 | Verance Corporation | Watermarking in an encrypted domain |
JP5742057B2 (en) | 2009-03-03 | 2015-07-01 | ディジマーク コーポレイション | Narrow casting from public displays and related arrangements |
US9749607B2 (en) | 2009-07-16 | 2017-08-29 | Digimarc Corporation | Coordinated illumination and image signal capture for enhanced signal detection |
US8175617B2 (en) | 2009-10-28 | 2012-05-08 | Digimarc Corporation | Sensor-based mobile search, related methods and systems |
US9218530B2 (en) | 2010-11-04 | 2015-12-22 | Digimarc Corporation | Smartphone-based methods and systems |
US8971567B2 (en) | 2010-03-05 | 2015-03-03 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |
US10664940B2 (en) | 2010-03-05 | 2020-05-26 | Digimarc Corporation | Signal encoding to reduce perceptibility of changes over time |
US8477990B2 (en) * | 2010-03-05 | 2013-07-02 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |
US8838978B2 (en) | 2010-09-16 | 2014-09-16 | Verance Corporation | Content access management using extracted watermark information |
US9965756B2 (en) | 2013-02-26 | 2018-05-08 | Digimarc Corporation | Methods and arrangements for smartphone payments |
KR20120050118A (en) * | 2010-11-10 | 2012-05-18 | 삼성전자주식회사 | Apparatus and method for fishing game using mobile projector |
WO2012065160A2 (en) * | 2010-11-12 | 2012-05-18 | Mount Everest Technologies, Llc | Sensor system |
US9501882B2 (en) | 2010-11-23 | 2016-11-22 | Morphotrust Usa, Llc | System and method to streamline identity verification at airports and beyond |
KR20120076673A (en) * | 2010-12-13 | 2012-07-09 | 삼성전자주식회사 | Method and apparatus for providing advertisement serbvice in mobile communication system |
US8845107B1 (en) | 2010-12-23 | 2014-09-30 | Rawles Llc | Characterization of a scene with structured light |
US8905551B1 (en) | 2010-12-23 | 2014-12-09 | Rawles Llc | Unpowered augmented reality projection accessory display device |
US8845110B1 (en) | 2010-12-23 | 2014-09-30 | Rawles Llc | Powered augmented reality projection accessory display device |
US9721386B1 (en) * | 2010-12-27 | 2017-08-01 | Amazon Technologies, Inc. | Integrated augmented reality environment |
US9508194B1 (en) | 2010-12-30 | 2016-11-29 | Amazon Technologies, Inc. | Utilizing content output devices in an augmented reality environment |
US9607315B1 (en) | 2010-12-30 | 2017-03-28 | Amazon Technologies, Inc. | Complementing operation of display devices in an augmented reality environment |
US8938257B2 (en) | 2011-08-19 | 2015-01-20 | Qualcomm, Incorporated | Logo detection for indoor positioning |
WO2013033266A1 (en) * | 2011-08-30 | 2013-03-07 | Paedae | Method and apparatus for personalized marketing |
US10474858B2 (en) | 2011-08-30 | 2019-11-12 | Digimarc Corporation | Methods of identifying barcoded items by evaluating multiple identification hypotheses, based on data from sensors including inventory sensors and ceiling-mounted cameras |
US9367770B2 (en) | 2011-08-30 | 2016-06-14 | Digimarc Corporation | Methods and arrangements for identifying objects |
JP6251906B2 (en) | 2011-09-23 | 2017-12-27 | ディジマーク コーポレイション | Smartphone sensor logic based on context |
US8615104B2 (en) | 2011-11-03 | 2013-12-24 | Verance Corporation | Watermark extraction based on tentative watermarks |
US8682026B2 (en) | 2011-11-03 | 2014-03-25 | Verance Corporation | Efficient extraction of embedded watermarks in the presence of host content distortions |
US8923548B2 (en) | 2011-11-03 | 2014-12-30 | Verance Corporation | Extraction of embedded watermarks from a host content using a plurality of tentative watermarks |
JP6121647B2 (en) | 2011-11-11 | 2017-04-26 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US8745403B2 (en) | 2011-11-23 | 2014-06-03 | Verance Corporation | Enhanced content management based on watermark extraction records |
US9202234B2 (en) | 2011-12-08 | 2015-12-01 | Sharp Laboratories Of America, Inc. | Globally assembled, locally interpreted conditional digital signage playlists |
US9323902B2 (en) | 2011-12-13 | 2016-04-26 | Verance Corporation | Conditional access using embedded watermarks |
US9547753B2 (en) * | 2011-12-13 | 2017-01-17 | Verance Corporation | Coordinated watermarking |
US8849710B2 (en) | 2011-12-30 | 2014-09-30 | Ebay Inc. | Projection shopping with a mobile device |
WO2013149267A2 (en) * | 2012-03-29 | 2013-10-03 | Digimarc Corporation | Image-related methods and arrangements |
US8620021B2 (en) | 2012-03-29 | 2013-12-31 | Digimarc Corporation | Image-related methods and arrangements |
US9516360B2 (en) * | 2012-04-12 | 2016-12-06 | Qualcomm Incorporated | Estimating demographic statistics of media viewership via context aware mobile devices |
JP5668015B2 (en) * | 2012-04-25 | 2015-02-12 | 株式会社Nttドコモ | Terminal device, information display system, program |
US8655694B2 (en) | 2012-05-29 | 2014-02-18 | Wesley John Boudville | Dynamic group purchases using barcodes |
US8970455B2 (en) | 2012-06-28 | 2015-03-03 | Google Technology Holdings LLC | Systems and methods for processing content displayed on a flexible display |
JP2014030122A (en) * | 2012-07-31 | 2014-02-13 | Toshiba Tec Corp | Digital signage apparatus, control program of the same, and digital signage system |
JP6076353B2 (en) * | 2012-08-30 | 2017-02-08 | 株式会社ソニー・インタラクティブエンタテインメント | Content providing apparatus, content providing method, program, information storage medium, broadcast station apparatus, and data structure |
US9571606B2 (en) | 2012-08-31 | 2017-02-14 | Verance Corporation | Social media viewing system |
US9106964B2 (en) | 2012-09-13 | 2015-08-11 | Verance Corporation | Enhanced content distribution using advertisements |
US8869222B2 (en) | 2012-09-13 | 2014-10-21 | Verance Corporation | Second screen content |
US8752761B2 (en) | 2012-09-21 | 2014-06-17 | Symbol Technologies, Inc. | Locationing using mobile device, camera, and a light source |
US10175750B1 (en) * | 2012-09-21 | 2019-01-08 | Amazon Technologies, Inc. | Projected workspace |
US9268136B1 (en) | 2012-09-28 | 2016-02-23 | Google Inc. | Use of comparative sensor data to determine orientation of head relative to body |
US9727586B2 (en) * | 2012-10-10 | 2017-08-08 | Samsung Electronics Co., Ltd. | Incremental visual query processing with holistic feature feedback |
WO2014062906A1 (en) * | 2012-10-19 | 2014-04-24 | Interphase Corporation | Motion compensation in an interactive display system |
US9830588B2 (en) * | 2013-02-26 | 2017-11-28 | Digimarc Corporation | Methods and arrangements for smartphone payments |
US20140278847A1 (en) * | 2013-03-14 | 2014-09-18 | Fabio Gallo | Systems and methods for virtualized advertising |
WO2014153199A1 (en) | 2013-03-14 | 2014-09-25 | Verance Corporation | Transactional video marking system |
US9818150B2 (en) | 2013-04-05 | 2017-11-14 | Digimarc Corporation | Imagery and annotations |
KR101391582B1 (en) * | 2013-06-05 | 2014-05-07 | (주)캡보이트레이딩 | Block and toy decoration cap |
US9859743B2 (en) * | 2013-06-14 | 2018-01-02 | Intel Corporation | Mobile wireless charging service |
JP6221394B2 (en) * | 2013-06-19 | 2017-11-01 | 富士通株式会社 | Image processing apparatus, image processing method, and image processing program |
JP2015004848A (en) * | 2013-06-21 | 2015-01-08 | ソニー株式会社 | Information processing device, communication system, and information processing method |
KR101857450B1 (en) * | 2013-07-19 | 2018-05-14 | 삼성전자주식회사 | Information providing system comprising of content providing device and terminal device and the controlling method thereof |
US9251549B2 (en) | 2013-07-23 | 2016-02-02 | Verance Corporation | Watermark extractor enhancements based on payload ranking |
US9407620B2 (en) | 2013-08-23 | 2016-08-02 | Morphotrust Usa, Llc | System and method for identity management |
US10320778B2 (en) | 2013-08-27 | 2019-06-11 | Morphotrust Usa, Llc | Digital identification document |
US10282802B2 (en) | 2013-08-27 | 2019-05-07 | Morphotrust Usa, Llc | Digital identification document |
US9426328B2 (en) * | 2013-08-28 | 2016-08-23 | Morphotrust Usa, Llc | Dynamic digital watermark |
US10249015B2 (en) | 2013-08-28 | 2019-04-02 | Morphotrust Usa, Llc | System and method for digitally watermarking digital facial portraits |
US9497349B2 (en) * | 2013-08-28 | 2016-11-15 | Morphotrust Usa, Llc | Dynamic digital watermark |
CA2867833C (en) * | 2013-10-17 | 2020-06-16 | Staples, Inc. | Intelligent content and navigation |
US9208334B2 (en) | 2013-10-25 | 2015-12-08 | Verance Corporation | Content management using multiple abstraction layers |
JP6355423B2 (en) * | 2013-11-08 | 2018-07-11 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Display method |
US9402095B2 (en) * | 2013-11-19 | 2016-07-26 | Nokia Technologies Oy | Method and apparatus for calibrating an audio playback system |
US20150149301A1 (en) * | 2013-11-26 | 2015-05-28 | El Media Holdings Usa, Llc | Coordinated Virtual Presences |
US20150149287A1 (en) * | 2013-11-27 | 2015-05-28 | Wendell Brown | Responding to an advertisement using a mobile computing device |
JP5772942B2 (en) * | 2013-12-25 | 2015-09-02 | 富士ゼロックス株式会社 | Information processing apparatus and information processing program |
US9760898B2 (en) | 2014-01-06 | 2017-09-12 | The Nielsen Company (Us), Llc | Methods and apparatus to detect engagement with media presented on wearable media devices |
US10424038B2 (en) | 2015-03-20 | 2019-09-24 | Digimarc Corporation | Signal encoding outside of guard band region surrounding text characters, including varying encoding strength |
US9635378B2 (en) | 2015-03-20 | 2017-04-25 | Digimarc Corporation | Sparse modulation for robust signaling and synchronization |
US9832353B2 (en) | 2014-01-31 | 2017-11-28 | Digimarc Corporation | Methods for encoding, decoding and interpreting auxiliary data in media signals |
JP6624738B2 (en) | 2014-02-10 | 2019-12-25 | ヒヴェスタック インコーポレイティッドHivestack Inc. | Method performed by a system for delivering digital advertisements for out-of-home advertising campaigns and system for delivering digital advertisements |
US10129251B1 (en) | 2014-02-11 | 2018-11-13 | Morphotrust Usa, Llc | System and method for verifying liveliness |
US9311639B2 (en) | 2014-02-11 | 2016-04-12 | Digimarc Corporation | Methods, apparatus and arrangements for device to device communication |
JP2017514345A (en) | 2014-03-13 | 2017-06-01 | ベランス・コーポレイション | Interactive content acquisition using embedded code |
GB2524538A (en) * | 2014-03-26 | 2015-09-30 | Nokia Technologies Oy | An apparatus, method and computer program for providing an output |
US10412436B2 (en) * | 2014-09-12 | 2019-09-10 | At&T Mobility Ii Llc | Determining viewership for personalized delivery of television content |
US9600754B2 (en) | 2014-12-23 | 2017-03-21 | Digimarc Corporation | Machine-readable glass |
US11615199B1 (en) * | 2014-12-31 | 2023-03-28 | Idemia Identity & Security USA LLC | User authentication for digital identifications |
US10091197B2 (en) | 2015-01-16 | 2018-10-02 | Digimarc Corporation | Configuring, controlling and monitoring computers using mobile devices |
WO2016153936A1 (en) | 2015-03-20 | 2016-09-29 | Digimarc Corporation | Digital watermarking and data hiding with narrow-band absorption materials |
US10783601B1 (en) | 2015-03-20 | 2020-09-22 | Digimarc Corporation | Digital watermarking and signal encoding with activable compositions |
US20160307227A1 (en) * | 2015-04-14 | 2016-10-20 | Ebay Inc. | Passing observer sensitive publication systems |
JP2016200779A (en) * | 2015-04-14 | 2016-12-01 | カシオ計算機株式会社 | Content reproduction apparatus, content reproduction system, content reproduction method and program |
US10706456B2 (en) * | 2015-04-22 | 2020-07-07 | Staples, Inc. | Intelligent item tracking and expedited item reordering by stakeholders |
CN104821143B (en) * | 2015-04-24 | 2018-04-17 | 杭州磐景智造文化创意有限公司 | Interaction systems based on screen Dynamic Announce |
US9952821B2 (en) * | 2015-09-01 | 2018-04-24 | Electronics And Telecommunications Research Institute | Screen position sensing method in multi display system, content configuring method, watermark image generating method for sensing screen position server, and display terminal |
US9772812B1 (en) * | 2016-03-28 | 2017-09-26 | Amazon Technologies, Inc. | Device-layout determinations |
US10027850B2 (en) | 2016-04-19 | 2018-07-17 | Blackberry Limited | Securing image data detected by an electronic device |
US10019639B2 (en) * | 2016-04-19 | 2018-07-10 | Blackberry Limited | Determining a boundary associated with image data |
WO2018020764A1 (en) | 2016-07-28 | 2018-02-01 | ソニー株式会社 | Content output system, terminal device, content output method, and recording medium |
US10832306B2 (en) * | 2016-09-15 | 2020-11-10 | International Business Machines Corporation | User actions in a physical space directing presentation of customized virtual environment |
JP7074066B2 (en) | 2016-11-14 | 2022-05-24 | ソニーグループ株式会社 | Information processing equipment, information processing methods, recording media, and programs |
US10705859B2 (en) * | 2016-12-27 | 2020-07-07 | Facebook, Inc. | Electronic displays with customized content |
US10692107B2 (en) * | 2017-02-27 | 2020-06-23 | Verizon Media Inc. | Methods and systems for determining exposure to fixed-location dynamic displays |
CN110612504A (en) * | 2017-05-16 | 2019-12-24 | 深圳市汇顶科技股份有限公司 | Advertisement playing system and advertisement playing method |
US20180349946A1 (en) * | 2017-05-31 | 2018-12-06 | Telefonaktiebolaget Lm Ericsson (Publ) | System, method and architecture for real-time native advertisement placement in an augmented/mixed reality (ar/mr) environment |
KR102351542B1 (en) * | 2017-06-23 | 2022-01-17 | 삼성전자주식회사 | Application Processor including function of compensation of disparity, and digital photographing apparatus using the same |
JP6953865B2 (en) * | 2017-07-28 | 2021-10-27 | 富士フイルムビジネスイノベーション株式会社 | Information processing system |
US11270251B2 (en) | 2017-10-16 | 2022-03-08 | Florence Corporation | Package management system with accelerated delivery |
US11144873B2 (en) | 2017-10-16 | 2021-10-12 | Florence Corporation | Package management system with accelerated delivery |
US10915856B2 (en) | 2017-10-16 | 2021-02-09 | Florence Corporation | Package management system with accelerated delivery |
US10685233B2 (en) * | 2017-10-24 | 2020-06-16 | Google Llc | Sensor based semantic object generation |
US10885496B2 (en) | 2017-10-24 | 2021-01-05 | Staples, Inc. | Restocking hub with interchangeable buttons mapped to item identifiers |
JP2019086988A (en) * | 2017-11-06 | 2019-06-06 | シャープ株式会社 | Content distribution system, content distribution device, content distribution method, and program |
JP2019086989A (en) * | 2017-11-06 | 2019-06-06 | シャープ株式会社 | Content distribution system, content distribution device, content distribution method, and program |
US10872392B2 (en) | 2017-11-07 | 2020-12-22 | Digimarc Corporation | Generating artistic designs encoded with robust, machine-readable data |
US10896307B2 (en) | 2017-11-07 | 2021-01-19 | Digimarc Corporation | Generating and reading optical codes with variable density to adapt for visual quality and reliability |
US11062108B2 (en) | 2017-11-07 | 2021-07-13 | Digimarc Corporation | Generating and reading optical codes with variable density to adapt for visual quality and reliability |
US10250948B1 (en) * | 2018-01-05 | 2019-04-02 | Aron Surefire, Llc | Social media with optical narrowcasting |
US10504158B2 (en) | 2018-03-16 | 2019-12-10 | Intersection Parent, Inc. | Systems, methods and programmed products for electronic bidding on and electronic tracking, delivery and performance of digital advertisements on non-personal digital devices |
US11057685B2 (en) * | 2018-03-29 | 2021-07-06 | Ncr Corporation | Media content proof of play over optical medium |
US10511808B2 (en) * | 2018-04-10 | 2019-12-17 | Facebook, Inc. | Automated cinematic decisions based on descriptive models |
KR101936178B1 (en) * | 2018-05-04 | 2019-01-08 | AltSoft Co., Ltd. | Control service system of local device using reference region |
CA3044566A1 (en) | 2018-05-29 | 2019-11-29 | Staples, Inc. | Intelligent item reordering using an adaptable mobile graphical user interface |
US11410118B2 (en) | 2018-06-01 | 2022-08-09 | Florence Corporation | Package management system |
US10880533B2 (en) * | 2018-06-25 | 2020-12-29 | Canon Kabushiki Kaisha | Image generation apparatus, image generation method, and storage medium, for generating a virtual viewpoint image |
JP7065727B2 (en) * | 2018-08-09 | 2022-05-12 | Sharp Corporation | Content transmission system, display device, content transmission method and program |
CA3109226A1 (en) | 2018-08-21 | 2020-02-27 | Florence Corporation | Purchased item management and promotional systems and methods |
CN111062735A (en) * | 2018-10-16 | 2020-04-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Advertisement putting method, device, system, terminal and computer readable storage medium |
CN113748007A (en) | 2019-03-13 | 2021-12-03 | Digimarc Corporation | Digital marking of recycled articles
US11039160B2 (en) | 2019-03-21 | 2021-06-15 | The Nielsen Company (Us), Llc | Methods and apparatus for delivering extended payloads with composite watermarks |
US11037038B2 (en) | 2019-03-27 | 2021-06-15 | Digimarc Corporation | Artwork generated to convey digital messages, and methods/apparatuses for generating such artwork |
WO2020198660A1 (en) | 2019-03-27 | 2020-10-01 | Digimarc Corporation | Artwork generated to convey digital messages, and methods/apparatuses for generating such artwork |
USD954481S1 (en) | 2019-12-13 | 2022-06-14 | Florence Corporation | Double walled locker door |
US11529011B2 (en) | 2019-06-11 | 2022-12-20 | Florence Corporation | Package delivery receptacle and method of use |
KR102278693B1 (en) * | 2019-11-19 | 2021-07-16 | Koino Co., Ltd. | Signage integrated management system providing Online to Offline user interaction based on Artificial Intelligence and method thereof |
KR20210071351A (en) * | 2019-12-06 | 2021-06-16 | Samsung Electronics Co., Ltd. | Display system, the controlling method of the display system and display apparatus |
US11443401B2 (en) * | 2020-05-21 | 2022-09-13 | At&T Intellectual Property I, L.P. | Digital watermarking |
US20240037864A1 (en) * | 2020-07-28 | 2024-02-01 | Knowck Co., Ltd. | Method, system, and non-transitory computer-readable recording medium for managing augmented reality interface for content being provided by digital signage |
CN114527864B (en) * | 2020-11-19 | 2024-03-15 | BOE Technology Group Co., Ltd. | Augmented reality text display system, method, equipment and medium |
US20220198523A1 (en) * | 2020-12-18 | 2022-06-23 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |
US20230360079A1 (en) * | 2022-01-18 | 2023-11-09 | e-con Systems India Private Limited | Gaze estimation system and method thereof |
KR102516278B1 (en) * | 2022-06-08 | 2023-03-30 | EVR Studio Co., Ltd. | User terminal for providing intuitive control environment on media panel, server, and display apparatus |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030126013A1 (en) * | 2001-12-28 | 2003-07-03 | Shand Mark Alexander | Viewer-targeted display system and method |
US6947571B1 (en) * | 1999-05-19 | 2005-09-20 | Digimarc Corporation | Cell phones with optical capabilities, and related applications |
US20070116325A1 (en) * | 2001-03-05 | 2007-05-24 | Rhoads Geoffrey B | Embedding Geo-Location Information In Media |
US7254249B2 (en) * | 2001-03-05 | 2007-08-07 | Digimarc Corporation | Embedding location data in video |
US20090025024A1 (en) * | 2007-07-20 | 2009-01-22 | James Beser | Audience determination for monetizing displayable content |
US20090141939A1 (en) * | 2007-11-29 | 2009-06-04 | Chambers Craig A | Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision |
US20100013951A1 (en) * | 2004-06-24 | 2010-01-21 | Rodriguez Tony F | Digital Watermarking Methods, Programs and Apparatus |
US20100257252A1 (en) * | 2009-04-01 | 2010-10-07 | Microsoft Corporation | Augmented Reality Cloud Computing |
US7921036B1 (en) * | 2002-04-30 | 2011-04-05 | Videomining Corporation | Method and system for dynamically targeting content based on automatic demographics and behavior analysis |
Family Cites Families (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5862260A (en) * | 1993-11-18 | 1999-01-19 | Digimarc Corporation | Methods for surveying dissemination of proprietary empirical data |
US6763122B1 (en) * | 1999-11-05 | 2004-07-13 | Tony Rodriguez | Watermarking an image in color plane separations and detecting such watermarks |
US6571279B1 (en) * | 1997-12-05 | 2003-05-27 | Pinpoint Incorporated | Location enhanced information delivery system |
US6590996B1 (en) * | 2000-02-14 | 2003-07-08 | Digimarc Corporation | Color adaptive watermarking |
US6763123B2 (en) * | 1995-05-08 | 2004-07-13 | Digimarc Corporation | Detection of out-of-phase low visibility watermarks |
US20060284839A1 (en) * | 1999-12-15 | 2006-12-21 | Automotive Technologies International, Inc. | Vehicular Steering Wheel with Input Device |
US6411725B1 (en) * | 1995-07-27 | 2002-06-25 | Digimarc Corporation | Watermark enabled video objects |
US7095871B2 (en) * | 1995-07-27 | 2006-08-22 | Digimarc Corporation | Digital asset management and linking media signals with related data using watermarks |
GB2324669A (en) * | 1997-04-23 | 1998-10-28 | Ibm | Controlling video or image presentation according to encoded content classification information within the video or image data |
US6298176B2 (en) * | 1997-10-17 | 2001-10-02 | Welch Allyn Data Collection, Inc. | Symbol-controlled image data reading system |
US7756892B2 (en) * | 2000-05-02 | 2010-07-13 | Digimarc Corporation | Using embedded data with file sharing |
CA2269651A1 (en) * | 1998-05-12 | 1999-11-12 | Lucent Technologies, Inc. | Transform domain image watermarking method and system |
US6154571A (en) | 1998-06-24 | 2000-11-28 | Nec Research Institute, Inc. | Robust digital watermarking |
GB2361377B (en) * | 1998-12-11 | 2003-03-26 | Kent Ridge Digital Labs | Method and device for generating digital data watermarked with authentication data |
US7406214B2 (en) * | 1999-05-19 | 2008-07-29 | Digimarc Corporation | Methods and devices employing optical sensors and/or steganography |
EP1923830A3 (en) * | 1999-05-19 | 2008-08-27 | Digimarc Corporation | Methods and systems for controlling computers or linking to internet resources from physical and electronic objects |
US6349410B1 (en) * | 1999-08-04 | 2002-02-19 | Intel Corporation | Integrating broadcast television pause and web browsing |
US7188186B1 (en) * | 1999-09-03 | 2007-03-06 | Meyer Thomas W | Process of and system for seamlessly embedding executable program code into media file formats such as MP3 and the like for execution by digital media player and viewing systems |
US6385329B1 (en) * | 2000-02-14 | 2002-05-07 | Digimarc Corporation | Wavelet domain watermarks |
US6484148B1 (en) * | 2000-02-19 | 2002-11-19 | John E. Boyd | Electronic advertising device and method of using the same |
US20020046100A1 (en) | 2000-04-18 | 2002-04-18 | Naoto Kinjo | Image display method |
JP2001319217A (en) * | 2000-05-09 | 2001-11-16 | Fuji Photo Film Co Ltd | Image display method |
US7657058B2 (en) * | 2000-07-19 | 2010-02-02 | Digimarc Corporation | Watermark orientation signals conveying payload data |
AU2002225593A1 (en) * | 2000-10-17 | 2002-04-29 | Digimarc Corporation | User control and activation of watermark enabled objects |
US20020066111A1 (en) * | 2000-11-22 | 2002-05-30 | Digimarc Corporation | Watermark communication and control systems |
US6965683B2 (en) * | 2000-12-21 | 2005-11-15 | Digimarc Corporation | Routing networks for use with watermark systems |
US7249257B2 (en) * | 2001-03-05 | 2007-07-24 | Digimarc Corporation | Digitally watermarked maps and signs and related navigational tools |
US7197160B2 (en) * | 2001-03-05 | 2007-03-27 | Digimarc Corporation | Geographic information systems using digital watermarks |
US8543823B2 (en) * | 2001-04-30 | 2013-09-24 | Digimarc Corporation | Digital watermarking for identification documents |
US7340076B2 (en) * | 2001-05-10 | 2008-03-04 | Digimarc Corporation | Digital watermarks for unmanned vehicle navigation |
JP2002354446A (en) * | 2001-05-30 | 2002-12-06 | Hitachi Ltd | Method and system for outputting advertisement |
US7212663B2 (en) * | 2002-06-19 | 2007-05-01 | Canesta, Inc. | Coded-array technique for obtaining depth and other position information of an observed object |
JP2004072501A (en) * | 2002-08-07 | 2004-03-04 | Sony Corp | Method and device for superposing information, method and device for detecting information, and superposed information detection system |
JP2004157499A (en) * | 2002-09-13 | 2004-06-03 | Ntt Data Sanyo System Corp | Advertisement distribution system |
JP4981455B2 (en) * | 2004-02-04 | 2012-07-18 | Digimarc Corporation | On-chip digital watermarked image signal and photo travel log with digital watermark |
JP4201812B2 (en) * | 2004-03-25 | 2008-12-24 | Sanyo Electric Co., Ltd. | Information data providing apparatus and image processing apparatus |
US7925549B2 (en) * | 2004-09-17 | 2011-04-12 | Accenture Global Services Limited | Personalized marketing architecture |
JP4632417B2 (en) * | 2004-10-26 | 2011-02-16 | Canon Inc. | Imaging apparatus and control method thereof |
US20060125968A1 (en) | 2004-12-10 | 2006-06-15 | Seiko Epson Corporation | Control system, apparatus compatible with the system, and remote controller |
WO2006080228A1 (en) * | 2005-01-28 | 2006-08-03 | Access Co., Ltd. | Terminal device, optical read code information providing method, and optical read code generation method |
WO2006105686A1 (en) * | 2005-04-06 | 2006-10-12 | Eidgenössische Technische Hochschule Zürich | Method of executing an application in a mobile device |
US7953211B2 (en) | 2005-06-01 | 2011-05-31 | Radziewicz Clifford J | Automated ringback update system |
US8447828B2 (en) | 2005-09-21 | 2013-05-21 | Qurio Holdings, Inc. | System and method for hosting images embedded in external websites |
US8002619B2 (en) * | 2006-01-05 | 2011-08-23 | Wms Gaming Inc. | Augmented reality wagering game system |
CN101523408B (en) * | 2006-01-23 | 2013-11-20 | Digimarc Corporation | Methods, systems, and subcombinations useful with physical articles |
EP2070328B1 (en) * | 2006-10-05 | 2011-02-16 | Vestel Elektronik Sanayi ve Ticaret A.S. | Watermark detection method for broadcasting |
US8565815B2 (en) * | 2006-11-16 | 2013-10-22 | Digimarc Corporation | Methods and systems responsive to features sensed from imagery or other data |
KR20070015239A (en) * | 2007-01-13 | 2007-02-01 | Aruon Games Co., Ltd. | Game system and method in combination with mobile phones and a game console |
JP2008225904A (en) | 2007-03-13 | 2008-09-25 | Sony Corp | Data processing system and data processing method |
JP2008269550A (en) * | 2007-03-27 | 2008-11-06 | Masayuki Taguchi | Recognition system for dynamically displayed two-dimensional code |
US9846883B2 (en) * | 2007-04-03 | 2017-12-19 | International Business Machines Corporation | Generating customized marketing messages using automatically generated customer identification data |
US8781968B1 (en) * | 2008-08-25 | 2014-07-15 | Sprint Communications Company L.P. | Dynamic display based on estimated viewers |
US9788043B2 (en) * | 2008-11-07 | 2017-10-10 | Digimarc Corporation | Content interaction methods and systems employing portable devices |
US9117268B2 (en) * | 2008-12-17 | 2015-08-25 | Digimarc Corporation | Out of phase digital watermarking in two chrominance directions |
US8441441B2 (en) * | 2009-01-06 | 2013-05-14 | Qualcomm Incorporated | User interface for mobile devices |
JP5742057B2 (en) | 2009-03-03 | 2015-07-01 | Digimarc Corporation | Narrow casting from public displays and related arrangements |
US9679437B2 (en) * | 2010-06-08 | 2017-06-13 | Bally Gaming, Inc. | Augmented reality for wagering game activity |
2010
- 2010-03-03 JP JP2011553088A patent/JP5742057B2/en not_active Expired - Fee Related
- 2010-03-03 EP EP10749287.8A patent/EP2404443A4/en not_active Withdrawn
- 2010-03-03 WO PCT/US2010/026096 patent/WO2010102040A1/en active Application Filing
- 2010-03-03 CA CA2754061A patent/CA2754061A1/en not_active Abandoned
- 2010-03-03 US US12/716,908 patent/US8412577B2/en not_active Expired - Fee Related
- 2010-03-03 KR KR1020117022847A patent/KR20110128322A/en not_active Application Discontinuation

2011
- 2011-07-28 US US13/193,182 patent/US20110279479A1/en not_active Abandoned
- 2011-07-28 US US13/193,141 patent/US9524584B2/en active Active
- 2011-07-28 US US13/193,157 patent/US9460560B2/en not_active Expired - Fee Related

2013
- 2013-03-11 US US13/792,793 patent/US20130286046A1/en not_active Abandoned
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US8849827B2 (en) | 2010-09-16 | 2014-09-30 | Alcatel Lucent | Method and apparatus for automatically tagging content |
US8533192B2 (en) | 2010-09-16 | 2013-09-10 | Alcatel Lucent | Content capture device and methods for automatically tagging content |
US8655881B2 (en) | 2010-09-16 | 2014-02-18 | Alcatel Lucent | Method and apparatus for automatically tagging content |
US8666978B2 (en) * | 2010-09-16 | 2014-03-04 | Alcatel Lucent | Method and apparatus for managing content tagging and tagged content |
US20120072463A1 (en) * | 2010-09-16 | 2012-03-22 | Madhav Moganti | Method and apparatus for managing content tagging and tagged content |
US9723293B1 (en) | 2011-06-21 | 2017-08-01 | Amazon Technologies, Inc. | Identifying projection surfaces in augmented reality environments |
US9183807B2 (en) | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Displaying virtual data as printed content |
US9182815B2 (en) * | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Making static printed content dynamic with virtual data |
US9229231B2 (en) | 2011-12-07 | 2016-01-05 | Microsoft Technology Licensing, Llc | Updating printed content with personalized virtual data |
US20130147836A1 (en) * | 2011-12-07 | 2013-06-13 | Sheridan Martin Small | Making static printed content dynamic with virtual data |
US8840250B1 (en) * | 2012-01-11 | 2014-09-23 | Rawles Llc | Projection screen qualification and selection |
US9165381B2 (en) | 2012-05-31 | 2015-10-20 | Microsoft Technology Licensing, Llc | Augmented books in a mixed reality environment |
WO2014040189A1 (en) * | 2012-09-13 | 2014-03-20 | Ati Technologies Ulc | Method and apparatus for controlling presentation of multimedia content |
US20160016222A1 (en) * | 2013-03-01 | 2016-01-21 | Novpress Gmbh Pressen Und Presswerkzeuge & Co. Kg | Handheld Pressing Device |
US10427201B2 (en) * | 2013-03-01 | 2019-10-01 | Novopress Gmbh Pressen Und Presswerkzeuge & Co. Kg | Handheld pressing device |
US9332522B2 (en) | 2014-05-20 | 2016-05-03 | Disney Enterprises, Inc. | Audiolocation system combining use of audio fingerprinting and audio watermarking |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10440407B2 (en) * | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US20180332317A1 (en) * | 2017-05-09 | 2018-11-15 | Lytro, Inc. | Adaptive control for immersive experience delivery |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US20190172091A1 (en) * | 2017-12-04 | 2019-06-06 | At&T Intellectual Property I, L.P. | Apparatus and methods for adaptive signage |
US11188944B2 (en) * | 2017-12-04 | 2021-11-30 | At&T Intellectual Property I, L.P. | Apparatus and methods for adaptive signage |
US11636518B2 (en) | 2017-12-04 | 2023-04-25 | At&T Intellectual Property I, L.P. | Apparatus and methods for adaptive signage |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
Also Published As
Publication number | Publication date |
---|---|
CA2754061A1 (en) | 2010-09-10 |
US9460560B2 (en) | 2016-10-04 |
JP5742057B2 (en) | 2015-07-01 |
JP2012520018A (en) | 2012-08-30 |
WO2010102040A1 (en) | 2010-09-10 |
EP2404443A4 (en) | 2013-09-04 |
US20110281599A1 (en) | 2011-11-17 |
US20130286046A1 (en) | 2013-10-31 |
US9524584B2 (en) | 2016-12-20 |
US8412577B2 (en) | 2013-04-02 |
EP2404443A1 (en) | 2012-01-11 |
KR20110128322A (en) | 2011-11-29 |
US20110280437A1 (en) | 2011-11-17 |
US20100228632A1 (en) | 2010-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9524584B2 (en) | Narrowcasting from public displays, and related methods |
US10559053B2 (en) | Screen watermarking methods and arrangements | |
US10262356B2 (en) | Methods and arrangements including data migration among computing platforms, e.g. through use of steganographic screen encoding | |
US10181339B2 (en) | Smartphone-based methods and systems | |
US7224995B2 (en) | Data entry method and system | |
Yuan et al. | Dynamic and invisible messaging for visual MIMO | |
JP5864437B2 (en) | Method and configuration for signal rich art | |
US20170243246A1 (en) | Content rendering system dependent on previous ambient audio | |
US8391851B2 (en) | Gestural techniques with wireless mobile phone devices | |
EP2635997A2 (en) | Smartphone-based methods and systems | |
US9058660B2 (en) | Feature searching based on feature quality information | |
EP3129916A1 (en) | System and method for embedding dynamic marks into visual images in a detectable manner | |
JP5426441B2 (en) | Advertisement image display device and advertisement image display method | |
TW201514887A (en) | Playing system and method of image information | |
EP2793169A1 (en) | Method and apparatus for managing objects of interest | |
KR20120076541A (en) | Advertising method using augmented reality coder and system thereof | |
KR101625751B1 (en) | AR marker having boundary code, and system, and method for providing augmented reality using the same | |
Chen et al. | BodyAd: A Body-Aware Advertising Signage System with Activity-Specific Interaction Based on IoT Technologies
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION |