
CN109947236B - Method and device for controlling content on a display of an electronic device - Google Patents


Info

Publication number
CN109947236B
CN109947236B (application CN201811536062.9A)
Authority
CN
China
Prior art keywords
electronic device
user
display
distance
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811536062.9A
Other languages
Chinese (zh)
Other versions
CN109947236A (en)
Inventor
Laila Danielsen
Guenael Thomas Strutt
Rachel-Mikel ArceJaeger
Øyvind Stamnes
Tom Øystein Kavli
Erik Forsström
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elliptic Laboratories ASA
Original Assignee
Elliptic Laboratories ASA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from NO20180146A (NO344671B1)
Application filed by Elliptic Laboratories ASA
Publication of CN109947236A
Application granted
Publication of CN109947236B
Legal status: Active
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a method and a device for controlling content on a display of an electronic device, the method comprising the steps of: emitting a first ultrasonic signal from an ultrasonic transducer located in the electronic device, at least a portion of the signal being directed toward a user; receiving a second ultrasonic signal at a receiver transducer located in the electronic device, the second ultrasonic signal comprising a portion of the first ultrasonic signal reflected from the face of the user; and calculating a distance between the user and the electronic device using acoustic measurements involving at least one acoustic transducer. The electronic device comprises a memory storing at least two sets of predetermined display features, the electronic device being arranged to display a first set when the distance between the electronic device and the user exceeds at least one selected threshold and to display a second set when the distance is less than the threshold.

Description

Method and device for controlling content on a display of an electronic device
Technical Field
The present teachings relate to contextual display features for electronic devices.
Background
In electronic devices, more particularly in mobile devices, infrared proximity sensing is often used to detect the presence of a user and/or the orientation of the device and to change the information displayed in dependence thereon, for example as shown in US2016/0219217, WO2017/098524, EP 2428864 and EP 2615524. However, such sensors do not provide sufficient range or field of view ("FoV") to detect, for example, the movement of a user's hand.
Display switching may also be accomplished using the touch screen of an electronic device; however, the touch screen is difficult to use when the user's arm is extended, for example when taking a self-portrait or "selfie". In addition, touch gestures such as pinch-to-zoom often require the user's second hand.
US2012/0287163A1 teaches automatically scaling the size of a set of visual content based on the proximity of the user's face to the display. US2009/0164896A1 teaches a technique for managing content display on a display of an electronic device based on a user's distance from the display, wherein the distance can be estimated by analyzing video data to detect the user's face.
Both of these known solutions use distance to change the zoom factor of the information on the screen, but do not take into account that the distance between the device and the face is chosen by the user, which depends on the context and has a specific function affecting the type of information to be observed on the display.
Thus, there remains a need for a method and product that enables switching between relevant user interaction options based on the use or context of the device. At least some of the problems inherent in the prior art are solved by the features of the appended independent claims.
Disclosure of Invention
The present invention thus provides a device and method for adjusting context-dependent information on a display, particularly but not exclusively for situations such as taking a so-called "selfie" or self-portrait, where the content displayed depends on the distance between the device and the face of the user. For example, the display may transition from showing a camera preview to showing a still image without the user touching a button or the screen. After the self-portrait image has been captured and the handset is brought closer to the user's head, the captured image is magnified. When the arm is extended again, the image shrinks and the viewfinder becomes active again.
According to known techniques, a user instead needs to touch a gallery icon and zoom in on the face using pinch-to-zoom touch gestures or other touch tools. This is time consuming and can result in missed photo opportunities.
Drawings
The invention will be discussed in more detail, by way of example, with reference to the accompanying drawings, in which:
Fig. 1A, B illustrates an aspect of the present teachings in which a mobile device adapts its display based on context.
Fig. 2 shows a flow chart for using the invention.
Fig. 3a, b show the use of the device according to the invention.
Detailed Description
Fig. 1A and 1B illustrate a first example for zooming in and out of a captured self-portrait image in a relatively quick and intuitive manner according to an aspect of the present teachings, where fig. 1A illustrates a long-range display mode and fig. 1B illustrates a short-range display mode. In fig. 1A, the user 2 is taking a self-portrait, and the preview image on the display 1a of the device or mobile phone 1 shows the complete scene, allowing a preferred image to be composed, for example including a background landscape. The exposure of the photo may be controlled on the display 1a, for example using a gesture-based contactless interface 3, for example as described in WO2015/022498, or automatically upon focusing on the face.
In fig. 1B, the device 1 detects that it has been moved close to the user 2 and therefore provides a detailed image on the display 1a, allowing the user to check the exposure of the face and, if desired, to zoom out to see the complete image using standard gestures or menus.
This provides a one-handed pull-up function: when the user 2 brings the device 1 closer, the part of the image containing the face, or the centre of the image, is zoomed in, allowing the user to quickly ascertain whether the captured selfie is of good quality. When the arm is extended again, increasing the distance, the image is reduced and the viewfinder function of the display 1a becomes active again. When a face is detected in the viewfinder, it may be marked with a rectangle around the face, as shown in fig. 1A.
The present solution may be used to switch the display context between long-range and short-range display modes, for example:
Camera context and gallery context: when the arm is extended, the screen shows the feed from the front camera; when the phone is close to the body, the screen shows the last photo taken (gallery).
Panorama and zoom: when the arm is extended, the screen shows the panorama; when the phone is close to the body, the screen shows an enlarged face, as shown in figs. 1A and 1B.
Panorama and custom view: when the arm is extended, the screen shows the panorama; when the phone is close to the body, the screen shows sharing/customization/image-editing options for social media.
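The context switch described above reduces to a simple threshold rule on the measured distance. The sketch below illustrates this; the context names and the 0.3 m threshold are illustrative assumptions, not values from the patent:

```python
def select_display_context(distance_m: float,
                           threshold_m: float = 0.3) -> str:
    """Return the display context for a measured user distance.

    Beyond the threshold (arm extended) the long-range context
    (live viewfinder) is shown; within it, the short-range context
    (e.g. gallery or an enlarged face) is shown instead.
    """
    return "viewfinder" if distance_m > threshold_m else "gallery"


print(select_display_context(0.6))  # arm extended
print(select_display_context(0.2))  # phone close to the body
```

In a real device the threshold would be tuned per user, for example from arm length, as the description discusses later.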
Preferably, the face detection is performed using an imaging unit, such as a camera, using well known face detection algorithms, as mentioned in the above publications. Alternatively, it may include 3D imaging and recognition of faces, or may be an acoustic imaging unit, as suggested by Yoong, K. & McKerrow, P.J. 2005, 'Face recognition with CTFM sonar', in C. Sammut (ed.), Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association, Sydney, pp. 1-10, which describes the use of acoustic signals to recognize faces.
Image-based analysis can be used to estimate distance from known features in a recognized face, but this is slow and inaccurate, and it of course also depends on recognizing the face, which is not always easy. The invention is therefore based on acoustic distance measurement, which may however be assisted by image processing, for example by setting limits on the scanned distance range.
The acoustic measurement may be based on well known techniques such as pulse echo, chirp, coded signal, resonance measurement or similar methods, which use available transducers that emit acoustic signals and measure the time lapse before receiving the echo. Preferably, the acoustic measurement is performed within the ultrasound range, outside the audible range, but possibly close to the audible range, so that it can use transducers already present in devices operating in the audible range. By controlling the emission frequency and analysing the reflected signal, it is also possible to detect the relative movement between the user 2 and the device 1, for example using well known doppler analysis. In other words, the acoustic measurements may include analysis in the time and frequency domains.
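The pulse-echo principle mentioned above amounts to halving the round-trip travel time multiplied by the speed of sound. A minimal sketch, assuming a speed of sound in air of roughly 343 m/s (dry air at about 20 °C; an assumption, not a value from the patent):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C; illustrative


def pulse_echo_distance(round_trip_s: float,
                        speed_of_sound: float = SPEED_OF_SOUND_M_S) -> float:
    """Distance to the reflector from the echo round-trip time.

    The signal travels to the face and back, so the one-way
    distance is half the total acoustic path.
    """
    return speed_of_sound * round_trip_s / 2.0


# A 2 ms round trip corresponds to about 0.343 m, a typical
# close-range selfie distance.
print(pulse_echo_distance(0.002))
```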
Thus, the preferred algorithm produces an estimate of the motion of the device with the acoustic transducer relative to the acoustic reflector (possibly the user's head). By design, the position is reset in case doppler is used when the device is stationary or performs a selected action, and thus any movement relative to that position is measured. The estimation is based on accumulating the velocity of the reflector relative to the device, wherein the doppler effect is used to estimate the velocity. Alternatively, the movement may be found simply by monitoring the change in measured distance using a series of emitted acoustic codes or patterns.
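The velocity-accumulation estimate with position reset described above can be sketched as follows; the sampling interval and the reset criterion (device detected stationary) are illustrative assumptions:

```python
def accumulate_displacement(velocities_m_s, dt_s, reset_flags=None):
    """Integrate Doppler-estimated relative velocity into displacement.

    The running position is reset to zero whenever the corresponding
    reset flag is set (e.g. the device is detected to be stationary),
    mirroring the reset behaviour described above; any subsequent
    movement is then measured relative to that position.
    """
    position = 0.0
    track = []
    for i, v in enumerate(velocities_m_s):
        if reset_flags and reset_flags[i]:
            position = 0.0
        position += v * dt_s  # velocity integrated over one sample
        track.append(position)
    return track


# Three samples of 0.1 m/s toward the face, 1 s apart:
print(accumulate_displacement([0.1, 0.1, 0.1], 1.0))
```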
When continuously measuring movement between the user and the device, the displayed information may also be continuously changed, for example by using an animation showing the movement or continuously zooming in on a captured image or image on a display.
Fig. 2 shows the process, starting when the camera is activated. The real-time image 101 is scanned to detect a face. If a face is identified 101a, an image with the face is captured 102a. Further, the distance and/or movement may be sensed acoustically 103a and by monitoring the face 104a. A combination of ultrasonic detection and face recognition is used to determine whether the device has moved toward the head after the facial image has been captured.
If the device is moved towards the face, the image may be enlarged to focus on the face 105a, and if movement away from the face is detected acoustically 106a and/or visually 107a, the image will return to the live preview mode 101 again.
Further, the back command 110 may be given by a gesture or pressing a back button, while returning to the live view mode 101 at any stage.
If no face is detected 101b, the image is captured without face recognition 102b; in this case the distance and/or movement is measured only acoustically 103b. If the device is moved towards the user, the central part 105b of the image may be shown, and when the distance increases 107b, the system returns to the live view mode 101, where the image is again scanned for a face.
The face recognition portion may use the size of the bounding box around the face to determine whether the device has moved toward or away from the face after image capture, or may indicate an approximate distance based on the distance between certain features of the recognized face in the image samples during processing. In this case, a pull-up of the display is triggered when the size of the face box grows beyond a given threshold.
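The bounding-box distance cue can be sketched with a pinhole-camera model: the face appears larger as the device approaches. The focal length, assumed physical face width, and growth threshold below are illustrative assumptions, not values from the patent; a real device would use its calibrated camera intrinsics:

```python
def distance_from_face_box(box_width_px: float,
                           focal_length_px: float = 1000.0,
                           face_width_m: float = 0.15) -> float:
    """Approximate camera-to-face distance via the pinhole model:
    distance = focal_length * real_width / apparent_width."""
    return focal_length_px * face_width_m / box_width_px


def box_growth_triggers_pull_up(width_before_px: float,
                                width_after_px: float,
                                growth_threshold: float = 1.5) -> bool:
    """True when the face box has grown enough (relative to the size
    at capture time) to trigger the close-range display mode."""
    return width_after_px / width_before_px > growth_threshold


print(distance_from_face_box(500.0))        # face 500 px wide
print(box_growth_triggers_pull_up(200.0, 350.0))
```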
As described above and shown in fig. 3a, the method according to the invention involves a device 1 that transmits an ultrasonic signal 31 and analyzes the echoes. This allows a fine-grained measurement of the distance between the phone 1 and the body 2, which neither face recognition nor inertial sensors can provide. It has drawbacks, such as receiving reflections from other people 34 and from the user's hands or arms. It is therefore advantageous to include a solid reference, provided by face recognition, that defines an initial distance range 33. The system may also be arranged to react only when there is strong evidence of movement, to filter out spurious movement measurements, at the cost of a delayed response. Inertial sensors can be used to reduce this delay.
The transducers used according to the present solution may depend on the device, preferably based on separate transmitters and receivers, but other well known ideas are also conceivable, such as a single transmitter/receiver device, or an array, pair or matrix, which adds directionality to the emitted and/or received signals, e.g. for suppressing reflections outside the camera's field of view or taking into account the known lobes of the transmitters and receivers. In this way, other people 34 or objects in the image may be removed from the distance measurement. If a recognized face appears in the image, the targeting system may automatically select the recognized face, or this may be selected by the user.
When using a camera according to the invention, the movement of the phone will be substantially perpendicular to the screen. Thus, the inertial sensor may be well suited to detect the onset of motion. Inertial sensors may be affected by drift, but face recognition and ultrasonic distance measurements may be used to correct for drift.
Face recognition provides a rough measure of distance and can be used to set the expected distance range 33 when the size of the recognized features is sufficient to reliably identify the primary user. The approximate distance range based on face recognition may also be adjusted to the user's arm length or other selections made during operation. The combination of the two may thus eliminate the risk of capturing motion from the body sideways, or reflections from the limbs, originating from points outside the expected range. The range 33 may also be linked to a threshold 32, below which the image is enlarged to show a face or other feature on the display in the close-range display mode, as described above.
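The range gating described here, where face recognition sets an expected distance window and acoustic echoes outside it are ignored, can be sketched as follows; the window half-width is an illustrative assumption:

```python
def gate_echoes(echo_distances_m, expected_m, tolerance_m=0.25):
    """Keep only echo distances inside the window centred on the
    face-recognition estimate, suppressing reflections from
    bystanders, limbs, or other objects outside the expected range.
    """
    lo, hi = expected_m - tolerance_m, expected_m + tolerance_m
    return [d for d in echo_distances_m if lo <= d <= hi]


# Echoes from the user's hand (0.1 m), face (0.5 m), and a
# bystander (1.4 m); only the in-window echo survives:
print(gate_echoes([0.1, 0.5, 1.4], expected_m=0.5))
```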
Alternatively, the range 33 may be defined by the detected motion. If motion between the device and the user is detected and exceeds a predetermined limit in time or amplitude, the display may change in the same manner as if the measurement provided an absolute distance measurement.
The exact range may vary, for example, depending on the age and arm length of the person, and may be set to user preferences.
Typical user characteristics such as arm length and other variables may also be based on inputs available to the algorithm in the device and on statistics of previous use. The measured size of the identified facial features may also be continuously monitored in order to track any changes in the distance range; by this process, acoustic reflections from other persons, arms etc. close to the device can be avoided or suppressed, as shown in fig. 3b, where reflections from the person 34 at the side are ignored.
Although the figures show the device being held by a person, the invention also works if the device is stationary, for example placed on a support, while the user moves relative to the device. This is useful, for example, when a large distance is required between the camera and the user and it is difficult to capture both the scenery and the user.
In summary, the present invention relates to an apparatus and method for controlling display content on an electronic device comprising the display and a camera having a known field of view.
The method comprises the following steps:
A first ultrasonic signal is emitted from an ultrasonic transducer located in the device, at least a portion of the signal being directed towards the user, who is positioned within the field of view covered by the imaging unit.
A second ultrasonic signal is received at a receiver transducer located in the device. The second ultrasound signal comprises a portion of the first ultrasound signal reflected from the face of the user. Thus, for example, the travel time may be used to calculate the distance between the device and the face.
The distance between the user and the electronic device is then calculated using acoustic measurements involving at least one acoustic transducer capable of both transmitting and receiving acoustic signals, or dedicated transducers for transmitting and for receiving, which may be synchronized to measure the propagation time.
The device further comprises a memory storing at least two sets of predetermined display features, wherein the device is arranged to display a first set when the distance between the device and the user exceeds at least one selected threshold and to display a second set of display features when the distance is less than said threshold.
The number of thresholds may depend on the function and application and in some cases may vary from one to a high enough number to allow the display to vary continuously according to distance. Thus, the display depends on the measurement range for the user, wherein the display may change continuously as an animation, image transformation or user interface modification, or the display may change between two predetermined states (images) depending on whether the detected range is above or below a defined threshold.
In addition, movement of the device relative to the user or face may be measured, for example, by analyzing the reflected acoustic signal to detect doppler shift relative to the emitted signal. Based on this movement, the estimated trajectory of the movement may be used to estimate the time to change between the groups of display features, and may also be used to present or show the measured movement on the screen, for example by using an animation or zooming in on the captured image according to the measured movement.
The movement and trajectory is typically related to movement along a line between the device and the face, but relative movement perpendicular thereto may also be used, for example, movement based on the face recognized on the screen.
The imaging unit or camera is preferably capable of authenticating at least some facial features of a user facing the device, wherein the imaging unit is operatively coupled to the electronic device. The size of the recognized facial features can be used to estimate the approximate distance between the device and the user's face and to set a limited range of distances calculated from the acoustic signal, thereby ignoring signals occurring from objects outside said range and in this way avoiding possible interference from outside the range.
The display settings may include two or more distance thresholds such that when the device is outside of a first threshold, shutter control associated with imaging means on the electronic device is activated, when the device is within the first threshold but outside of a second threshold, shutter control of the device is hidden, and when the device is within the second threshold, display zoom-in control is activated.
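This two-threshold behaviour maps the measured distance to one of three UI states. A minimal sketch with illustrative threshold values (the patent does not specify numeric thresholds):

```python
def ui_state(distance_m: float,
             outer_m: float = 0.45,
             inner_m: float = 0.25) -> str:
    """Map a measured user distance to a display state.

    Beyond the outer threshold the shutter control is shown;
    between the two thresholds it is hidden; within the inner
    threshold the display zoom control is activated.
    """
    if distance_m > outer_m:
        return "shutter_control"
    if distance_m > inner_m:
        return "shutter_hidden"
    return "zoom_control"


print(ui_state(0.6))   # arm extended: shutter available
print(ui_state(0.35))  # mid range: shutter hidden
print(ui_state(0.1))   # close range: zoom control
```

More thresholds, or a continuous mapping, would follow the same pattern, as the surrounding text notes.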
The device may be operated manually, or activated automatically, for example using an inertial sensor, when movement in a direction between the imaging apparatus and the user is sensed.
Accordingly, an electronic device according to the present invention includes: an imaging unit including a face recognition circuit; an acoustic measurement unit comprising a transmitter for transmitting an acoustic signal having predetermined characteristics, and a receiver for receiving and analyzing the acoustic signal and measuring the distance between the face of the user and the device, and possibly also the relative movement. As mentioned above, the transmitter and receiver may be separate units or be comprised of the same transducer.
The device comprises a display adapted to show the imaging area and the selected information and a display control adapted to present a first set of information on the display when the distance exceeds a selected threshold and a second set of information when the distance is below said threshold. Each set of display information may be stored in a memory accessible to the user, the type of information being selected by the user.
The invention also relates to a software product implementing at least some of the features of the method for controlling an electronic device disclosed herein.

Claims (11)

1. A method for controlling content on a display of an electronic device, the method comprising the steps of:
emitting a first ultrasonic signal from an ultrasonic transducer located in the electronic device, at least a portion of which is directed towards a user,
receiving a second ultrasonic signal at a receiver transducer located in the electronic device, the second ultrasonic signal comprising a portion of the first ultrasonic signal reflected from the face of the user;
calculating a distance between the user and the electronic device using acoustic measurements involving at least one acoustic transducer;
wherein the electronic device comprises a memory storing at least two sets of predetermined display features, the electronic device being arranged to display a first set when a distance between the electronic device and the user exceeds at least one selected threshold, and to display a second set of display features when the distance is less than the threshold,
wherein the threshold comprises at least two distance thresholds, and wherein when the electronic device is outside a first threshold, a shutter control associated with an imaging unit on the electronic device is activated,
concealing the shutter control of the electronic device when the electronic device is within the first threshold but outside a second threshold, and
activating a display zoom-out control when the electronic device is within the second threshold.
2. The method according to claim 1, comprising the steps of: movement of the electronic device relative to the user is measured and the measured movement is presented on the display.
3. The method of claim 2, wherein the movement is measured by analyzing reflected acoustic signals to detect doppler shift relative to the emitted signals.
4. A method according to claim 2, wherein the estimated trajectory of movement is used to estimate the time to change between the sets of display features.
5. A method according to claim 3, comprising the steps of: at least some facial features facing a user of the electronic device are identified using an imaging unit operatively coupled to the electronic device.
6. The method of claim 5, wherein the size of the identified facial feature is used to estimate an approximate distance between the electronic device and the user's face in order to set a limited range of distances calculated from the acoustic signal, thereby ignoring signals occurring from objects outside the range.
7. An electronic device, comprising: an imaging unit including a display; and an acoustic measurement unit comprising a transmitter for transmitting an acoustic signal having a predetermined characteristic, and a receiver for receiving and analyzing the acoustic signal, and measuring a distance between a user in front of the device and the electronic device, wherein
The electronic device comprising a display control adapted to present a first set of information on the display when the distance exceeds a selected threshold and a second set of information when the distance is below the threshold,
Wherein the threshold comprises at least two distance thresholds, and wherein when the electronic device is outside a first threshold, a shutter control associated with the imaging unit on the electronic device is activated,
When the electronic device is within the first threshold but outside of the second threshold, the shutter control of the electronic device is hidden, and
When the electronic device is within the second threshold, a display zoom-out control is activated.
8. The electronic device of claim 7, further comprising a face recognition circuit, the face recognition circuit comprising an imaging device.
9. The electronic device according to claim 8, wherein the imaging unit is adapted to calculate an approximate distance range between the user and the device based on the magnitude of the measured feature of the user, the acoustic measurement unit calculating the distance based on reflected signals within the approximate distance range.
10. The electronic device of claim 7, wherein each set of display information is stored in a memory accessible to a user, the type of information being selected by the user.
11. The electronic device according to claim 7, wherein the acoustic measurement unit is adapted to measure movement in a direction between the user and the electronic device.
CN201811536062.9A 2017-12-21 2018-12-14 Method and device for controlling content on a display of an electronic device Active CN109947236B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762608965P 2017-12-21 2017-12-21
US62/608,965 2017-12-21
NO20180146A NO344671B1 (en) 2017-12-21 2018-01-29 Contextual display
NO20180146 2018-01-29

Publications (2)

Publication Number Publication Date
CN109947236A CN109947236A (en) 2019-06-28
CN109947236B true CN109947236B (en) 2024-05-28

Family

ID=67006412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811536062.9A Active CN109947236B (en) 2017-12-21 2018-12-14 Method and device for controlling content on a display of an electronic device

Country Status (1)

Country Link
CN (1) CN109947236B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119442203B * 2025-01-07 2025-06-20 Honor Device Co., Ltd. Identity recognition method and electronic device

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2010193254A * 2009-02-19 2010-09-02 Sanyo Electric Co Ltd Imaging apparatus and group photographing support program
JP2011242458A * 2010-05-14 2011-12-01 Nippon Telegr & Teleph Corp <Ntt> Display device and display method
CN104461290A * 2014-11-28 2015-03-25 Guangdong OPPO Mobile Telecommunications Corp Ltd Photographing control method and device
JP2016071558A * 2014-09-29 2016-05-09 Sharp Corp Display control device, control method, control program, and recording medium
JP2016127525A * 2015-01-07 2016-07-11 Canon Inc Imaging apparatus, control method therefor, and program
CN106227439A * 2015-06-07 2016-12-14 Apple Inc Apparatus and method for capturing and interacting with enhanced digital images
CN107172347A * 2017-05-12 2017-09-15 Vivo Mobile Communication Co Ltd Photographing method and terminal

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP3518094A1 (en) * 2010-04-26 2019-07-31 BlackBerry Limited Portable electronic device and method of controlling same
GB201421427D0 (en) * 2014-12-02 2015-01-14 Elliptic Laboratories As Ultrasonic proximity and movement detection


Also Published As

Publication number Publication date
CN109947236A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
US10523870B2 (en) Contextual display
US7620316B2 (en) Method and device for touchless control of a camera
KR101688355B1 (en) Interaction of multiple perceptual sensing inputs
US9465443B2 (en) Gesture operation input processing apparatus and gesture operation input processing method
EP2927634B1 (en) Single-camera ranging method and system
EP2352078B1 (en) Information processing apparatus, information processing method, information recording medium, and program
US9127942B1 (en) Surface distance determination using time-of-flight of light
CN110506415B (en) Video recording method and electronic equipment
US8988662B1 (en) Time-of-flight calculations using a shared light source
US9558563B1 (en) Determining time-of-fight measurement parameters
JP2000347692A (en) Person detecting method, person detecting device, and control system using it
US9062969B1 (en) Surface distance determination using reflected light
JP6866467B2 (en) Gesture recognition device, gesture recognition method, projector with gesture recognition device and video signal supply device
JP6147350B2 (en) Distance measuring device
JP5291560B2 (en) Operating device
JP4682816B2 (en) Obstacle position detector
KR20170100892A (en) Position Tracking Apparatus
CN109947236B (en) Method and device for controlling content on a display of an electronic device
KR101265349B1 (en) Ultrasound System
JP4870651B2 (en) Information input system and information input method
CN105338241A (en) Shooting method and device
KR101444270B1 (en) Unmanned mobile monitoring system
JP2008182321A (en) Image display system
CN107422856A (en) Method, apparatus and storage medium for machine processing user command
CN108734065B (en) Gesture image acquisition equipment and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant