WO2021054580A1 - Apparatus and method for providing rear information of an object - Google Patents
Apparatus and method for providing rear information of an object (객체의 후면 정보 제공 장치 및 그 방법)
- Publication number
- WO2021054580A1 (PCT/KR2020/008066)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- user
- gaze
- unit
- blind spot
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
- H04N13/388—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
- H04N13/39—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume the picture elements emitting light at places where a pair of light beams intersect in a transparent material
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Definitions
- The present invention relates to an apparatus and method for providing rear information of an object, and more particularly, to an apparatus and method that acquire an omnidirectional surrounding image related to an object, track a user's gaze in real time, generate, in response to the tracked gaze, a shielding image for the area of the acquired surrounding image corresponding to the blind spot created by the object, and display the generated shielding image through a display unit provided at one side of the object.
- Korean Patent Application Publication No. 10-2017-0126149 is a prior art document in the technical field to which the present invention belongs.
- Another object of the present invention is to provide an apparatus and method for providing rear information of an object that provide a stereoscopic image of the external situation through the display unit by converting the shielding image in real time according to changes in the gaze or binocular position of one or more users.
- An apparatus for providing rear information of an object includes: a three-dimensional scanning unit for acquiring a surrounding image of the object, including a blind spot shielded by the object in the gaze direction of a user located in front of the object; a gaze tracking unit for tracking the gaze of the user located in front of the object; a controller configured to generate, in response to the tracked gaze, a shielding image for the area of the acquired surrounding image corresponding to the blind spot created by the object; and a display unit for displaying the generated shielding image.
- The three-dimensional scanning unit may be configured on the other side of the object and may acquire the surrounding image including the blind spot created by the object when a user located in front of the object looks at the object.
- The gaze tracking unit may be configured on one side of the object, acquire a user image including the user located in front of the object, recognize the user from the acquired user image, and track the recognized user's gaze in real time.
- The control unit includes: a binocular tracking module for outputting coordinate information according to the user's gaze based on the user image acquired by the gaze tracking unit; an image control module for extracting the area corresponding to the coordinate information for each frame based on the surrounding image acquired from the three-dimensional scanning unit; and an image generation module for providing the image of the extracted area of the surrounding image to the display unit as the shielding image.
- The binocular tracking module includes: a preprocessor configured to generate closed curves for the user's face and both eyes by performing binarization on the user image; an eye detection unit for detecting the binocular portions in the face area formed by the generated closed curves; and a coordinate calculator for calculating coordinate information according to the detected binocular portions.
- The image control module includes: a surrounding image input unit for temporarily storing the surrounding image in units of a predetermined frame; a focus setting unit for setting a focus by mapping the calculated coordinate information onto the entire area of a representative frame of the temporarily stored surrounding image; and an area extraction unit for extracting the area corresponding to the blind spot created by the object based on the set focus.
- In the method for providing rear information of an object, a surrounding image of the object, including the blind spot shielded by the object in the gaze direction of a user located in front of the object, is acquired by a three-dimensional scanning unit.
- The generating of the shielding image may include: generating closed curves for the user's face and both eyes by performing binarization on the user image, acquired through the gaze tracking unit, that includes the user; detecting the binocular portions in the face area formed by the generated closed curves; calculating coordinate information according to the detected binocular portions; temporarily storing the surrounding image acquired through the three-dimensional scanning unit in units of a predetermined frame; setting a focus by mapping the calculated coordinate information onto the entire area of a representative frame of the temporarily stored surrounding image; extracting the area corresponding to the blind spot created by the object based on the set focus; and generating the shielding image, which is the image of the extracted area of the surrounding image corresponding to the blind spot created by the object.
- The coordinate information may include the distance between the user's two eyes and the distance between the user and the display unit.
- The present invention acquires an omnidirectional surrounding image related to an object, tracks the user's gaze in real time, generates, in response to the tracked gaze, a shielding image for the area of the acquired surrounding image corresponding to the blind spot, and displays the generated shielding image through a display unit provided at one side of the object, so that the user can feel as if the opaque object were not there, or can check the information behind the opaque object to a comparable degree.
- The present invention provides the external situation as a stereoscopic image through the display unit by converting the shielding image in real time according to changes in the gaze or binocular position of one or more users, which has the effect of eliminating or reducing the sense of incongruity the user feels when rear information is provided and of increasing user satisfaction.
- FIG. 1 is a block diagram showing a configuration of an apparatus for providing rear information of an object according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of a control unit according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating a method of extracting a shielding image of an apparatus for providing rear information of an object according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating a method of providing rear information of an object according to an embodiment of the present invention.
- FIG. 5 is a diagram showing an example of an image according to a user's gaze according to an embodiment of the present invention.
- FIG. 6 is a diagram illustrating an example of a surrounding image including a blind spot shielded by an object according to a user's gaze according to an embodiment of the present invention.
- FIG. 7 is a diagram showing an example of an image occluded by an object according to an embodiment of the present invention.
- FIG. 8 is a diagram illustrating an example of a screen of a display unit on which a shielding image is displayed according to an embodiment of the present invention.
- FIG. 1 is a block diagram showing the configuration of an apparatus 10 for providing rear information of an object according to an embodiment of the present invention.
- The apparatus 10 for providing information on the rear surface of an object includes a three-dimensional scanning unit 100, a gaze tracking unit 200, a control unit 300, and a display unit 400. Not all of the components shown in FIG. 1 are essential; the apparatus 10 may be implemented with more components than those shown in FIG. 1, or with fewer.
- The apparatus 10 may further include a communication unit (not shown) for communicating with other terminals, a storage unit (not shown) for storing various information and programs (or applications), an audio output unit (not shown) for outputting audio information corresponding to various information and program execution results, and a sensor unit (not shown) for detecting nearby objects and their movement.
- the three-dimensional scanning unit (or three-dimensional scanning device) 100 is configured (or arranged/formed) around a specific object.
- the object may be an opaque object (or an opaque object).
- The three-dimensional scanning unit 100 may consist of a time-of-flight (TOF) camera, a structured-light (SL) sensor, a CCD image sensor (or camera module/camera), or a stereo vision camera (stereo camera) capable of acquiring image information for 360 degrees omnidirectionally.
- The three-dimensional scanning unit 100 acquires (or photographs) a surrounding image of the object (or a peripheral image behind the object / surrounding image information / an omnidirectional image around the object) that includes the blind spot shielded by the object in the user's gaze direction (or in the rear direction of the object as seen from the user) with respect to the user located in front of the object.
- That is, the three-dimensional scanning unit 100 configured on the other side of the specific object acquires the surrounding image (or peripheral image behind the object / surrounding image information / omnidirectional image around the object) including the blind spot created by the object when a user located in front of the object looks at the object.
- A plurality of three-dimensional scanning units 100 may be configured on the object (or around the object) in order to acquire omnidirectional surrounding images of the object.
- The three-dimensional scanning unit 100 may be installed at a location advantageous for photographing the surrounding situation corresponding to the viewpoint seen by the user, relative to the object.
- The three-dimensional scanning unit 100 photographs the surrounding situation corresponding to the point the user is looking at and provides it to the control unit 300; using a fisheye lens or the like, it may acquire an image covering an area at least wider than the user's field of view and provide the acquired surrounding image to the controller 300.
- When a radar is used as the three-dimensional scanning unit 100, a wide-angle surrounding image may be obtained by rotating the mounted sensor. Accordingly, the 3D scanning unit 100 may acquire not only a flat image but also a 3D image having spatial information.
- the three-dimensional scanning unit 100 may acquire a surrounding image by photographing a blind spot formed by the other side of the object from one side of the object.
- the gaze tracking unit (or eye tracking device) 200 is configured (or arranged/formed) around the specific object.
- The gaze tracking unit 200 may consist of a TOF camera, an SL sensor, a CCD image sensor (or camera module/camera), or a stereo vision camera (stereo camera) capable of acquiring image information for 360 degrees omnidirectionally.
- The gaze tracking unit 200 tracks (or measures/detects) the gaze of a user located in front of the object in real time. At this time, the gaze tracking unit 200 may detect the user's eye position, gaze, visual direction, head direction, etc. by applying a known image tracking technique.
- The gaze tracking unit 200 configured on one side of the specific object (or in front of the object) processes still-image or video frames obtained by an image sensor (camera module or camera) in a video call mode, photographing mode, video conference mode, etc. That is, according to a codec (CODEC), the image data obtained by the image sensor is encoded/decoded to meet each standard.
- The gaze tracking unit 200 photographs a user image (or user image information) including the user located in front of the object and outputs a video signal corresponding to the captured user image (subject image).
- The gaze tracking unit 200 recognizes the user from the acquired user image and tracks the recognized user's gaze in real time.
- A plurality of gaze tracking units 200 may be configured on the object (or around the object) to detect users located in all directions around the object and to track each detected user's gaze in real time, so that the gaze of each individual user located around the object can be tracked.
- The gaze tracking unit 200 detects the user's gaze and provides it to the controller 300 so that the controller 300 can detect the region corresponding to the blind spot according to the gaze of the user around the object.
- That is, the apparatus 10 detects the user's gaze through the gaze tracking unit 200 and, by setting the corresponding area and generating and displaying a shielding image for it, allows the user to view a more natural image.
- Here, the surrounding image obtained by the three-dimensional scanning unit 100 may be an image covering an area larger than the user's field of view, related to the direction corresponding to (or coinciding with) the gaze of a user looking at the object from a specific location spaced a predetermined distance from it, and the user image obtained by the gaze tracking unit 200 may be an image including that user (or an image related to the direction facing the user's gaze).
- The controller (or image processing unit / microcontroller unit (MCU)) 300 is disposed (or configured/formed) on one side of the specific object on which the rear information providing device 10 is configured.
- The control unit 300 executes the overall control function of the apparatus 10 by using programs and data stored in the storage unit (not shown).
- The control unit 300 may include a RAM, a ROM, a CPU, a GPU, and a bus, and the RAM, ROM, CPU, and GPU may be connected to one another through the bus.
- the CPU may access the storage unit and perform booting using the O/S stored in the storage unit, and may perform various operations using various programs, contents, data, etc. stored in the storage unit.
- The control unit 300 includes a binocular tracking module 310, an image control module 320, and an image generation module 330. Not all of the components shown in FIG. 2 are essential; the control unit 300 may be implemented with more components than those shown in FIG. 2, or with fewer.
- The binocular tracking module 310 receives the user image tck obtained by the gaze tracking unit 200 and outputs coordinate information according to the user's gaze (or calculates the coordinate information toward which the user's gaze is directed).
- the binocular tracking module 310 includes a preprocessor 311, an eye detection unit 312, and a coordinate calculation unit 313.
- the preprocessor 311 generates (or derives) closed curves for the user's face and both eyes by performing a binarization process on the user image tck including the user acquired by the gaze tracking unit 200.
- the preprocessor 311 may perform preprocessing by applying a binarization technique to the user image and converting the gray scale of each pixel in the image into two binarized values of 0 gray or 255 gray.
- In the binarized image, the face part and both eyes each form a closed curve.
- the preprocessor 311 may exclude other parts except for the user's face and eyes through a binarization process.
- Here, a predetermined threshold is used so that grayscale values above the threshold are binarized to black and grayscale values below the threshold are binarized to white; the threshold value may be set appropriately for the capture conditions.
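The following is a minimal sketch of this binarization step in Python using OpenCV; the function name, the use of OpenCV, and the default threshold value are illustrative assumptions, not part of the patent.

```python
# Hedged sketch of the binarization preprocessing (assumed OpenCV-based).
import cv2
import numpy as np

def binarize_user_image(user_image_bgr: np.ndarray, threshold: int = 90) -> np.ndarray:
    """Binarize a user image so that the face outline and both eyes appear
    as closed curves: gray values above the threshold become black (0) and
    values at or below it become white (255)."""
    gray = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV: pixel > threshold -> 0, otherwise -> 255.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    return binary
```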
- the eye detection unit 312 detects binocular portions in the face area formed by the generated closed curve.
- The eye detection unit 312 may identify (or recognize) the user's face by identifying a closed curve in the preprocessed image, and may detect both eyes of the user by identifying two adjacent closed curves within the identified face.
- the coordinate calculation unit 313 calculates coordinate information according to the detected binocular part.
- the coordinate information includes a distance between both eyes of the user, a distance between the user and the display unit 400 (or the object), and the like.
- The coordinate calculation unit 313 may calculate coordinate information for the two eyes identified by the eye detection unit 312 and provide coordinate information including the user's gaze direction to the control unit 300.
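A hedged sketch of the eye detection and coordinate calculation steps follows. Contour analysis stands in for the patent's closed-curve identification, and the pinhole-camera distance estimate, the constants, and all names are assumptions for illustration.

```python
# Hedged sketch: find the two eye closed-curves nested inside the face
# closed-curve and derive coordinate information (gaze point, inter-eye
# distance, and an estimated user-to-display distance).
import cv2
import numpy as np

EYE_BASELINE_CM = 6.3    # assumed average inter-pupillary distance
FOCAL_LENGTH_PX = 800.0  # assumed camera focal length in pixels

def detect_eyes_and_coordinates(binary: np.ndarray):
    # Retrieve all closed curves together with their nesting hierarchy.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None or len(contours) == 0:
        return None
    # Take the largest closed curve as the face region.
    face_idx = max(range(len(contours)), key=lambda i: cv2.contourArea(contours[i]))
    # Candidate eyes: child contours nested directly inside the face curve.
    children = [i for i in range(len(contours)) if hierarchy[0][i][3] == face_idx]
    eyes = sorted(children, key=lambda i: cv2.contourArea(contours[i]), reverse=True)[:2]
    if len(eyes) < 2:
        return None
    centers = []
    for i in eyes:
        m = cv2.moments(contours[i])
        if m["m00"] == 0:
            return None
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    (x1, y1), (x2, y2) = centers
    eye_dist_px = float(np.hypot(x2 - x1, y2 - y1))
    # Pinhole-camera estimate of the user-to-display distance, assuming
    # the camera sits in the display plane.
    user_dist_cm = FOCAL_LENGTH_PX * EYE_BASELINE_CM / eye_dist_px
    gaze_xy = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)  # midpoint between the eyes
    return gaze_xy, eye_dist_px, user_dist_cm
```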
- The image control module 320 receives the surrounding image img_i obtained by the three-dimensional scanning unit 100 and extracts the area corresponding to the coordinate information for each frame.
- That is, the image control module 320 extracts the image of the external situation toward which the user's gaze is currently directed, according to the estimated gaze direction of the user.
- the image control module 320 includes a peripheral image input unit 321, a focus setting unit 322, and an area extraction unit 323.
- The surrounding image input unit 321 receives the surrounding image obtained by the three-dimensional scanning unit 100 and temporarily stores it in a storage unit (not shown) in units of a predetermined frame.
- The surrounding image input unit 321 may receive the surrounding image img_i in real time from the three-dimensional scanning unit 100 and temporarily store it in units of a predetermined frame; a region corresponding to the shielding image may then be extracted by mapping the coordinate information onto a representative frame, for example the initial frame.
- The focus setting unit 322 sets a focus by mapping (or applying) the calculated coordinate information onto the entire area of the representative frame of the surrounding image.
- That is, the focus setting unit 322 maps the gaze direction according to the coordinate information onto the representative frame of the temporarily stored surrounding image img_i, thereby setting (or checking) the area of the surrounding image img_i on which the user's gaze is focused.
- the area extracting unit 323 extracts an area corresponding to the blind spot by the object based on the set focus.
- That is, the area extraction unit 323 may determine and extract which area of the surrounding image is the blind spot created by the object, according to the user's focus.
- The user's focus may be directed toward the object, or to the left or right of it; accordingly, there is a part that is directly visible to the eye and a part that is hidden by the object and invisible.
- the area extracting unit 323 extracts an area corresponding to a blind spot by the object from the surrounding image based on the set focus.
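Below is a minimal sketch of the focus setting and area extraction just described: the calculated gaze coordinates are mapped onto a representative frame of the buffered surrounding image and a window around the mapped focus is cropped out. The linear mapping, the mirrored horizontal axis, and the window size are assumptions.

```python
# Hedged sketch of focus setting and blind-spot area extraction.
import numpy as np

def extract_blind_spot_region(representative_frame: np.ndarray,
                              gaze_xy: tuple,
                              user_image_size: tuple,
                              window: tuple = (480, 360)) -> np.ndarray:
    fh, fw = representative_frame.shape[:2]
    uw, uh = user_image_size
    # Map the gaze point from user-image coordinates onto the frame; the
    # horizontal axis is mirrored because the scanning camera and the user
    # face in opposite directions.
    cx = int((1.0 - gaze_xy[0] / uw) * fw)
    cy = int(gaze_xy[1] / uh * fh)
    ww, wh = window
    x0 = int(np.clip(cx - ww // 2, 0, max(fw - ww, 0)))
    y0 = int(np.clip(cy - wh // 2, 0, max(fh - wh, 0)))
    # The cropped window stands in for the area hidden behind the object.
    return representative_frame[y0:y0 + wh, x0:x0 + ww]
```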
- the image generation module 330 provides an image of an area set by the image control module 320 in the surrounding image as a shielding image to the display unit 400.
- The image generation module 330 generates the shielding image, which is the image of the extracted area of the surrounding image corresponding to the blind spot created by the object.
- The image generation module 330 may adjust the size (or resolution) of the generated shielding image in consideration of the screen size of the display unit 400, its edge area (or bezel area), and the like.
- That is, the image generation module 330 generates a shielding image including the image or pattern for the portion corresponding to the blind-spot area extracted by the image control module 320 from the entire surrounding image, and may output it to the display unit 400.
- Referring to the coordinate information, the image generation module 330 shifts the pixels displaying the left-eye image and the right-eye image to the left or right according to the position of and distance to the user's two eyes relative to the display unit 400, so that the left-eye image and the right-eye image are incident on the user's left and right eyes, respectively.
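A hedged sketch of this left-eye/right-eye pixel shift follows; the screen-parallax formula is a standard approximation, and the calibration constants and names are assumptions rather than values from the patent.

```python
# Hedged sketch of the binocular pixel shift (screen parallax).
import numpy as np

PX_PER_CM = 12.0       # assumed display calibration (pixels per cm)
EYE_BASELINE_CM = 6.3  # assumed inter-pupillary distance

def render_binocular(shielding_image: np.ndarray,
                     user_dist_cm: float,
                     depth_cm: float = 100.0):
    """Return (left_eye_image, right_eye_image), shifted in opposite
    horizontal directions; depth_cm is the assumed depth of the scene
    behind the display."""
    # Screen parallax for a point at depth D behind a screen viewed from
    # distance V with eye baseline E: p = E * D / (V + D).
    parallax_px = int(PX_PER_CM * EYE_BASELINE_CM * depth_cm / (user_dist_cm + depth_cm))
    half = parallax_px // 2
    left = np.roll(shielding_image, half, axis=1)    # shifted right for the left eye
    right = np.roll(shielding_image, -half, axis=1)  # shifted left for the right eye
    return left, right
```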
- the image generation module 330 may generate a shielding image according to an area for a blind spot by reflecting a result of tracking the user's gaze in real time.
- When the user moves, the control unit 300 determines the gaze direction of the moving user and reflects the gaze position in the shielding image, so that, in response to the change in the user's gaze direction and the movement of the gaze position, a shielding image that moves in the opposite direction according to the changed coordinate information may be generated.
- For example, when the user's gaze moves about 5 cm to the left, the control unit 300 moves the shielding image about 5 cm in the opposite direction (to the right).
- That is, a shielding image corrected in the direction opposite to the change in the user's gaze, relative to the previous shielding image, may be displayed through the display unit 400.
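A minimal sketch of this opposite-direction correction is shown below, assuming a fixed pixels-per-centimeter display calibration; np.roll's wrap-around stands in for re-cropping the window from the wider surrounding image.

```python
# Hedged sketch: shift the shielding image opposite to the gaze movement.
import numpy as np

PX_PER_CM = 12.0  # assumed display calibration (pixels per cm)

def compensate_gaze_shift(shielding_image: np.ndarray, gaze_delta_cm: float) -> np.ndarray:
    """gaze_delta_cm < 0 means the gaze moved left; the image then moves
    right by the same physical amount (e.g. a 5 cm left gaze movement
    produces a 5 cm rightward image shift)."""
    shift_px = int(round(-gaze_delta_cm * PX_PER_CM))  # opposite direction
    # In a full implementation the window would be re-cropped from the
    # surrounding image; np.roll is a stand-in for that re-crop.
    return np.roll(shielding_image, shift_px, axis=1)
```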
- The control unit 300 receives the surrounding image and the user image from the three-dimensional scanning unit 100 and the gaze tracking unit 200, and based on these, may generate the shielding image corresponding to the blind spot created by the object and output it to the display unit 400.
- The control unit 300 may be implemented as a programmed MCU mounted on a predetermined printed circuit board together with a memory device and the like, and may be mounted on one side of the object provided with the device 10; however, its location is not limited to a specific place.
- The control unit 300 receives, from the three-dimensional scanning unit 100, a surrounding image photographing the external situation outside the object, and may receive, from the gaze tracking unit 200, a user image capturing the user's gaze.
- Here, the surrounding image is an image covering an area at least wider than the user's field of view.
- The control unit 300 calculates coordinate information according to the user's gaze from the user image, extracts the area corresponding to the calculated coordinates from the surrounding image, and may output the shielding image, that is, the image of the extracted area, to the display unit 400.
- When a plurality of surrounding images and a plurality of user images corresponding to all directions around the object are acquired by the three-dimensional scanning unit 100 and the gaze tracking unit 200, the controller 300 may identify the surrounding image and the user image corresponding to each gaze, and may generate the shielding image corresponding to the blind spot created by the object based on the identified surrounding image and user image.
- In the above description, a shielding image according to the gaze change of a single user located around the object is displayed through the display unit 400, but the invention is not limited thereto; a plurality of shielding images may be displayed through the display unit 400 or through a plurality of display units 400.
- The display unit (or display device) 400 is configured (or arranged/formed) on one side (or the periphery) of the specific object.
- the display unit 400 may display various contents such as various menu screens using a user interface and/or a graphic user interface stored in the storage unit under the control of the control unit 300.
- the content displayed on the display unit 400 includes various text or image data (including various information data) and a menu screen including data such as icons, list menus, and combo boxes.
- the display unit 400 may be a touch screen.
- The display unit 400 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, an electrophoretic display (EPD), a flexible display, an e-ink display, and a light emitting diode (LED) display.
- The display unit 400 configured on the front surface of the object displays the shielding image generated by the control unit 300.
- The display unit 400 may display the shielding image generated in real time by the control unit 300 in consideration of the gaze tracking result from the gaze tracking unit 200.
- When a flexible display is used as the display unit 400, the flexible display may be attached (or placed/fixed) to the object in a curved state conforming to its shape, even when the object is curved.
- In this case, the device 10 tracks the user's gaze in real time to calculate coordinate information, extracts the blind spot of the object matching the user's gaze from the surrounding image of the external situation that changes in real time, and may output the shielding image to a curved viewport matching the flexible display 400.
- The display unit 400 may be not only a general 2D display but also a 3D stereoscopic display device that displays different images corresponding to the user's left and right eyes.
- In this case, the three-dimensional scanning unit 100 acquires a surrounding image having spatial information and provides it to the control unit 300; the control unit 300 generates a shielding image corresponding to the gaze of the user tracked by the gaze tracking unit 200; and the display unit 400 displays different images to the left eye and the right eye to implement a three-dimensional image.
- the display unit 400 may display not only a 2D flat image, but also a 3D stereoscopic image according to binocular parallax.
- Accordingly, the apparatus 10 can provide users with an environment in which the blind spot with respect to external situations related to the object is minimized, thereby preventing accidents caused by the blind spot.
- FIG. 3 is a diagram illustrating a method of extracting a shielded image of the apparatus 10 for providing information on the rear surface of an object according to an exemplary embodiment of the present invention.
- For the user 350, the visible area A1 and the blind area caused by the object 360, that is, the invisible area B1, are formed; accordingly, the user 350 cannot check the external situation in the invisible area B1.
- To address this, the apparatus 10 configures the three-dimensional scanning unit 100 on the other side of the object 360 and the gaze tracking unit 200 on one side, and displays a shielding image corresponding to the invisible area B1, photographed by the three-dimensional scanning unit 100, on the display unit 400 configured on one side of the object 360, so that the user 350 can check the entire external situation (A1+B1).
- When the user's gaze direction changes, the shielding image may be regenerated (or converted) according to the changed direction and displayed through the display unit 400.
- The three-dimensional scanning unit 100 captures a surrounding image in which an eleventh area A2 and a twelfth area B2 at least extend beyond the visible area A1 of the user 350 with respect to the external situation, and the control unit 300 adjusts the position of the twelfth area B2 within the entire surrounding image A2+B2 in response to the tracked face position and line of sight of the user 350 and displays it on the display unit 400.
- an omnidirectional peripheral image related to an object is acquired, the user's gaze is tracked in real time, and a shielding image for an area corresponding to the blind spot is generated from the acquired surrounding image in response to the tracked user's gaze.
- the generated shielding image may be displayed through a display unit provided on one side of the object.
- the shielding image may be converted in real time according to a change in gaze of one or a plurality of users or the position of both eyes, thereby providing an external situation as a stereoscopic image through the display unit.
- FIG. 4 is a flowchart illustrating a method of providing rear information of an object according to an embodiment of the present invention.
- The three-dimensional scanning unit 100 acquires (or photographs) a surrounding image of the object (or a peripheral image behind the object / surrounding image information / an omnidirectional image around the object) including the blind spot shielded by the object in the user's gaze direction (or in the rear direction of the object as seen from the user) with respect to the user located in front of the object.
- That is, the three-dimensional scanning unit 100 configured on the other side of the specific object acquires the surrounding image (or peripheral image behind the object / surrounding image information / omnidirectional image around the object) including the blind spot created by the object when a user located in front of the object looks at the object.
- For example, the first three-dimensional scanning unit 100 obtains a first surrounding image 600 including the trash can viewed by a first user and a first blind spot shielded by the trash can in the direction of a tree (S410).
- the gaze tracking unit 200 tracks (or measures/detects) the gaze of a user located in front of the object in real time.
- That is, the gaze tracking unit 200 configured on one side of the specific object (or in front of the object) acquires a user image (or user image information) including the user located in front of the object, recognizes the user from the obtained user image, and tracks the recognized user's gaze in real time.
- Here, the surrounding image obtained by the three-dimensional scanning unit 100 may be an image covering an area larger than the user's field of view, related to the direction corresponding to (or coinciding with) the gaze of a user looking at the object from a specific location spaced a predetermined distance from it, and the user image obtained by the gaze tracking unit 200 may be an image including that user (or an image related to the direction facing the user's gaze).
- For example, the first gaze tracking unit 200 configured on the front of the trash can acquires a first user image including the first user located within a preset distance from the trash can (or from the first gaze tracking unit 200), recognizes (or senses) the first user within the first user image, and tracks the recognized first user's gaze (S420).
- The controller 300 generates (or extracts), in response to the tracked user's gaze, a shielding image for the area of the acquired surrounding image corresponding to the blind spot created by the object.
- That is, the control unit 300 generates (or derives) closed curves for the user's face and both eyes by performing binarization on the user image, acquired through the gaze tracking unit 200, that includes the user.
- The control unit 300 detects the binocular portions in the face area formed by the generated closed curves.
- The control unit 300 calculates coordinate information according to the detected binocular portions.
- Here, the coordinate information includes the distance between the user's two eyes, the distance between the user and the display unit 400 (or the object), and the like.
- The control unit 300 temporarily stores the surrounding image acquired through the three-dimensional scanning unit 100 in a storage unit (not shown) in units of a predetermined frame.
- The control unit 300 sets a focus by mapping (or applying) the calculated coordinate information onto the entire area of the representative frame of the surrounding image.
- The control unit 300 extracts the area corresponding to the blind spot created by the object based on the set focus, and generates a shielding image, which is the image of the extracted area of the surrounding image corresponding to the blind spot.
- the controller 300 may adjust the size (or resolution) of the generated shielding image in consideration of the screen size of the display unit 400.
- For example, the control unit 300 generates closed curves for the first user's face and both eyes by performing binarization on the obtained first user image, detects the binocular portions in the face area formed by the generated closed curves, and calculates first coordinate information according to the detected binocular portions.
- The control unit 300 temporarily stores the acquired first surrounding image 600 in the storage unit in units of a preset frame, sets a focus by mapping the calculated first coordinate information onto the entire area of the representative frame of the first surrounding image 600, extracts a first area corresponding to the first blind spot created by the trash can based on the set focus, and, as shown in FIG. 7, generates a first shielding image 700, which is the image of the extracted first area, from the first surrounding image 600 (S430).
- the display unit 400 configured on the front surface of the object displays the shielding image generated by the control unit 300.
- the display unit 400 may display the shielding image generated in real time by the control unit 300 in consideration of the user's gaze tracking result according to the gaze tracking unit 200.
- For example, the first display unit 400 displays the first shielding image 700 generated by the control unit 300 (S440).
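Tying steps S410–S440 together, a minimal end-to-end sketch is shown below, reusing the helper functions sketched earlier; the camera indices, the display loop, and all names are assumptions for illustration, not part of the patent.

```python
# Hedged end-to-end sketch of S410–S440 (reuses the helpers sketched above).
import cv2

def run_rear_info_pipeline(scan_cam_id: int = 0, gaze_cam_id: int = 1) -> None:
    scan_cam = cv2.VideoCapture(scan_cam_id)  # 3D scanning unit, behind the object
    gaze_cam = cv2.VideoCapture(gaze_cam_id)  # gaze tracking unit, in front
    while True:
        ok_scan, surrounding = scan_cam.read()  # S410: acquire surrounding image
        ok_gaze, user_img = gaze_cam.read()     # S420: acquire user image
        if not (ok_scan and ok_gaze):
            break
        result = detect_eyes_and_coordinates(binarize_user_image(user_img))
        if result is None:
            continue  # no user recognized in this frame
        gaze_xy, _, _ = result
        h, w = user_img.shape[:2]
        # S430: extract the blind-spot area as the shielding image.
        shielding = extract_blind_spot_region(surrounding, gaze_xy, (w, h))
        cv2.imshow("shielding image", shielding)  # S440: display
        if cv2.waitKey(1) == 27:  # press Esc to stop
            break
    scan_cam.release()
    gaze_cam.release()
    cv2.destroyAllWindows()
```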
- As described above, an embodiment of the present invention acquires an omnidirectional surrounding image related to an object, tracks the user's gaze in real time, and generates and displays a shielding image for the area of the acquired surrounding image corresponding to the blind spot, in response to the tracked gaze.
- In addition, an embodiment of the present invention converts the shielding image in real time according to changes in the gaze or binocular position of one or more users and provides the external situation as a stereoscopic image through the display unit, thereby eliminating or reducing the sense of incongruity the user feels when rear information is provided and increasing user satisfaction.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Claims (9)
- An apparatus for providing rear information of an object, comprising: a three-dimensional scanning unit for acquiring a surrounding image of the object, including a blind spot shielded by the object in the gaze direction of a user located in front of the object; a gaze tracking unit for tracking the gaze of the user located in front of the object; a control unit for generating, in response to the tracked gaze of the user, a shielding image for the area of the acquired surrounding image corresponding to the blind spot created by the object; and a display unit for displaying the generated shielding image.
- The apparatus of claim 1, wherein the three-dimensional scanning unit is configured on the other side of the object and acquires the surrounding image including the blind spot created by the object when the user located in front of the object looks at the object.
- The apparatus of claim 1, wherein the gaze tracking unit is configured on one side of the object, acquires a user image including the user located in front of the object, recognizes the user from the acquired user image, and tracks the recognized user's gaze in real time.
- The apparatus of claim 1, wherein the control unit comprises: a binocular tracking module for outputting coordinate information according to the user's gaze based on the user image acquired by the gaze tracking unit; an image control module for extracting the area corresponding to the coordinate information for each frame based on the surrounding image acquired from the three-dimensional scanning unit; and an image generation module for providing the image of the extracted area of the surrounding image to the display unit as the shielding image.
- The apparatus of claim 4, wherein the binocular tracking module comprises: a preprocessor for generating closed curves for the user's face and both eyes by performing binarization on the user image; an eye detection unit for detecting the binocular portions in the face area formed by the generated closed curves; and a coordinate calculation unit for calculating coordinate information according to the detected binocular portions.
- The apparatus of claim 5, wherein the image control module comprises: a surrounding image input unit for temporarily storing the surrounding image in units of a predetermined frame; a focus setting unit for setting a focus by mapping the calculated coordinate information onto the entire area of a representative frame of the temporarily stored surrounding image; and an area extraction unit for extracting the area corresponding to the blind spot created by the object based on the set focus.
- A method for providing rear information of an object, comprising: acquiring, by a three-dimensional scanning unit, a surrounding image of the object including a blind spot shielded by the object in the gaze direction of a user located in front of the object; tracking, by a gaze tracking unit, the gaze of the user located in front of the object; generating, by a control unit, in response to the tracked gaze of the user, a shielding image for the area of the acquired surrounding image corresponding to the blind spot created by the object; and displaying, by a display unit, the generated shielding image.
- The method of claim 7, wherein generating the shielding image comprises: generating closed curves for the user's face and both eyes by performing binarization on the user image, acquired through the gaze tracking unit, that includes the user; detecting the binocular portions in the face area formed by the generated closed curves; calculating coordinate information according to the detected binocular portions; temporarily storing the surrounding image acquired through the three-dimensional scanning unit in units of a predetermined frame; setting a focus by mapping the calculated coordinate information onto the entire area of a representative frame of the temporarily stored surrounding image; extracting the area corresponding to the blind spot created by the object based on the set focus; and generating the shielding image, which is the image of the extracted area of the surrounding image corresponding to the blind spot created by the object.
- The method of claim 8, wherein the coordinate information includes the distance between the user's two eyes and the distance between the user and the display unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0114221 | 2019-09-17 | ||
KR1020190114221A KR102333598B1 (ko) | 2019-09-17 | 2019-09-17 | Apparatus and method for providing rear information of an object |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021054580A1 (ko) | 2021-03-25 |
Family
ID=74883808
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/008066 WO2021054580A1 (ko) | Apparatus and method for providing rear information of an object |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102333598B1 (ko) |
WO (1) | WO2021054580A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060118137A * | 2005-05-16 | 2006-11-23 | 주식회사 아이디테크 | Method for detecting the eyes of a surveillance target in an image |
KR20130066850A * | 2011-12-13 | 2013-06-21 | 현대자동차주식회사 | Image output device that does not create blind spots |
KR20150114589A * | 2014-04-01 | 2015-10-13 | 주식회사 유비벨록스모바일 | Apparatus and method for subject reconstruction |
KR20160003355A * | 2014-06-30 | 2016-01-11 | 주식회사 유니에보 | 3D stereoscopic image processing method and system |
US20190158809A1 (en) * | 2016-06-08 | 2019-05-23 | Sony Interactive Entertainment Inc. | Image generation apparatus and image generation method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170126149 (ko) | 2016-05-09 | 2017-11-17 | 주식회사 서연전자 | Front pillar image providing system for a vehicle |
KR102019257B1 (ko) * | 2017-10-11 | 2019-09-06 | 윤만구 | Frame image display device for providing a blind-spot image of a vehicle's A-pillar frame |
- 2019-09-17: KR application KR1020190114221A — patent KR102333598B1 (ko), active, IP Right Grant
- 2020-06-22: WO application PCT/KR2020/008066 — publication WO2021054580A1 (ko), active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102333598B1 (ko) | 2021-12-01 |
KR20210032779A (ko) | 2021-03-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20864930; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20864930; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.09.2022)