WO2018042633A1 - Image management device, image management method, and image management program - Google Patents
Image management device, image management method, and image management program
- Publication number
- WO2018042633A1 (application PCT/JP2016/075872)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- unit
- disclosure
- management apparatus
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
Definitions
- the present invention relates to an image management apparatus, an image management method, and an image management program, and more particularly to a technique for managing disclosure / non-disclosure of images.
- Japanese Patent Laid-Open No. 2004-133867 (Patent Document 1) takes as its problem "to provide an image disclosing device capable of automatically determining, according to a predetermined rule, whether each image is to be disclosed or not, without requiring the user to set disclosure or non-disclosure."
- It discloses a configuration in which "the image disclosing device includes an image storage unit that stores images owned by the user, an image analysis unit that performs a plurality of image analysis processes on each image stored in the image storage unit, and a disclosure/non-disclosure determination unit that determines whether each image is disclosed or not based on the result of each image analysis process performed on each image by the image analysis unit" (summary excerpt).
- According to Patent Document 1, disclosure or non-disclosure of each image can be determined automatically according to a predetermined rule without the user setting it for each image.
- However, the rule for deciding disclosure or non-disclosure is fixed, and the problem remains that the rule cannot be changed even when it does not match the user's preference or situation.
- The present invention has been made in view of the above problems, and an object of the present invention is to provide an image management technique capable of reflecting the user's preference in the rule for determining disclosure or non-disclosure.
- To achieve this object, an image management device according to one aspect of the present invention includes: an image acquisition unit that acquires image data to be processed; a behavior feature amount collection unit that collects, based on information representing the behavior of the photographer of the image data or of a viewer of the image data, behavior feature amounts used when calculating a disclosure determination score for classifying the image data as public or non-public; a score calculation formula storage unit that stores a score calculation formula for calculating the disclosure determination score; a score calculation unit that reads the score calculation formula and applies the behavior feature amounts corresponding to the image data to calculate the disclosure determination score of the image data; a public image determination unit that determines whether the disclosure determination score satisfies a disclosure condition and, based on the result of that determination, tentatively determines whether the image data is to be made public or non-public; a user correction unit that accepts a correction instruction from the user for the tentative determination; a feedback unit that changes the score calculation formula based on the public/non-public change history indicated by the correction instruction; and an image disclosure unit that publishes the image data when it has been determined that the image data is to be published.
- The image management apparatus may be of any type as long as it can store and output images, such as a smartphone, a digital camera, a tablet, or a digital photo frame. In the following, an example in which a smartphone is used as the image management apparatus according to the present embodiment will be described.
- FIG. 1 is a diagram illustrating an example of a hardware configuration of the image management apparatus 100.
- The image management apparatus 100 includes, for example, a communication I/F 113, a control unit 114, an input unit 115, a display unit 117, a signal separation unit 121, a tuner/demodulation unit 122, a storage 125, a mobile communication I/F 131, a memory 132, an acceleration sensor unit 133, a geomagnetic sensor unit 134, a GPS receiving unit 135, a gyro sensor unit 136, an RTC (real-time clock) 137, a camera 140, a switch input unit 150, a voice input/output unit 160, and an external device I/F 170.
- Each processing unit is connected via the bus 101.
- The image management apparatus 100 stores application programs in the storage 125; the control unit 114 expands the programs from the storage 125 into the memory 132 and executes them, thereby realizing various functions.
- In the following description, for simplicity, the various functions realized by the control unit 114 executing each application program are described as being realized mainly by the corresponding program function units.
- The application program may be stored in the storage 125 in advance before the image management apparatus 100 is shipped, or it may be stored on a medium such as a CD (Compact Disc), a DVD (Digital Versatile Disc), or a semiconductor memory and installed in the image management apparatus 100 via a medium connection unit (not shown).
- The application program may also be downloaded from an external network (not shown) via the communication I/F 113 and a wireless router (not shown) and installed, or downloaded from a distribution source via a base station (not shown) using the mobile communication I/F 131 and installed. Furthermore, it is also possible to connect, via an external device connection I/F (not shown), to a personal computer (PC) that has acquired the application program over a network, and to move or copy the program from the PC to the image management apparatus 100 for installation.
- The application program can also be implemented in hardware as processing units having the same functions; when implemented as hardware, each processing unit realizes its own function.
- the communication I / F 113 is connected to a wireless router (not shown) via a wireless LAN or the like.
- the communication I / F 113 is connected to an external network via a wireless router, and transmits / receives information to / from a server on the external network.
- The communication I/F 113 may be implemented with separate chips for different communication methods, or as a single chip that handles multiple communication methods.
- The mobile communication I/F 131 connects to a communication network through a base station using a third-generation mobile communication system (hereinafter "3G") such as the GSM (registered trademark) (Global System for Mobile Communications) system, the W-CDMA (Wideband Code Division Multiple Access) system, the CDMA2000 system, or the UMTS (Universal Mobile Telecommunications System) system, or a mobile communication network such as LTE (Long Term Evolution), and can exchange information with a server on the communication network.
- the control unit 114 is configured by, for example, a CPU (Central Processing Unit).
- the control unit 114 receives a user operation request via the input unit 115, and controls various program function units such as the signal separation unit 121, the display unit 117, and the communication I / F 113.
- Furthermore, the control unit 114 can acquire various information from a server on an external network via the communication I/F 113 and the wireless router, or via the mobile communication I/F 131 and a base station, and has a function of passing the acquired information to the various program function units.
- the storage 125 is configured using, for example, a ROM (Read Only Memory) or a HDD (Hard Disk Drive).
- the storage 125 is controlled by an instruction from the control unit 114 and can store an application program. In addition, various information created by the application program is stored.
- content such as a video / audio stream may be stored from a signal received from the tuner / demodulator 122, the communication I / F 113, or the mobile communication I / F 131.
- the storage 125 may be built in the image management apparatus 100 or may be a portable memory that can be attached to and detached from the image management apparatus 100.
- a memory (RAM: Random Access Memory) 132 is controlled by an instruction from the control unit 114.
- the function unit of the application program stored in the storage 125 is expanded in the memory 132 by the control unit 114.
- the display unit 117 is configured by a liquid crystal display, for example.
- the display unit 117 displays images and videos stored in the storage 125, captured and received images and videos, UI for various operations, and the like.
- The displayed images and videos may be images generated by an application program, content images or videos received via the tuner/demodulation unit 122, images or videos received from a server on an external network via the communication I/F 113, or images or videos distributed from a server on a communication network via the mobile communication I/F 131.
- The display unit 117 may also be configured integrally with the touch panel described below.
- The input unit 115 receives operations on the image management apparatus 100 from the user and inputs control information related to the input operations.
- For example, a touch panel stacked on the display unit 117 can be used. Voice input using the voice input/output unit 160 and gesture input using the camera 140 are also conceivable.
- the acceleration sensor unit 133 measures the acceleration applied to the image management apparatus 100.
- The control unit 114 can know which part of the image management apparatus 100 is facing upward, for example by measuring the gravitational acceleration with the acceleration sensor unit 133.
- the geomagnetic sensor unit 134 measures geomagnetism by using a plurality of magnetic sensors.
- the GPS receiving unit 135 receives signals transmitted from a plurality of satellites using GPS (Global Positioning System).
- the control unit 114 can calculate the position information of the image management apparatus 100 based on the signal received by the GPS reception unit 135.
- the gyro sensor unit 136 measures the angular velocity of the image management apparatus 100 that occurs when the user moves the image management apparatus 100.
- the image management apparatus 100 can estimate the behavior and situation of the owner of the image management apparatus 100 such as walking, sleeping, and driving by using the measured acceleration, position information, and angular velocity.
- the camera 140 includes an optical system such as a lens, an image sensor, a signal processing circuit, and the like.
- The control unit 114 controls the exposure, focus, acquired pixel count, and compression of the camera 140, and the recording of images captured by the camera 140 into the storage 125, in accordance with a camera control program recorded in the storage 125.
- the camera 140 includes an inner camera 141 provided on the same surface as the screen of the display unit 117 and an outer camera 142 provided on a surface opposite to the screen of the display unit 117 in the image management apparatus 100.
- The switch input unit 150 captures switch information in response to operations on one or more physical buttons 151; the information is passed through the bus 101 to the control unit 114 and used to control various application programs as necessary.
- As one example, two of the buttons 151 are used to control the adjustment of the audio output volume. One or more physical buttons 151 may be provided.
- The audio input/output unit 160 inputs and outputs the audio input signal from the microphone 161 provided in the image management apparatus 100 and the audio output signal to the speaker 162; the control unit 114 controls the input/output volume.
- FIG. 2 is a functional block diagram of the image management apparatus 100.
- Part of the storage 125 constitutes a face image storage unit 221, an image storage unit 222, an operation / behavior history information storage unit 223, and a score calculation formula storage unit 224.
- The face image storage unit 221 stores face images of persons who permit disclosure of images or videos (hereinafter collectively referred to as image data) in which they appear, or face images of persons who do not permit disclosure of images in which they appear.
- the image storage unit 222 stores image data to be processed.
- the operation / behavior history information storage unit 223 stores at least one of operation history information of the image management apparatus 100 and movement history information of the image management apparatus 100.
- the score calculation formula storage unit 224 stores a score calculation formula for calculating a disclosure determination score for classifying image data as public or private.
- The program for determining whether to disclose an image includes an image acquisition unit 201, an image feature amount collection unit 202, an image storage unit 203, a behavior feature amount collection unit 204, a score calculation unit 205, a public image determination unit 206, a user correction unit 207, a feedback unit 208, and an image disclosure unit 209.
- the image feature amount collection unit 202 includes a subject extraction unit 210, a face recognition unit 211, and an open information acquisition unit 212.
- the behavior feature amount collection unit 204 includes a voice recognition unit 213.
- the image acquisition unit 201 acquires image data to be processed.
- the image data may be an image (still image) or a video (moving image).
- The image data may be captured by the camera 140 built into the image management apparatus 100, received from outside by the communication I/F 113 or the mobile communication I/F 131, or received by the tuner/demodulation unit 122.
- the image feature amount collection unit 202 collects image feature amounts used for calculating the public determination score from the analysis result of the image data or incidental information attached to the image data, for example, Exif (Exchangeable image file format) information.
- the collected image feature amount is digitized.
- the subject extraction unit 210 extracts each subject region that is captured in the image data.
- The face recognition unit 211 recognizes whether a person's face appears in each extracted subject area, compares the subject area containing the face (the face image) with the face images stored in the face image storage unit 221, digitizes information indicating whether a person permitted for disclosure or a non-public person is photographed, and outputs the information to the score calculation unit 205, as sketched below.
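- The following is a minimal sketch of this comparison, not the patent's actual implementation. It assumes each face image has already been converted to a fixed-length feature vector (embedding) by some face-recognition method, and the similarity threshold is an illustrative assumption.

```python
# Minimal sketch of the comparison step in the face recognition unit 211.
# It assumes each face image has already been converted to a fixed-length
# feature vector (embedding) by some face-recognition method; the cosine
# similarity threshold is an illustrative assumption.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_flags(subject, permitted, denied, threshold=0.8):
    """Digitize whether the subject matches a disclosure-permitted person or a
    non-public person registered in the face image storage unit 221."""
    is_permitted = any(cosine_similarity(subject, e) >= threshold for e in permitted)
    is_denied = any(cosine_similarity(subject, e) >= threshold for e in denied)
    return {"permitted_person": int(is_permitted), "non_public_person": int(is_denied)}
```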
- Alternatively, the face recognition unit 211 may identify persons based on the face images in the subject areas or on the photographer's face image captured by the inner camera (recognizing the face alone is sufficient), and the open information acquisition unit 212 may read the profile information of those persons from the storage of the image management apparatus 100 and digitize it, or may connect to an external network, for example an SNS (Social Networking Service), to obtain the profile information and digitize it according to whether disclosure is permitted.
- The open information acquisition unit 212 may also refer to the shooting date/time and location information in the incidental information and collect, by connecting to an external network, external information such as the weather conditions and map information for that date, time, and location.
- The image feature amount may be information on the date, time, or place where the image data was shot, created, or edited; information on the sex, age, clothing, posture, behavior, or facial expression of a person who is a subject of the image data; information such as the type, distance, motion, or brightness of objects captured in the image data; or information related to the shooting conditions such as the image quality, focus, and shooting time of the image data.
- It may also be, through image analysis such as personal identification, the profile information of the identified person, the relationship with the photographer, or, when multiple persons are subjects, information on the relationship between those persons.
- the behavior feature amount collection unit 204 performs a process of quantifying information related to the behavior performed by the user on the image data.
- For example, the operation history on the input unit 115 may be used or analyzed, or the sound around the image management apparatus 100 captured by the voice input/output unit 160 while the image data is displayed may be used or analyzed.
- the behavior feature amount collecting unit 204 may acquire visual information around the image management apparatus 100 when the image data is displayed by the camera 140 and collect the analysis result as the behavior feature amount.
- the behavior feature amount collection unit 204 may use or analyze values of various sensors (sensors that collect exercise state and position information) such as the GPS reception unit 135 and the acceleration sensor unit 133.
- The behavior feature amount collection unit 204 may also analyze various sensor information, such as that from the GPS receiving unit and the acceleration sensor, and collect, as behavior feature amounts, whether the photographer or the person who viewed the image data on the screen of the display unit 117 (hereinafter, "viewer") was at home, in a public place, or in a car or airplane, and whether the image is a selfie.
- The behavior feature amount may be operation history information, for example a history of file operations on the image data (image data file) such as copy, delete, rename, folder move, share, upload, download, e-mail attachment, compression, and decompression, or a history of image or video editing such as editing, annotation, image quality adjustment, image size change, and addition of video effects. It may also be an operation history related to meta information, such as favorite registration, tagging, and application registration.
- The behavior feature amount collection unit 204 may also acquire the states of the control unit 114, the display unit 117, the memory 132, and the storage 125, and collect, as behavior feature amounts, information on the applications used, or the files or Web pages browsed, at the time of, before, or after viewing an image or video based on the image data.
- Further, the inner camera 141 captures the face image of the viewer while the image data is displayed on the display unit 117, and the face recognition unit 211 performs face recognition processing on the image data from the inner camera 141 to identify the face image of the viewer or photographer.
- the behavior feature amount collection unit 204 collects a comparison result between the face image of the viewer or the photographer and the face image stored in the face image storage unit 221 as the behavior feature amount. For example, information such as the sex, age, clothes, posture, behavior, facial expression, etc. of the photographer or viewer may be collected as the behavior feature amount.
- The behavior feature amount collection unit 204 may also have a function of connecting to an external network, and may retrieve and collect from the external network, for example a social networking service, the viewer's profile information, information on the relationship with the owner of the image management apparatus 100, and information on the relationship with the person who is the subject of the image data.
- the image storage unit 203 has a function of associating image feature amount information and action feature amount information with the image data and writing them in the image storage unit 222 and reading out image data to be processed from the image storage unit 222.
- the score calculation unit 205 reads the score calculation formula from the score calculation formula storage unit 224, applies the behavior feature quantity and the image feature quantity corresponding to the image data, and calculates the disclosure determination score of the image data.
- The public image determination unit 206 tentatively determines whether the image data is to be public or non-public based on the result of comparing the disclosure determination score with a score determination threshold indicating that disclosure is permitted.
- In the present embodiment, two states, public and non-public, are assumed, but it is also possible to set the disclosure range in three or more stages, such as disclosure to acquaintances and disclosure to family members, as sketched below.
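- As a rough illustration of such staged disclosure ranges, the disclosure determination score could be mapped to tiers as in the sketch below; the tier names and thresholds are assumptions, not values taken from this disclosure.

```python
# Illustrative sketch of mapping the disclosure determination score to staged
# disclosure ranges; the tier names and thresholds are assumptions, not values
# taken from this disclosure.
def disclosure_tier(score: float) -> str:
    if score >= 80:
        return "public"         # disclose to everyone
    if score >= 60:
        return "acquaintances"  # disclose to acquaintances
    if score >= 40:
        return "family"         # disclose to family members only
    return "private"            # do not disclose

print(disclosure_tier(65))  # -> "acquaintances"
```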
- the public image determination unit 206 has a function of executing a masking process on a non-public face image, details of which will be described in the third embodiment.
- the user correction unit 207 accepts a correction instruction from the user for the public or non-public determination result tentatively determined for the image data.
- the feedback unit 208 changes the score calculation formula stored in the score calculation formula storage unit 224 based on the change history of image disclosure / non-publication, the image feature amount, and the behavior feature amount.
- The score calculation formula is changed using, for example, machine learning. In classification-type machine learning such as a support vector machine or a neural network, training data consisting of feature quantities represented as vectors and their correct-answer classes is given, and a calculation formula that classifies unknown feature quantities into the correct class can be output.
- Here, the image feature amounts and the behavior feature amounts are treated as the feature quantities, the public/non-public change history of images manually corrected by the user is treated as the correct-answer class, and machine learning is performed, whereby the calculation formula for classifying images into the two classes of public and non-public can be updated, as sketched below.
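- A minimal sketch of such an update follows. It assumes the score calculation formula is a linear weighted sum of item flag values and uses a simple perceptron-style update in place of the support vector machine or neural network mentioned above; the learning rate, epoch count, and threshold are illustrative assumptions.

```python
# Minimal sketch of updating a linear score calculation formula from the
# user's public/non-public correction history. A simple perceptron-style
# update stands in for the support vector machine or neural network
# mentioned above; learning rate, epochs, and threshold are assumptions.
import numpy as np

def retrain_weights(weights, feature_vectors, labels, threshold=50.0,
                    lr=1.0, epochs=20):
    """feature_vectors: shape (n_images, n_items) of flag values.
    labels: 1 if the user left/made the image public, 0 if non-public."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(epochs):
        for x, y in zip(np.asarray(feature_vectors, dtype=float), labels):
            predicted = 1 if np.dot(w, x) >= threshold else 0
            # Nudge the weights toward classifying this image as the user did.
            w += lr * (y - predicted) * x
    return w
```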
- The image disclosure unit 209 publishes the image data when it has been decided to publish it. Prior to publication, a confirmation screen showing the determination results for the image data is displayed on the display unit 117; however, image data determined to be non-public by the public image determination unit 206 is either not displayed or is shown with an indication that it cannot be displayed.
- FIG. 3 is a flowchart showing the first half process in the image management apparatus according to the first embodiment.
- FIG. 4 is a flowchart illustrating the latter half of the process in the image management apparatus according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a score calculation process.
- FIG. 6 is a diagram illustrating an example of a user correction screen according to the first embodiment. In the following, description will be given along the order of steps in FIGS. 3 and 4.
- the behavior feature amount collection unit 204 starts accumulation of behavior feature amounts (S02).
- The behavior feature amount is information obtained by quantifying a factor (hereinafter referred to as a "disclosure determination factor") for determining whether a given captured image may be disclosed.
- For example, information obtained by digitizing the operation history of the button 151 received by the switch input unit 150 may be used, or the sensor results (raw data) detected by the acceleration sensor unit 133, the geomagnetic sensor unit 134, the GPS receiving unit 135, and the gyro sensor unit 136 may be stored as movement history information.
- The behavior feature amount collection unit 204 may also determine, based on the amount of variation in the sensor results, the movement state of the person carrying the image management apparatus 100 (hereinafter, "user"), for example stationary, walking, or moving by vehicle, and temporarily store the result, as in the sketch below.
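- A minimal sketch of such a determination, assuming the variation is measured as the variance of accelerometer magnitude samples, is shown below; the variance thresholds are illustrative assumptions.

```python
# Minimal sketch of estimating the movement state from the variation of
# accelerometer magnitude samples; the variance thresholds are assumptions.
from statistics import pvariance

def movement_state(accel_magnitudes):
    v = pvariance(accel_magnitudes)
    if v < 0.05:
        return "stationary"
    if v < 2.0:
        return "walking"
    return "vehicle"

print(movement_state([9.80, 9.81, 9.79, 9.80]))  # -> "stationary"
```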
- For each disclosure determination factor, a flag value of "1" is set when the factor applies and "0" when it does not; alternatively, the behavior feature amount may be defined using positive or negative numerical values according to the degree to which the factor applies.
- The image acquisition unit 201 acquires image data generated by shooting with the camera 140, acquires date and time information from the RTC 137 and position information of the shooting location from the GPS receiving unit 135, writes them as Exif information, attaches them to the image data, and outputs the image data to the image storage unit 203.
- the Exif information includes various information defined by the Exif standard, such as device information for specifying a photographing device, in addition to photographing conditions such as screen brightness and image resolution.
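- For illustration, Exif information attached to image data can be read with, for example, the Pillow library as in the sketch below; the file name is a placeholder.

```python
# Illustrative sketch of reading Exif information from an image file with the
# Pillow library; "photo.jpg" is a placeholder file name.
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()
for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)  # e.g. "DateTime", "Model"
    print(tag, value)
```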
- the behavior feature amount collection unit 204 outputs the behavior feature amount collection result regarding the captured image to the image storage unit 203.
- the image storage unit 203 attaches Exif information and behavior feature amount data to the image data and stores them in the storage 125 (S04).
- The behavior feature amounts output by the behavior feature amount collection unit 204 to the image storage unit 203 may include the behavior feature amounts continuously accumulated from step S01, information identifying the photographer, and environment information of the shooting location.
- For example, the behavior feature amount collection unit 204 causes the inner camera 141 to capture an image of the photographer at the same time that the outer camera 142 captures the image, and outputs the photographer image to the face recognition unit 211.
- the face recognition unit 211 compares the face image of the owner of the image management apparatus 100 stored in the storage 125 in advance with the photographer image to determine whether or not they are the same person, and outputs the determination result to the behavior feature amount collection unit 204. .
- The behavior feature amount collection unit 204 sets the owner flag to "1" if the photographer is the owner of the image management apparatus 100 and to "0" otherwise.
- When a selfie stick control unit 171 is connected to the external device I/F 170 and the fact that a shooting instruction signal was output to the selfie stick control unit 171 is recorded as operation information, the behavior feature amount collection unit 204 sets the flag value of the "selfie" item, one of the disclosure determination factors, to "1".
- the behavior feature amount collection unit 204 collects sounds around the time when the shooting button is pressed with the microphone 161 and inputs the collected sounds to the voice input / output unit 160.
- When the voice recognition unit 213 detects, from the audio signals collected before and after the image is taken, that a phrase such as "upload" or "show it to everyone" was spoken in the photographer's voice, or that cheering or laughter arose during shooting, the behavior feature amount collection unit 204 sets the value of the public recommendation flag for that image to "1".
- Similarly, when there is an operation history of posting the photo to an SNS or sending it by e-mail to a specific acquaintance immediately after the image was taken, or when the e-mail text contains a specific phrase such as "share" or "I want you to see this good photo", the behavior feature amount collection unit 204 determines that the image may be disclosed and sets the value of the public recommendation flag to "1".
- Next, the score calculation unit 205 reads the score calculation formula stored in the score calculation formula storage unit 224 and calculates the score by substituting the image feature amount and behavior feature amount values attached to the image into the score calculation formula (S05). The public image determination unit 206 then compares the calculated score with the disclosure determination threshold to check whether the disclosure condition is satisfied. If the calculated score is equal to or greater than the disclosure determination threshold (S06/YES), it is tentatively determined that the image is to be disclosed and the disclosure flag is set to "1" (S07). If the calculated score is less than the disclosure determination threshold (S06/NO), it is tentatively determined that the image is not to be disclosed and the disclosure flag is set to "0" (S08). Thereafter, the image storage unit 203 attaches the disclosure flag to the image and stores it in the storage (S09). A minimal sketch of these steps follows.
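- The sketch below illustrates steps S05 to S08 under the assumption that the score calculation formula of equation (1) is a weighted sum of item flag values; the item names, weights, and threshold are illustrative and are not the values of FIG. 5.

```python
# Minimal sketch of steps S05 to S08, assuming the score calculation formula
# of equation (1) is a weighted sum of item flag values. The item names,
# weights, and threshold are illustrative, not the values of FIG. 5.
weights = {"owner": 20, "selfie": 15, "public_recommendation": 30, "smile": 10}
Sth1 = 50  # disclosure determination threshold

def disclosure_flag(flags):
    score = sum(weights[item] * flags.get(item, 0) for item in weights)  # S05
    return 1 if score >= Sth1 else 0                                     # S06-S08

image_flags = {"owner": 1, "selfie": 0, "public_recommendation": 1, "smile": 1}
print(disclosure_flag(image_flags))  # -> 1 (tentatively public)
```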
- FIG. 5 shows the weighting coefficients of equation (1) and the flag values of images 1 to 6. The items and weighting coefficients are stored in the score calculation formula storage unit 224, and the flag values of each item for images 1 to 6 are attached to the respective images and stored; for convenience of explanation, however, they are shown together in the same table in FIG. 5.
- For example, if the disclosure determination threshold Sth1 is 50, image 1 is determined to be disclosed.
- the score calculation formula may be the following formula (2).
- Expression (2) sets the disclosure determination threshold Sth2 to a value higher than Sth1, that is, a value that further restricts disclosure, but determines that the image is to be disclosed when the public recommendation flag is set to "1". In this way, not only the score value alone but also the value of the public recommendation flag digitized by the behavior feature amount collection unit 204 may be incorporated into a logical operation, so that the disclosure determination places emphasis on the behavior feature amounts.
- For example, image 4 is the same image as image 3, but suppose that the voice recognition unit 213 detected cheering in this image, which was taken at a home party scene. In this case, the image is determined to be non-public by equation (1) but public by equation (2).
- the image 5 is an image that the behavior feature amount collection unit 204 determines to be an “in driving” image.
- In this case, the behavior feature amount collection unit 204 may be configured to set the value of the "non-public recommendation flag" to "1" so that the image is determined to be non-public regardless of the value of the score S1 in equation (2).
- The behavior feature amount collection unit 204 may also capture a through image from the inner camera 141 while the captured image is displayed on the display unit 117 and have the face recognition unit 211 determine whether a person other than the owner of the image management apparatus 100 is shown. When a non-owner is shown, it may be determined that the captured image is an image that may be shown to others, and the "public recommendation flag" may be set to "1".
- AND may be used as a logical operation instead of OR.
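- The sketch below illustrates a decision in the spirit of equation (2): a stricter threshold Sth2 combined by a logical OR with the public recommendation flag, with the non-public recommendation flag acting as an override as described for image 5. The values are illustrative assumptions; replacing the OR with an AND gives the stricter variant mentioned above.

```python
# Illustrative sketch of a decision in the spirit of equation (2): a stricter
# threshold Sth2 combined by a logical OR with the public recommendation
# flag, plus an override by the non-public recommendation flag as described
# for image 5. Threshold and values are assumptions.
Sth2 = 70

def disclosure_flag_eq2(score, public_rec, non_public_rec):
    if non_public_rec == 1:   # e.g. an "in driving" image is always non-public
        return 0
    # Publish if the score clears Sth2 OR the behavior recommends publishing.
    # Replacing "or" with "and" gives the stricter AND variant mentioned above.
    return 1 if (score >= Sth2 or public_rec == 1) else 0

print(disclosure_flag_eq2(score=55, public_rec=1, non_public_rec=0))  # -> 1
```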
- the image whose disclosure determination flag value is “1” is disclosed.
- The process described so far automatically and tentatively determines whether an image is public or non-public. Next, correction by the user is performed, the disclosure or non-disclosure of the image is confirmed, and the image is released.
- "Confirmed" here is not intended to prohibit further correction after user correction; for example, an image tentatively determined to be public and confirmed as public by user correction can later be corrected again by the user to non-public.
- The public image determination unit 206 reads the images subject to disclosure determination from the storage (S11) and reads the value of the disclosure flag attached to each image (S12). Then, a "public" determination button is placed near (or superimposed on) each image whose disclosure flag value is "1", a "non-public" determination button is placed for each image whose disclosure flag value is "0", and a disclosure confirmation screen is displayed on the screen of the display unit 117 (S13).
- FIG. 6 is a diagram illustrating an example of a user correction screen according to the first embodiment, and illustrates a state in which a UI for accepting a manual operation is displayed on a touch panel in which the display unit 117 and the input unit 115 are integrated.
- the icon 301 indicates that the image 302 is determined to be public.
- An icon 311 indicates that whether to disclose the image 312 is automatically determined and manual correction is not performed.
- An icon 321 indicates that the image 322 is not disclosed.
- An icon 331 indicates that the disclosure / non-disclosure of the image 332 is automatically determined. The user can change the disclosure / non-disclosure of the image by touching the icon. For example, when the icon 301 is touched, that is, when manual correction is performed, the icon 301 changes to the appearance of the icon 321 and the image 302 is not disclosed.
- When manual correction is performed, the icons 311 and 331 indicating that disclosure/non-disclosure was automatically determined disappear. For example, when the icon 313 is touched to make the image 312 non-public, the icon 311 disappears.
- Operations on the user correction unit 207 need not be performed via the touch panel; they may instead be performed by input using a physical switch, voice input using the audio input/output unit 160, or gesture input using the camera 140.
- When the user correction unit 207 receives a manual correction operation by the user (S14/YES), it rewrites and stores the value of the image's disclosure flag according to the content of the manual correction operation (S15).
- The feedback unit 208 performs so-called feedback processing, in which the score calculation formula used for the manually corrected image is corrected based on the content of the manual correction (S16).
- For example, when an image is manually corrected to public, the weighting coefficients of the items whose flag values are "1" for that image are increased; conversely, when an image is manually corrected to non-public, the weighting coefficients of the items whose flag values are "1" for that image are decreased. A minimal sketch of this adjustment follows.
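- A minimal sketch of this weight adjustment, assuming a linear score formula and an illustrative step size, follows.

```python
# Minimal sketch of the feedback step S16 for a linear score formula;
# the step size is an illustrative assumption.
def feedback_update(weights, item_flags, corrected_to_public, step=5.0):
    updated = dict(weights)
    for item, flag in item_flags.items():
        if flag == 1 and item in updated:
            updated[item] += step if corrected_to_public else -step
    return updated

weights = {"owner": 20, "selfie": 15, "public_recommendation": 30}
flags = {"owner": 1, "selfie": 1, "public_recommendation": 0}
print(feedback_update(weights, flags, corrected_to_public=False))
# -> {'owner': 15.0, 'selfie': 10.0, 'public_recommendation': 30}
```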
- the image publishing unit 209 extracts and publishes only the images whose disclosure flag value is “1” from the storage (S18).
- According to the present embodiment, it is possible to determine whether an image is public or non-public according to the image feature amounts and the behavior feature amounts. Further, when manual correction is performed on the automatically determined public/non-public result, the score calculation formula used for the disclosure determination is corrected according to the content of the manual correction. By repeating this, the user's preference can be reflected in the score calculation formula.
- The second embodiment is an embodiment in which public/non-public information is input by the photographer at the time of shooting and is reflected in the score calculation.
- The second embodiment will be described with reference to the drawing, which illustrates an example of a user correction screen according to the second embodiment.
- In the shooting step (S03) of the first embodiment, when a through image is displayed on the screen of the display unit 117 while shooting with the camera 140 activated, a public icon 401 for designating a public image and a non-public icon 402 for designating a non-public image are displayed, as shown in the drawing, on the touch panel in which the display unit 117 and the input unit 115 are integrated.
- the public icon 401 and the private icon 402 have a function as a shutter button for taking a picture with the camera 140. Therefore, the public icon 401 corresponds to the first shooting button, and the non-public icon 402 corresponds to the second shooting button.
- The user correction unit 207 executes the display processing of the public icon 401 and the non-public icon 402 and the operation of writing the result of pressing these icons to the disclosure flag.
- When the photographer touches the public icon 401, the behavior feature amount collection unit 204 writes "1" in the "public flag" of the photographed image, and when the photographer touches the non-public icon 402, "0" is written in the "public flag" of the image.
- a touch operation on the public icon 401 or the non-public icon 402 can simultaneously perform an operation of pressing the shutter button and a public / non-public determination process, thereby improving operability.
- The third embodiment is an embodiment in which, when an image showing a non-public person, for example a person other than the user, is to be disclosed, masking processing such as mosaic or blurring is applied to the face of the non-public person before the image is disclosed.
- The face recognition unit 211 identifies non-public persons among the face images recognized in the captured image, and outputs the identification result (the position information of the areas of the captured image in which the non-public persons appear) to the image disclosure unit 209.
- The image disclosure unit 209 applies processing (masking processing) that makes the face unidentifiable to the face images of non-public persons, and publishes the image after the masking processing.
- The image storage unit 203 may store the image after the masking processing; alternatively, it may store the unmasked image as the image data while the image disclosure unit 209 writes "1" in a masking processing flag and stores it. In that case, the next time the image disclosure unit 209 publishes the image, it refers to the masking processing flag, and if the flag value is "1", it performs the masking processing before publishing the image. A minimal sketch of the masking step follows.
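- The masking step itself could, for example, blur the region where the non-public person appears, as in the sketch below using Pillow; the file name and box coordinates are placeholders.

```python
# Illustrative sketch of the masking processing: blurring the region where a
# non-public person appears, using Pillow. The file name and box coordinates
# are placeholders.
from PIL import Image, ImageFilter

def mask_region(image, box):
    """box = (left, upper, right, lower) of the non-public face area."""
    face = image.crop(box).filter(ImageFilter.GaussianBlur(radius=12))
    image.paste(face, box)
    return image

img = Image.open("party.jpg")
masked = mask_region(img, (120, 80, 220, 200))
masked.save("party_masked.jpg")
```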
- Furthermore, the image feature amount collection unit 202 may acquire, between the publishing operation and the actual publication, information specifying the destination to which the image disclosure unit 209 publishes (outputs) the image. Face images in the SNS designated as the disclosure destination may then be searched, and subscribers to that SNS may be handled as publicly permitted persons (not non-public persons). In other words, when a subject in the captured image matches the face image of a member of the SNS designated as the destination, the image is output to that SNS without applying masking processing to the subject's face image.
- When an attempt is made to upload the same image to another SNS, the image is disclosed after the masking processing is applied, on the assumption that the subject is a non-public person there.
- the image management apparatus can store an image that is not subjected to masking processing, so that the user can view the original image.
- The behavior feature amount collection unit 204 may also determine that an image placed in a specific folder may be disclosed and set "1" in the "public recommendation flag" of that image.
- Alternatively, family face images may be stored in advance in the face image storage unit 221. The face recognition unit 211 then compares the faces photographed in the image with the family face images, and when it determines that the image was taken together with a member of the opposite sex who is not a family member, it may decide that the image should not be disclosed and set "1" in the "non-public recommendation flag".
- The behavior feature amount collection unit 204 may also refer to the shooting date/time information and the position information, determine that a photograph taken at night in a place other than the home, or in an otherwise inappropriate place, should not be disclosed, and set "1" in the "non-public recommendation flag". A small sketch of such a rule follows.
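- A small sketch of such a rule follows; the night-time hours, home coordinates, and distance threshold are illustrative assumptions.

```python
# Illustrative sketch of the rule above: recommend non-disclosure for a photo
# taken at night away from home. Night hours, home coordinates, and the
# distance threshold are assumptions.
from datetime import datetime
from math import hypot

HOME = (35.6586, 139.7454)  # placeholder home coordinates (lat, lon)

def non_public_recommendation(shot_at, location):
    at_night = shot_at.hour >= 22 or shot_at.hour < 5
    away_from_home = hypot(location[0] - HOME[0], location[1] - HOME[1]) > 0.01
    return 1 if (at_night and away_from_home) else 0

print(non_public_recommendation(datetime(2016, 8, 31, 23, 30), (35.70, 139.80)))  # -> 1
```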
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
The objective of the present invention is to provide an image management technique capable of reflecting a preference of a user in a rule for determining disclosure/non-disclosure. To this end, an image management device 100 includes: a behavior characteristic amount collection unit 202 for collecting a behavior characteristic amount to be used when calculating a disclosure determination score for classifying image data to be processed, for disclosure or non-disclosure, on the basis of information indicating a behavior of a person who captured an image relating to the image data or a viewer of the image data; a score calculation unit 205 for calculating the disclosure determination score of the image data by applying the behavior characteristic amount corresponding to the image data to a score calculation expression; a disclosure image determination unit 206 for provisionally determining that the image data is to be disclosed or non-disclosed on the basis of the disclosure determination score; a user correction unit 207 for receiving a correction instruction from the user with respect to a result of the determination of the disclosure or non-disclosure provisionally determined for the image data; a feedback unit 208 for changing the score calculation expression on the basis of a change history of the disclosure or non-disclosure indicated by the correction instruction; and an image disclosure unit 209 for disclosing the image if it has been determined that the image data is to be disclosed.
Description
The present invention relates to an image management apparatus, an image management method, and an image management program, and more particularly to a technique for managing disclosure and non-disclosure of images.

Japanese Patent Laid-Open No. 2004-133867 (Patent Document 1) takes as its problem "to provide an image disclosing device capable of automatically determining, according to a predetermined rule, whether each image is to be disclosed or not, without requiring the user to set disclosure or non-disclosure." It discloses a configuration in which "the image disclosing device includes an image storage unit that stores images owned by the user, an image analysis unit that performs a plurality of image analysis processes on each image stored in the image storage unit, and a disclosure/non-disclosure determination unit that determines whether each image is disclosed or not based on the result of each image analysis process performed on each image by the image analysis unit" (summary excerpt).
In recent years, there are an increasing number of situations in which photographs taken with an electronic camera, a smartphone, or the like are shared with a specific group of people or made public to an unspecified number of people, and services and functions that automatically share captured photographs have appeared. In such cases, it is necessary to decide, for each image, whether it is to be disclosed or not, which requires considerable effort when a large number of photographs are taken and managed.

According to Patent Document 1, it is possible to automatically determine, according to a predetermined rule, whether each image is disclosed or not without the user setting disclosure or non-disclosure. However, the rule for deciding disclosure or non-disclosure is fixed, and the problem remains that the rule cannot be changed even when it does not match the user's preference or situation.

The present invention has been made in view of the above problems, and an object of the present invention is to provide an image management technique capable of reflecting the user's preference in the rule for determining disclosure or non-disclosure.
In order to achieve the above object, the present invention has the configurations described in the claims. As one aspect, an image management device according to the present invention includes: an image acquisition unit that acquires image data to be processed; a behavior feature amount collection unit that collects, based on information representing the behavior of the photographer of the image data or of a viewer of the image data, behavior feature amounts used when calculating a disclosure determination score for classifying the image data as public or non-public; a score calculation formula storage unit that stores a score calculation formula for calculating the disclosure determination score; a score calculation unit that reads the score calculation formula from the score calculation formula storage unit and applies the behavior feature amounts corresponding to the image data to the score calculation formula to calculate the disclosure determination score of the image data; a public image determination unit that determines whether the disclosure determination score satisfies a disclosure condition and, based on the result of that determination, tentatively determines whether the image data is to be made public or non-public; a user correction unit that accepts a correction instruction from the user for the tentatively determined public or non-public result; a feedback unit that changes the score calculation formula based on the public/non-public change history indicated by the correction instruction; and an image disclosure unit that publishes the image data when it has been determined that the image data is to be published.

According to the present invention, it is possible to provide an image management technique capable of reflecting the user's preference in the rule for determining disclosure or non-disclosure. Problems, configurations, and effects other than those described above will be clarified in the following embodiments.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description is omitted. The image management apparatus according to the present embodiment may be of any type as long as it can store and output images, such as a smartphone, a digital camera, a tablet, or a digital photo frame; in the following, an example in which a smartphone is used as the image management apparatus according to the present embodiment will be described.

<First embodiment>

FIG. 1 is a diagram illustrating an example of the hardware configuration of the image management apparatus 100.
The image management apparatus 100 includes, for example, a communication I/F 113, a control unit 114, an input unit 115, a display unit 117, a signal separation unit 121, a tuner/demodulation unit 122, a storage 125, a mobile communication I/F 131, a memory 132, an acceleration sensor unit 133, a geomagnetic sensor unit 134, a GPS receiving unit 135, a gyro sensor unit 136, an RTC (real-time clock) 137, a camera 140, a switch input unit 150, a voice input/output unit 160, and an external device I/F 170, and each processing unit is connected via the bus 101.

The image management apparatus 100 stores application programs in the storage 125; the control unit 114 expands the programs from the storage 125 into the memory 132 and executes them, thereby realizing various functions. In the following description, for simplicity, the various functions realized by the control unit 114 executing each application program are described as being realized mainly by the corresponding program function units.
The application program may be stored in the storage 125 in advance before the image management apparatus 100 is shipped, or it may be stored on a medium such as a CD (Compact Disc), a DVD (Digital Versatile Disc), or a semiconductor memory and installed in the image management apparatus 100 via a medium connection unit (not shown).

The application program may also be downloaded from an external network (not shown) via the communication I/F 113 and a wireless router (not shown) and installed, or downloaded from a distribution source via a base station (not shown) using the mobile communication I/F 131 and installed. Furthermore, it is also possible to connect, via an external device connection I/F (not shown), to a personal computer (PC) that has acquired the application program over a network, and to move or copy the program from the PC to the image management apparatus 100 for installation.
The application program can also be implemented in hardware as processing units having the same functions; when implemented as hardware, each processing unit realizes its own function.

The communication I/F 113 is connected to a wireless router (not shown) via a wireless LAN or the like. The communication I/F 113 is connected to an external network via the wireless router and transmits and receives information to and from a server on the external network. In addition to or instead of the communication function via the wireless router, it is possible to communicate directly with the server, without going through a wireless router, by a method such as a wireless LAN of the Wi-Fi (registered trademark) type. The communication I/F 113 may be implemented with separate chips for different communication methods, or as a single chip that handles multiple communication methods.

The mobile communication I/F 131 connects to a communication network through a base station using a third-generation mobile communication system (hereinafter "3G") such as the GSM (registered trademark) (Global System for Mobile Communications) system, the W-CDMA (Wideband Code Division Multiple Access) system, the CDMA2000 system, or the UMTS (Universal Mobile Telecommunications System) system, or a mobile communication network such as LTE (Long Term Evolution), and can exchange information with a server on the communication network. It is also possible to give priority to the connection to the external network via the communication I/F 113 over the connection to the communication network via the mobile communication I/F 131.
The control unit 114 is configured by, for example, a CPU (Central Processing Unit). The control unit 114 receives user operation requests via the input unit 115 and controls the various program function units such as the signal separation unit 121, the display unit 117, and the communication I/F 113.

Furthermore, the control unit 114 can acquire various information from a server on an external network via the communication I/F 113 and the wireless router, or via the mobile communication I/F 131 and a base station, and has a function of passing the acquired information to the various program function units.
The storage 125 is configured using, for example, a ROM (Read Only Memory) or an HDD (Hard Disk Drive). The storage 125 is controlled by instructions from the control unit 114 and can store application programs. It also stores various information created by the application programs.

Content such as video/audio streams obtained from signals received by the tuner/demodulation unit 122, the communication I/F 113, or the mobile communication I/F 131 may also be stored. The storage 125 may be built into the image management apparatus 100 or may be a portable memory that can be attached to and detached from the image management apparatus 100.

The memory (RAM: Random Access Memory) 132 is controlled by instructions from the control unit 114. The control unit 114 expands the function units of the application programs stored in the storage 125 into the memory 132.
The display unit 117 is configured by, for example, a liquid crystal display. The display unit 117 displays images and videos stored in the storage 125, captured or received images and videos, a UI for various operations, and the like. The displayed images and videos may be images generated by an application program, content images or videos received via the tuner/demodulation unit 122, images or videos received from a server on an external network via the communication I/F 113, or images or videos distributed from a server on a communication network via the mobile communication I/F 131. The display unit 117 may also be configured integrally with the touch panel described below.

The input unit 115 receives operations on the image management apparatus 100 from the user and inputs control information related to the input operations; for example, a touch panel stacked on the display unit 117 can be used. Voice input using the voice input/output unit 160 and gesture input using the camera 140 are also conceivable.
The acceleration sensor unit 133 measures the acceleration applied to the image management apparatus 100. By measuring, for example, the gravitational acceleration with the acceleration sensor unit 133, the control unit 114 can determine which part of the image management apparatus 100 is facing upward.

The geomagnetic sensor unit 134 measures geomagnetism, for example by using a plurality of magnetic sensors.

The GPS receiving unit 135 receives signals transmitted from a plurality of satellites using GPS (Global Positioning System). The control unit 114 can calculate the position information of the image management apparatus 100 based on the signals received by the GPS receiving unit 135.

The gyro sensor unit 136 measures the angular velocity of the image management apparatus 100 that occurs, for example, when the user moves the image management apparatus 100.
By using the measured acceleration, position information, and angular velocity, the image management apparatus 100 can estimate the behavior and situation of its holder, such as walking, sleeping, or driving.
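As an illustration only (not part of the claimed embodiment), this kind of rule-based estimation from the sensor values could be sketched roughly as follows; the speed and variance thresholds and the activity labels are assumptions made for the sketch.

```python
# Rough sketch of rule-based activity estimation from sensor readings.
# The thresholds below are illustrative assumptions, not values from the embodiment.

def estimate_activity(speed_mps: float, accel_variance: float, hour_of_day: int) -> str:
    """Guess the holder's activity from GPS speed, acceleration variance and local time."""
    if speed_mps > 8.0:                        # faster than a run: probably in a vehicle
        return "driving_or_riding"
    if speed_mps > 0.5 and accel_variance > 0.2:
        return "walking"
    if accel_variance < 0.01 and (hour_of_day >= 23 or hour_of_day < 6):
        return "sleeping"                      # motionless late at night
    return "stationary"

print(estimate_activity(speed_mps=12.0, accel_variance=0.05, hour_of_day=14))  # driving_or_riding
```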
The camera 140 includes an optical system such as a lens, an image sensor, a signal processing circuit, and the like. The control unit 114 controls the exposure, focus, acquired pixel count, and compression of the camera 140, as well as the recording of images captured by the camera 140 into the storage 125, in accordance with a camera control program recorded in the storage 125. The camera 140 includes an inner camera 141 provided on the same surface of the image management apparatus 100 as the screen of the display unit 117, and an outer camera 142 provided on the surface opposite to the screen of the display unit 117.
The switch input unit 150 takes in switch information in response to operation of the physical buttons 151 and passes it to the control unit 114 via the bus 101, where it is used to control the various application programs as necessary. As one example, two of the buttons 151 are used to control the audio output volume. There may be one or more physical buttons 151.
The audio input/output unit 160 inputs an audio signal from the microphone 161 provided in the image management apparatus 100 and outputs an audio signal to the speaker 162; the control unit 114 controls the input and output volume.
FIG. 2 is a functional block diagram of the image management apparatus 100. Part of the storage 125 constitutes a face image storage unit 221, an image storage unit 222, an operation/behavior history information storage unit 223, and a score calculation formula storage unit 224.
The face image storage unit 221 stores face images of persons who permit disclosure of images or videos in which they appear (hereinafter collectively referred to as image data), or face images of persons who do not permit disclosure of images in which they appear.
The image storage unit 222 stores image data to be processed.
The operation/behavior history information storage unit 223 accumulates at least one of operation history information of the image management apparatus 100 and movement history information of the image management apparatus 100.

The score calculation formula storage unit 224 stores a score calculation formula for calculating the disclosure determination score used to classify image data as public or private.
The program that determines whether an image is to be made public or private includes an image acquisition unit 201, an image feature amount collection unit 202, an image saving unit 203, an action feature amount collection unit 204, a score calculation unit 205, a public image determination unit 206, a user correction unit 207, a feedback unit 208, and an image publishing unit 209. The image feature amount collection unit 202 includes a subject extraction unit 210, a face recognition unit 211, and an open information acquisition unit 212. The action feature amount collection unit 204 further includes a voice recognition unit 213.
The image acquisition unit 201 acquires the image data to be processed. The image data may be an image (still image) or a video (moving image). The image data may have been captured by the camera 140 built into the image management apparatus 100, received from outside via the communication I/F 113 or the mobile communication I/F 131, or received by the tuner/demodulator 122.
The image feature amount collection unit 202 collects the image feature amounts used for calculating the disclosure determination score from analysis results of the image data or from incidental information attached to the image data, for example Exif (Exchangeable image file format) information. In the present embodiment, the collected image feature amounts are converted into numerical values.
The subject extraction unit 210 extracts each subject region captured in the image data. The face recognition unit 211 recognizes whether a person's face appears in an extracted subject region, compares the subject region containing the face (face image) with the face images stored in the face image storage unit 221, converts into a numerical value the information indicating whether a person who permits disclosure or a person who does not permit disclosure has been photographed, and outputs it to the score calculation unit 205.
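Purely as an illustration of how such a comparison can be reduced to a numerical value, the sketch below compares precomputed face embeddings against the registered "disclosure permitted" faces; the embedding dimension, the distance metric, and the threshold are assumptions, and how the embeddings themselves are produced lies outside the sketch.

```python
import numpy as np

def face_match_flag(subject_embedding: np.ndarray,
                    allowed_embeddings: list[np.ndarray],
                    threshold: float = 0.6) -> int:
    """Return 1 if the subject's face embedding is close to any registered
    'disclosure permitted' embedding, else 0.  The face-recognition backend
    that produces the embeddings is not part of this sketch."""
    for registered in allowed_embeddings:
        if np.linalg.norm(subject_embedding - registered) < threshold:
            return 1
    return 0

# toy usage with made-up 128-dimensional embeddings
rng = np.random.default_rng(0)
owner = rng.normal(size=128)
print(face_match_flag(owner + 0.01, [owner]))           # 1: essentially the same vector
print(face_match_flag(rng.normal(size=128), [owner]))   # 0: unrelated vector
```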
The face recognition unit 211 may also identify a person (it is sufficient that the face is recognized) based on the face image in the subject region or on the photographer's face image captured by the inner camera, and the open information acquisition unit 212 may read the profile information of those persons from the storage of the image management apparatus 100 and convert it into numerical values, or may connect to an external network, for example an SNS (Social Networking Service), acquire the profile information, and convert it into numerical values corresponding to the degree to which disclosure is permitted.

The open information acquisition unit 212 may also refer to the shooting date/time and position information in the incidental information, or connect to an external network to collect external information such as the weather conditions and map information for that date, time, and place.
As described above, the image feature amount may be information on the date, time, or place where the image data was shot, created, or edited; information such as the sex, age, clothing, posture, behavior, or facial expression of a person who is the subject of the image data; information such as the type, distance, motion, or brightness of the object being photographed; or information on shooting conditions such as the image quality, the focus, and, for a video, the shooting duration. Alternatively, after the individual who is the subject of the image data has been identified through image analysis such as personal identification, it may be the individual's profile information, the relationship with the photographer, or, when a plurality of persons are subjects, information on the relationships among those persons.
The action feature amount collection unit 204 performs processing to convert information about actions the user has performed on the image data into numerical values. For this purpose, it may use or analyze the operation history of the input unit 115, or it may use or analyze the sound around the image management apparatus 100, captured via the voice input/output unit 160, while the image data is being displayed.
The action feature amount collection unit 204 may also acquire visual information around the image management apparatus 100 with the camera 140 while the image data is being displayed, and collect the analysis result as an action feature amount.

Furthermore, the action feature amount collection unit 204 may use or analyze the values of various sensors that collect motion state and position information, such as the GPS receiving unit 135 and the acceleration sensor unit 133. For example, after analyzing such sensor information, the action feature amount collection unit 204 may collect information on the circumstances at the time of shooting, such as whether the photographer, or a person viewing the screen of the display unit 117 on which the image data is shown (hereinafter referred to as a "viewer"), was at home, was in a public place, was riding in a car or an airplane, or was taking a selfie.
The action feature amount may also be operation history information representing actions the user has performed on the image data: a history of file operations on the image data (the image data file) such as copy, delete, rename, move to another folder, share, upload, download, attach to e-mail, compress, and decompress; a history of image or video editing such as editing, appending, image quality adjustment, image resizing, and adding video effects; or an operation history related to meta information such as favorite registration, tagging, and registration in an application.
The action feature amount collection unit 204 may also acquire the states of the control unit 114, the display unit 117, the memory 132, and the storage 125, and collect, as action feature amounts, information on the applications that were in use and the files or Web pages that were being viewed while, or immediately before or after, an image or video based on the image data was being viewed.
Furthermore, while the image data is displayed on the display unit 117, the inner camera 141 captures a face image of the viewer, and the face recognition unit 211 performs face recognition processing on the image data from the inner camera 141 to identify the face images of the viewer and the photographer. The action feature amount collection unit 204 collects, as an action feature amount, the result of comparing the face images of the viewer and the photographer with the face images stored in the face image storage unit 221. For example, information such as the sex, age, clothing, posture, behavior, and facial expression of the photographer or the viewer may be collected as action feature amounts.

Like the open information acquisition unit 212, the action feature amount collection unit 204 has a function for connecting to an external network, and may search for and collect, from an external network such as a social networking service, the viewer's profile information, information on the relationship with the owner of the image management apparatus 100, and information on the relationship with the person who is the subject of the image data.
The image saving unit 203 has functions for writing image feature amount information and action feature amount information into the image storage unit 222 in association with the image data, and for reading out the image data to be processed from the image storage unit 222.
The score calculation unit 205 reads the score calculation formula from the score calculation formula storage unit 224, applies the action feature amounts and image feature amounts corresponding to the image data, and calculates the disclosure determination score of the image data.
The public image determination unit 206 tentatively decides whether the image data is to be made public or private based on a comparison between the disclosure determination score and a score determination threshold indicating that disclosure is permitted. In this embodiment two states, public and private, are assumed, but the disclosure range can also be set in three or more stages, for example disclosure only to acquaintances or disclosure only to family members. In the third embodiment the public image determination unit 206 additionally has a function for executing masking processing on private face images; the details are described in the third embodiment.
The user correction unit 207 receives a correction instruction from the user for the tentative public or private decision made for the image data.
The feedback unit 208 changes the score calculation formula stored in the score calculation formula storage unit 224 based on the change history of the image's public/private status, the image feature amounts, and the action feature amounts. The score calculation formula is changed using, for example, machine learning. In classification machine learning such as support vector machines and neural networks, inputting training data consisting of feature amounts represented as vectors and their correct classes yields a calculation formula for classifying unknown feature amounts into the correct class. In this image management apparatus, the image feature amounts and action feature amounts are treated as the feature vector and the public/private change history produced by the user's manual operations is treated as the correct class; by running machine learning on this data, the calculation formula that classifies images into the two classes, public and private, can be updated.
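A minimal sketch of this idea, assuming scikit-learn's linear support vector machine as the classifier; the feature columns and the tiny training set are invented for illustration, since the text only specifies that the feature values and the manual public/private history are used as training data.

```python
# Minimal sketch of the feedback idea using scikit-learn's linear SVM.
# Feature columns and training rows are invented for illustration.
import numpy as np
from sklearn.svm import LinearSVC

# rows: [sightseeing, home, selfie, drinking] flags for past images
X = np.array([
    [1, 0, 1, 0],   # published by the user
    [1, 0, 0, 0],   # published
    [0, 1, 1, 1],   # kept private
    [0, 1, 0, 0],   # kept private
])
y = np.array([1, 1, 0, 0])   # 1 = public, 0 = private (manual correction history)

clf = LinearSVC().fit(X, y)

# In this sketch the learned weights play the role of the updated score
# calculation formula: score = w . flags + b, published if the score is positive.
print("weights:", clf.coef_[0], "bias:", clf.intercept_[0])
print("prediction for a new image:", clf.predict([[1, 0, 1, 0]])[0])
```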
The image publishing unit 209 publishes the image data when it has been decided to publish it. Prior to publication, it causes the display unit 117 to display a confirmation screen showing the decision result for the image data. However, image data that the public image determination unit 206 has determined to keep private is not displayed, or alternatively a message indicating that the image data cannot be displayed is shown.
Next, the processing of the image management apparatus according to the first embodiment will be described with reference to FIGS. 3 to 6. FIG. 3 is a flowchart showing the first half of the processing in the image management apparatus according to the first embodiment. FIG. 4 is a flowchart showing the second half of the processing in the image management apparatus according to the first embodiment. FIG. 5 is a diagram showing an example of the score calculation process. FIG. 6 is a diagram showing an example of the user correction screen according to the first embodiment. The description below follows the order of the steps in FIGS. 3 and 4.
As shown in FIG. 3, when the main power of the image management apparatus 100 is turned on (S01), the action feature amount collection unit 204 starts accumulating action feature amounts (S02).
An action feature amount is information obtained by converting into a numerical value a predetermined factor used to decide whether disclosure of a captured image is to be permitted (hereinafter referred to as a "disclosure determination factor"). Examples of action feature amounts include numerical information derived from the operation history of the buttons 151 received by the switch input unit 150; the sensor results (raw data) detected by the acceleration sensor unit 133, the geomagnetic sensor unit 134, the GPS receiving unit 135, and the gyro sensor unit 136 may also be accumulated as movement history information.

Furthermore, based on the amount of variation in the sensor results, the action feature amount collection unit 204 may determine the movement state of the person carrying the image management apparatus 100 (hereinafter referred to as the "user"), for example stationary, walking, or moving by vehicle, and temporarily hold the result.
In this embodiment, as an example of numerically expressing action feature amounts, a flag is used that is set to "1" when the image matches a given disclosure determination factor and to "0" when it does not; however, the action feature amounts may also be defined using positive and negative numbers according to the degree of matching with the disclosure determination factor.
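For illustration, such flag-based quantification might look like the following sketch; the factor names are taken from examples mentioned elsewhere in this description, and the exact set of factors is an assumption.

```python
# Illustrative encoding of disclosure determination factors as 0/1 flags.
# The factor names mirror examples in the text (owner, selfie, public
# recommendation); the exact set of factors is an assumption of this sketch.

def encode_action_features(event: dict) -> dict:
    """Turn raw observations about one shot into the flag vector used for scoring."""
    return {
        "owner":              1 if event.get("photographer_is_owner") else 0,
        "selfie":             1 if event.get("selfie_stick_used") else 0,
        "public_recommended": 1 if event.get("cheering_detected") else 0,
    }

print(encode_action_features({"photographer_is_owner": True, "selfie_stick_used": True}))
# {'owner': 1, 'selfie': 1, 'public_recommended': 0}
```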
When the user takes a picture using the camera 140 of the image management apparatus 100 (S03), the image acquisition unit 201 acquires the image data generated by the camera 140, acquires the shooting date and time from the RTC 137 and the position information of the shooting location from the GPS receiving unit 135, writes them as Exif information, attaches them to the image data, and outputs the result to the image saving unit 203. In addition to shooting conditions such as screen brightness and image resolution, the Exif information includes various information defined by the Exif standard, such as device information identifying the shooting device.
At the same time, the action feature amount collection unit 204 outputs the action feature amounts collected for the captured image to the image saving unit 203. The image saving unit 203 attaches the Exif information and the action feature amount data to the image data and stores them in the storage 125 (S04).

The action feature amounts that the action feature amount collection unit 204 outputs to the image saving unit 203 in this step may include, in addition to the action feature amounts accumulated continuously since step S01, information identifying the photographer and environmental information about the shooting location.
For example, when shooting with the outer camera 142, the action feature amount collection unit 204 causes the inner camera 141 to simultaneously capture an image of the photographer and outputs that photographer image to the face recognition unit 211. The face recognition unit 211 compares the photographer image with the face image of the owner of the image management apparatus 100 stored in advance in the storage 125, determines whether they are the same person, and outputs the determination result to the action feature amount collection unit 204. The action feature amount collection unit 204 sets the owner flag to "1" if the photographer is the owner of the image management apparatus 100 and to "0" otherwise.
Also, when a selfie-stick control unit 171 is connected to the external device I/F 170 and it is recorded as operation information that a shooting instruction signal was output to the selfie-stick control unit 171, the action feature amount collection unit 204 sets the flag of the "selfie" item, one of the disclosure determination factors, to "1".
Furthermore, the action feature amount collection unit 204 collects the surrounding sound with the microphone 161 when the shooting button is pressed and inputs it to the voice input/output unit 160. When the voice recognition unit 213 detects, from the audio signals collected before and after the image is taken, that the photographer said something such as "upload this" or "show it to everyone", or when it detects cheering or laughter at the time of shooting, the action feature amount collection unit 204 sets the public recommendation flag of that image to "1".
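A small sketch of how a recognized transcript could drive the public recommendation flag is shown below; matching plain phrases in a transcript string is an assumption of the sketch, and the trigger phrases follow the examples given above.

```python
# Sketch of setting the public recommendation flag from a speech-recognition
# transcript.  The trigger phrases come from the examples in the text; doing
# the match on a plain transcript string is an assumption of this sketch.

PUBLIC_TRIGGER_PHRASES = ("upload", "show everyone", "share this")

def public_recommendation_from_speech(transcript: str) -> int:
    """Return 1 if the photographer's recognized speech suggests publishing."""
    text = transcript.lower()
    return 1 if any(phrase in text for phrase in PUBLIC_TRIGGER_PHRASES) else 0

print(public_recommendation_from_speech("Great shot, let's upload it!"))  # 1
print(public_recommendation_from_speech("Delete that one"))               # 0
```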
The action feature amount collection unit 204 also judges that an image may be disclosed when, immediately after the image was taken, there is, for example, an operation history of posting the photo to an SNS or of sending an e-mail to a specific acquaintance, or when the content of such an e-mail contains specific phrases such as "share", "nice photo", or "take a look", and in such cases it sets the public recommendation flag to "1".
The score calculation unit 205 reads the score calculation formula stored in the score calculation formula storage unit 224 and calculates the score by substituting the values of the image feature amounts and action feature amounts attached to the image into the formula (S05). The public image determination unit 206 then compares the calculated score with a disclosure determination threshold to judge whether the disclosure condition is satisfied. If the calculated score is equal to or greater than the disclosure determination threshold (S06/YES), it tentatively decides to publish the image and sets the disclosure flag to "1" (S07). If the calculated score is less than the disclosure determination threshold (S06/NO), it tentatively decides to keep the image private and sets the disclosure flag to "0" (S08). The image saving unit 203 then attaches the disclosure flag to the image and stores it in the storage (S09).
As one example, the following formula (1) may be used as the score calculation formula.

FIG. 5 shows the weight coefficients of formula (1) and the flag values of images 1 to 6. The items and weight coefficients are stored in the score calculation formula storage unit 224, and the flag values of the items for images 1 to 6 are attached to the respective images and stored; in FIG. 5, however, they are shown in a single table for convenience of explanation.
As shown in FIG. 5, according to formula (1), the score S1 of image 1, "an image the owner took as a selfie with a child at a sightseeing spot", is 100 + 1 + 50 + 0.5 = 151.5. If the disclosure determination threshold Sth1 is 50, image 1 is determined to be public.

As another example, the score S1 of image 2, "an image of a child sleeping at home", is 1 + 0.5 - 10 = -8.5, so it is determined to be private.

Further, the score S1 of image 3, "a selfie the owner took while drinking at home", is 1 + 1 + 50 - 100 = -48, so it is determined to be private.
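The text does not reproduce formula (1) itself, but the worked examples above are consistent with a weighted sum of per-item flags. The sketch below reproduces the scores of images 1 to 3 under that reading; the item names and weights are an interpretation of FIG. 5, not the definitive formula.

```python
# Reading of formula (1) as a weighted sum of item flags, reconstructed from
# the worked examples for images 1-3 (151.5, -8.5, -48).  The item names and
# weights are an interpretation of FIG. 5, not the patent's definitive formula.

WEIGHTS = {
    "sightseeing_spot": 100, "home": 1, "owner": 1,
    "selfie": 50, "child": 0.5, "sleeping": -10, "drinking": -100,
}
THRESHOLD_STH1 = 50

def disclosure_score(flags: dict) -> float:
    """Sum weight * flag over the disclosure determination items."""
    return sum(WEIGHTS[item] * value for item, value in flags.items())

image1 = {"sightseeing_spot": 1, "owner": 1, "selfie": 1, "child": 1}
image2 = {"home": 1, "child": 1, "sleeping": 1}
image3 = {"home": 1, "owner": 1, "selfie": 1, "drinking": 1}

for name, flags in [("image1", image1), ("image2", image2), ("image3", image3)]:
    s = disclosure_score(flags)
    print(name, s, "public" if s >= THRESHOLD_STH1 else "private")
# image1 151.5 public, image2 -8.5 private, image3 -48 private
```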
Formula (2) is a score calculation formula in which the disclosure determination threshold Sth2 is set to a value higher than Sth1, that is, a value that restricts disclosure more strictly, but which nevertheless determines an image to be public when the public recommendation flag is set to "1". In this way, not only the score value itself but also the value of the public recommendation flag digitized by the action feature amount collection unit 204 may be incorporated into a logical operation, so that the disclosure determination gives greater weight to the action feature amounts.
For example, image 4 is the same image as image 3, but it was taken at a party at home and, although it is a drinking scene, cheering was detected by the voice recognition unit 213, so its public recommendation flag is assumed to be set to "1". In this case the image is determined to be private by formula (1) but public by formula (2).

Image 5 is an image that the action feature amount collection unit 204 has judged to have been taken "while driving". In this case, the action feature amount collection unit 204 may set the "private recommendation flag" to "1", and formula (2) may be configured to determine the image to be private regardless of the value of the score S1.
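One plausible reading of the formula (2) logic described above is sketched below: a stricter threshold Sth2 that the public recommendation flag can override, with the private recommendation flag forcing a private decision. The concrete value of Sth2 and the treatment of the private flag as an absolute veto are assumptions of the sketch.

```python
# Sketch of the formula (2) logic as described in the text: a stricter threshold
# Sth2 that the public recommendation flag can override, while the private
# recommendation flag forces a private decision.  Sth2's value and the absolute
# veto are assumptions of this sketch.

THRESHOLD_STH2 = 120   # assumed value, only required to be higher than Sth1

def decide_public(score: float, public_recommended: int, private_recommended: int) -> bool:
    if private_recommended:                       # e.g. image judged to be taken while driving
        return False
    return score >= THRESHOLD_STH2 or public_recommended == 1

print(decide_public(score=-48, public_recommended=1, private_recommended=0))  # True  (image 4)
print(decide_public(score=200, public_recommended=0, private_recommended=1))  # False (image 5)
```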
After shooting, while the image is being displayed on the display unit 117, the action feature amount collection unit 204 may also capture a through image from the inner camera 141 and have the face recognition unit 211 determine whether a person other than the owner of the image management apparatus 100 appears in it. If a non-owner appears, the captured image is judged to be one that may be shown to others, and the "public recommendation flag" may be set to "1".
In formula (2) above, AND may be used as the logical operation instead of OR. In this case, an image can be determined to be public based not only on the score value but also on the disclosure decision flag having been set to "1" according to the action feature amounts collected at the time of shooting. Through the above processing, each image is automatically and tentatively classified as public or private. Correction by the user is then performed, the public/private status of the image is finalized, and the image is published. Note that "finalized" here is not intended to prohibit further modification after the user correction; for example, an image tentatively determined to be public and confirmed as public by user correction can later be corrected to private by the user again.
As shown in FIG. 4, the public image determination unit 206 reads the images subject to disclosure determination from the storage (S11) and reads the value of the disclosure flag attached to each image (S12). It then displays on the screen of the display unit 117 a disclosure confirmation screen in which a "public" decision button is placed near (or superimposed on) each image whose disclosure flag is "1" and a "private" decision button is placed near each image whose disclosure flag is "0" (S13).
FIG. 6 is a diagram showing an example of the user correction screen according to the first embodiment; it shows a state in which a UI for receiving manual operations is displayed on the touch panel in which the display unit 117 and the input unit 115 are integrated.
Icon 301 indicates that image 302 has been determined to be public. Icon 311 indicates that the public/private status of image 312 was determined automatically and has not been manually corrected. Icon 321 indicates that image 322 is not public. Icon 331 indicates that the public/private status of image 332 was determined automatically. The user can change the public/private status of an image by touching its icon. For example, touching icon 301, that is, performing a manual correction, changes icon 301 to the appearance of icon 321 and makes image 302 private.

When the same operation is performed on an image whose public/private status was determined automatically, the icons 311 and 331 indicating automatic determination disappear. For example, when icon 313 is touched to make image 312 private, icon 311 disappears.
The user correction unit 207 does not have to be operated through the touch panel; input via a physical switch, voice input using the voice input/output unit 160, or gesture input using the camera 140 may also be used.
When the user correction unit 207 receives a manual correction operation by the user (S14/YES), it rewrites the value of the image's disclosure flag according to the content of the manual correction operation and saves it (S15).
The feedback unit 208 performs so-called feedback processing, in which the score calculation formula used for the manually corrected image is modified based on the content of the manual correction (S16). As one example of the feedback processing, when an image is changed from private to public, the weight coefficients of the items whose disclosure determination flag is "1" in the manually corrected image are increased; when an image is changed from public to private, the weight coefficients of the items whose disclosure determination flag is "1" in the manually corrected image are decreased.
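A minimal sketch of this weight update is shown below; the step size is an assumption, since the text only states that the relevant weight coefficients are increased or decreased.

```python
# Minimal sketch of the feedback step described above: after a manual correction,
# nudge the weight of every item whose flag is "1" on that image.  The step size
# DELTA is an assumption; the text only says "increase" or "decrease".

DELTA = 5.0

def update_weights(weights: dict, image_flags: dict, changed_to_public: bool) -> dict:
    """Return a new weight table after one manual public/private correction."""
    step = DELTA if changed_to_public else -DELTA
    return {item: w + (step if image_flags.get(item) == 1 else 0)
            for item, w in weights.items()}

weights = {"home": 1, "selfie": 50, "drinking": -100}
flags_of_corrected_image = {"home": 1, "selfie": 1, "drinking": 1}
print(update_weights(weights, flags_of_corrected_image, changed_to_public=True))
# {'home': 6.0, 'selfie': 55.0, 'drinking': -95.0}
```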
When the input unit receives an image publishing operation from the user (S17/YES), the image publishing unit 209 extracts from the storage only the images whose disclosure flag is "1" and publishes them (S18).
According to the present embodiment, whether an image is made public or private can be decided according to its image feature amounts and action feature amounts. Furthermore, when the automatically determined public/private result is manually corrected, the score calculation formula used for the disclosure determination is modified according to the content of the manual correction. By repeating this, the user's preferences can be reflected in the score calculation formula.
<Second embodiment>
The second embodiment is an embodiment in which public/private information is input by the photographer at the time of shooting and reflected in the score calculation. The second embodiment is described below with reference to FIG. 7. FIG. 7 is a diagram showing an example of the user correction screen according to the second embodiment.
Specifically, in the shooting step (S03) of the first embodiment, when the camera 140 is activated and a through image is displayed on the screen of the display unit 117 for shooting, a public icon 401 for designating that the image is to be a public image and a private icon 402 for designating that the image is to be a private image are displayed on the touch panel in which the display unit 117 and the input unit 115 are integrated, as shown in FIG. 7. The public icon 401 and the private icon 402 also function as shutter buttons for shooting with the camera 140. The public icon 401 therefore corresponds to a first shooting button and the private icon 402 to a second shooting button. The user correction unit 207 executes the processing for displaying the public icon 401 and the private icon 402 and the operation of writing the result of pressing them into the disclosure flag.
When the photographer touches the public icon 401, the action feature amount collection unit 204 writes 1 into the "disclosure flag" of the image to be captured; when the photographer touches the private icon 402, it writes 0 into the "disclosure flag" of the image to be captured.

According to this embodiment, a touch operation on the public icon 401 or the private icon 402 performs the shutter-button operation and the public/private decision at the same time, which improves operability.
<Third embodiment>
The third embodiment is an embodiment in which, when an image showing a private person, for example someone other than the user, is to be published, masking processing such as mosaicking or blurring is applied to the face of the private person before publication.
The score of image 6 in FIG. 5, "an image the owner took as a selfie with a private person at a sightseeing spot", is 100 + 1 + 50 - 200 = -49 according to formula (1) described above, so in step S08 the disclosure flag is tentatively set to "0".
If the user then performs a manual correction in step S14 and changes the image from private to public, the face recognition unit 211 identifies the private person among the faces recognized in the captured image and outputs the identification result (the position information of the region of the captured image in which the private person appears) to the image publishing unit 209. The image publishing unit 209 applies processing (masking processing) to the face image of the private person so that the face cannot be identified, and then publishes the masked image.
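As an illustration of the masking step, the sketch below blurs the rectangular region reported for a private person's face; the text mentions mosaicking or blurring but names no library, so the use of OpenCV and the kernel size are assumptions.

```python
# Sketch of the masking step: blur the rectangle where the face recognition unit
# located a private person's face.  The use of OpenCV and the kernel size are
# assumptions; the embodiment only requires that the face become unidentifiable.
import cv2
import numpy as np

def mask_face(image: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Return a copy of `image` with the (x, y, w, h) face box heavily blurred."""
    x, y, w, h = box
    masked = image.copy()
    face = masked[y:y + h, x:x + w]
    masked[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return masked

# toy usage on a synthetic image with a face box at (40, 40, 60, 60)
frame = np.random.randint(0, 255, (200, 200, 3), dtype=np.uint8)
published = mask_face(frame, (40, 40, 60, 60))
```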
The image saving unit 203 may store the masked image; alternatively, the unmasked image may be stored as the image data, with the image publishing unit 209 writing "1" into a masking processing flag attached to it. The next time the image publishing unit 209 publishes the image, it may then refer to the masking processing flag and, if the flag value is "1", apply the masking processing before publishing the image.
According to this embodiment, even when an image in which a private person appears is treated as a public image through manual correction, the face of the private person can be prevented from being disclosed.
As a modification of this embodiment, the image feature amount collection unit 202 acquires information specifying the destination to which the image publishing unit 209 will publish (output) the image, during the period between the publishing operation and the actual publication. It then searches the face images within the SNS designated as the publication destination, and subscribers to that same SNS may be treated as disclosure-permitted persons (not private persons). That is, when a subject in the captured image matches the face image of a member of the SNS designated as the connection destination, the image is treated as a public image for that SNS and is output without applying masking processing to the subject's face image.

On the other hand, when an attempt is made to upload the same image to a different SNS, the subject is treated as a private person and the image is published after masking processing has been applied.
According to this modification, private persons and disclosure-permitted persons can be switched dynamically according to the publication destination. At the same time, since an unmasked image can be kept in the image management apparatus, the user can still view the original image.
Each of the above embodiments merely describes one embodiment of the present invention and is not intended to limit the present invention. Various changes and modifications can be made by those skilled in the art within the scope of the technical idea disclosed in this specification.
For example, when there is an operation history of moving images into and out of a private folder, the action feature amount collection unit 204 may judge that images placed in a specific folder may be disclosed and set the "public recommendation flag" of those images to "1".

Furthermore, face images of family members may be stored in advance in the face image storage unit 221. The face recognition unit 211 then compares the faces captured in an image with the family face images; an image in which the user appears together with a person of the opposite sex who is not a family member is judged to be one that should not be disclosed, and its "private recommendation flag" may be set to "1".

The action feature amount collection unit 204 may also refer to the shooting date/time information and the position information, judge that a photograph taken at night at a place other than home or at an inappropriate place should not be disclosed, and set its "private recommendation flag" to "1".
100: Image management apparatus, 313: Icon (public), 321: Icon (non-public), 331: Icon (automatic)
Claims (14)
- 処理対象となる画像データを取得する画像取得部と、
前記画像データの撮影者又は当該画像データの閲覧者の行動を表す情報に基づいて、前記画像データを公開又は非公開に分類するための公開判定スコアを算出する際に用いる行動特徴量を収集する行動特徴量収集部と、
前記公開判定スコアを算出するためのスコア算出式を記憶するスコア算出式記憶部と、
前記スコア算出式記憶部から前記スコア算出式を読み出し、当該スコア算出式に前記画像データに対応する前記行動特徴量を適用して、前記画像データの公開判定スコアを算出するスコア算出部と、
前記公開判定スコアが公開条件を充足しているかを判断し、その判断の結果に基づいて前記画像データを公開又は非公開にすることを暫定的に決定する公開画像決定部と、
前記画像データに対して暫定的に決定された公開又は非公開の決定結果に対し、ユーザからの補正指示を受け付けるユーザ補正部と、
前記補正指示が示す公開又は非公開の変更履歴に基づいて、前記スコア算出式を変更するフィードバック部と、
前記画像データが公開する決定された場合に、当該画像データを公開する画像公開部と、
を備えることを特徴とする画像管理装置。 An image acquisition unit for acquiring image data to be processed;
Based on information representing the behavior of the photographer of the image data or the viewer of the image data, action feature amounts used for calculating a disclosure determination score for classifying the image data as public or private are collected. An action feature collection unit;
A score calculation formula storage unit that stores a score calculation formula for calculating the disclosure determination score;
A score calculation unit that reads the score calculation formula from the score calculation formula storage unit, applies the behavior feature amount corresponding to the image data to the score calculation formula, and calculates a disclosure determination score of the image data;
Determining whether or not the disclosure determination score satisfies a disclosure condition, and based on a result of the determination, a disclosure image determination unit that tentatively determines that the image data is to be disclosed or not disclosed;
A user correction unit that receives a correction instruction from a user for a determination result of disclosure or non-disclosure determined provisionally for the image data;
A feedback unit that changes the score calculation formula based on a public or private change history indicated by the correction instruction;
An image publishing unit for publishing the image data when the image data is decided to be published;
An image management apparatus comprising: - 請求項1に記載の画像管理装置において、
前記画像データの画像解析結果又は前記画像データに付帯する付帯情報から前記公開判定スコアの算出に用いる画像特徴量を収集する画像特徴量収集部を更に備え、
前記スコア算出部は、前記画像特徴量を前記スコア算出式に更に適用して前記画像データの公開判定スコアを算出する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
An image feature amount collecting unit for collecting an image feature amount used for calculating the disclosure determination score from an image analysis result of the image data or incidental information attached to the image data;
The score calculation unit further applies the image feature amount to the score calculation formula to calculate a disclosure determination score of the image data;
An image management apparatus. - 請求項2に記載の画像管理装置において、
自分が写っている画像データの公開を許容する者の顔画像、又は自分が写っている画像の公開を許容しない者の顔画像を記憶する顔画像記憶部を更に備え、
前記画像特徴量収集部は、前記画像データの被写体を抽出する被写体抽出部と、
前記被写体に含まれる顔画像を認識し、前記顔画像記憶部に記憶された顔画像との比較を行う顔認識部と、を含み、
前記スコア算出部は、前記顔認識部の比較結果に基づいて、前記公開判定スコアを算出する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 2,
A face image storage unit for storing a face image of a person who allows the release of image data showing himself or a person who does not allow the release of an image showing himself;
The image feature amount collection unit, a subject extraction unit that extracts a subject of the image data;
A face recognition unit that recognizes a face image included in the subject and compares the face image stored in the face image storage unit;
The score calculation unit calculates the disclosure determination score based on a comparison result of the face recognition unit;
An image management apparatus. - 請求項3に記載の顔画像管理装置において、
前記画像特徴量収集部は、前記画像管理装置が過去に接続したソーシャルネットワーキングサービスに接続し、前記顔認識部が認識した顔画像が前記ソーシャルネットワーキングサービスにアップされているかを判断するオープン情報取得部を更に備え、
前記スコア算出部は、前記オープン情報取得部の判断結果に基づいて、前記公開判定スコアを算出する、
ことを特徴とする画像管理装置。 The face image management apparatus according to claim 3.
The image feature amount collection unit is connected to a social networking service that the image management device has connected in the past, and an open information acquisition unit that determines whether the face image recognized by the face recognition unit is uploaded to the social networking service Further comprising
The score calculation unit calculates the disclosure determination score based on a determination result of the open information acquisition unit;
An image management apparatus. - 請求項1に記載の画像管理装置において、
前記画像管理装置の操作履歴情報及び前記画像管理装置の移動履歴情報の少なくとも一つを蓄積する操作行動履歴情報記憶部を更に備え、
前記行動特徴量収集部は、前記操作行動履歴情報記憶部に蓄積された操作履歴情報及び前記移動履歴情報を基に前記移動特徴量を収集する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
An operation action history information storage unit that stores at least one of operation history information of the image management apparatus and movement history information of the image management apparatus;
The behavior feature amount collecting unit collects the movement feature amount based on the operation history information and the movement history information accumulated in the operation behavior history information storage unit;
An image management apparatus. - 請求項1に記載の画像管理装置において、
被写体を撮影して画像データを生成するカメラと、
自分が写っている画像データの公開を許容する者の顔画像、又は自分が写っている画像の公開を許容しない者の顔画像を記憶する顔画像記憶部と、を更に備え、
前記画像取得部は前記カメラから前記処理対象となる画像データを取得し、
前記行動特徴量収集部は、前記撮影者の顔画像と、前記顔画像記憶部に記憶された顔画像との比較結果を行動特徴量として収集し、
前記画像公開部は、前記画像データの被写体の顔画像と前記顔画像記憶部に記憶された公開を許容する者の画像とが一致する場合に、前記画像は公開すると暫定的に決定する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
A camera that shoots a subject and generates image data;
A face image storage unit for storing a face image of a person who allows the release of image data showing himself or a face image of a person who does not allow the release of an image showing himself;
The image acquisition unit acquires image data to be processed from the camera,
The behavior feature amount collection unit collects a comparison result between the photographer's face image and the face image stored in the face image storage unit as a behavior feature amount,
The image publishing unit tentatively determines that the image is to be published when the face image of the subject of the image data matches the image of the person permitted to publish stored in the face image storage unit.
An image management apparatus. - 請求項1に記載の画像管理装置において、
マイクを更に備え、
前記行動特徴量収集部は、前記マイクが集音した音声データを解析する音声認識部を含み、当該音声認識部の識別結果に基づいて行動特徴量を収集する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
A microphone,
The behavior feature amount collecting unit includes a voice recognition unit that analyzes voice data collected by the microphone, and collects a behavior feature amount based on an identification result of the voice recognition unit.
An image management apparatus. - 請求項1に記載の画像管理装置において、
前記画像管理装置の運動状態を収集するセンサを少なくとも一つ備え、
前記行動特徴量収集部は、前記センサの出力結果を基に前記行動特徴量を収集する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
Comprising at least one sensor for collecting the motion state of the image management device;
The behavior feature amount collecting unit collects the behavior feature amount based on an output result of the sensor;
An image management apparatus. - 請求項1に記載の画像管理装置において、
前記画像データを表示する画面を有する表示部を更に備え、
前記公開画像決定部は、前記画面に前記画像データ及び当該画像データに対して暫定的に決定された公開又は非公開の決定結果を示すアイコンを表示する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
A display unit having a screen for displaying the image data;
The public image determination unit displays the image data and an icon indicating a determination result of disclosure or non-disclosure determined provisionally for the image data on the screen.
An image management apparatus. - 請求項1に記載の画像管理装置において、
被写体を撮影して画像データを生成するカメラと、
前記画像データを表示する画面を有する表示部と、
前記画面に表示された前記カメラが生成した前記被写体のスルー画像を撮影し、公開画像として保存することを同時に入力させる第一撮影ボタン、前記スルー画像を撮影し、非公開画像として保存することを同時に入力させる第二撮影ボタンと、を更に備える、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 1,
A camera that shoots a subject and generates image data;
A display unit having a screen for displaying the image data;
A first shooting button for simultaneously inputting that a through image of the subject generated by the camera displayed on the screen is generated and stored as a public image, the through image is captured and stored as a private image; A second shooting button for inputting simultaneously,
An image management apparatus. - 請求項6に記載の画像管理装置において、
前記公開画像決定部が暫定的に非公開と決定した前記画像データに自分が写っている画像の公開を許容しない者の顔画像が含まれている画像に対して、前記ユーザ補正部が公開とする補正指示の入力を受け付けた場合、前記画像公開部は、前記画像データに含まれる公開を許容しないものの顔画像に対してマスキング処理を実行した画像を公開する、
ことを特徴とする画像管理装置。 The image management apparatus according to claim 6.
The user correction unit releases the image including the face image of the person who does not allow the image of the image in which the image is shown to be disclosed to the image data that the release image determination unit has determined to be unpublished provisionally. When receiving an input of a correction instruction to perform, the image publication unit publishes an image that has been subjected to masking processing for a face image that does not allow publication included in the image data,
An image management apparatus. - 請求項2に記載の画像管理装置において、
前記フィードバック部は、公開から非公開への変更履歴を有する画像データのスコア算出に用いた画像特徴量又は行動特徴量は前記公開判定スコアへの寄与度が低くなるように前記スコア算出式を変更し、非公開から公開への変更履歴を有する画像データのスコア算出に用いた画像特徴量又は行動特徴量は前記公開判定スコアへの寄与度が高くなるように前記スコア算出式を変更する
ことを特徴とする画像管理装置。 The image management apparatus according to claim 2,
The feedback unit changes the score calculation formula so that an image feature amount or an action feature amount used for calculating a score of image data having a change history from public to private does not contribute to the disclosure determination score. And changing the score calculation formula so that the image feature amount or the behavior feature amount used for calculating the score of the image data having the history of change from private to public has a high contribution to the public determination score. A featured image management apparatus. - 処理対象となる画像データを取得するステップと、
- An image management method comprising the steps of: acquiring image data to be processed; collecting, based on information representing the behavior of a photographer of the image data or a viewer of the image data, a behavior feature amount used to calculate a disclosure determination score for classifying the image data as public or private; calculating the disclosure determination score of the image data by applying the behavior feature amount corresponding to the image data to a score calculation formula for the disclosure determination score; determining whether the disclosure determination score satisfies a disclosure condition and, based on the result of that determination, provisionally deciding whether to make the image data public or private; accepting a correction instruction from a user for the provisional public or private decision made for the image data; changing the score calculation formula based on the public or private change history indicated by the correction instruction; and publishing the image data when it has been decided that the image data is to be published (the overall flow is sketched after this claim listing).
- An image management program that causes a computer to execute the steps of: acquiring image data to be processed; collecting, based on information representing the behavior of a photographer of the image data or a viewer of the image data, a behavior feature amount used to calculate a disclosure determination score for classifying the image data as public or private; calculating the disclosure determination score of the image data by applying the behavior feature amount corresponding to the image data to a score calculation formula for the disclosure determination score; determining whether the disclosure determination score satisfies a disclosure condition and, based on the result of that determination, provisionally deciding whether to make the image data public or private; accepting a correction instruction from a user for the provisional public or private decision made for the image data; changing the score calculation formula based on the public or private change history indicated by the correction instruction; and publishing the image data when it has been decided that the image data is to be published.
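The sketches below illustrate, in order, the two-button capture, the face masking, the feedback adjustment, and the overall scoring flow recited in the claims above. All of them are informal Python sketches with hypothetical names, not code from the specification. First, the claim reciting a first and second shooting button can be read as a single handler that saves the frame and records the public or private choice in one action; a minimal sketch, assuming an in-memory store and byte-string frames:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CapturedImage:
    """A captured frame plus the visibility chosen at shoot time."""
    data: bytes
    public: bool
    captured_at: datetime

class CaptureHandler:
    """Saves a through image and records public or private in one button press."""

    def __init__(self) -> None:
        self.images: list[CapturedImage] = []

    def _capture(self, through_image: bytes, public: bool) -> CapturedImage:
        image = CapturedImage(data=through_image, public=public,
                              captured_at=datetime.now())
        self.images.append(image)
        return image

    def on_first_button(self, through_image: bytes) -> CapturedImage:
        # First shooting button: capture and save as a public image.
        return self._capture(through_image, public=True)

    def on_second_button(self, through_image: bytes) -> CapturedImage:
        # Second shooting button: capture and save as a private image.
        return self._capture(through_image, public=False)
```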
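For the claim in which faces of people who do not permit publication are masked before the image is published, the sketch below pixelates rectangular face regions. The face boxes are assumed to come from some external face detector, and the pixelation is an ordinary block average; the specification does not prescribe a particular masking method.

```python
import numpy as np

def mask_faces(image: np.ndarray, face_boxes: list[tuple[int, int, int, int]],
               block: int = 16) -> np.ndarray:
    """Return a copy of `image` (H x W x C) with each face box pixelated.

    face_boxes: (top, left, bottom, right) rectangles for the people who do
    not permit publication, e.g. produced by a face detector.
    """
    out = image.copy()
    for top, left, bottom, right in face_boxes:
        region = out[top:bottom, left:right]  # view into `out`
        h, w = region.shape[:2]
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = region[y:y + block, x:x + block]
                # Replace the tile with its average colour.
                tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out
```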
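The feedback claim can be read as a per-feature weight update driven by the user's corrections: features that contributed to a score the user overrode from public to private are down-weighted, and features behind a private-to-public override are up-weighted. The linear score form, the fixed step size, and the function names below are assumptions for illustration only.

```python
def disclosure_score(features: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of image/behavior feature amounts (assumed linear formula)."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def apply_feedback(weights: dict[str, float], features: dict[str, float],
                   change: str, step: float = 0.1) -> dict[str, float]:
    """Return updated weights after one user correction.

    change: "public_to_private" lowers the contribution of the features used
    for this image's score; "private_to_public" raises it.
    """
    sign = -1.0 if change == "public_to_private" else 1.0
    updated = dict(weights)
    for name, value in features.items():
        if value != 0.0:  # only features that actually contributed to the score
            updated[name] = updated.get(name, 0.0) + sign * step
    return updated
```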
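Finally, the steps of the method claim fit together as: compute the score from the collected behavior feature amounts, compare it against a disclosure condition to make a provisional public or private decision, accept a user correction, and feed the correction back into the formula. The sketch below strings those steps together, reusing disclosure_score and apply_feedback from the previous sketch; the 0.5 threshold stands in for whatever disclosure condition is actually configured.

```python
from typing import Optional

def provisional_decision(features: dict[str, float], weights: dict[str, float],
                         threshold: float = 0.5) -> bool:
    """Provisionally classify the image as public (True) or private (False)."""
    return disclosure_score(features, weights) >= threshold

def process_image(features: dict[str, float], weights: dict[str, float],
                  user_override: Optional[bool] = None):
    """One pass over an image: provisional decision, optional correction, feedback."""
    decision = provisional_decision(features, weights)
    if user_override is not None and user_override != decision:
        change = "public_to_private" if decision else "private_to_public"
        weights = apply_feedback(weights, features, change)
        decision = user_override
    return decision, weights
```

Accumulating such corrections over time is what lets the score calculation formula drift toward the user's own sense of what should be public.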
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/075872 WO2018042633A1 (en) | 2016-09-02 | 2016-09-02 | Image management device, image management method, and image management program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018042633A1 (en) | 2018-03-08 |
Family
ID=61300429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/075872 WO2018042633A1 (en) | 2016-09-02 | 2016-09-02 | Image management device, image management method, and image management program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018042633A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009009184A (en) * | 2007-06-26 | 2009-01-15 | Sony Corp | Information processor and processing method, and program |
JP2013003631A (en) * | 2011-06-13 | 2013-01-07 | Sony Corp | Information processor, information processing method, information processing system, and program |
JP2013191035A (en) * | 2012-03-14 | 2013-09-26 | Fujifilm Corp | Image disclosure device, image disclosure method, image disclosure system, and program |
US20140086493A1 (en) * | 2012-09-25 | 2014-03-27 | Google Inc. | Providing privacy in a social network system |
Similar Documents
Publication | Title |
---|---|
US9058375B2 (en) | Systems and methods for adding descriptive metadata to digital content | |
US10170157B2 (en) | Method and apparatus for finding and using video portions that are relevant to adjacent still images | |
US20120008011A1 (en) | Digital Camera and Associated Method | |
CN116320782A (en) | Control method, electronic equipment, computer readable storage medium and chip | |
WO2017084182A1 (en) | Method and apparatus for image processing | |
CN105069083B (en) | The determination method and device of association user | |
TW201011696A (en) | Information registering device for detection, target sensing device, electronic equipment, control method of information registering device for detection, control method of target sensing device, information registering device for detection control progr | |
US9639778B2 (en) | Information processing apparatus, control method thereof, and storage medium | |
JP5587390B2 (en) | Method, system, and apparatus for selecting images picked up by image pickup apparatus | |
US9973649B2 (en) | Photographing apparatus, photographing system, photographing method, and recording medium recording photographing control program | |
JP2010079788A (en) | Content management device, system, method and program | |
JP2012023502A (en) | Photographing support system, photographing support method, server, photographing device, and program | |
TWI556640B (en) | Media file management method and system, and computer-readable medium | |
US10070175B2 (en) | Method and system for synchronizing usage information between device and server | |
JP2012105205A (en) | Key frame extractor, key frame extraction program, key frame extraction method, imaging apparatus, and server device | |
JP2004280254A (en) | Contents categorizing method and device | |
JP2010021638A (en) | Device and method for adding tag information, and computer program | |
JP2015198300A (en) | Information processor, imaging apparatus, and image management system | |
JP2015008385A (en) | Image selection device, imaging device, and image selection program | |
JP2013149034A (en) | Image display apparatus, image display method, and program | |
WO2018042633A1 (en) | Image management device, image management method, and image management program | |
JP5550114B2 (en) | Imaging device | |
JP2009049886A (en) | Image retrieval device, photographing device, image retrieval method, and program | |
KR20090044313A (en) | Photography service method and system based on the robot | |
KR101135222B1 (en) | Method of managing multimedia file and apparatus for generating multimedia file |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16915192; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 16915192; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: JP |