
CN108234868B - Intelligent shooting system and method based on case reasoning - Google Patents

Intelligent shooting system and method based on case reasoning

Info

Publication number
CN108234868B
Authority
CN
China
Prior art keywords
case
main body
image
mapping relation
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711435295.5A
Other languages
Chinese (zh)
Other versions
CN108234868A (en)
Inventor
孔凡国
李智宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuyi University
Original Assignee
Wuyi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuyi University filed Critical Wuyi University
Priority to CN201711435295.5A priority Critical patent/CN108234868B/en
Publication of CN108234868A publication Critical patent/CN108234868A/en
Application granted granted Critical
Publication of CN108234868B publication Critical patent/CN108234868B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent shooting system and method based on case-based reasoning, comprising an image acquisition module, a RAM module, an analysis processing module and a case library module. The image acquisition module acquires a target preview image and caches it in the RAM module; the analysis processing module then retrieves the preview image and encodes the target subject features and the target mapping relation; finally, the case library module computes the similarity between the target subject feature code and the source case subject feature codes, and between the target mapping relation code and the source case mapping relation codes, and selects the shooting angle with the highest similarity for intelligent shooting composition. By using the algorithms in the analysis processing module to intelligently extract the relevant feature information of the target image, and the case-based reasoning algorithm in the case library module to intelligently recommend the optimal shooting angle to the user, the invention overcomes the defect that prior-art mobile phone shooting systems cannot guide the user in adjusting the shooting angle.

Description

Intelligent shooting system and method based on case reasoning
Technical Field
The invention relates to the technical field of mobile phone shooting, in particular to an intelligent shooting system and method based on case-based reasoning.
Background
Mobile phone shooting systems now offer face recognition, focusing, scene selection, beautification and other functions, pixel counts keep rising, and current systems can even display composition reference lines to assist the user. However, taking high-quality photographs still requires an understanding of professional photography techniques that most users lack. Current mobile phone shooting systems automatically identify the positional relationship between a person and the background from the preview picture, but the specific shooting angle between the person and the background is usually chosen by the user from personal experience, and insufficient experience leads to low-quality photographs. A system that guides the user in adjusting the shooting angle is therefore needed so that the user can conveniently take high-quality pictures.
Disclosure of Invention
In view of the above, the invention provides an intelligent shooting system and method based on case-based reasoning, which overcome the inability of prior-art mobile phone shooting systems to guide the user in adjusting the shooting angle.
To achieve this purpose, the invention provides the following technical solution:
an intelligent shooting system based on case-based reasoning comprises an image acquisition module, a RAM module, an analysis processing module and a case library module. The image acquisition module acquires the camera preview; the RAM module caches the preview image; the analysis processing module analyses the preview image background and determines the image subject, the background, the mapping relation between the subject and the background, and the related attributes of the subject. The analysis processing module specifically comprises a focus acquisition unit, a feature analysis unit, a mapping relation unit and a feature coding unit: the focus acquisition unit acquires the focus position and determines the image subject; the feature analysis unit extracts the subject features of the image; the mapping relation unit determines the mapping relation between the image background and the shooting subject; the feature coding unit encodes the subject features and the mapping relation.
The case library module stores the attribute and composition information of image subjects and performs attribute similarity matching. It specifically comprises a code acquisition unit, a similarity matching unit and a source case storage unit: the code acquisition unit obtains the feature code and the mapping relation code; the similarity matching unit first performs similarity matching between the target case subject features and the source case subject features, then performs mapping relation similarity matching, and finally outputs a shooting angle to guide the user in adjusting the shooting angle; the source case storage unit stores the source cases, including source cases preset by the system and source cases added by users.
The image acquisition module acquires a preview photograph and stores it in the RAM module. The focus acquisition unit then retrieves the preview photograph from the RAM module and determines the image subject. The feature analysis unit retrieves the subject information from the focus acquisition unit and determines the subject features. The mapping relation unit retrieves the subject feature information from the feature analysis unit and determines the mapping relation between the image background and the subject. The feature coding unit encodes the subject feature information from the feature analysis unit on one hand, and the mapping relation information between the image background and the subject from the mapping relation unit on the other. The code acquisition unit retrieves the target feature code and the target mapping relation code from the feature coding unit and passes them to the similarity matching unit. The similarity matching unit retrieves the source case subject feature codes and source case mapping relation codes stored in the source case storage unit, performs subject feature similarity matching between the target case subject feature code and the source case subject feature codes, performs mapping relation similarity matching between the target case mapping relation code and the source case mapping relation codes, retrieves the most similar source case subject feature information and mapping relation information from the source case storage unit according to these two matching results, integrates the source case subject feature information into the source case mapping relation information by substitution to obtain the most similar source case shooting angle, and outputs the newly integrated most similar source case to guide the user in adjusting the shooting angle. Finally, the source case storage unit retrieves the target feature code and the target mapping relation code from the code acquisition unit and stores them as the feature code and mapping relation code of a newly added source case.
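For illustration only, the following Python sketch shows one way the case data and the retrieval flow described above could be organised. The class and function names (SourceCase, CaseLibrary, retrieve_best_case) and the combination of the two similarity scores by a simple sum are assumptions for the sketch, not details given in the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence

Similarity = Callable[[Sequence[float], Sequence[float]], float]

@dataclass
class SourceCase:
    """One stored case: encoded subject features, encoded subject-background
    mapping relation, and the shooting angle of the well-composed photograph."""
    subject_code: List[float]
    mapping_code: List[float]
    shooting_angle: float  # e.g. degrees relative to the subject

class CaseLibrary:
    """Holds source cases and retrieves the one most similar to a target case."""

    def __init__(self) -> None:
        self.cases: List[SourceCase] = []

    def add_case(self, case: SourceCase) -> None:
        # A newly shot target case can be stored back as a source case (last step above).
        self.cases.append(case)

    def retrieve_best_case(self, subject_code: Sequence[float],
                           mapping_code: Sequence[float],
                           similarity: Similarity) -> Optional[SourceCase]:
        # Rank cases by subject similarity plus mapping-relation similarity.
        if not self.cases:
            return None
        return max(
            self.cases,
            key=lambda c: similarity(subject_code, c.subject_code)
            + similarity(mapping_code, c.mapping_code),
        )
```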
Preferably, the focus acquisition unit uses the algorithm g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the image pixels to be processed and g(i, j) is the processed pixel.
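A minimal NumPy sketch of this gradient operator, assuming a grayscale image stored as a 2-D array; the function name gradient_map and the zero padding of the last row and column are choices made for the sketch.

```python
import numpy as np

def gradient_map(f: np.ndarray) -> np.ndarray:
    """Compute g(i, j) = sqrt((f(i,j) - f(i+1,j))^2 + (f(i+1,j) - f(i,j+1))^2)
    for a grayscale image f; the last row and column of g are left at zero."""
    f = f.astype(np.float64)
    g = np.zeros_like(f)
    d1 = f[:-1, :-1] - f[1:, :-1]   # f(i, j)   - f(i+1, j)
    d2 = f[1:, :-1] - f[:-1, 1:]    # f(i+1, j) - f(i, j+1)
    g[:-1, :-1] = np.sqrt(d1 ** 2 + d2 ** 2)
    return g
```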
Preferably, the algorithm of the similarity matching unit is

cos(X, Y) = (x1*y1 + x2*y2 + … + xn*yn) / (sqrt(x1^2 + x2^2 + … + xn^2) * sqrt(y1^2 + y2^2 + … + yn^2))

where X is the target subject, Y is the source case subject, and both the target subject X and the source case subject Y comprise N-dimensional features, i.e. X = (x1, x2, x3, …, xn) and Y = (y1, y2, y3, …, yn); the smaller the angle between the two feature vectors, the higher the similarity.
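A minimal sketch of this cosine similarity between two N-dimensional feature codes; the function name cosine_similarity is an assumption, and it could be passed to the CaseLibrary sketch above as its similarity argument.

```python
import math
from typing import Sequence

def cosine_similarity(x: Sequence[float], y: Sequence[float]) -> float:
    """Cosine of the angle between feature vectors x and y (closer to 1 = more similar)."""
    if len(x) != len(y):
        raise ValueError("feature codes must have the same dimension")
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    if norm_x == 0.0 or norm_y == 0.0:
        return 0.0
    return dot / (norm_x * norm_y)
```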
An intelligent shooting method based on case-based reasoning comprises the following steps:
Step 1, acquiring a camera preview, and capturing and caching the preview after the user performs a focusing operation;
Step 2, analysing the preview image background, determining the image subject and the background, the mapping relation between the subject and the background, and the attribute features of the subject, and encoding them;
Step 3, prompting the user whether a source case is needed: if the user needs the source case, entering step 4; if not, the user clicks the cancel-case button displayed in the interface, composition is completed automatically, and the method proceeds to step 8;
Step 4, searching the case library, matching it against the target case, and respectively matching and outputting the optimal source case subject feature information and the mapping relation information between the optimal source case subject and the background;
Step 5, integrating the optimal source case subject feature information into the mapping relation information between the optimal source case subject and the background by substitution, forming a new optimal source case;
Step 6, displaying the new optimal source case subject outline on the shooting interface;
Step 7, the user manually adjusting the angle according to the source case subject outline so that the target subject outline overlaps the source case subject outline as much as possible;
Step 8, triggering the shooting button to finish shooting, storing the image, and prompting the user whether to store the subject attribute features and the subject-background mapping relation as a source case.
Preferably, step 2 comprises the following procedure: a. determining the approximate position of the target image subject through the focusing operation, and determining the target subject outline by using an edge detection method; b. extracting the subject feature attributes of the target image by using the SIFT algorithm; c. determining the mapping relation between the target image background and the target shooting subject; d. coding the target subject features and the target mapping relation so that the case similarity can be calculated.
Preferably, the algorithm formula of the edge detection method is g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the pixels of the image to be processed and g(i, j) is the processed pixel.
Preferably, the SIFT algorithm comprises the following steps: a. first constructing a scale space and detecting extreme points to obtain scale invariance; b. then filtering the extreme feature points and locating them accurately; c. then assigning an orientation value to each extreme feature point; d. finally generating the feature descriptors.
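The patent does not specify a SIFT implementation; as a sketch under that caveat, the steps above correspond to what the OpenCV SIFT implementation performs internally, and keypoints plus 128-dimensional descriptors could be obtained as follows (the opencv-python dependency and the function name extract_sift_features are assumptions).

```python
import cv2

def extract_sift_features(image_path: str):
    """Detect SIFT keypoints and compute their descriptors for a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    sift = cv2.SIFT_create()  # scale-space construction, extremum detection, filtering
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Each keypoint carries location, scale and orientation; each descriptor is 128-D.
    return keypoints, descriptors
```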
Preferably, step 4 comprises the following procedure: a. acquiring the feature code and the mapping relation code; b. calculating the similarity between the target case subject feature code and the source case subject feature codes, and pre-storing a plurality of source cases with high similarity according to a set threshold; c. performing mapping relation similarity calculation on the source cases with high feature similarity, and outputting the shooting angle corresponding to the highest mapping relation similarity.
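A sketch of this two-stage retrieval, reusing the SourceCase and cosine_similarity helpers from the sketches above; the threshold value of 0.8 and the function name retrieve_shooting_angle are assumptions, since the patent only states that a set threshold is used.

```python
from typing import List, Optional

def retrieve_shooting_angle(target_subject_code: List[float],
                            target_mapping_code: List[float],
                            cases: List["SourceCase"],
                            threshold: float = 0.8) -> Optional[float]:
    """Stage 1: keep source cases whose subject-feature similarity exceeds the
    threshold. Stage 2: among those, return the shooting angle of the case with
    the highest mapping-relation similarity (None if no case qualifies)."""
    candidates = [
        c for c in cases
        if cosine_similarity(target_subject_code, c.subject_code) >= threshold
    ]
    if not candidates:
        return None
    best = max(
        candidates,
        key=lambda c: cosine_similarity(target_mapping_code, c.mapping_code),
    )
    return best.shooting_angle
```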
It can be seen from the above technical solution that, on one hand, the method retrieves the preview target image information with the focus acquisition unit, analyses and filters the target case subject feature information with the feature analysis unit and the subject-background mapping relation information with the mapping relation unit, and encodes both with the feature coding unit so that the system can record and identify them. The similarity matching unit in the case library module then retrieves the information stored in the source case storage unit, performs subject feature similarity matching between the target case subject feature code and the source case subject feature codes and mapping relation similarity matching between the target case mapping relation code and the source case mapping relation codes, and, according to the two matching results, retrieves and outputs the most similar source case subject feature information and mapping relation information from the source case storage unit. The source case subject feature information is integrated into the source case mapping relation information by substitution, yielding the most similar source case shooting angle; this shooting angle is output to the interface to guide the user in adjusting the shooting angle in time, so that the target subject outline essentially coincides with the source case subject outline and the goal of intelligently guiding the user to adjust the shooting angle is achieved. On the other hand, the method uses the feature analysis unit to analyse and filter the target case subject feature information and the mapping relation unit to analyse and filter the subject-background mapping relation information, so that the subject features and the subject-background mapping relation features are separated from the feature information of the target image; subject feature similarity matching and subject-background mapping relation similarity matching are then performed separately by the case library module based on case-based reasoning. This overcomes the low matching accuracy and large matching error of performing feature similarity matching on the whole image, and thereby improves the matching accuracy between the target case and the source case.
Drawings
Fig. 1 is a system diagram of an intelligent shooting system and method based on case-based reasoning according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an intelligent shooting system and method based on case-based reasoning according to an embodiment of the present invention.
The attached drawings indicate the following:
10-an image acquisition module; 20-RAM module; 30-an analysis processing module; 40-case library module; 31-a focus acquisition unit; 32-a feature analysis unit; 33-a mapping relation unit; 34-a feature encoding unit; 41-a feature acquisition unit; 42-similarity matching unit; 43-Source case store Unit.
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Detailed Description
The embodiment of the invention provides an intelligent shooting system and method based on case-based reasoning.
As shown in figs. 1-2, an intelligent shooting system based on case-based reasoning includes an image acquisition module 10, a RAM module 20, an analysis processing module 30 and a case library module 40. The image acquisition module 10 acquires the camera preview; the RAM module 20 caches the preview images; the analysis processing module 30 analyses the preview background to determine the image subject and background, the mapping relation between them, and the related attributes of the subject. The analysis processing module 30 specifically comprises a focus acquisition unit 31, a feature analysis unit 32, a mapping relation unit 33 and a feature coding unit 34: the focus acquisition unit 31 acquires the focus position and determines the image subject; the feature analysis unit 32 extracts the image subject features; the mapping relation unit 33 determines the mapping relation between the image background and the shooting subject; the feature coding unit 34 encodes the subject features and the mapping relation. The case library module 40 stores the attribute and composition information of image subjects and performs attribute similarity matching; it specifically comprises a code acquisition unit, a similarity matching unit 42 and a source case storage unit 43: the code acquisition unit obtains the feature code and the mapping relation code; the similarity matching unit 42 performs similarity matching between the target case subject features and the source case subject features, then performs mapping relation similarity matching, and finally outputs a shooting angle to guide the user in adjusting the shooting angle; the source case storage unit 43 stores the source cases, including source cases preset by the system and source cases added by users. In the embodiment of the invention, the image acquisition module 10 is an existing integrated camera module intended to acquire the target image; the user may choose an image acquisition module 10 with a different pixel count according to actual requirements, as long as the target image can be acquired smoothly. Similarly, the RAM module 20 is intended to cache the target image information, so it may be a prior-art memory bank or another storage device, as long as the target image can be cached. Likewise, the source case storage unit 43 may be a database, a cloud platform or another removable storage device for storing the source case file information.
The overall working principle of the invention is as follows: first, a target image is obtained through the image acquisition module 10, and the analysis processing module 30 obtains the target image subject feature code and the code of the mapping relation between the target image subject and the background. Then, based on case-based reasoning, the similarity matching unit 42 on one hand retrieves the target image subject feature code and the source case subject feature codes in the source case storage unit 43, matches their similarity, and finds and displays the most similar source case subject feature information; on the other hand it retrieves the mapping relation code between the target image subject and the background and the mapping relation codes between the source case subjects and their backgrounds, matches their similarity, and finds and displays the most similar source case subject-background mapping relation information. Finally, the user is guided to adjust the shooting angle so as to take a high-quality picture.
Accordingly, the focus acquisition unit 31 uses the algorithm g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the image pixels to be processed and g(i, j) is the processed pixel. The algorithm of the focus acquisition unit 31 typically operates on N × N pixel blocks, for example 3 × 3 or 4 × 4; a 3 × 3 block consists of f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1). A relative threshold is obtained from the pixel information of each block, because there is usually a large difference in value at the boundary between the image subject outline and the image background. Each pixel of the image is scanned in array order and compared with the relative threshold; if the pixel lies on the boundary between the image subject and the image background, it is judged to be a contour point. Through this cyclic array scan, the contour information of the target image can be obtained quickly and the target outline extracted. The purpose is, on one hand, to extract the subject outline feature information of the target image and, on the other hand, to extract the background pixel information of the target image, so as to obtain the feature information of the mapping relation between the target subject outline and the background.
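A sketch of the scan-and-threshold contour extraction described above, reusing the gradient_map sketch from earlier; deriving the relative threshold as a multiple of the mean gradient is an assumption, since the patent does not give an explicit rule for it.

```python
import numpy as np

def extract_contour_mask(f: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Mark pixels whose gradient g(i, j) exceeds a relative threshold as contour points."""
    g = gradient_map(f)               # g(i, j) as defined above
    threshold = k * g.mean()          # assumed rule for the relative threshold
    contour = np.zeros(f.shape, dtype=bool)
    # Cyclic array scan: compare every pixel's gradient against the threshold.
    for i in range(f.shape[0] - 1):
        for j in range(f.shape[1] - 1):
            if g[i, j] > threshold:
                contour[i, j] = True
    return contour
```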
In the embodiment of the present invention, the algorithm of the similarity matching unit 42 is

cos(X, Y) = (x1*y1 + x2*y2 + … + xn*yn) / (sqrt(x1^2 + x2^2 + … + xn^2) * sqrt(y1^2 + y2^2 + … + yn^2))

where X is the target subject, Y is the source case subject, and both X and Y comprise N-dimensional features, i.e. X = (x1, x2, x3, …, xn) and Y = (y1, y2, y3, …, yn); the smaller the angle between the two feature vectors, the higher the similarity. Since the subject outline feature information of the target image and the feature information of the mapping relation between the target subject outline and the background have already been extracted by the focus acquisition unit 31, the target subject outline feature information is first assembled and matched one-to-one against the source case subject outline feature information using the algorithm of the similarity matching unit 42, and the source case subject outline with the highest similarity is found in the source case storage unit 43. The feature information of the mapping relation between the target subject outline and the background is then assembled and matched one-to-one against the source case mapping relation feature information, and the source case mapping relation information with the highest similarity is found in the source case storage unit 43. Finally, the similarity matching unit 42 integrates the matched source case subject outline into the background source image having the strongest mapping relation in that source case, and the integrated source image is used as a guide image for helping the user adjust the shooting angle.
The working process of the embodiment of the invention is as follows: the image acquisition module 10 acquires a preview photograph and stores it in the RAM module 20. The focus acquisition unit 31 retrieves the preview photograph from the RAM module 20 and determines the image subject. The feature analysis unit 32 retrieves the subject information from the focus acquisition unit 31 and determines the subject features. The mapping relation unit 33 retrieves the subject feature information from the feature analysis unit 32 and determines the mapping relation between the image background and the subject. The feature coding unit 34 encodes the subject feature information from the feature analysis unit 32 on one hand and the mapping relation information between the image background and the subject from the mapping relation unit 33 on the other. The code acquisition unit retrieves the target feature code and the target mapping relation code from the feature coding unit 34 and passes them to the similarity matching unit 42. The similarity matching unit 42 retrieves the source case subject feature codes and mapping relation codes stored in the source case storage unit 43, performs subject feature similarity matching between the target case subject feature code and the source case subject feature codes and mapping relation similarity matching between the target case mapping relation code and the source case mapping relation codes, and, according to the two matching results, retrieves and outputs the closest source case shooting angle from the source case storage unit 43 to guide the user in adjusting the shooting angle. Finally, the source case storage unit 43 retrieves the target feature code and the target mapping relation code from the code acquisition unit and stores them as the feature code and mapping relation code of a newly added source case.
An intelligent shooting method based on case-based reasoning comprises the following steps:
Step 1, acquiring a camera preview, and capturing and caching the preview after the user performs a focusing operation;
Step 2, analysing the preview image background, determining the image subject and the background, the mapping relation between the subject and the background, and the attribute features of the subject, and encoding them;
Step 3, prompting the user whether a source case is needed: if the user needs the source case, entering step 4; if not, the user clicks the cancel-case button displayed in the interface, composition is completed automatically, and the method proceeds to step 8;
Step 4, searching the case library, matching it against the target case, and respectively matching and outputting the optimal source case subject feature information and the mapping relation information between the optimal source case subject and the background;
Step 5, integrating the optimal source case subject feature information into the mapping relation information between the optimal source case subject and the background by substitution, forming a new optimal source case;
Step 6, displaying the new optimal source case subject outline on the shooting interface;
Step 7, the user manually adjusting the angle according to the source case subject outline so that the target subject outline overlaps the source case subject outline as much as possible;
Step 8, triggering the shooting button to finish shooting, storing the image, and prompting the user whether to store the subject attribute features and the subject-background mapping relation as a source case.
Specifically, step 2 comprises the following procedure: a. determining the approximate position of the target image subject through the focusing operation, and determining the target subject outline by using an edge detection method; b. extracting the subject feature attributes of the target image by using the SIFT algorithm; c. determining the mapping relation between the target image background and the target shooting subject; d. coding the target subject features and the target mapping relation so that the case similarity can be calculated. The algorithm formula of the edge detection method is g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the image pixels to be processed and g(i, j) is the processed pixel. The SIFT algorithm comprises the following steps: a. first constructing a scale space and detecting extreme points to obtain scale invariance; b. then filtering the extreme feature points and locating them accurately; c. then assigning an orientation value to each extreme feature point; d. finally generating the feature descriptors. Step 4 comprises the following procedure: a. acquiring the feature code and the mapping relation code; b. calculating the similarity between the target case subject feature code and the source case subject feature codes, and pre-storing a plurality of source cases with high similarity according to a set threshold; c. performing mapping relation similarity calculation on the source cases with high feature similarity, and outputting the shooting angle corresponding to the highest mapping relation similarity.
It can be seen from the above technical solutions that, on one hand, the invention retrieves the preview target image information with the focus acquisition unit 31, analyses and filters the target case subject feature information with the feature analysis unit 32 and the subject-background mapping relation information with the mapping relation unit 33, and encodes both with the feature coding unit 34 so that the system can record and identify them. The similarity matching unit 42 in the case library module 40 then retrieves the information stored in the source case storage unit 43, performs subject feature similarity matching between the target case subject feature code and the source case subject feature codes and mapping relation similarity matching between the target case mapping relation code and the source case mapping relation codes, and, according to the two matching results, retrieves the closest source case shooting angle from the source case storage unit 43 and outputs it to the interface to guide the user in adjusting the shooting angle in time, so that the target subject outline essentially coincides with the source case subject outline and the goal of intelligently guiding the user to adjust the shooting angle is achieved. On the other hand, the invention uses the feature analysis unit 32 to analyse and filter the target case subject feature information and the mapping relation unit 33 to analyse and filter the subject-background mapping relation information, so that the subject features and the subject-background mapping relation features are separated from the feature information of the target image, and subject feature similarity matching and subject-background mapping relation similarity matching are performed separately by the case library module 40 based on case-based reasoning. This overcomes the low matching accuracy and large matching error of performing feature similarity matching on the whole image, and thereby improves the matching accuracy between the target case and the source case. The invention therefore has the following advantages: 1. by extracting the subject features of the preview image and analysing the mapping relation between the subject and the background, the matching precision is higher; 2. subject feature similarity and mapping relation similarity are matched on the basis of case-based reasoning, so that the best-matching source case can be provided; 3. based on the user's current shooting position, the system offers the optimal shooting angle, so the user does not need to adjust the shooting angle and position over a large range; 4. the system displays the source case subject contour line on the same screen, the user completes the shooting-angle adjustment by overlapping the subjects, and the overlap ratio (i.e. the similarity) is displayed at the same time, making the adjustment more convenient; 5. the user can add new source cases, enriching the source case resources so that subjects suit more shooting adjustment angles.
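Regarding advantage 4, the patent does not define how the displayed overlap ratio is computed; one plausible choice, shown here purely as an assumption, is the intersection-over-union of the target subject region and the displayed source case subject region.

```python
import numpy as np

def overlap_ratio(target_mask: np.ndarray, source_mask: np.ndarray) -> float:
    """Intersection-over-union of two boolean subject-region masks of equal shape."""
    inter = np.logical_and(target_mask, source_mask).sum()
    union = np.logical_or(target_mask, source_mask).sum()
    return float(inter) / float(union) if union else 0.0
```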
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments can be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An intelligent shooting system based on case-based reasoning, characterized in that: the system comprises an image acquisition module, a RAM module, an analysis processing module and a case library module; wherein,
the image acquisition module is used for acquiring a camera preview;
the RAM module is used for caching the preview image;
the analysis processing module is used for analysing the preview image background and determining the image subject, the background, the mapping relation between the subject and the background, and the related attributes of the subject; the analysis processing module specifically comprises a focus acquisition unit, a feature analysis unit, a mapping relation unit and a feature coding unit; the focus acquisition unit acquires the focus position and determines the image subject; the feature analysis unit acquires the subject features of the image; the mapping relation unit determines the mapping relation between the image background and the shooting subject; the feature coding unit is used for coding the image subject features and the mapping relation;
the case library module is used for storing the attribute information and composition information of the image subject and is also used for attribute similarity matching; the case library module specifically comprises a code acquisition unit, a similarity matching unit and a source case storage unit; the code acquisition unit is used for acquiring a feature code and a mapping relation code; the similarity matching unit is used for performing similarity matching calculation between the target case subject features and the source case subject features, then performing mapping relation similarity matching calculation, and finally outputting a shooting angle to intelligently guide the user in adjusting the shooting angle; the algorithm of the similarity matching unit is

cos(X, Y) = (x1*y1 + x2*y2 + … + xn*yn) / (sqrt(x1^2 + x2^2 + … + xn^2) * sqrt(y1^2 + y2^2 + … + yn^2))

wherein X is the target subject, Y is the source case subject, and both the target subject X and the source case subject Y comprise N-dimensional features, i.e. X = (x1, x2, x3, …, xn) and Y = (y1, y2, y3, …, yn); the smaller the angle between the two feature vectors, the higher the similarity; the source case storage unit is used for storing source cases, including source cases set by the system and source cases added by users;
the image acquisition module acquires a preview photograph and stores it in the RAM module; the focus acquisition unit then retrieves the preview photograph stored in the RAM module and determines the image subject; the feature analysis unit then retrieves the image subject information from the focus acquisition unit and determines the image subject features from the image subject; the mapping relation unit then retrieves the image subject feature information from the feature analysis unit and determines the mapping relation between the image background and the image subject; the feature coding unit then retrieves the image subject feature information from the feature analysis unit for coding on one hand, and retrieves the mapping relation information between the image background and the image subject from the mapping relation unit for coding on the other hand; the code acquisition unit then retrieves the target feature code and the target mapping relation code from the feature coding unit and transmits them to the similarity matching unit; the similarity matching unit then retrieves the source case subject feature codes and the source case mapping relation codes stored in the source case storage unit, performs subject feature similarity matching calculation between the target case subject feature code and the source case subject feature codes, performs mapping relation similarity matching calculation between the target case mapping relation code and the source case mapping relation codes, retrieves and outputs the most similar source case subject feature information and mapping relation information from the source case storage unit according to the subject feature similarity matching result and the mapping relation similarity matching result, integrates the source case subject feature information into the source case mapping relation information by substitution to obtain the most similar source case shooting angle, and retrieves and outputs the newly integrated most similar source case to guide the user in adjusting the shooting angle; finally, the source case storage unit retrieves the target feature code and the target mapping relation code from the code acquisition unit and stores them as the feature code and the mapping relation code of a newly added source case.
2. The intelligent shooting system based on case-based reasoning, as claimed in claim 1, wherein: the focus acquisition unit uses the algorithm g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the image pixels to be processed and g(i, j) is the processed pixel.
3. An intelligent shooting method based on case-based reasoning, characterized in that it comprises the following steps:
step 1, acquiring a camera preview, and capturing and caching the preview after the user performs a focusing operation;
step 2, analysing the preview image background, determining the image subject and the background, the mapping relation between the subject and the background, and the attribute features of the subject, and encoding them;
step 3, prompting the user whether a source case is needed: if the user needs the source case, entering step 4; if the user does not need the source case, clicking the cancel-case button displayed in the interface to automatically complete composition, and entering step 8;
step 4, searching the case library, matching it against the target case, and respectively matching and outputting the optimal source case subject feature information and the mapping relation information between the optimal source case subject and the background;
step 5, integrating the optimal source case subject feature information into the mapping relation information between the optimal source case subject and the background by substitution, forming a new optimal source case;
step 6, displaying the new optimal source case subject outline on the shooting interface;
step 7, the user manually adjusting the angle according to the source case subject outline so that the target subject outline overlaps the source case subject outline as much as possible;
step 8, triggering the shooting button to finish shooting, storing the image, and prompting the user whether to store the subject attribute features and the subject-background mapping relation as a source case.
4. The intelligent shooting method based on case-based reasoning, as claimed in claim 3, wherein: the step 2 comprises the following procedure:
a. determining the approximate position of the target image subject through the focusing operation, and determining the target subject outline by using an edge detection method;
b. extracting the subject feature attributes of the target image by using the SIFT algorithm;
c. determining the mapping relation between the target image background and the target shooting subject;
d. coding the target subject features and the target mapping relation so that the case similarity can be calculated.
5. The intelligent shooting method based on case-based reasoning, as claimed in claim 4, wherein: the algorithm formula of the edge detection method is g(i, j) = sqrt((f(i, j) - f(i+1, j))^2 + (f(i+1, j) - f(i, j+1))^2), where f(i-1, j-1), f(i-1, j), f(i-1, j+1), f(i, j-1), f(i, j+1), f(i+1, j-1), f(i+1, j) and f(i+1, j+1) are the pixels of the image to be processed and g(i, j) is the processed pixel.
6. The intelligent shooting method based on case-based reasoning, as claimed in any one of claims 4-5, wherein: the SIFT algorithm comprises the following steps: a. first constructing a scale space and detecting extreme points to obtain scale invariance; b. then filtering the extreme feature points and locating them accurately; c. then assigning an orientation value to each extreme feature point; d. finally generating the feature descriptors.
7. The intelligent shooting method based on case-based reasoning, as claimed in claim 3, wherein: the step 4 comprises the following procedure:
a. acquiring the feature code and the mapping relation code;
b. calculating the similarity between the target case subject feature code and the source case subject feature codes, and pre-storing a plurality of source cases with high similarity according to a set threshold;
c. performing mapping relation similarity calculation on the source cases with high feature similarity, and outputting the shooting angle corresponding to the highest mapping relation similarity.
CN201711435295.5A 2017-12-26 2017-12-26 Intelligent shooting system and method based on case reasoning Active CN108234868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711435295.5A CN108234868B (en) 2017-12-26 2017-12-26 Intelligent shooting system and method based on case reasoning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711435295.5A CN108234868B (en) 2017-12-26 2017-12-26 Intelligent shooting system and method based on case reasoning

Publications (2)

Publication Number Publication Date
CN108234868A CN108234868A (en) 2018-06-29
CN108234868B true CN108234868B (en) 2020-10-16

Family

ID=62648956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711435295.5A Active CN108234868B (en) 2017-12-26 2017-12-26 Intelligent shooting system and method based on case reasoning

Country Status (1)

Country Link
CN (1) CN108234868B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898416A (en) * 2020-06-17 2020-11-06 绍兴埃瓦科技有限公司 Video stream processing method and device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174316A (en) * 2006-11-02 2008-05-07 中国移动通信集团公司 Device and method for cases illation based on cases tree
CN105279751A (en) * 2014-07-17 2016-01-27 腾讯科技(深圳)有限公司 Picture processing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866352B (en) * 2010-05-28 2012-05-30 广东工业大学 Appearance design patent retrieval method based on image content analysis
JP5013282B2 (en) * 2010-08-31 2012-08-29 カシオ計算機株式会社 Imaging apparatus and program
CN103077529B (en) * 2013-02-27 2016-04-06 电子科技大学 Based on the plant leaf blade characteristic analysis system of image scanning
CN103632626B (en) * 2013-12-03 2016-06-29 四川省计算机研究院 A kind of intelligent guide implementation method based on mobile Internet, device and mobile client
CN107025437A (en) * 2017-03-16 2017-08-08 南京邮电大学 Intelligent photographing method and device based on intelligent composition and micro- Expression analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174316A (en) * 2006-11-02 2008-05-07 中国移动通信集团公司 Device and method for cases illation based on cases tree
CN105279751A (en) * 2014-07-17 2016-01-27 腾讯科技(深圳)有限公司 Picture processing method and device

Also Published As

Publication number Publication date
CN108234868A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
EP3650807B1 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photography measurement and three-dimensional scanning functions
US7327890B2 (en) Imaging method and system for determining an area of importance in an archival image
JP4772839B2 (en) Image identification method and imaging apparatus
JP6320075B2 (en) Image processing apparatus and control method thereof
CN103856617A (en) Photographing method and user terminal
JP6456031B2 (en) Image recognition apparatus, image recognition method, and program
TWI791405B (en) Method for depth estimation for variable focus camera, computer system and computer-readable storage medium
US12038966B2 (en) Method and apparatus for data retrieval in a lightfield database
CN113132717A (en) Data processing method, terminal and server
CN107787463A (en) The capture of optimization focusing storehouse
US8670609B2 (en) Systems and methods for evaluating images
TW201544995A (en) Object recognition method and object recognition apparatus using the same
CN112969023A (en) Image capturing method, apparatus, storage medium, and computer program product
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
CN108234868B (en) Intelligent shooting system and method based on case reasoning
CN111145153A (en) Image processing method, circuit, visual impairment assisting device, electronic device, and medium
US20210258495A1 (en) Subject tracking device, subject tracking method, and storage medium
CN112683798A (en) Identification and identification system based on hyperspectral imaging camera
TW201301874A (en) Method and device of document scanning and portable electronic device
JPH10254903A (en) Image retrieval method and device therefor
JP2016054409A (en) Image recognition device, image recognition method, and program
JP2015198340A (en) Image processing system and control method therefor, and program
JP2014116789A (en) Photographing device, control method therefor, and program
JP2018061292A (en) Image processing apparatus, image processing method, and program
KR20130036839A (en) Apparatus and method for image matching in augmented reality service system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant