
CN111145194B - Processing method, processing device and electronic equipment - Google Patents

Processing method, processing device and electronic equipment

Info

Publication number
CN111145194B
CN111145194B
Authority
CN
China
Prior art keywords
identified
image
color
partition
information
Prior art date
Legal status
Active
Application number
CN201911415251.5A
Other languages
Chinese (zh)
Other versions
CN111145194A (en)
Inventor
彭方振
陈锋
王娜
黄卡尔
严毅强
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911415251.5A
Publication of CN111145194A
Application granted
Publication of CN111145194B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/90 Determination of colour characteristics
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a processing method comprising: adjusting environmental parameters of a first environment in which an object to be identified is located according to a determined adjustment strategy based at least on color information of the object to be identified, wherein the adjustment strategy at least makes the color of the first environment differ from that of the object to be identified; and obtaining a first image of the object to be identified, so as to obtain identification information of the object based at least on identification processing of the first image. The disclosure also provides a processing device and an electronic device.

Description

Processing method, processing device and electronic equipment
Technical Field
The present disclosure relates to a processing method, a processing apparatus, and an electronic device.
Background
With the rapid development of artificial intelligence, automatic control, communication, and computer technology, unmanned stores are developing vigorously, and visual recognition of objects to be identified is finding ever wider application. Recognition technology based on a settlement table has been put into practical use. Its most critical step is to segment the image of the object to be identified and then recognize the segmented image.
In carrying out the disclosed concept, the inventors found that the prior art has at least the following problem: whether the image is segmented accurately directly affects the recognition accuracy of the object to be identified. Because objects to be identified come in a wide variety of colors, some objects are similar in color to the background of the settlement table, and recognition accuracy is consequently low.
Disclosure of Invention
One aspect of the present disclosure provides a processing method, including: firstly, adjusting environmental parameters of a first environment where an object to be identified is located according to a determined adjustment strategy at least based on color information of the object to be identified, wherein the adjustment strategy at least enables the color of the first environment where the object to be identified is located to be different from that of the object to be identified. Then, a first image of the object to be identified is obtained to obtain identification information of the object to be identified based at least on identification processing of the first image.
Optionally, adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified may include: determining the color type and distribution condition of the object to be identified based on the color information of the object to be identified; determining a first adjustment strategy matched with the color type and the distribution condition so as to compensate a first color to a region to be identified where the object to be identified is located based on the first adjustment strategy; the identification degree between the first color and the color type and distribution condition of the object to be identified accords with a first threshold value.
Optionally, adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified may include: acquiring gesture information of an object to be identified in an area to be identified where the object to be identified is located; determining a second adjustment strategy for adjusting the environmental parameter based on the gesture information and the color type and distribution condition of the object to be identified, so as to determine a compensation parameter of the area to be identified based on the second adjustment strategy; compensating a second color and brightness matched with the second color for an object to be identified in the area to be identified based on at least the compensation parameter; wherein the second color is different from the color type of the object to be identified.
Optionally, compensating the brightness matching the second color to the object to be identified in the area to be identified based at least on the compensation parameter may include: determining a brightness-adjustable light source that is the same as the second color based on the pose information; and controlling the brightness-adjustable light source to compensate the light matched with the second color to the object to be identified based on the compensation parameter.
Optionally, the obtaining the identification information of the object to be identified based on at least the identification processing of the first image may include: determining a partition granularity of the first image based at least on attribute information of an object to be identified; image partitioning is carried out on the first image according to the partition granularity so as to remove environmental noise, and at least one image partition of the object to be identified is obtained; extracting vector features of the at least one image partition at least based on the attribute information, and marking the object to be identified at least based on the vector features to obtain identification information of the object to be identified; wherein, the number of the edge pixel points of the image partition corresponding to different partition granularities is different.
Optionally, the performing image partitioning on the first image according to the partition granularity to remove environmental noise, and obtaining at least one image partition of the object to be identified may include:
Obtaining image gradients of the first image at different pixel points; determining pixel points with the image gradient larger than a first threshold value as edge pixel points of the first image; determining partition edge pixel points from the determined edge pixel points according to the partition granularity, so as to form at least one image partition based on the partition edge pixel points; and removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be identified, so as to obtain the at least one image partition of the object to be identified.
Optionally, the method further comprises: detecting that the object to be identified is in the area to be identified of the first environment, and obtaining a second image of the object to be identified, so as to determine color information of the object to be identified based on the second image.
Optionally, the method further comprises: comparing the first image of the object to be identified with the sample in the sample database to output prompt information for updating the sample database or identification information of the object to be identified.
Another aspect of the present disclosure provides a processing apparatus including an environmental parameter adjustment module and a first image acquisition module. The environment parameter adjustment module is used for adjusting the environment parameters of the first environment where the object to be identified is located according to a determined adjustment strategy at least based on the color information of the object to be identified, and the adjustment strategy at least enables the color of the first environment where the object to be identified is located to be different from that of the object to be identified; and the first image obtaining module is used for obtaining a first image of the object to be identified so as to obtain the identification information of the object to be identified at least based on the identification processing of the first image.
Optionally, the environmental parameter adjustment module includes: an object color determination sub-module, a first adjustment strategy determination sub-module. The object color determining submodule is used for determining the color type and distribution condition of the object to be identified based on the color information of the object to be identified; the first adjustment strategy determination submodule is used for determining a first adjustment strategy matched with the color type and the distribution situation so as to compensate a first color for a region to be identified where the object to be identified is located based on the first adjustment strategy; the identification degree between the first color and the color type and distribution condition of the object to be identified accords with a first threshold value.
Optionally, the environmental parameter adjustment module includes: the system comprises a gesture obtaining sub-module, a first adjustment strategy determining sub-module and a compensation sub-module. The gesture obtaining sub-module is used for obtaining gesture information of the object to be identified in the area to be identified where the object to be identified is located; the first adjustment strategy determining submodule is used for determining a second adjustment strategy for adjusting the environment parameters based on the gesture information and the color type and the distribution condition of the object to be identified so as to determine the compensation parameters of the area to be identified based on the second adjustment strategy; the compensation sub-module is used for compensating a second color and brightness matched with the second color for the object to be identified in the area to be identified based on at least the compensation parameter; wherein the second color is different from the color type of the object to be identified.
Optionally, the compensation submodule includes a light source determining unit and a compensation unit. The light source determining unit is used for determining a light source with adjustable brightness, which is the same as the second color, based on the gesture information; the compensation unit is used for controlling the brightness-adjustable light source to compensate the light matched with the second color to the object to be identified based on the compensation parameters.
Optionally, the device further comprises an image recognition module, including a granularity determination sub-module, a partition sub-module and an identification information obtaining sub-module. The granularity determining submodule is used for determining the partition granularity of the first image at least based on the attribute information of the object to be identified; the partitioning sub-module is used for performing image partitioning on the first image according to the partitioning granularity so as to remove environmental noise and obtain at least one image partition of the object to be identified; the identification information obtaining sub-module is used for extracting vector features of the at least one image partition at least based on the attribute information so as to mark the object to be identified at least based on the vector features and obtain identification information of the object to be identified; wherein, the number of the edge pixel points of the image partition corresponding to different partition granularities is different.
Optionally, the partitioning submodule includes: the device comprises an image gradient obtaining unit, an edge pixel point determining unit, a partitioning unit and a screening unit. The image gradient obtaining unit is used for obtaining the image gradients of the first image at different pixel points; the edge pixel point determining unit is used for determining the pixel point with the image gradient larger than a first threshold value as the edge pixel point of the first image; the partition unit is used for determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points; the screening unit is used for removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be identified, and obtaining the at least one image partition of the object to be identified.
Optionally, the device further includes a second image obtaining module, where the second image obtaining module is configured to detect that an object to be identified is located in an area to be identified of the first environment, and obtain a second image of the object to be identified, so as to determine color information of the object to be identified based on the second image.
Optionally, the device further comprises a prompt module, and the prompt module is used for comparing the first image of the object to be identified with the samples in the sample database so as to output prompt information for updating the sample database or identification information of the object to be identified.
Another aspect of the present disclosure provides an electronic device, comprising: an image acquisition component for acquiring images; a light source assembly for providing light sources of a plurality of colors; one or more processors; and a computer-readable storage medium storing one or more computer programs that, when executed by the one or more processors, implement the method described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which when executed are for implementing a method as described above.
According to the processing method provided by the embodiments of the present disclosure, an image of the object to be identified is pre-collected, the colors in that image are identified, and a color with high contrast against those colors is then calculated, so that the environmental parameters are automatically adjusted and the object image is clearly separated from the environment image. This effectively improves the segmentation precision of the object image and thus the recognition precision of the object to be identified.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a processing method, a processing apparatus, and an electronic device according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a system architecture suitable for use in a processing method, processing apparatus, and electronic device in accordance with embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a processing method according to an embodiment of the disclosure;
fig. 4 schematically shows a schematic view of a prior art object to be detected in a first scenario;
fig. 5 schematically shows a schematic view of a prior art object to be detected in a second scenario;
FIG. 6 schematically illustrates a schematic view of an object to be detected in a first scenario according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic view of an object to be detected in a second scenario according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a schematic diagram of an edge pixel point according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a schematic diagram of an edge pixel point according to another embodiment of the present disclosure;
FIG. 10 schematically illustrates a schematic view of a light source of an electronic device according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a schematic view of a light source of an electronic device according to another embodiment of the disclosure;
FIG. 12 schematically illustrates a flow chart of a processing method according to another embodiment of the present disclosure;
FIG. 13 schematically illustrates a block diagram of a processing device according to an embodiment of the disclosure; and
Fig. 14 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a convention should generally be interpreted as one of ordinary skill in the art would understand it (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where a formulation analogous to "at least one of A, B or C, etc." is used, it should likewise be interpreted according to the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
Fig. 1 schematically illustrates an application scenario of a processing method, a processing apparatus, and an electronic device according to an embodiment of the present disclosure. It should be noted that fig. 1 is merely an example of a scenario in which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a scenario in which settlement is performed using a settlement table is described as an example. The settlement table comprises a table body 1, a settlement area 2, a display device 3, and an image acquisition device 4. Of course, the settlement table may also include, for example, a goods-inspection platform, weighing equipment, payment equipment, a two-dimensional code, packing equipment, a code-scanning device, and the like. The settlement table can acquire images of the commodities in the settlement area 2 using the image acquisition device 4 and then identify the commodities to determine their price and other information, which facilitates settlement by users. Compared with settlement by scanning the commodity with a scanning gun, the user or cashier does not need to scan a designated area of the commodity; information such as the price can be identified simply by placing the commodity in the settlement area 2, improving convenience for the user or cashier.
Commodity identification techniques based on the settlement-table approach have been put into practical use. The most critical step of this technique is to segment the commodity image and then recognize it; segmentation accuracy determines commodity recognition accuracy. In actual use, because commodity colors vary widely, some commodity images are similar in color to the background of the settlement table, so image segmentation precision is low and commodity recognition precision drops.
The embodiments of the present disclosure provide a processing method, a processing device, and an electronic device. The method includes an environment adjustment process and an image processing process. In the environment adjustment process, environmental parameters of the first environment in which the object to be identified is located are adjusted according to a determined adjustment strategy based at least on color information of the object to be identified, where the adjustment strategy at least makes the color of the first environment differ from that of the object to be identified. After the environment adjustment process is completed, an image processing process is performed to obtain a first image of the object to be identified, so as to obtain identification information of the object based at least on identification processing of the first image. An image of the object is pre-collected, its colors are identified, and a color with high contrast against those colors is calculated, so that the environmental parameters are automatically adjusted, the object image is clearly separated from the environment image, image segmentation precision is improved, and commodity recognition precision improves in turn.
Fig. 2 schematically illustrates a system architecture suitable for use in a processing method, a processing apparatus, and an electronic device according to an embodiment of the disclosure.
It should be noted that fig. 2 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 2, the system architecture 200 according to this embodiment may include terminal devices 201, 202, 203, a network 204, and a server 205. The network 204 may include a number of gateways, routers, hubs, network cables, etc. to provide a medium for communication links between the terminal devices 201, 202, 203 and the server 205. The network 204 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user can interact with other terminal devices and the server 205 through the network 204 using the terminal devices 201, 202 to receive or transmit information or the like, such as transmitting a payment request, an image processing request, receiving a processing result, and the like. The terminal devices 201, 202, 203 may be installed with various applications having communication clients, such as banking applications, shopping applications, web browser applications, search applications, office applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 201, 202 may be electronic devices capable of online payment, including but not limited to smart phones, virtual reality devices, augmented reality devices, tablet computers, laptop computers, and the like.
The terminal device 203 may be an electronic device having a camera and a specific color light supplementing function, which can identify an object to be identified by light supplementing, photographing, image processing, etc., including but not limited to a settlement table, an object identifying apparatus, etc.
The server 205 may receive the request and process the request. For example, the server 205 may be a background management server, a server cluster, or the like. The background management server can analyze and process the received payment request, image processing request, compensation color request and the like, and feed back the processing result to the terminal equipment.
It should be noted that, the processing method provided in the embodiments of the present disclosure may be generally executed by the terminal device 203 and the server 205. Accordingly, the processing apparatus provided in the embodiments of the present disclosure may be generally disposed in the terminal device 203 and the server 205. The processing method provided by the embodiments of the present disclosure may also be performed by a terminal device, a server or a server cluster in communication with the terminal device 201, 202, 203 and/or the server 205.
It should be understood that the number of terminal devices, networks and servers is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 3 schematically illustrates a flow chart of a processing method according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S303.
In operation S301, environmental parameters of a first environment in which an object to be identified is located are adjusted according to a determined adjustment policy based at least on color information of the object to be identified, where the adjustment policy at least makes a color of the first environment in which the object to be identified is located different from that of the object to be identified.
In this embodiment, the color information of the object to be identified may be determined by image processing, input by a user, or received from another electronic device. Environmental parameters include, but are not limited to, any one or more of: color, light-supplementing brightness, compensation color, and light-supplementing duration. The adjustment strategy includes, but is not limited to, at least one of: a light-supplementing strategy, a background-switching strategy, and the like. The light-supplementing strategy may supplement light to the object to be identified and/or the ambient light with a light-supplementing lamp or the like. The background-switching strategy switches the background of the environment in which the object is located; for example, the object is placed on a device whose displayed image can be adjusted (such as a display panel or a color-adjustable backlight), and the environmental parameters are adjusted by adjusting the displayed image. In this way, the color of the first environment differs from that of the object to be identified, and the image of the object can be accurately determined from the captured images, as sketched below.
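For concreteness, the following is a minimal Python sketch of how such an adjustment strategy might be represented and dispatched. All class, function, and hardware-interface names here are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of representing and dispatching an adjustment strategy.
# All names (AdjustmentStrategy, hardware.*) are illustrative assumptions,
# not taken from the disclosure.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AdjustmentStrategy:
    kind: str                    # "supplement_light" or "switch_background"
    color: Tuple[int, int, int]  # target environment color (RGB)
    brightness: float            # 0.0-1.0, used by the light-supplementing path
    duration_s: float            # how long the adjustment is held

def apply_strategy(strategy: AdjustmentStrategy, hardware) -> None:
    """Drive the (hypothetical) hardware so that the color of the first
    environment differs from the colors of the object to be identified."""
    if strategy.kind == "supplement_light":
        # e.g. a light-supplementing lamp above or below the settlement area
        hardware.set_fill_light(strategy.color, strategy.brightness)
    elif strategy.kind == "switch_background":
        # e.g. a display panel or color-adjustable backlight under the object
        hardware.set_background_color(strategy.color)
    hardware.hold(strategy.duration_s)
```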
In one embodiment, the adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified may include the following operations.
First, the color type and distribution of the object to be identified are determined based on the color information of the object to be identified. For example, an image including the object to be identified and the environment in which the object to be identified is located may be captured first, and image processing, such as color identification, may be performed on the image to determine the color type and distribution of the object to be identified. For example, the image includes four colors of red, yellow, blue and white, wherein the general area where the object to be identified is located includes four colors of red, yellow, blue and white, the edge area of the object to be identified mainly includes blue and white, and the red and yellow are located in the internal area of the object to be identified.
And then, determining a first adjustment strategy matched with the color type and the distribution condition so as to compensate a first color to the area to be identified where the object to be identified is located based on the first adjustment strategy. For example, since the edge area of the object to be recognized mainly includes blue and white, and the background color of the first environment in which the object to be recognized is located is white, it is necessary to adjust the background color of the first environment to a color having a large difference from the color difference of white and blue, such as red, yellow, and the like. In addition, the first adjustment strategy may further include a background color adjustment mode: such as by adjusting the background color by means of light filling or by adjusting the displayed image. In addition, the first adjustment policy may further include time length information for adjusting the background color, and the like. Specifically, the first adjustment policy may be determined by looking up a database or the like. For example, the database stores the mapping relation between the color type and the distribution condition and the first adjustment policy. Each of the first adjustment policies may be determined by means of simulation, calibration, or the like.
The identification degree between the first color and the color type and distribution of the object to be identified meets a first threshold. For example, when the identification degree between a specific color and the color type and distribution of the object exceeds 90%, that color may serve as the first color; here, the distribution condition means, for instance, that 90% of the distribution area differs from the first color.
The mapping relationship between color type and distribution and the first adjustment policy may be established as follows. For a specified object (such as a test sample containing multiple colors distributed across multiple areas), the environmental parameters are adjusted under each candidate first adjustment strategy, the object image is segmented from the photograph taken under each strategy, and the image segmentation accuracy of the object to be identified is obtained. The mapping relationship between color type and distribution and the first adjustment strategy is then calibrated based on that accuracy. Image segmentation accuracy refers to the ratio of the area (or the number of pixels) of the automatically segmented object image to the area (or the number of pixels) of an accurate image (such as one determined by calibration). The mapping may be calibrated in the following manner: determine a first adjustment policy based on the color type and distribution, adjust the environmental parameters of the first environment accordingly, obtain the image of the object to be identified and segment it, and determine the mapping based on whether the segmentation accuracy exceeds a preset accuracy threshold (e.g., establish the association when it does).
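As an illustration of the color-type-and-distribution analysis described above, the following sketch estimates the dominant colors of a pre-collected image with OpenCV's k-means. The clustering approach and its parameter values are assumptions, since the disclosure does not prescribe a particular algorithm.

```python
# Sketch: estimate color types and their distribution via k-means.
# The algorithm choice (k-means) and the value of k are assumptions.
import cv2
import numpy as np

def dominant_colors(image_bgr: np.ndarray, k: int = 4):
    """Return (color types, distribution) for a pre-collected image:
    k cluster centers (BGR) and the fraction of pixels in each."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return centers.astype(np.uint8), counts / counts.sum()
```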
In another embodiment, the adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified may include the following operations.
Firstly, attitude information of an object to be identified in an area to be identified where the object to be identified is located is obtained. Wherein the gesture information may include: standing and lying posture information, folding posture information and the like of an object to be identified.
And then, determining a second adjustment strategy for adjusting the environmental parameters based on the gesture information and the color type and distribution condition of the object to be identified, so as to determine the compensation parameters of the area to be identified based on the second adjustment strategy.
Then, a second color, and a brightness matched with the second color, are compensated to the object to be identified in the area to be identified based at least on the compensation parameters, where the second color differs from the color type of the object. In this embodiment, the compensation parameters may include compensation brightness in addition to compensation color. An image sensor is not equally sensitive to all colors: for colors with high sensitivity, the brightness of the compensation light can be reduced appropriately, saving electric energy and prolonging the service life of the light-supplementing lamp; for colors with low sensitivity, the brightness can be raised appropriately to improve the recognition effect.
Taking adjustment of an environmental parameter of the first environment by means of compensation light as an example, compensating brightness matched with the second color to the object to be identified in the area to be identified based at least on the compensation parameters may include the following operations.
First, a luminance-adjustable light source that is the same as the second color is determined based on the posture information. The brightness-adjustable light sources can be respectively distributed in different directions of the object to be identified. A light source of adjustable brightness may emit light of one or more colors. For example, a luminance-tunable light source having a blue light source (e.g., blue LED) and a yellow light source (e.g., yellow LED) integrated therein may be used to adjust the luminance of the blue light and/or the luminance of the yellow light by adjusting the current applied to the blue light source and/or the yellow light source. In addition, when the brightness of blue light and the brightness of yellow light are adjusted, adjustment of colors (such as blue light, green light, yellow light, white light, or the like by adjusting the ratio of blue light and yellow light) can also be achieved. For another example, a plurality of monochromatic light sources with adjustable brightness may be provided respectively.
Then, the brightness-adjustable light source is controlled to compensate the light of the brightness matching the second color to the object to be identified based on the compensation parameter.
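A hedged sketch of this brightness-compensation logic follows; the PWM-style drive interface and the relative sensitivity figures are assumptions introduced for illustration only.

```python
# Sketch of brightness compensation; the PWM interface and the relative
# sensitivity figures are assumptions introduced for illustration.
SENSOR_SENSITIVITY = {"blue": 0.9, "yellow": 0.7, "red": 0.8}  # assumed values

def brightness_for(color: str, base: float = 0.5) -> float:
    """Raise brightness for colors the sensor is less sensitive to;
    lower it for high-sensitivity colors (saves power, extends lamp life)."""
    return min(1.0, base / SENSOR_SENSITIVITY.get(color, 0.8))

def set_compensation_light(pwm_blue, pwm_yellow,
                           blue_level: float, yellow_level: float) -> None:
    """Set the duty cycle (0.0-1.0) of the blue and yellow emitters.
    Adjusting the blue/yellow ratio shifts the mixed color; scaling both
    together changes the overall brightness of the compensation light."""
    pwm_blue.set_duty_cycle(max(0.0, min(1.0, blue_level)))
    pwm_yellow.set_duty_cycle(max(0.0, min(1.0, yellow_level)))
```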
In operation S303, a first image of the object to be identified is obtained to obtain identification information of the object to be identified based on at least identification processing of the first image.
The identification processing of the first image may include the following operations: extract the commodity image from the captured first image, extract features from the commodity image to obtain image features, and then obtain an image recognition result based on those features. In addition, image preprocessing may be performed on the first image and/or the commodity image.
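The following is a minimal sketch of such an identification pipeline, assuming OpenCV; the foreground mask, the histogram descriptor, and the matcher back end are placeholder assumptions rather than the disclosure's prescribed implementation.

```python
# Sketch of the identification processing of the first image; the mask,
# descriptor, and matcher are placeholder assumptions.
import cv2
import numpy as np

def identify(first_image: np.ndarray, foreground_mask: np.ndarray, matcher):
    """Crop the commodity region, extract an image feature, query a matcher.
    `foreground_mask` is a binary uint8 mask of the commodity; `matcher`
    stands in for whatever recognition back end is used."""
    x, y, w, h = cv2.boundingRect(foreground_mask)   # commodity bounding box
    crop = first_image[y:y + h, x:x + w]
    crop = cv2.GaussianBlur(crop, (3, 3), 0)         # optional preprocessing
    # Placeholder descriptor: a normalized 3-D color histogram.
    feat = cv2.calcHist([crop], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    feat /= feat.sum() + 1e-9
    return matcher.query(feat)  # -> identification information
```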
To facilitate understanding of the technical solutions of the present disclosure, the prior art is first described with reference to figs. 4 and 5, and the disclosed approach is then described with reference to figs. 6 and 7 for the same scenarios.
Fig. 4 schematically shows a schematic view of a prior art object to be detected in a first scenario.
As shown in fig. 4, the settlement-table embodiment of fig. 1 is taken as an example. As shown in the upper diagram of fig. 4, the commodity placed on the settlement area 2 includes color areas 51 and 52, where color area 51 is the same as or similar to the color of the settlement area 2, such that the two are not easily distinguished in the photographed image. In the prior art, when the commodity image is identified, the color of area 51 is not easily distinguished from that of the settlement area 2, so the commodity image obtained is likely to be only the area 52 shown in the lower diagram of fig. 4, with image area 51 misjudged as part of the settlement area 2. The result of image recognition may therefore be inaccurate.
Fig. 5 schematically shows a schematic diagram of a prior art object to be detected in a second scenario.
As shown in fig. 5, the settlement-table embodiment of fig. 1 is again taken as an example. As shown in the upper diagram of fig. 5, the commodity placed on the settlement area 2 includes color areas 51 and 52, where color area 52 is the same as or similar to the color of the settlement area 2, such that the two are not easily distinguished in the photographed image. In the prior art, when the commodity image is identified, the color of area 52 is not easily distinguished from that of the settlement area 2, so the commodity image obtained is likely to be only the area 51 shown in the lower diagram of fig. 5, with image area 52 misjudged as part of the settlement area 2. The result of image recognition may therefore be inaccurate.
Fig. 6 schematically shows a schematic view of an object to be detected in a first scenario according to an embodiment of the present disclosure.
As shown in fig. 6, the upper diagram of fig. 6 is the scene shown in fig. 4. When the two colors are detected in the image, a compensation color complementary to both can be found and used to supplement light to the environment, giving the result shown in the lower diagram of fig. 6. The difference between the color of the settlement area 2 and the colors of areas 51 and 52 on the commodity is now large, which effectively improves the recognition accuracy of the commodity image and, in turn, the accuracy of the commodity information recognized from it.
Fig. 7 schematically illustrates a schematic view of an object to be detected in a second scenario according to an embodiment of the present disclosure.
As shown in fig. 7, the upper diagram of fig. 7 is the scene shown in fig. 5. When the two colors are detected in the image, a compensation color complementary to both can be found and used to supplement light to the environment, giving the result shown in the lower diagram of fig. 7. The difference between the color of the settlement area 2 and the colors of areas 51 and 52 on the commodity is now large, which effectively improves the recognition accuracy of the commodity image and, in turn, the accuracy of the commodity information recognized from it.
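One simple way to illustrate "a compensation color complementary to the two colors" is to invert the mean of the detected colors in RGB space; this heuristic is an assumption, as the disclosure leaves the computation to a specific algorithm.

```python
# Sketch: pick a compensation color "complementary" to the detected colors
# by inverting their mean in RGB space (an assumed heuristic).
import numpy as np

def complementary_fill_color(detected_colors: np.ndarray) -> np.ndarray:
    """detected_colors: (n, 3) RGB array of commodity/background colors."""
    mean = detected_colors.astype(np.float32).mean(axis=0)
    return (255.0 - mean).astype(np.uint8)

# Two pale colors (like areas 51/52) yield a dark, saturated complement.
print(complementary_fill_color(np.array([[230, 230, 210], [240, 200, 200]])))
```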
In another embodiment, the obtaining the identification information of the object to be identified based at least on the identification processing of the first image may include the following operations.
First, a partition granularity of the first image is determined based at least on attribute information of the object to be identified. Attribute information of the object includes, but is not limited to, at least one of: commodity name, quantity, price, total price, manufacturer, model, and the like. The attribute information may be a default attribute or may be configured according to attributes input by the user for the preprocessing requirements of the object.
And then, carrying out image partition on the first image according to the partition granularity so as to remove environmental noise and obtain at least one image partition of the object to be identified. For example, the merchandise image of one or more merchandise is determined from the first image (e.g., allowing multiple merchandise to be identified simultaneously), and the image other than the merchandise image in the first image is removed as ambient noise, i.e., the background image may be removed.
Then, vector features of the at least one image partition are extracted based at least on the attribute information, so as to mark the object to be identified based at least on the vector features and obtain its identification information. The vector-feature extraction for an image partition may be the same as in the prior art; for example, the vector features may include vectors for at least one of: edges, corners, regions, ridges, colors, textures, shapes, spatial relationships, and the like.
Wherein, the number of the edge pixel points of the image partition corresponding to different partition granularities is different.
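As a sketch of extracting a vector feature for one image partition, the following combines a color histogram (color cue) with Hu moments of the partition mask (shape cue); the particular feature set is an assumption.

```python
# Sketch: vector feature for one image partition, combining a color
# histogram (color cue) with Hu moments of the mask (shape cue).
# The particular feature set is an assumption.
import cv2
import numpy as np

def partition_feature(partition_bgr: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """`mask` is a binary uint8 mask selecting the partition's pixels."""
    hist = cv2.calcHist([partition_bgr], [0, 1, 2], mask, [4, 4, 4],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    return np.concatenate([hist, hu])
```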
Specifically, the image partitioning the first image according to the partition granularity to remove environmental noise, and obtaining at least one image partition of the object to be identified may include the following operations.
First, image gradients of the first image at different pixel points are obtained. The image gradient may be computed, for example, from gray values or pixel values. In this embodiment, image partitioning can be realized by edge detection on a grayscale image or on the original color image, thereby obtaining the image of the object to be identified. Edge detection aims to find points in the image where parameters such as brightness change significantly; significant changes in image attributes usually reflect important events and property changes, such as discontinuities in depth, discontinuities in surface direction, changes in material properties, or changes in scene lighting. Edge detection greatly reduces the data volume, eliminates irrelevant information, and preserves the important structural attributes of the image.
Then, pixel points whose image gradient is greater than a first threshold are determined as edge pixel points of the first image. Specifically, edge detection may be performed with search-based or zero-crossing-based methods. A search-based method detects the boundary by searching for maxima and minima of the first derivative of the image, typically locating the boundary along the direction of maximum gradient. A zero-crossing-based method finds boundaries by locating zero crossings of the second derivative of the image, typically zero crossings of the Laplacian or of a nonlinear differential expression. The first threshold may be determined by calibration or the like.
And determining partition edge pixel points from the determined edge pixel points according to the partition granularity, so as to form at least one image partition based on the partition edge pixel points.
Fig. 8 schematically illustrates a schematic diagram of an edge pixel point according to an embodiment of the present disclosure. Fig. 9 schematically illustrates a schematic diagram of an edge pixel point according to another embodiment of the present disclosure.
As shown in figs. 8 and 9, the same pattern is shown at two different partition granularities. When the partition granularity differs, the number of edge pixel points determined also differs; in general, the smaller the partition granularity, the more edge pixel points there are, and the closer the resulting edge is to the real edge in the image.
And then, removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be identified, and obtaining the at least one image partition of the object to be identified.
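The gradient/threshold/partition sequence above can be sketched as follows with Sobel gradients; the gradient operator and the threshold value are assumptions.

```python
# Sketch of the gradient/threshold/partition sequence; the Sobel operator
# and the threshold value are assumptions.
import cv2
import numpy as np

def partition_by_edges(image_bgr: np.ndarray, grad_threshold: float = 60.0):
    """Edge pixel points = pixels whose gradient magnitude exceeds the
    first threshold; the regions they bound become candidate partitions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    edges = (magnitude > grad_threshold).astype(np.uint8)
    # Non-edge pixels grouped into connected components = image partitions;
    # partitions matching the first environment's color are removed later.
    num_partitions, labels = cv2.connectedComponents(1 - edges)
    return edges, labels
```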
The luminance tunable light source is exemplarily described below with reference to fig. 10 and 11.
Fig. 10 schematically illustrates a schematic view of a light source of an electronic device according to an embodiment of the disclosure.
As shown in fig. 10, a plurality of brightness adjustable light sources 6 may be respectively disposed in one or more directions above the table body 1 and/or the settlement area 2. The plurality of luminance adjustable light sources 6 may be monochromatic light sources or color adjustable light sources. The plurality of luminance-adjustable light sources 6 may be divided into a plurality of groups, and each group may include a plurality of light sources of the same or different colors, respectively, and disposed in a plurality of directions.
Fig. 11 schematically illustrates a schematic view of a light source of an electronic device according to another embodiment of the disclosure.
As shown in fig. 11, the settlement area 2 may be a transparent or translucent area. A plurality of brightness adjustable light sources 6 may be respectively disposed under the settlement area 2. The plurality of luminance adjustable light sources 6 may be monochromatic light sources or color adjustable light sources. The plurality of luminance-adjustable light sources 6 may be divided into a plurality of groups, and each group may include a plurality of light sources of the same or different colors, respectively, and disposed in a plurality of directions.
Such an environmental-parameter adjusting arrangement realizes the adjustment function simply, conveniently, and quickly, has low cost, and is easy to popularize.
Fig. 12 schematically illustrates a flow chart of a processing method according to another embodiment of the present disclosure.
As shown in fig. 12, the method may further include operation S1201.
In operation S1201, it is detected that the object to be identified is in the area to be identified of the first environment, and a second image of the object to be identified is obtained, so as to determine color information of the object to be identified based on the second image.
For example, the area to be identified may be a settlement area 2, and when it is detected that an article is placed on the settlement area 2, photographing may be performed to obtain a second image of the object to be identified, and color information of the object to be identified is determined. Wherein it can be determined whether there is an item on the settlement area 2 by an image sensor, an electronic scale, or the like. Further, it is also possible to determine whether or not there is an article on the settlement area 2 by a signal inputted manually, such as clicking a setting button or the like.
It should be noted that, for a camera set at a fixed position, the view range of the camera may be used as the area to be identified. Further, when determining the color information of the object to be recognized, all colors included in the area to be recognized may be taken as the color information of the object to be recognized. The method can simply and quickly determine the color information of the object to be identified.
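A minimal sketch of this trigger logic follows, assuming an electronic scale under the settlement area and reusing the dominant_colors helper sketched earlier; the scale and camera interfaces are hypothetical.

```python
# Sketch of the detection trigger; the scale and camera interfaces are
# hypothetical, and dominant_colors is the helper sketched earlier.
def on_settlement_area_event(scale, camera, weight_threshold_g: float = 5.0):
    """When the electronic scale reports an item on the settlement area,
    shoot the second image and take all colors in the area to be identified
    as the object's color information (per the simplification above)."""
    if scale.read_grams() > weight_threshold_g:
        second_image = camera.capture()
        # Fixed camera: the whole view serves as the area to be identified.
        return dominant_colors(second_image)
    return None
```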
In another embodiment, the method may further include operation S1203.
In operation S1203, the first image of the object to be identified is compared with the samples in the sample database to output prompt information for updating the sample database or identification information of the object to be identified.
In this embodiment, a sample most similar to an image may be determined from a sample database by an image similarity comparison method, and further, identification information of an object to be identified may be determined based on identification information of the sample. Wherein the sample database may comprise a mapping between samples and identification information.
Taking the checkout stand as an example again, the posture of articles placed on it can vary: placement position, placement angle, degree of bending, degree of folding, and so on. One commodity may therefore correspond to multiple images, and when the sample database is constructed, an association can be established for each sample between photographs of a limited number of poses and the corresponding identification information. Adding photographs of samples in further postures improves the recognition accuracy of the object to be detected. The sample database can therefore be updated based on the first image of the object to be identified; the update may be performed automatically, or the first image may first be sent to an auditor for review. As the sample data accumulates with use, the recognition accuracy of objects identified against it grows higher. A comparison sketch follows.
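The comparison against the sample database can be sketched as a nearest-neighbor query over precomputed feature vectors; cosine similarity and the acceptance threshold are assumed choices, and a below-threshold result doubles as the prompt to update the database.

```python
# Sketch: nearest-neighbor comparison against the sample database.
# Cosine similarity and the acceptance threshold are assumed choices.
import numpy as np

def best_match(query_feat: np.ndarray, sample_feats: np.ndarray,
               sample_ids: list, min_similarity: float = 0.8):
    """Return the identification information of the most similar sample,
    or None as a prompt that the sample database should be updated."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-9)
    s = sample_feats / (np.linalg.norm(sample_feats, axis=1,
                                       keepdims=True) + 1e-9)
    sims = s @ q
    i = int(np.argmax(sims))
    return sample_ids[i] if sims[i] >= min_similarity else None
```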
According to the processing method provided by the embodiments of the present disclosure, the environmental parameters (such as the background color and brightness of the settlement table) are adjusted by hardware dimming: for example, the settlement table is built as a transparent glass table with colored lamps inside, and the environmental parameters are adjusted by controlling the light color and illumination intensity. Specifically, an image including the commodity is pre-collected, the colors in the image are identified, a suitable background color is then calculated by a specific algorithm, and environmental parameters such as background-light color and illumination intensity are adjusted automatically, so that the commodity image is clearly separated from the settlement-table background, improving image segmentation precision and hence commodity recognition precision.
Fig. 13 schematically illustrates a block diagram of a processing device according to an embodiment of the disclosure.
As shown in fig. 13, the processing device 1300 may include an environmental parameter adjustment module 1310 and a first image obtaining module 1330.
The environmental parameter adjustment module 1310 is configured to adjust environmental parameters of a first environment where the object to be identified is located according to a determined adjustment policy based at least on color information of the object to be identified, where the adjustment policy at least makes a color of the first environment where the object to be identified is located different from that of the object to be identified.
The first image obtaining module 1330 is configured to obtain a first image of the object to be identified, so as to obtain identification information of the object to be identified based on at least identification processing of the first image.
In one embodiment, the environmental parameter adjustment module 1310 may include an object color determination sub-module and a first adjustment strategy determination sub-module.
The object color determining submodule is used for determining the color type and distribution condition of the object to be identified based on the color information of the object to be identified.
The first adjustment strategy determination sub-module is used for determining a first adjustment strategy matched with the color type and the distribution condition.
The first adjustment strategy compensates a first color for the region to be identified where the object to be identified is located, and the identification degree between the first color and the color type and distribution condition of the object to be identified meets a first threshold.
In another embodiment, the environmental parameter adjustment module may include a posture obtaining sub-module, a second adjustment strategy determination sub-module, and a compensation sub-module.
The posture obtaining sub-module is used for obtaining posture information of the object to be identified in the area to be identified where the object to be identified is located.
The second adjustment strategy determination sub-module is used for determining a second adjustment strategy for adjusting the environmental parameters based on the posture information and the color type and distribution condition of the object to be identified, so as to determine the compensation parameters of the area to be identified based on the second adjustment strategy.
The compensation sub-module is used for compensating a second color and brightness matched with the second color for the object to be identified in the area to be identified based at least on the compensation parameter, wherein the second color is different from the color type of the object to be identified.
In particular, the compensation sub-module may comprise a light source determining unit and a compensation unit.
The light source determining unit is configured to determine, based on the posture information, a brightness-adjustable light source of the same color as the second color.
The compensation unit is used for controlling the brightness-adjustable light source to compensate the light matched with the second color to the object to be identified based on the compensation parameters.
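An illustrative sketch of these two units follows; the LightSource class and its set_brightness interface are hypothetical stand-ins, since this disclosure does not name a concrete lamp-driver API:

```python
# Minimal sketch: pick the brightness-adjustable source matching the second
# color and drive it with the compensation parameter (0.0-1.0).
from dataclasses import dataclass

@dataclass
class LightSource:
    color: tuple   # (R, G, B) of the lamp; hypothetical representation
    channel: int   # hypothetical hardware channel

    def set_brightness(self, level: float) -> None:
        # Placeholder: a real driver would write `level` to the lamp here.
        print(f"channel {self.channel}: color {self.color}, level {level:.2f}")

def compensate(light_sources, second_color, compensation_level):
    for src in light_sources:                       # light source determining unit
        if src.color == second_color:
            src.set_brightness(compensation_level)  # compensation unit
            return src
    return None
```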
In another embodiment, the apparatus 1300 may further include an image recognition module, which may include a granularity determination sub-module, a partition sub-module, and an identification information obtaining sub-module.
The granularity determination submodule is used for determining the partition granularity of the first image at least based on the attribute information of the object to be identified.
The partitioning sub-module is used for performing image partitioning on the first image according to the partitioning granularity so as to remove environmental noise and obtain at least one image partition of the object to be identified.
The identification information obtaining sub-module is used for extracting vector features of the at least one image partition based at least on the attribute information, so as to mark the object to be identified based at least on the vector features and obtain the identification information of the object to be identified. The number of edge pixel points of the image partitions corresponding to different partition granularities is different.
For example, the partitioning sub-module may include an image gradient obtaining unit, an edge pixel point determining unit, a partitioning unit, and a screening unit.
The image gradient obtaining unit is used for obtaining the image gradients of the first image at different pixel points.
The edge pixel point determining unit is used for determining the pixel point with the image gradient larger than a first threshold value as the edge pixel point of the first image.
The partition unit is used for determining partition edge pixel points from the determined edge pixel points according to the partition granularity so as to form at least one image partition based on the partition edge pixel points.
The screening unit is used for removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be identified, and obtaining the at least one image partition of the object to be identified.
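The gradient, edge, partition, and screening steps above may be sketched as follows, assuming OpenCV Sobel gradients; the gradient threshold and the background-color tolerance are illustrative assumptions:

```python
# Minimal sketch: edge pixels are where the gradient magnitude exceeds a
# threshold; regions bounded by edges become image partitions, and partitions
# whose mean color matches the first environment are screened out.
import cv2
import numpy as np

def partition_object(first_image_bgr, background_bgr,
                     grad_threshold=80.0, bg_tolerance=40.0):
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = np.hypot(gx, gy)
    edges = (grad > grad_threshold).astype(np.uint8)
    non_edge = (1 - edges).astype(np.uint8)
    num, labels = cv2.connectedComponents(non_edge)
    partitions = []
    for i in range(1, num):
        mask = labels == i
        mean_color = first_image_bgr[mask].mean(axis=0)
        # Keep only partitions whose mean color differs from the background.
        if np.linalg.norm(mean_color - np.asarray(background_bgr)) > bg_tolerance:
            partitions.append(mask)
    return partitions
```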
In another embodiment, the apparatus 1300 may further include a second image obtaining module. The second image obtaining module is used for detecting that the object to be identified is located in the area to be identified of the first environment and obtaining a second image of the object to be identified, so as to determine the color information of the object to be identified based on the second image.
In another embodiment, the apparatus 1300 may further comprise a prompt module. The prompt module is used for comparing the first image of the object to be identified with the samples in the sample database, so as to output prompt information for updating the sample database or the identification information of the object to be identified.
The operations performed by the modules, sub-modules, and units included in the apparatus 1300 may be referred to in the description above and are not repeated here.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system in package, or an application specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, the environmental parameter adjustment module 1310 and the first image obtaining module 1330 may be combined and implemented in one module, or either of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the environmental parameter adjustment module 1310 and the first image obtaining module 1330 may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system in package, or an application specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the environmental parameter adjustment module 1310 and the first image obtaining module 1330 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
Fig. 14 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 14 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 14, the electronic device 1400 includes: one or more processors 1410, a computer-readable storage medium 1420, an image acquisition component 1430, and a light source component 1440. The image acquisition component 1430 is used to acquire images. The light source assembly 1440 is used to provide light sources of multiple colors. The electronic device may perform a method according to embodiments of the present disclosure.
In particular, the processor 1410 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chipset, and/or a special purpose microprocessor (e.g., an application specific integrated circuit (ASIC)). The processor 1410 may also include on-board memory for caching. The processor 1410 may be a single processing unit or multiple processing units for performing different actions of the method flow according to embodiments of the present disclosure.
The computer-readable storage medium 1420 may be, for example, a non-volatile computer-readable storage medium. Specific examples include, but are not limited to: magnetic storage devices such as magnetic tape or hard disk drives (HDD); optical storage devices such as compact discs (CD-ROM); and memory such as random access memory (RAM) or flash memory.
The computer-readable storage medium 1420 may include a program 1421, which program 1421 may include code/computer-executable instructions that, when executed by the processor 1410, cause the processor 1410 to perform a method according to an embodiment of the present disclosure or any variation thereof.
The program 1421 may be configured with computer program code including, for example, computer program modules. For example, in an example embodiment, the code in the program 1421 may include one or more program modules, such as program modules 1421A, 1421B, and so on. It should be noted that the division and number of program modules are not fixed, and a person skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these program modules are executed by the processor 1410, they enable the processor 1410 to perform a method according to an embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, the processor 1410 may interact with a computer readable storage medium 1420 to perform a method according to an embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the environmental parameter adjustment module 1310 and the first image obtaining module 1330 may be implemented as a program module described with reference to fig. 14, which when executed by the processor 1410, may implement the respective operations described above.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should therefore not be limited to the above-described embodiments, but should be defined not only by the appended claims but also by their equivalents.

Claims (10)

1. A method of processing, comprising:
adjusting environmental parameters of a first environment in which an object to be identified is located according to a determined adjustment strategy at least based on color information of the object to be identified, wherein the adjustment strategy at least enables the color of the first environment in which the object to be identified is located to be different from that of the object to be identified, the adjustment strategy comprises at least one of a light supplementing related strategy or a background switching related strategy, and the background switching related strategy comprises switching of a background light source or a background image of the environment in which the object to be identified is located; and
obtaining a first image of the object to be identified under the adjusted environmental parameters, so as to obtain identification information of the object to be identified based on at least identification processing of the first image.
2. The method of claim 1, wherein the adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified comprises:
determining the color type and distribution condition of the object to be identified based on the color information of the object to be identified;
determining a first adjustment strategy matched with the color type and the distribution condition, so as to compensate a first color to the region to be identified where the object to be identified is located based on the first adjustment strategy;
wherein the identification degree between the first color and the color type and distribution condition of the object to be identified meets a first threshold.
3. The method of claim 1, wherein the adjusting the environmental parameter of the first environment in which the object to be identified is located according to the determined adjustment policy based at least on the color information of the object to be identified comprises:
acquiring posture information of the object to be identified in an area to be identified where the object to be identified is located;
determining a second adjustment strategy for adjusting the environmental parameter based on the posture information and the color type and distribution condition of the object to be identified, so as to determine a compensation parameter of the area to be identified based on the second adjustment strategy;
compensating a second color and brightness matched with the second color for the object to be identified in the area to be identified based at least on the compensation parameter;
wherein the second color is different from the color type of the object to be identified.
4. The method according to claim 3, wherein compensating the brightness matched with the second color for the object to be identified in the area to be identified based at least on the compensation parameter comprises:
determining a brightness-adjustable light source of the same color as the second color based on the posture information;
and controlling the brightness-adjustable light source to compensate the light matched with the second color to the object to be identified based on the compensation parameter.
5. The method according to claim 1, wherein obtaining the identification information of the object to be identified based at least on the identification processing of the first image comprises:
determining a partition granularity of the first image based at least on attribute information of an object to be identified;
performing image partitioning on the first image according to the partition granularity to remove environmental noise, so as to obtain at least one image partition of the object to be identified;
extracting vector features of the at least one image partition at least based on the attribute information, and marking the object to be identified at least based on the vector features to obtain identification information of the object to be identified;
wherein the number of edge pixel points of the image partitions corresponding to different partition granularities is different.
6. The method of claim 5, wherein performing image partitioning on the first image according to the partition granularity to remove environmental noise to obtain at least one image partition of the object to be identified comprises:
obtaining image gradients of the first image at different pixel points;
determining pixel points with the image gradient larger than a first threshold value as edge pixel points of the first image;
determining partition edge pixel points from the determined edge pixel points according to the partition granularity, so as to form at least one image partition based on the partition edge pixel points; and
removing the image partition corresponding to the first environment from the at least one image partition based on the color information of the object to be identified, so as to obtain the at least one image partition of the object to be identified.
7. The method of claim 1, further comprising:
detecting that the object to be identified is in the area to be identified of the first environment, and obtaining a second image of the object to be identified, so as to determine the color information of the object to be identified based on the second image.
8. The method of any of claims 1 to 7, further comprising:
comparing the first image of the object to be identified with samples in a sample database, so as to output prompt information for updating the sample database or identification information of the object to be identified.
9. A processing apparatus, comprising:
an environmental parameter adjustment module, configured to adjust environmental parameters of a first environment where an object to be identified is located according to a determined adjustment strategy based at least on color information of the object to be identified, wherein the adjustment strategy at least makes the color of the first environment where the object to be identified is located different from that of the object to be identified, the adjustment strategy comprises at least one of a light supplementing related strategy or a background switching related strategy, and the background switching related strategy comprises switching of a background light source or a background image of the environment where the object to be identified is located; and
a first image obtaining module, configured to obtain a first image of the object to be identified under the adjusted environmental parameters, so as to obtain identification information of the object to be identified based at least on identification processing of the first image.
10. An electronic device, comprising:
an image acquisition component, configured to acquire images;
a light source assembly, configured to provide light sources of a plurality of colors;
one or more processors; and
a computer-readable storage medium storing one or more computer programs which, when executed by the one or more processors, implement the method of any one of claims 1-7.