CN113359734A - Logistics auxiliary robot based on AI - Google Patents
Info
- Publication number
- CN113359734A (application number CN202110662436.7A)
- Authority
- CN
- China
- Prior art keywords
- face image
- information
- robot body
- user
- central control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using a video camera in combination with image processing means (G: Physics; G05D: Systems for controlling or regulating non-electric variables)
- G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
Abstract
The invention provides an AI-based logistics auxiliary robot comprising a robot body provided with a storage cavity for placing logistics distribution articles. A walking device at the bottom of the robot body drives the robot body to move, and an environment sensing device on the robot body acquires the robot body's current positioning and pose information. A central control module and a wireless communication module are also arranged in the robot body. The wireless communication module establishes a communication connection with a remote control terminal and receives the logistics distribution instructions it sends; the central control module acquires path planning information from the target position in the logistics distribution instruction and the robot body's positioning and pose information, and controls the walking device to move to the target position along the planned path. The invention helps improve the convenience and safety of logistics distribution of articles to isolation areas.
Description
Technical Field
The invention relates to the technical field of logistics auxiliary robots, and in particular to an AI-based logistics auxiliary robot.
Background
At present, most logistics distribution is carried out manually: distribution personnel deliver articles to designated addresses or express cabinets. Meanwhile, for disease prevention and control, large numbers of people are currently placed under observation in designated isolation areas. When the addressee of a delivery is an isolated person, or an article must be delivered into an isolation area, the isolated person cannot leave the area and the distribution personnel cannot enter it; articles can only be taken from the distribution personnel by staff of the isolation area and then passed on to the isolated persons. This distribution mode increases the workload of the isolation-area staff and prevents isolated persons from receiving needed articles in time, affecting the normal operation of logistics distribution in the isolation area.
Disclosure of Invention
In view of the above problems, the present invention is directed to an AI-based logistics auxiliary robot.
The purpose of the invention is realized by adopting the following technical scheme:
the invention discloses an AI-based logistics auxiliary robot, which comprises a robot body, wherein the robot body is provided with a storage cavity for placing logistics distribution articles; the bottom of the robot body is provided with a walking device for driving the robot body to move; the robot body is also provided with an environment sensing device for acquiring the current positioning and pose information of the robot body; a central control module and a wireless communication module are also arranged in the robot body, the central control module being connected with the wireless communication module, the walking device and the environment sensing device respectively; the wireless communication module is used for establishing a communication connection with the remote control terminal and receiving a logistics distribution instruction sent by the remote control terminal; the central control module is used for acquiring path planning information according to the target position in the logistics distribution instruction and the positioning and pose information of the robot body, and controlling the walking device to move to the target position along the planned path according to the path planning information.
In one embodiment, the storage cavity comprises at least one storage compartment, and an intelligent lock is arranged on the door of each storage compartment;
the robot body is also provided with a human-computer interaction module, and the human-computer interaction module is used for acquiring a door opening instruction and transmitting the door opening instruction to the central control module;
the central control module is also connected with the intelligent lock of each storage compartment respectively, and controls the corresponding intelligent lock to open according to the received door opening instruction.
In one embodiment, a walking device comprises a driving module and a driving wheel;
the driving module is used for controlling the driving wheels to operate according to the walking control instruction sent by the central control module so as to realize the walking actions of advancing, retreating and steering of the robot body.
In one embodiment, the environment sensing device comprises a speed sensor, an angular velocity sensor, a GPS unit, a binocular camera and a laser radar;
the binocular camera is arranged in front of the robot body and used for acquiring image information in front of the robot body;
the laser radars are arranged in front of and at two sides of the robot body and used for acquiring depth information of the environment where the robot body is located;
the speed sensor and the angular velocity sensor are respectively arranged on the walking device and acquire the speed and angular velocity information of the robot body respectively;
the GPS unit is arranged on the walking device and used for acquiring the positioning information of the robot body;
the central control module analyzes the current pose information of the robot body according to the received image information, depth information, speed information, angular speed information and positioning information, completes path planning when the robot body reaches a target position by combining the current pose information, and completes obstacle avoidance control of the robot body in the walking process.
In one embodiment, the human-computer interaction module comprises a touch display screen;
the touch display screen is arranged in front of the robot body and is used for displaying the robot's current logistics distribution information and for letting a user input the corresponding verification information, so as to generate a corresponding door opening instruction and transmit it to the central control module.
In one embodiment, the wireless communication module comprises a 4G/5G communication unit and/or a WIFI communication unit, and is used for realizing data interaction with the remote control terminal in a 4G/5G wireless network or WIFI mode.
In one embodiment, the central control module is further configured to receive and store the storage compartment identification information sent by the remote control terminal, where the storage compartment identification information includes a compartment number, the logistics order information corresponding to the compartment number, and the compartment's door opening verification information;
the touch display screen is used for acquiring user authentication information and transmitting the user authentication information to the central control module;
and the central control module matches the received user verification information against the door opening verification information of each storage compartment, and controls the door of the corresponding storage compartment to open after the matching succeeds.
In one embodiment, the user authentication information includes user facial image information;
the man-machine interaction module also comprises a camera unit, wherein the camera unit is used for acquiring the face image of the fetching user, preprocessing the face image of the fetching user and transmitting the preprocessed face image of the fetching user to the central control module as a first face image;
the central control module performs matching verification between the received first face image and the second face image corresponding to each storage compartment, and controls the door of a storage compartment to open after the first face image is successfully matched with that compartment's second face image; the second face image corresponding to a storage compartment is photographed in real time by the fetching user with his or her own user terminal and transmitted to the remote control terminal, and the remote control terminal binds the acquired second face image with the logistics order information and the compartment number to generate the storage compartment identification information and transmits it to the central control module.
The invention has the following beneficial effects: the logistics auxiliary robot realizes unmanned handover of delivered articles between distribution personnel and isolated persons, which reduces the labor cost of article distribution, facilitates non-contact distribution of articles in isolation areas, and improves the convenience and safety of logistics distribution of isolated articles.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an AI-based logistics auxiliary robot according to the present invention;
fig. 2 is a frame structure view of an AI-based logistics auxiliary robot according to the present invention.
Reference numerals:
the robot comprises a robot body, a walking device, a 3-environment sensing device, a 31-binocular camera, a 32-laser radar, a 33-speed sensor, an 34-angular speed sensor, a 35-GPS unit, a 4-central control module, a 5-wireless communication module, a 6-storage grid, a 7-intelligent lock and an 8-man-machine interaction module.
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to fig. 1 and 2, an AI-based logistics auxiliary robot is shown, comprising a robot body 1 provided with a storage cavity for placing logistics distribution articles; the bottom of the robot body 1 is provided with a walking device 2 for driving the robot body 1 to move; the robot body 1 is also provided with an environment sensing device 3 for acquiring the current positioning and pose information of the robot body 1; a central control module 4 and a wireless communication module 5 are also arranged in the robot body 1, the central control module 4 being connected with the wireless communication module 5, the walking device 2 and the environment sensing device 3 respectively; the wireless communication module 5 is used for establishing a communication connection with a remote control terminal and receiving a logistics distribution instruction sent by it; the central control module 4 is used for acquiring path planning information according to the target position in the logistics distribution instruction and the positioning and pose information of the robot body 1, and controlling the walking device 2 to move to the target position along the planned path according to the path planning information.
In the above embodiment, an AI-based logistics auxiliary robot is provided, wherein a robot body 1 is provided with a storage cavity for a delivery person to place an article to be delivered; meanwhile, the robot body 1 is also provided with a walking device 2, a central control module 4 and an environment sensing device 3 to complete path planning and movement control of the robot body 1 in a matching manner, so that the robot can move to a specified target position; the robot body 1 is also provided with a wireless communication module 5 so as to realize data and information interaction between the robot body 1 and a remote control terminal. The logistics auxiliary robot provided by the invention can realize the unmanned distribution of logistics distribution articles between distribution personnel and isolation personnel, on one hand, the labor cost of article distribution can be reduced, on the other hand, the non-contact distribution of articles in an isolation area is also facilitated, and the convenience and the safety of the logistics distribution of the isolation articles are improved.
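As an illustration of the planning step just described, the sketch below shows A* search over an occupancy grid, one common way path planning information of this kind can be computed from the robot's pose and the target position. The grid encoding, Manhattan heuristic and 4-connected moves are assumptions made for the sketch only; the patent does not prescribe a particular planning algorithm.

```python
# Minimal A* path planner over an occupancy grid (0 = free, 1 = obstacle).
import heapq

def astar(grid, start, goal):
    """start/goal: (row, col) cells; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_best = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                       # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:     # cell blocked by an obstacle
                continue
            ng = g_best[cur] + 1
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                               # no path to the target position
```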
In one scenario, a logistics auxiliary robot is parked outside an isolation area. When a delivery person needs to deliver an article into the area, the person places the article in a storage compartment 6 of the robot and sends the compartment number, the logistics order information and the target delivery address to a remote control terminal (such as a dedicated server for the logistics auxiliary robot, or a cloud service) through the delivery person's own terminal, for example a mobile phone. The remote control terminal sends a distribution instruction to the robot; the instruction carries a target position inside the isolation area, such as the lobby of an isolation building or a preset position on a designated floor, and the nearest target position can be matched automatically from the target delivery address. The robot then plans a path from its own pose information and the target position and starts moving to the target position along the planned path. Meanwhile, after being notified by the delivery person by telephone, the isolated person goes to the designated target position and takes the corresponding article from the logistics auxiliary robot. When the robot detects that all articles have been taken, it automatically returns outside the isolation area, completing intelligent non-contact distribution of the articles.
In one embodiment, the storage cavity comprises at least one storage compartment 6, and an intelligent lock 7 is arranged on a compartment door of each storage compartment 6;
the robot body 1 is also provided with a human-computer interaction module 8, and the human-computer interaction module 8 is used for acquiring a door opening instruction and transmitting the door opening instruction to the central control module 4;
the central control module 4 is also respectively connected with the intelligent lock 7 of each storage lattice 6 and used for controlling the corresponding intelligent lock 7 to be opened according to the received door opening instruction.
The storage cavity on the logistics auxiliary robot can be arranged as one or more storage compartments 6 so that several articles can be delivered at the same time. An intelligent lock 7 is arranged on the door of each storage compartment 6, and the central control module 4 controls the opening and closing of the intelligent lock 7 on each door.
The logistics auxiliary robot is also provided with a human-computer interaction module 8 through which the fetching user inputs verification information or selects the storage compartment 6 to be opened; the central control module 4 verifies the information and then controls the door of the corresponding storage compartment 6 to open.
In one embodiment, the walking device 2 comprises a driving module and driving wheels;
the driving module is used for controlling the driving wheels to operate according to the walking control instruction sent by the central control module 4 so as to realize the walking actions of advancing, retreating and steering of the robot body 1.
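These walking actions reduce to pairs of linear and angular velocity commands. A minimal sketch of how a driving module could map such a command onto two drive wheels follows; the differential-drive layout and the wheel radius and track width values are illustrative assumptions, since the patent does not specify the drive geometry.

```python
# Differential-drive mapping from a walking command (v, w) to wheel speeds.
WHEEL_RADIUS = 0.10  # metres, assumed
TRACK_WIDTH = 0.45   # spacing between the drive wheels in metres, assumed

def wheel_speeds(v, w):
    """v: forward velocity (m/s, negative = retreat); w: yaw rate (rad/s).

    Returns (left, right) wheel angular velocities in rad/s."""
    v_left = v - w * TRACK_WIDTH / 2.0
    v_right = v + w * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

print(wheel_speeds(0.5, 0.0))   # advance in a straight line
print(wheel_speeds(-0.3, 0.0))  # retreat
print(wheel_speeds(0.0, 0.8))   # steer by turning in place
```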
In one embodiment, the environment sensing device 3 includes a speed sensor 33, an angular velocity sensor 34, a GPS unit 35, a binocular camera 31, and a laser radar 32;
the binocular camera 31 is arranged in front of the robot body 1 and used for acquiring image information in front of the robot body 1;
the laser radars 32 are arranged in front of and at two sides of the robot body 1 and are used for acquiring depth information of the environment where the robot body 1 is located;
the speed sensor 33 and the angular velocity sensor 34 are respectively arranged on the walking device 2 and acquire the speed and angular velocity information of the robot body 1 respectively;
the GPS unit 35 is arranged on the walking device 2 and used for acquiring the positioning information of the robot body 1;
the central control module 4 analyzes the current pose information of the robot body 1 according to the received image information, depth information, speed information, angular speed information and positioning information, completes path planning when the robot body 1 reaches a target position by combining the current pose information, and completes obstacle avoidance control of the robot body 1 in the walking process.
The robot body 1 of the logistics auxiliary robot is also provided with various environment sensing devices 3, such as a speed sensor 33, an angular velocity sensor 34, a GPS unit 35, a binocular camera 31 and a laser radar 32, for sensing the environment the robot is in and its pose; the central control module 4 processes and fuses the information collected by these sensors to obtain the robot's current positioning and pose information, and combines it with path planning to control the robot's movement to a target position and to realize local obstacle avoidance during movement.
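As a simplified illustration of this fusion step, the sketch below dead-reckons a pose from the speed and angular-rate readings and nudges it toward GPS fixes with a complementary-filter blend. The blend gain and the flat x/y GPS frame are simplifying assumptions; the patent does not prescribe a particular fusion algorithm.

```python
# Dead reckoning from odometry, corrected toward GPS fixes when available.
import math

def update_pose(pose, v, w, dt, gps_xy=None, gain=0.1):
    """pose: (x, y, heading); v: speed (m/s); w: angular rate (rad/s)."""
    x, y, th = pose
    th += w * dt                    # integrate the angular-velocity reading
    x += v * math.cos(th) * dt      # integrate speed along the new heading
    y += v * math.sin(th) * dt
    if gps_xy is not None:          # blend the estimate toward the GPS fix
        x += gain * (gps_xy[0] - x)
        y += gain * (gps_xy[1] - y)
    return (x, y, th)
```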
In one embodiment, the human-computer interaction module 8 comprises a touch display screen;
the touch display screen is arranged in front of the robot body 1 and is used for displaying the robot's current logistics distribution information and for letting a user input the corresponding verification information, so as to generate a corresponding door opening instruction and transmit it to the central control module 4.
In one embodiment, the wireless communication module 5 includes a 4G/5G communication unit and/or a WIFI communication unit, and is configured to implement data interaction with the remote control terminal in a 4G/5G wireless network or WIFI manner.
In one embodiment, the central control module 4 is further configured to receive and store the storage compartment identification information sent by the remote control terminal, where the identification information of a storage compartment 6 includes its compartment number, the logistics order information corresponding to that number, and the compartment's door opening verification information;
the touch display screen is used for acquiring user authentication information and transmitting the user authentication information to the central control module 4;
the central control module 4 matches the storage compartment door opening verification information corresponding to each storage compartment 6 according to the received user verification information, and controls the compartment door of the corresponding storage compartment 6 to be opened after the matching is successful.
In one scenario, after placing an article into an empty storage compartment 6, the delivery person logs in to the dedicated background server of the logistics auxiliary robot and enters the compartment number and the logistics order information for the article. The background server generates a corresponding door opening verification code, associates it with the compartment number and the order information to produce the storage compartment identification information, and transmits this to the logistics auxiliary robot. The server also sends the verification code to the delivery person, who forwards or communicates it to the fetching user. After reaching the logistics auxiliary robot, the fetching user enters the verification code through the human-computer interaction module 8; the robot verifies the code, opens the door of the corresponding storage compartment 6, and the user takes the article stored inside.
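A minimal sketch of the verification-code check in this scenario follows: the central control module holds the storage compartment identification records received from the background server and opens the door whose stored code matches the one entered. The record layout, the salted-hash storage and the unlock stub are illustrative assumptions, not details fixed by the patent.

```python
import hashlib
import hmac

# compartment number -> (logistics order number, salted hash of the door code)
RECORDS = {
    "03": ("ORDER-0001", hashlib.sha256(b"salt" + b"824613").hexdigest()),
}

def unlock(compartment):
    # stand-in for the smart-lock driver commanded by the central module
    print(f"compartment {compartment} door opened")

def try_open(compartment, entered_code):
    record = RECORDS.get(compartment)
    if record is None:
        return False
    _, stored_hash = record
    candidate = hashlib.sha256(b"salt" + entered_code.encode()).hexdigest()
    if hmac.compare_digest(candidate, stored_hash):  # constant-time compare
        unlock(compartment)
        return True
    return False
```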
With the needs of epidemic prevention and isolation work, non-contact logistics distribution can effectively improve distribution safety. In the embodiments above, however, an isolated user still needs to operate the human-computer interaction module 8 (touch display screen) to take an article, so the screen may be touched by a succession of fetching users and can become a transmission medium for bacteria, viruses and the like, a potential safety hazard. To address this, the application further provides a control scheme based on face recognition technology, so that a fetching user can open the door of the corresponding storage compartment 6 by face recognition and take the article away, realizing contactless pickup from the logistics auxiliary robot.
In one embodiment, the user authentication information includes user facial image information;
the human-computer interaction module 8 further comprises a camera unit, the camera unit is used for collecting the face image of the fetching user, preprocessing the face image of the fetching user and transmitting the preprocessed face image of the fetching user to the central control module 4 as a first face image;
the central control module 4 performs matching verification between the received first face image and the second face image corresponding to each storage compartment 6, and controls the door of a storage compartment 6 to open after the first face image is successfully matched with that compartment's second face image; the second face image corresponding to a storage compartment 6 is photographed in real time by the fetching user with his or her own user terminal and transmitted to the remote control terminal, and the remote control terminal binds the acquired second face image with the logistics order information and the compartment number to generate the identification information of the storage compartment 6 and transmits it to the central control module 4.
in one embodiment, a real person detection subunit is arranged in the camera unit to perform living body face detection on the face image of the object-taking user, and when the face image of the object-taking user is detected to be a living body face image, the face image of the object-taking user is further transmitted to the central control module 4 as a first face image;
the live-body face detection that the real person detection subunit performs on the fetching user's face image specifically includes: detecting the fetching user's face image sequence based on an optical flow method and generating a detection result indicating whether the face in the image is a living face.
The optical flow method determines the 'motion' at each pixel position from the temporal changes and correlation of the pixel intensity data in the image sequence, obtaining motion information for every pixel; a Gaussian difference filter, LBP (local binary pattern) features and a support vector machine are then used for statistical analysis of the data. Because the optical flow field is sensitive to object motion, eyeball movement and blinking can both be detected with it, so this liveness check can run blind, without any cooperation from the fetching user. This helps improve the reliability of verifying the fetching user's identity from face image information.
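A minimal sketch of the optical-flow cue follows: it measures how much motion a face image sequence contains and flags a near-static sequence (such as a printed photo held still) as suspect. The mean-magnitude statistic and the threshold are illustrative assumptions; the Gaussian difference filtering, LBP features and SVM stage mentioned above are not reproduced here.

```python
import cv2
import numpy as np

def looks_live(frames, motion_threshold=0.25):
    """frames: grayscale face crops (uint8 arrays of equal size), in order."""
    magnitudes = []
    for prev, cur in zip(frames, frames[1:]):
        # dense optical flow between consecutive frames of the sequence
        flow = cv2.calcOpticalFlowFarneback(
            prev, cur, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(float(np.mean(mag)))
    # live faces show small involuntary motion such as blinks and eye movement
    return float(np.mean(magnitudes)) > motion_threshold
```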
In one embodiment, an image preprocessing subunit is built in the camera unit to preprocess the face image of the fetching user, and specifically includes:
carrying out wavelet-transform-based enhancement filtering on the collected face image Pic0 of the fetching user to obtain the enhancement-filtered face image Pic1;
further performing face region extraction on the enhancement-filtered face image Pic1 to obtain the face region Ar1 and the background region Ar2 in the image;
respectively carrying out adaptive brightness adjustment on the face region Ar1 and the background region Ar2 to obtain the preprocessed face image of the fetching user;
the wavelet-transform-based enhancement filtering specifically comprises: performing a wavelet transform on the fetching user's face image based on a set Gabor wavelet basis and a set decomposition scale to obtain the low-frequency and high-frequency wavelet coefficients of the image, and then processing each high-frequency wavelet coefficient in turn:
when the absolute value | gw (i, j) | of the jth high-frequency wavelet coefficient of the ith decomposition scale is less than or equal to the set threshold value T, processing the high-frequency wavelet coefficient by adopting the following enhanced filtering function:
wherein gw'(i, j) denotes the jth high-frequency wavelet coefficient of the ith decomposition scale after processing by the enhancement filtering function, α denotes the set change adjustment factor, α ∈ [0.01, 100], and sgn(·) denotes the sign function;
when the absolute value | gw (i, j) | of the jth high-frequency wavelet coefficient of the ith decomposition scale is larger than a set threshold value T, processing the high-frequency wavelet coefficient by adopting the following enhanced filtering function:
wherein gw'(i, j) denotes the jth high-frequency wavelet coefficient of the ith decomposition scale after processing by the enhancement filtering function, α denotes the set change adjustment factor, α ∈ [0.01, 100], β denotes the amplitude adjustment factor, β ∈ [0.1, 10], and sgn(·) denotes the sign function;
and performing the inverse wavelet transform on the high-frequency wavelet coefficients processed by the enhancement filtering function together with the low-frequency wavelet coefficients to obtain the enhancement-filtered face image Pic1 of the fetching user.
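The two enhancement filtering functions themselves appear as formula images in the original publication and are not reproduced in this text, so the sketch below substitutes an assumed piecewise rule with the same structure: coefficients with |gw(i, j)| ≤ T are shrunk by α as noise, larger ones are amplified by β as detail. A standard 'db4' basis also stands in for the Gabor wavelet basis, which PyWavelets does not provide.

```python
import numpy as np
import pywt

def enhance(img, wavelet="db4", level=2, T=10.0, alpha=0.5, beta=1.5):
    """img: 2D float array (grayscale face image); returns the filtered image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    low, highs = coeffs[0], coeffs[1:]      # low- and high-frequency parts
    new_highs = []
    for scale in highs:                     # one (cH, cV, cD) triple per scale i
        bands = []
        for band in scale:                  # band holds the gw(i, j) values
            out = np.where(
                np.abs(band) <= T,
                alpha * band,               # shrink small (noisy) coefficients
                np.sign(band) * (beta * (np.abs(band) - T) + alpha * T))
            bands.append(out)
        new_highs.append(tuple(bands))
    # inverse wavelet transform of the processed coefficients gives Pic1
    return pywt.waverec2([low] + new_highs, wavelet)
```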
The face region extraction performed on the enhancement-filtered face image Pic1 may detect the face region in the image by a processing method based on edge detection or on image depth information.
The adaptive brightness adjustment performed on the face region Ar1 and the background region Ar2 respectively specifically includes:
converting the fetching user face image Pic1 from an RGB color space to an LAB color space, and respectively acquiring a brightness component L, a color component a and a color component b of the image;
for the acquired luminance component L, computing the average luminance component value L_Ar1 of the face region Ar1 and the average luminance component value L_Ar2 of the background region Ar2 respectively;
when (L_Ar2 - L_Ar1) > τ1, where τ1 denotes the set backlight luminance threshold, τ1 ∈ [25, 40], the luminance component is processed with the following brightness adjustment function:
where L'(x, y) denotes the luminance component value of pixel (x, y) after processing by the brightness adjustment function, L(x, y) denotes the luminance component value of pixel (x, y) in the acquired luminance component, L_Ar1 denotes the average luminance component value of the face region Ar1, L_Ar2 denotes the average luminance component value of the background region Ar2, τ1 denotes the set backlight luminance threshold, ω1 and ω2 denote the set brightness adjustment factors, D_Δ denotes the size of the fetching user's face image, where l_Δ and h_Δ denote the total number of pixels along the length and width of the face image respectively; d(x, y) denotes the distance from pixel (x, y) to the central pixel of the face region Ar1; (x, y) ∈ Ar1 indicates that pixel (x, y) belongs to the face region and (x, y) ∈ Ar2 that it belongs to the background region; τ2 denotes the set brightness standard value, τ2 ∈ [70, 80];
when (L_Ar2 - L_Ar1) ≤ τ1, the luminance component is processed with the following brightness adjustment function:
and converting the fetching user's face image from the LAB color space back to the RGB color space using the luminance component L' processed by the brightness adjustment function together with the color components a and b, obtaining the preprocessed face image of the fetching user.
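The brightness adjustment functions are likewise formula images not reproduced in this text, so the sketch below only mirrors their structure under stated assumptions: convert to LAB, compare the average luminance of the face and background regions, and when the backlight condition holds, pull the face region toward the brightness standard value while softening the background.

```python
import cv2
import numpy as np

def adjust_brightness(bgr, face_mask, tau1=30.0, tau2=75.0):
    """bgr: uint8 image; face_mask: bool array, True inside face region Ar1."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L = lab[..., 0]
    l_face = float(L[face_mask].mean())   # average luminance of Ar1
    l_back = float(L[~face_mask].mean())  # average luminance of Ar2
    if l_back - l_face > tau1:            # backlit: face darker than background
        L[face_mask] *= tau2 / max(l_face, 1.0)                  # lift face toward tau2
        L[~face_mask] *= 0.5 * (1.0 + tau2 / max(l_back, 1.0))   # soften backdrop
    lab[..., 0] = np.clip(L, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```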
In one scenario, after placing an article into a storage compartment 6 of the logistics auxiliary robot, the delivery person logs in to the robot's background server, enters the compartment number and the logistics order information for the article, and receives a face-information acquisition page returned by the server. The delivery person then sends the access address of this page to the fetching user by short message or a third-party communication application. The fetching user opens the page on a mobile terminal such as a phone, photographs himself or herself in real time, and uploads the face image as the second face image to the background server. The server binds the received second face image with the order information and the compartment number to generate the storage compartment identification information and transmits it to the logistics auxiliary robot, where the central control module 4 stores it. After the delivery person transmits the delivery address to the background server, the server generates the corresponding target position and sends a distribution instruction to the logistics auxiliary robot, which sets off to deliver the article to that position.
When the fetching user arrives at the logistics auxiliary robot to take the article, the camera unit on the robot collects the user's face image as the first face image; the central control module 4 performs matching verification between the collected first face image and the stored second face image, and once verification passes it controls the door of the corresponding storage compartment 6 to open so the user can take the article. The user thus picks up the article without any contact, further improving the safety of article distribution by the logistics auxiliary robot.
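A minimal sketch of this matching step follows, comparing a face embedding computed from the first face image against the stored embeddings of each compartment's second face image. The cosine-similarity measure, the threshold and the unnamed embedding model are illustrative assumptions; the patent does not specify a particular face-matching algorithm.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_compartment(first_embedding, stored, threshold=0.6):
    """stored: dict of compartment number -> second-image embedding.

    Returns the best-matching compartment above the threshold, or None."""
    best, best_score = None, threshold
    for number, second_embedding in stored.items():
        score = cosine_similarity(first_embedding, second_embedding)
        if score > best_score:
            best, best_score = number, score
    return best
```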
Meanwhile, when the target position reached by the logistics auxiliary robot is a corridor or a hall, uneven illumination can leave the collected face image unclear. For example, when the fetching user stands with his or her back to a lighting window in a corridor whose overall brightness is low while strong light enters through the window, the face image is backlit and the face is unclear; and when the corridor is dark, the whole image is dark and the robot cannot identify the user from the face image. The application therefore preprocesses the face image immediately after the camera unit collects it, to improve its clarity. The image first undergoes wavelet-transform-based enhancement filtering, which effectively removes the noise interference contained in the image; the image is then divided into regions, and the detected face and background regions receive region-wise brightness adjustment. In the proposed adaptive brightness adjustment, the image is converted into the LAB color space, the brightness characteristics of the image are measured from the acquired luminance component, and the brightness level of the face portion is adaptively adjusted for conditions such as backlighting or insufficient brightness, improving the clarity of the fetching user's face image. During the region-wise adjustment, the brightness level of the face region is emphasized while the background brightness receives adaptive gradual adjustment, improving image quality and display. The preprocessed face image is transmitted to the central control module 4 for further identity recognition, which improves the reliability and accuracy of identifying the fetching user from the face image and indirectly improves the environmental adaptability of the logistics auxiliary robot.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of software functional units/modules.
From the above description of embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware. In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit its scope of protection. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from the spirit and scope of those solutions.
Claims (10)
1. An AI-based logistics auxiliary robot, characterized by comprising a robot body, wherein the robot body is provided with a storage cavity for placing logistics distribution articles; the bottom of the robot body is provided with a walking device for driving the robot body to move; the robot body is also provided with an environment sensing device for acquiring the current positioning and pose information of the robot body; a central control module and a wireless communication module are also arranged in the robot body, the central control module being connected with the wireless communication module, the walking device and the environment sensing device respectively; the wireless communication module is used for establishing a communication connection with the remote control terminal and receiving a logistics distribution instruction sent by the remote control terminal; the central control module is used for acquiring path planning information according to the target position in the logistics distribution instruction and the positioning and pose information of the robot body, and controlling the walking device to move to the target position along the planned path according to the path planning information.
2. The AI-based logistics auxiliary robot of claim 1, wherein the storage chamber comprises at least one storage compartment, and an intelligent lock is arranged on a compartment door of each storage compartment;
the robot body is also provided with a human-computer interaction module, and the human-computer interaction module is used for acquiring a door opening instruction and transmitting the door opening instruction to the central control module;
the central control module is also connected with the intelligent lock of each storage compartment respectively, and controls the corresponding intelligent lock to open according to the received door opening instruction.
3. The AI-based logistics auxiliary robot of claim 1, wherein the walking means comprises a drive module and a drive wheel;
the driving module is used for controlling the driving wheels to operate according to the walking control instruction sent by the central control module so as to realize the walking actions of advancing, retreating and steering of the robot body.
4. The AI-based logistics auxiliary robot of claim 1, wherein the environmental sensing means comprises a speed sensor, an angular velocity sensor, a GPS unit, a binocular camera and a lidar;
the binocular camera is arranged in front of the robot body and used for acquiring image information in front of the robot body;
the laser radars are arranged in front of and at two sides of the robot body and used for acquiring depth information of the environment where the robot body is located;
the speed sensor and the angular velocity sensor are respectively arranged on the walking device and acquire the speed and angular velocity information of the robot body respectively;
the GPS unit is arranged on the walking device and used for acquiring the positioning information of the robot body;
the central control module analyzes the current pose information of the robot body according to the received image information, depth information, speed information, angular speed information and positioning information, completes path planning when the robot body reaches a target position by combining the current pose information, and completes obstacle avoidance control of the robot body in the walking process.
5. The AI-based logistics auxiliary robot of claim 2, wherein the human-machine interaction module comprises a touch display screen;
the touch display screen is arranged in front of the robot body and is used for displaying the robot's current logistics distribution information and for letting a user input the corresponding verification information, so as to generate a corresponding door opening instruction and transmit it to the central control module.
6. The AI-based logistics auxiliary robot of claim 1, wherein the wireless communication module comprises a 4G/5G communication unit and/or a WIFI communication unit for enabling data interaction with the remote control terminal via a 4G/5G wireless network or WIFI.
7. The AI-based logistics auxiliary robot of claim 5, wherein the central control module is further configured to receive and store the storage compartment identification information sent by the remote control terminal, the storage compartment identification information comprising a compartment number, the logistics order information corresponding to the compartment number, and the compartment's door opening verification information;
the touch display screen is used for acquiring user authentication information and transmitting the user authentication information to the central control module;
and the central control module matches the received user verification information against the door opening verification information of each storage compartment, and controls the door of the corresponding storage compartment to open after the matching succeeds.
8. The AI-based logistics auxiliary robot of claim 7, wherein the user authentication information comprises user facial image information;
the man-machine interaction module also comprises a camera unit, wherein the camera unit is used for acquiring the face image of the fetching user, preprocessing the face image of the fetching user and transmitting the preprocessed face image of the fetching user to the central control module as a first face image;
the central control module performs matching verification between the received first face image and the second face image corresponding to each storage compartment, and controls the door of a storage compartment to open after the first face image is successfully matched with that compartment's second face image; the second face image corresponding to a storage compartment is photographed in real time by the fetching user with his or her own user terminal and transmitted to the remote control terminal, and the remote control terminal binds the acquired second face image with the logistics order information and the compartment number to generate the storage compartment identification information and transmits it to the central control module.
9. The AI-based logistics auxiliary robot of claim 8, wherein a real person detection subunit is arranged in the camera unit to perform live-body face detection on the fetching user's face image, and when the face image is detected to be a live face image, the face image is further transmitted to the central control module as the first face image.
10. The AI-based logistics auxiliary robot of claim 8, wherein an image preprocessing subunit is built in the camera unit to preprocess the facial image of the fetching user, and specifically comprises:
carrying out wavelet-transform-based enhancement filtering on the collected face image Pic0 of the fetching user to obtain the enhancement-filtered face image Pic1;
further performing face region extraction on the enhancement-filtered face image Pic1 to obtain the face region Ar1 and the background region Ar2 in the image;
respectively carrying out adaptive brightness adjustment on the face region Ar1 and the background region Ar2 to obtain the preprocessed face image of the fetching user;
the wavelet-transform-based enhancement filtering specifically comprises: performing a wavelet transform on the fetching user's face image based on a set Gabor wavelet basis and a set decomposition scale to obtain the low-frequency and high-frequency wavelet coefficients of the image, and then processing each high-frequency wavelet coefficient in turn:
when the absolute value | gw (i, j) | of the jth high-frequency wavelet coefficient of the ith decomposition scale is less than or equal to the set threshold value T, processing the high-frequency wavelet coefficient by adopting the following enhanced filtering function:
wherein gw'(i, j) denotes the jth high-frequency wavelet coefficient of the ith decomposition scale after processing by the enhancement filtering function, α denotes the set change adjustment factor, α ∈ [0.01, 100], and sgn(·) denotes the sign function;
when the absolute value | gw (i, j) | of the jth high-frequency wavelet coefficient of the ith decomposition scale is larger than a set threshold value T, processing the high-frequency wavelet coefficient by adopting the following enhanced filtering function:
wherein gw'(i, j) denotes the jth high-frequency wavelet coefficient of the ith decomposition scale after processing by the enhancement filtering function, α denotes the set change adjustment factor, α ∈ [0.01, 100], β denotes the amplitude adjustment factor, β ∈ [0.1, 10], and sgn(·) denotes the sign function;
performing the inverse wavelet transform on the high-frequency wavelet coefficients processed by the enhancement filtering function together with the low-frequency wavelet coefficients to obtain the enhancement-filtered face image Pic1 of the fetching user;
the method specifically includes the following steps of respectively performing adaptive brightness adjustment processing on a face area Ar1 and a background area Ar 2:
converting the fetching user face image Pic1 from an RGB color space to an LAB color space, and respectively acquiring a brightness component L, a color component a and a color component b of the image;
for the acquired luminance component L, computing the average luminance component value L_Ar1 of the face region Ar1 and the average luminance component value L_Ar2 of the background region Ar2 respectively;
when (L_Ar2 - L_Ar1) > τ1, where τ1 denotes the set backlight luminance threshold, τ1 ∈ [25, 40], the luminance component is processed with the following brightness adjustment function:
where L'(x, y) denotes the luminance component value of pixel (x, y) after processing by the brightness adjustment function, L(x, y) denotes the luminance component value of pixel (x, y) in the acquired luminance component, L_Ar1 denotes the average luminance component value of the face region Ar1, L_Ar2 denotes the average luminance component value of the background region Ar2, τ1 denotes the set backlight luminance threshold, ω1 and ω2 denote the set brightness adjustment factors, D_Δ denotes the size of the fetching user's face image, where l_Δ and h_Δ denote the total number of pixels along the length and width of the face image respectively; d(x, y) denotes the distance from pixel (x, y) to the central pixel of the face region Ar1; (x, y) ∈ Ar1 indicates that pixel (x, y) belongs to the face region and (x, y) ∈ Ar2 that it belongs to the background region; τ2 denotes the set brightness standard value, τ2 ∈ [70, 80];
when (L_Ar2 - L_Ar1) ≤ τ1, the luminance component is processed with the following brightness adjustment function:
and converting the fetching user's face image from the LAB color space back to the RGB color space using the luminance component L' processed by the brightness adjustment function together with the color components a and b, obtaining the preprocessed face image of the fetching user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110662436.7A CN113359734B (en) | 2021-06-15 | 2021-06-15 | Logistics auxiliary robot based on AI |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113359734A true CN113359734A (en) | 2021-09-07 |
CN113359734B CN113359734B (en) | 2022-02-22 |
Family
ID=77534353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110662436.7A Active CN113359734B (en) | 2021-06-15 | 2021-06-15 | Logistics auxiliary robot based on AI |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113359734B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114200927A (en) * | 2021-11-12 | 2022-03-18 | 北京时代富臣智能科技有限公司 | Logistics robot system |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004126800A (en) * | 2002-09-30 | 2004-04-22 | Secom Co Ltd | Transport robot and transport system using the same |
CN105117882A (en) * | 2015-08-25 | 2015-12-02 | 深圳市唯传科技有限公司 | Intelligent logistics delivery method, apparatus, and system |
CN105427257A (en) * | 2015-11-18 | 2016-03-23 | 四川汇源光通信有限公司 | Image enhancement method and apparatus |
CN205855212U (en) * | 2016-07-05 | 2017-01-04 | 嘉兴小吾网络科技有限公司 | Logistics distribution robot |
US9741010B1 (en) * | 2016-12-02 | 2017-08-22 | Starship Technologies Oü | System and method for securely delivering packages to different delivery recipients with a single vehicle |
CN107609514A (en) * | 2017-09-12 | 2018-01-19 | 广东欧珀移动通信有限公司 | Face identification method and Related product |
CN108876251A (en) * | 2018-07-05 | 2018-11-23 | 北京智行者科技有限公司 | Operational method is sent in a kind of logistics with charge free |
CN109081028A (en) * | 2018-08-21 | 2018-12-25 | 江苏木盟智能科技有限公司 | A kind of article delivery method and system based on robot |
CN109110249A (en) * | 2018-09-12 | 2019-01-01 | 苏州博众机器人有限公司 | A kind of dispensing machine people |
CN110405770A (en) * | 2019-08-05 | 2019-11-05 | 北京云迹科技有限公司 | Allocator, device, dispensing machine people and computer readable storage medium |
CN110650178A (en) * | 2019-08-17 | 2020-01-03 | 坎德拉(深圳)科技创新有限公司 | Control method of intelligent distribution cabinet, server and storage medium |
CN209936925U (en) * | 2019-03-21 | 2020-01-14 | 广州映博智能科技有限公司 | Intelligent distribution robot |
CN111251270A (en) * | 2020-02-27 | 2020-06-09 | 浙江工贸职业技术学院 | Community logistics distribution robot |
CN111340152A (en) * | 2018-11-30 | 2020-06-26 | 蒙牛高科鲜乳制品有限公司 | Intelligent storage cabinet and control system and control method thereof |
CN211427154U (en) * | 2020-07-27 | 2020-09-04 | 韩震宇 | Multi-machine cooperation robot material distribution system based on SLAM technology |
CN212061318U (en) * | 2020-04-03 | 2020-12-01 | 中国建设银行股份有限公司 | Intelligent cargo delivery box and delivery system |
CN112783174A (en) * | 2020-12-31 | 2021-05-11 | 深圳市普渡科技有限公司 | Robot article distribution method and robot |
Non-Patent Citations (2)
Title |
---|
李骜 et al., "An improved shrinkage function method based on wavelet threshold denoising", Computer Engineering and Design *
黎彪 et al., "An improved threshold function for image wavelet denoising", Computer and Digital Engineering *
Also Published As
Publication number | Publication date |
---|---|
CN113359734B (en) | 2022-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10807230B2 (en) | Bistatic object detection apparatus and methods | |
JP2022549656A (en) | Vehicle door control method and device, system, vehicle, electronic device, and storage medium | |
US11113526B2 (en) | Training methods for deep networks | |
CN111899131B (en) | Article distribution method, apparatus, robot, and medium | |
CN106429657A (en) | Flexible destination dispatch passenger support system | |
CN104902258A (en) | Multi-scene pedestrian volume counting method and system based on stereoscopic vision and binocular camera | |
KR102391771B1 (en) | Method for operation unmanned moving vehivle based on binary 3d space map | |
CN105912980A (en) | Unmanned plane and unmanned plane system | |
GB2476869A (en) | System and method for tracking and counting objects near an entrance | |
US11967139B2 (en) | Adversarial masks for false detection removal | |
CN110040394A (en) | A kind of interactive intelligent rubbish robot and its implementation | |
CN1936969A (en) | Hotel guest-room on-line management system based on infrared ray detection | |
CN107622596A (en) | Intelligent cargo cabinet and Intelligent cargo cabinet Compliance control method | |
CN103544757A (en) | Door access control method | |
CN110689725A (en) | Security robot system for crossing comprehensive inspection and processing method thereof | |
CN113359734B (en) | Logistics auxiliary robot based on AI | |
US11321790B2 (en) | System and method for vehicle identification based on fueling captures | |
CN110939351A (en) | Visual intelligent control method and visual intelligent control door | |
US11074471B2 (en) | Assisted creation of video rules via scene analysis | |
JP6905849B2 (en) | Image processing system, information processing device, program | |
US20220027648A1 (en) | Anti-spoofing visual authentication | |
CN115718445A (en) | Intelligent Internet of things management system suitable for museum | |
CN114800615A (en) | Robot real-time scheduling system and method based on multi-source perception | |
CN108022351A (en) | A kind of guest machine | |
CN112414224A (en) | Airspace security method and system for specific target |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |