
CN109159113A - Robot operation method based on visual reasoning - Google Patents

Robot operation method based on visual reasoning

Info

Publication number
CN109159113A
CN109159113A
Authority
CN
China
Prior art keywords
grasp
scene
robot
anchor box
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810924992.5A
Other languages
Chinese (zh)
Other versions
CN109159113B (en)
Inventor
兰旭光 (Lan Xuguang)
张翰博 (Zhang Hanbo)
周欣文 (Zhou Xinwen)
张扬 (Zhang Yang)
田智强 (Tian Zhiqiang)
郑南宁 (Zheng Nanning)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201810924992.5A priority Critical patent/CN109159113B/en
Publication of CN109159113A publication Critical patent/CN109159113A/en
Application granted granted Critical
Publication of CN109159113B publication Critical patent/CN109159113B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot operation method based on visual reasoning, comprising: acquiring, via a sensor, an image of a scene currently containing multiple target objects; completing the detection of objects in the scene and obtaining correct object manipulation relationships using a visual manipulation relationship network based on a deep convolutional network; completing the detection of potential grasp positions in the scene using a fully convolutional grasp detection network based on oriented anchor boxes; and, based on the perception results, matching objects to grasp positions with a center-point matching algorithm, obtaining the grasp vector in the robot coordinate system through a coordinate transformation, and completing the execution of the operation on the current scene. The oriented anchor boxes improve the adaptability of the preset grasp boxes in the grasp detection algorithm to grasp positions at multiple angles, improving grasp detection accuracy. The invention enables a robot to complete operation tasks on multiple target objects from purely visual input.

Description

Robot operation method based on visual reasoning
Technical field
The invention belongs to the fields of computer vision and intelligent robotics, and in particular relates to a robot operation method based on visual reasoning.
Background technique
Compared with traditional robot operation, robot operation methods based on machine vision greatly reduce the dependence on structured environments and complex information perception, simplifying the environment-perception and modeling processes of robot operation and better adapting to complex tasks in unstructured environments. A robot operation algorithm comprises two parts: perception and execution. In traditional robot operation algorithms, the perception stage models the environment and the target objects by fusing information from multiple sensors, and the execution stage obtains the robot's motion configuration through analysis and optimization and then completes execution. However, traditional robot operation methods cannot satisfy the demands of some application scenarios. First, in most environments, obtaining complete environmental information and target object information is difficult, and modeling a complex environment consumes substantial computing resources. Second, because the hand-designed components of traditional algorithms have limited ability to abstract environmental information, they are difficult to adapt to unstructured, complex scenes. For this reason, machine-vision-based robot operation systems take a simple scene image as system input and complete both the perception of the environment and the planning of the robot motion configuration, addressing the shortcomings of traditional algorithms. Nevertheless, current vision-based robot operation methods still cannot be applied well to operation tasks in complex scenes, mainly because current visual algorithms ignore the complete environmental information and its constraints. How to combine scene understanding of the environment with current visual operation methods to obtain a robot operation algorithm that can adapt to multi-object complex scenes is therefore a pressing open problem.
Summary of the invention
The purpose of the present invention is to overcome the above shortcomings by providing a robot operation method based on machine vision that can adapt to operation tasks in complex scenes containing multiple target objects while guaranteeing safety and reliability during execution.
To achieve the above object, the present invention comprises the following steps:
A robot operation method based on visual reasoning, comprising the following steps:
Step 1: acquire, via a sensor, an image of a scene currently containing multiple target objects;
Step 2: using a visual manipulation relationship network based on a deep convolutional network, complete the detection of objects in the scene and obtain correct object manipulation relationships;
Step 3: using a fully convolutional grasp detection network based on oriented anchor boxes, complete the detection of potential grasp positions in the scene;
Step 4: based on the perception results of steps 2 and 3, match objects to grasp positions with a center-point matching algorithm, obtain the grasp vector in the robot coordinate system through a coordinate transformation, and complete the execution of the operation on the current scene.
As a further improvement of the present invention, the detailed process of step 2 is as follows:
A trained convolutional network extracts features from the scene image I, yielding the scene's convolutional features CNN(I). Using CNN(I) together with an object detection algorithm based on multi-scale convolutional features, the object detection results for the scene are obtained. Combining CNN(I) with the detection results, an object-pair pooling module traverses all object pairs (Oi, Oj) present in the scene and produces a fixed-size convolutional feature CNN(Oi, Oj) for each pair. Taking the object-pair features as input to a relationship reasoning network, a classification algorithm yields the manipulation relationship between any pair of objects; finally, merging all pairwise manipulation relationships yields the complete manipulation relationship tree of the scene.
As a further improvement of the present invention, the detailed process of step 3 is as follows:
A convolutional network extracts features from the scene image I, yielding multi-scale feature maps. Oriented anchor boxes of fixed size are preset on the feature map of each scale as reference boxes for grasp detection. A fully convolutional detector, taking the scene feature maps as input, regresses the offsets of grasp positions relative to the preset oriented anchor boxes in position, scale, and angle. Combining the regressed offsets with the preset oriented anchor boxes yields the final grasp detection results.
As a further improvement of the present invention, in step 3 the grasp detection network is trained on the Cornell Grasping Dataset. During training, the calibrated (ground-truth) grasp positions are first matched to the preset oriented anchor boxes: a preset anchor box whose rotation angle differs from that of a calibrated grasp position by less than a specified threshold is considered a match and is assigned a positive label. The final loss function combines the regression loss of the positively labeled anchor boxes on position, size, and angle relative to their matched calibrated grasp positions with a classification loss on whether each preset anchor box is matched to a calibrated grasp position; minimizing this loss completes the training of the network.
As a further improvement of the present invention, step 4 comprises the following steps:
A matching algorithm matches the object detection results obtained in step 2 with the grasp detection results obtained in step 3, yielding the optimal grasp position of every target object in the scene. Combining the Kinect depth map, the grasp position in the image coordinate system is converted to a grasp vector in the robot coordinate system, and the robot plans the operation and executes the motion.
As a further improvement of the present invention, the detailed process of step 4 is as follows:
Using the distance between the center of each grasp position detected in step 3 and the center of each object detected in step 2 as the metric, grasp positions are matched to objects. Once matching is complete, the object to be operated on in the current scene and its corresponding grasp position are obtained. Combining the depth image acquired by the Kinect, the point of minimum depth within the grasp position is taken as the grasp point, and the surface normal in its neighborhood is computed as the grasp vector. Through the coordinate transformation between the Kinect coordinate system and the robot coordinate system, the surface normal in the Kinect coordinate system is converted into the grasp vector in the robot coordinate system, and the robot plans the operation and executes the motion.
Compared with the prior art, the present invention has the following advantages:
Taking a scene image containing target objects as input, the invention performs object detection and manipulation relationship reasoning with a visual manipulation relationship network based on a deep convolutional network, performs grasp detection with a fully convolutional grasp detection network based on oriented anchor boxes, matches objects to grasp positions with a center-point matching algorithm, obtains the grasp vector in the robot coordinate system through a coordinate transformation, and completes the robot operation. The whole process incorporates an understanding of the manipulation relationships between objects in complex scenes, helping the robot autonomously recognize the correct manipulation relationships and operate more safely and reliably. Replacing hand-designed features with deep features improves the effectiveness and reliability of the method. Object-pair pooling enables end-to-end training and testing of the two stages of object detection and manipulation relationship reasoning, greatly saving training time, improving the real-time performance of the algorithm, and achieving higher precision. The oriented anchor boxes improve the adaptability of the preset grasp boxes in the grasp detection algorithm to grasp positions at multiple angles, remedying the inability of existing grasp detection algorithms to handle multi-angle grasp detection and improving grasp detection accuracy. The invention enables a robot to complete operation tasks on multiple target objects from purely visual input, furthering the adoption and development of intelligent robots.
Detailed description of the invention
Fig. 1 is the framework diagram of the method of the invention;
Fig. 2 is a schematic diagram of the visual manipulation relationship network of the invention;
Fig. 3 is a schematic diagram of the grasp detection network of the invention.
Specific embodiments
The present invention is described in detail below with reference to the accompanying drawings and examples.
As shown in Fig. 1, a robot operation method based on visual reasoning of the present invention comprises the following steps:
Step 1: acquire, via a Kinect sensor, an image I of a scene currently containing multiple target objects.
Step 2: complete the detection of objects in the scene and obtain correct object manipulation relationships using the visual manipulation relationship network.
As shown in Fig. 2, the detailed process is as follows: a trained convolutional network extracts features from the scene image I, yielding the scene's convolutional features CNN(I); using CNN(I) together with an object detection algorithm based on multi-scale convolutional features, the object detection results for the scene are obtained; combining CNN(I) with the detection results, an object-pair pooling module traverses all object pairs (Oi, Oj) present in the scene and produces a fixed-size convolutional feature CNN(Oi, Oj) for each pair; taking the object-pair features as input to a relationship reasoning network, a classification algorithm yields the manipulation relationship between any pair of objects; finally, merging all pairwise manipulation relationships yields the complete manipulation relationship tree of the scene.
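For illustration, the object-pair pooling step can be sketched as follows. This is a minimal sketch assuming PyTorch and torchvision; the pooled output size, the feature stride, and the channel-wise concatenation of the two objects' features are illustrative assumptions, not the architecture specified by the invention.

```python
# Minimal sketch of object-pair pooling (assumptions noted above).
import torch
from torchvision.ops import roi_align

def object_pair_features(feat, boxes, out_size=7, stride=16.0):
    """Pool a fixed-size feature CNN(Oi, Oj) for every ordered object pair.

    feat:  (1, C, H, W) scene convolutional features CNN(I)
    boxes: (N, 4) detected object boxes (x1, y1, x2, y2) in image coordinates
    returns: (N*(N-1), 2*C, out_size, out_size) object-pair features
    """
    n = boxes.size(0)
    batch_idx = boxes.new_zeros((n, 1))            # single image: batch index 0
    rois = torch.cat([batch_idx, boxes], dim=1)    # (N, 5) RoIs for roi_align
    obj = roi_align(feat, rois, out_size, spatial_scale=1.0 / stride)
    pairs = [torch.cat([obj[i], obj[j]], dim=0)    # concatenate Oi and Oj features
             for i in range(n) for j in range(n) if i != j]
    return torch.stack(pairs)                      # input to the relationship network
```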
Preferably, because the input scene image I is fed into a trained convolutional network, a visual manipulation relationship dataset is collected to train the visual manipulation relationship network that performs scene feature extraction, object detection, and manipulation relationship reasoning. To enable end-to-end training of object detection and manipulation relationship reasoning, an object-pair pooling module is designed: it traverses all object pairs in the scene, obtains the convolutional features of each pair, and applies adaptive pooling to produce the input to the relationship reasoning network. The training loss comprises the regression and classification losses of object detection and the classification loss of relationship reasoning. Minimizing this loss function with stochastic gradient descent adjusts the network parameters and yields the final visual manipulation relationship network model. In this step, the end-to-end training of object detection and manipulation relationship reasoning makes the features extracted by the convolutional network suitable for both tasks, improving accuracy: a precision improvement of more than 13% is achieved on the visual manipulation relationship dataset, and the real-time requirement of robot operation, which existing methods cannot meet, is satisfied.
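The combined loss described above can be sketched as follows, assuming PyTorch; the smooth-L1 and cross-entropy forms and the equal term weights are assumptions, since only the three loss components themselves are specified.

```python
# Minimal sketch of the end-to-end training loss (forms and weights assumed).
import torch.nn.functional as F

def relation_network_loss(box_pred, box_gt, cls_logits, cls_gt, rel_logits, rel_gt):
    det_reg = F.smooth_l1_loss(box_pred, box_gt)    # object detection regression loss
    det_cls = F.cross_entropy(cls_logits, cls_gt)   # object detection classification loss
    rel_cls = F.cross_entropy(rel_logits, rel_gt)   # relationship classification loss
    return det_reg + det_cls + rel_cls              # minimized jointly with SGD
```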
Step 3: complete the detection of potential grasp positions in the scene using the grasp detection network.
As shown in Fig. 3, the detailed process is as follows: a convolutional network extracts features from the scene image I, yielding multi-scale feature maps; oriented anchor boxes of fixed size are preset on the feature map of each scale as reference boxes for grasp detection; a fully convolutional detector, taking the scene feature maps as input, regresses the offsets of grasp positions relative to the preset oriented anchor boxes in position, scale, and angle; combining the regressed offsets with the preset oriented anchor boxes yields the final grasp detection results.
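Decoding a grasp position from a preset oriented anchor box and the regressed offsets can be sketched as follows; the (dx, dy, dw, dh, dtheta) parameterization mirrors common anchor-box decoders and is an assumption, as no explicit decoding formulas are given.

```python
# Minimal sketch of grasp decoding from an oriented anchor box (parameterization assumed).
import math

def decode_grasp(anchor, offset):
    """anchor: (cx, cy, w, h, theta) preset oriented anchor box
    offset: (dx, dy, dw, dh, dtheta) regressed by the fully convolutional detector
    returns: (cx, cy, w, h, theta) of the decoded grasp position
    """
    cx, cy, w, h, t = anchor
    dx, dy, dw, dh, dt = offset
    return (cx + dx * w,       # shift the center, normalized by anchor size
            cy + dy * h,
            w * math.exp(dw),  # log-space scaling of width and height
            h * math.exp(dh),
            t + dt)            # rotate relative to the oriented anchor angle
```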
Preferably, the grasp detection network used in this step is trained on the Cornell Grasping Dataset. During training, the calibrated grasp positions are first matched to the preset oriented anchor boxes: when the rotation angle difference between a preset anchor box and a calibrated grasp position is below a specified threshold, the pair is considered a match and the matched anchor box is assigned a positive label. The final loss function combines the regression loss of the positively labeled anchor boxes on position, size, and angle relative to their matched calibrated grasp positions with a classification loss on whether each preset anchor box is matched to a calibrated grasp position; minimizing it completes the training of the network. The proposed preset oriented anchor box mechanism overcomes the inability of existing methods to adapt to multi-angle grasp detection, achieving a performance improvement of more than 7% on the Cornell Grasping Dataset; moreover, matching preset anchor boxes to calibrated grasp positions by angle difference, in place of the original overlap-area-based matching, improves the training speed of the algorithm.
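The angle-difference matching that assigns positive labels can be sketched as follows; the 15-degree threshold and the 180-degree angle wrap are assumptions, since only the existence of a specific threshold is stated.

```python
# Minimal sketch of angle-difference anchor matching (threshold and wrap assumed).
import numpy as np

def label_positive_anchors(anchor_angles, gt_angles, threshold=15.0):
    """anchor_angles: (A,) angles of the preset oriented anchor boxes, degrees
    gt_angles:     (G,) angles of the calibrated grasp positions, degrees
    returns: (A,) boolean mask of anchors assigned a positive label
    """
    diff = np.abs(anchor_angles[:, None] - gt_angles[None, :]) % 180.0
    diff = np.minimum(diff, 180.0 - diff)   # a grasp rotated by 180 degrees is the same grasp
    return (diff < threshold).any(axis=1)   # positive if any calibrated grasp matches
```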
Step 4: based on the perception results of steps 2 and 3, complete the execution of the operation on the current scene.
The detailed process is as follows: a matching algorithm matches the object detection results obtained in step 2 with the grasp detection results obtained in step 3, yielding the optimal grasp position of every target object in the scene; combining the Kinect depth map, the grasp position in the image coordinate system is converted to a grasp vector in the robot coordinate system, and the robot plans the operation and executes the motion.
Using the distance between the center of each grasp position detected in step 3 and the center of each object detected in step 2 as the metric, grasp positions are matched to objects. Once matching is complete, the object to be operated on in the current scene and its corresponding grasp position are obtained. Combining the depth image acquired by the Kinect, the point of minimum depth within the grasp position is taken as the grasp point, and the surface normal in its neighborhood is computed as the grasp vector. Through the coordinate transformation between the Kinect coordinate system and the robot coordinate system, the surface normal in the Kinect coordinate system is converted into the grasp vector in the robot coordinate system, completing the final operation.
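The center-point matching and the final coordinate transformation can be sketched as follows, assuming NumPy; T_robot_kinect is a hypothetical 4x4 extrinsic matrix from hand-eye calibration between the Kinect and the robot, whose computation is not detailed here.

```python
# Minimal sketch of center-point matching and the grasp-vector transform
# (T_robot_kinect is a hypothetical calibrated extrinsic matrix).
import numpy as np

def match_grasps_to_objects(object_centers, grasp_centers):
    """Assign each detected object the grasp position whose center is nearest.

    object_centers: (N, 2) object centers detected in step 2, image coordinates
    grasp_centers:  (M, 2) grasp position centers detected in step 3
    returns: (N,) index of the matched grasp position for each object
    """
    d = np.linalg.norm(object_centers[:, None, :] - grasp_centers[None, :, :], axis=2)
    return d.argmin(axis=1)                 # nearest grasp center per object

def grasp_vector_to_robot_frame(normal_kinect, T_robot_kinect):
    """Rotate the surface normal (grasp vector) from the Kinect coordinate
    system into the robot coordinate system using the extrinsic rotation."""
    return T_robot_kinect[:3, :3] @ normal_kinect
```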
Table 1. Validation results on the visual manipulation relationship dataset
Table 2. Robot experiment success rates
Tables 1 and 2 present part of the simulation and experimental results of the proposed method. The test results in Table 1 show that the proposed algorithm achieves high performance on the visual manipulation relationship dataset in object detection accuracy, manipulation relationship detection accuracy, and image accuracy. Here, manipulation relationship accuracy is measured per manipulation relationship, while image accuracy is measured per image, counting an image as correct only when all objects and all manipulation relationships in it are correct. Table 2 presents the robot experiment success rates: after the visual manipulation relationship network is added, the operation success rate of the proposed visual-reasoning-based robot operation method in multi-object scenes improves significantly.
The robot operation method based on visual reasoning of the present invention takes a scene image containing target objects as input, performs object detection and manipulation relationship reasoning with a visual manipulation relationship network based on a deep convolutional network, performs grasp detection with a fully convolutional grasp detection network based on oriented anchor boxes, matches objects to grasp positions with a center-point matching algorithm, obtains the grasp vector in the robot coordinate system through a coordinate transformation, and completes the robot operation. Both the manipulation relationship reasoning and the grasp detection are based on deep learning; replacing hand-designed features with deep features improves the effectiveness and reliability of the method. Object-pair pooling enables end-to-end training and testing of the two stages of object detection and manipulation relationship reasoning, greatly saving training time, improving the real-time performance of the algorithm, and achieving higher precision. The oriented anchor boxes improve the adaptability of the preset grasp boxes in the grasp detection algorithm to multi-angle grasp positions and improve grasp detection accuracy. The invention enables a robot to complete operation tasks on multiple target objects from purely visual input, furthering the adoption and development of intelligent robots.
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, the invention is not limited to these specific embodiments, which are merely illustrative and instructive rather than restrictive. Under the guidance of this specification, those of ordinary skill in the art may devise many other forms without departing from the scope of protection claimed by the invention, and all such forms fall within the protection of the invention.

Claims (6)

1. A robot operation method based on visual reasoning, characterized by comprising the following steps:
Step 1: acquiring, via a sensor, an image of a scene currently containing multiple target objects;
Step 2: using a visual manipulation relationship network based on a deep convolutional network, completing the detection of objects in the scene and obtaining correct object manipulation relationships;
Step 3: using a fully convolutional grasp detection network based on oriented anchor boxes, completing the detection of potential grasp positions in the scene;
Step 4: based on the perception results of steps 2 and 3, matching objects to grasp positions with a center-point matching algorithm, obtaining the grasp vector in the robot coordinate system through a coordinate transformation, and completing the execution of the operation on the current scene.
2. The robot operation method based on visual reasoning according to claim 1, characterized in that the detailed process of step 2 is as follows:
a trained convolutional network extracts features from the scene image I, yielding the scene's convolutional features CNN(I); using CNN(I) together with an object detection algorithm based on multi-scale convolutional features, the object detection results for the scene are obtained; combining CNN(I) with the detection results, an object-pair pooling module traverses all object pairs (Oi, Oj) present in the scene and produces a fixed-size convolutional feature CNN(Oi, Oj) for each pair; taking the object-pair features as input to a relationship reasoning network, a classification algorithm yields the manipulation relationship between any pair of objects; finally, merging all pairwise manipulation relationships yields the complete manipulation relationship tree of the scene.
3. The robot operation method based on visual reasoning according to claim 1, characterized in that the detailed process of step 3 is as follows:
a convolutional network extracts features from the scene image I, yielding multi-scale feature maps; oriented anchor boxes of fixed size are preset on the feature map of each scale as reference boxes for grasp detection; a fully convolutional detector, taking the scene feature maps as input, regresses the offsets of grasp positions relative to the preset oriented anchor boxes in position, scale, and angle; combining the regressed offsets with the preset oriented anchor boxes yields the final grasp detection results.
4. The robot operation method based on visual reasoning according to claim 3, characterized in that in step 3 the grasp detection network is trained on the Cornell Grasping Dataset: during training, the calibrated grasp positions are first matched to the preset oriented anchor boxes; when the rotation angle difference between a preset anchor box and a calibrated grasp position is below a specified threshold, the pair is considered a match and the matched anchor box is assigned a positive label; the final loss function combines the regression loss of the positively labeled anchor boxes on position, size, and angle relative to their matched calibrated grasp positions with a classification loss on whether each preset anchor box is matched to a calibrated grasp position, and minimizing it completes the training of the network.
5. The robot operation method based on visual reasoning according to claim 1, characterized in that step 4 comprises the following steps:
a matching algorithm matches the object detection results obtained in step 2 with the grasp detection results obtained in step 3, yielding the optimal grasp position of every target object in the scene; combining the Kinect depth map, the grasp position in the image coordinate system is converted to a grasp vector in the robot coordinate system, and the robot plans the operation and executes the motion.
6. The robot operation method based on visual reasoning according to claim 5, characterized in that the detailed process of step 4 is as follows:
using the distance between the center of each grasp position detected in step 3 and the center of each object detected in step 2 as the metric, grasp positions are matched to objects; once matching is complete, the object to be operated on in the current scene and its corresponding grasp position are obtained; combining the depth image acquired by the Kinect, the point of minimum depth within the grasp position is taken as the grasp point, and the surface normal in its neighborhood is computed as the grasp vector; through the coordinate transformation between the Kinect coordinate system and the robot coordinate system, the surface normal in the Kinect coordinate system is converted into the grasp vector in the robot coordinate system, and the robot plans the operation and executes the motion.
CN201810924992.5A 2018-08-14 2018-08-14 Robot operation method based on visual reasoning Active CN109159113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810924992.5A CN109159113B (en) 2018-08-14 2018-08-14 Robot operation method based on visual reasoning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810924992.5A CN109159113B (en) 2018-08-14 2018-08-14 Robot operation method based on visual reasoning

Publications (2)

Publication Number Publication Date
CN109159113A 2019-01-08
CN109159113B 2020-11-10

Family

ID=64895691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810924992.5A Active CN109159113B (en) 2018-08-14 2018-08-14 Robot operation method based on visual reasoning

Country Status (1)

Country Link
CN (1) CN109159113B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180181809A1 (en) * 2016-12-28 2018-06-28 Nvidia Corporation Unconstrained appearance-based gaze estimation
CN106874914A (en) * 2017-01-12 2017-06-20 华南理工大学 A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks
CN108010078A (en) * 2017-11-29 2018-05-08 中国科学技术大学 A kind of grasping body detection method based on three-level convolutional neural networks
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108229665A (en) * 2018-02-02 2018-06-29 上海建桥学院 A kind of the System of Sorting Components based on the convolutional neural networks by depth

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Haoran (王昊然), "Research on a multi-task object detection system based on high-order fusion of multi-layer convolutional features," China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919151A (en) * 2019-01-30 2019-06-21 西安交通大学 A kind of robot vision reasoning grasping means based on ad-hoc network
CN111618842A (en) * 2019-02-28 2020-09-04 因特利格雷特总部有限责任公司 Vision calibration system for robotic carton unloading
CN110302981A (en) * 2019-06-17 2019-10-08 华侨大学 A kind of solid waste sorts online grasping means and system
CN110271000A (en) * 2019-06-18 2019-09-24 清华大学深圳研究生院 A kind of grasping body method based on oval face contact
CN110271000B (en) * 2019-06-18 2020-09-22 清华大学深圳研究生院 Object grabbing method based on elliptical surface contact
CN111906782B (en) * 2020-07-08 2021-07-13 西安交通大学 Intelligent robot grabbing method based on three-dimensional vision
CN111906782A (en) * 2020-07-08 2020-11-10 西安交通大学 Intelligent robot grabbing method based on three-dimensional vision
CN112099493A (en) * 2020-08-31 2020-12-18 西安交通大学 Autonomous mobile robot trajectory planning method, system and equipment
CN112099493B (en) * 2020-08-31 2021-11-19 西安交通大学 Autonomous mobile robot trajectory planning method, system and equipment
CN113799124A (en) * 2021-08-30 2021-12-17 贵州大学 Robot flexible grabbing detection method in unstructured environment
CN113799124B (en) * 2021-08-30 2022-07-15 贵州大学 Robot flexible grabbing detection method in unstructured environment
CN116416444A (en) * 2021-12-29 2023-07-11 广东美的白色家电技术创新中心有限公司 Object grabbing point estimation, model training and data generation method, device and system
CN116416444B (en) * 2021-12-29 2024-04-16 广东美的白色家电技术创新中心有限公司 Object grabbing point estimation, model training and data generation method, device and system

Also Published As

Publication number Publication date
CN109159113B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN109159113A (en) A kind of robot manipulating task method of view-based access control model reasoning
CN110059558B (en) Orchard obstacle real-time detection method based on improved SSD network
CN108108764B (en) Visual SLAM loop detection method based on random forest
CN109816725A Monocular camera object pose estimation method and device based on deep learning
CN108972494A Humanoid manipulator grasp control system and data processing method
CN107480730A Power equipment recognition model construction method and system, and power equipment recognition method
CN112446870B (en) Pipeline damage detection method, device, equipment and storage medium
Qian et al. Grasp pose detection with affordance-based task constraint learning in single-view point clouds
CN107705322A Moving target tracking method and system
CN110796700B (en) Multi-object grabbing area positioning method based on convolutional neural network
CN111598172A (en) Dynamic target grabbing posture rapid detection method based on heterogeneous deep network fusion
CN111906782B (en) Intelligent robot grabbing method based on three-dimensional vision
CN115319739B (en) Method for grabbing workpiece based on visual mechanical arm
Ghazaei et al. Dealing with ambiguity in robotic grasping via multiple predictions
CN106599810A (en) Head pose estimation method based on stacked auto-encoding
Laili et al. Custom grasping: A region-based robotic grasping detection method in industrial cyber-physical systems
CN112947458B (en) Robot accurate grabbing method based on multi-mode information and computer readable medium
Jian et al. A fruit detection algorithm based on r-fcn in natural scene
Cheng et al. A grasp pose detection scheme with an end-to-end CNN regression approach
CN111241936A (en) Human body posture estimation method based on depth and color image feature fusion
CN104361573B (en) The SIFT feature matching algorithm of Fusion of Color information and global information
Frank et al. Stereo-vision for autonomous industrial inspection robots
Wang et al. GraspFusionNet: a two-stage multi-parameter grasp detection network based on RGB–XYZ fusion in dense clutter
Lin et al. Target recognition and optimal grasping based on deep learning
CN106682638A (en) System for positioning robot and realizing intelligent interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant