CN112947484A - Visual navigation method and device for mobile robot in intensive pedestrian environment - Google Patents
- Publication number
- CN112947484A CN112947484A CN202110347180.0A CN202110347180A CN112947484A CN 112947484 A CN112947484 A CN 112947484A CN 202110347180 A CN202110347180 A CN 202110347180A CN 112947484 A CN112947484 A CN 112947484A
- Authority
- CN
- China
- Prior art keywords
- global path
- robot
- path planning
- mobile robot
- algorithm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a navigation method for a mobile robot in a dense pedestrian environment based on reinforcement learning and traditional path planning: global path planning is performed with a traditional planner and local path planning with reinforcement learning. A reinforcement learning method learns the complex motion of pedestrians in the environment, enabling the mobile robot to avoid obstacles autonomously and thereby navigate in a dynamic environment. The invention can make obstacle avoidance decisions quickly in dense pedestrian environments and expands the application scenarios of the mobile robot.
Description
Technical Field
The invention belongs to the field of mobile robot navigation and relates to a visual navigation method and device for a mobile robot in a dense pedestrian environment, in particular to a visual navigation method and device based on reinforcement learning and traditional path planning.
Background
Traditional robot navigation mainly comprises two steps: path planning and trajectory tracking. In the first step, an optimal collision-free path is planned for the mobile robot from its current position to the target position according to a map of the current static environment; optimality can be measured in various ways, such as shortest path or lowest energy consumption. In the second step, motion is planned along the trajectory generated in the first step, subject to the kinematic constraints of the robot. Global path planning based on a static map can ensure the basic optimality of the path. Although traditional obstacle avoidance methods can avoid collisions with surrounding obstacles using local information, they generally cannot guarantee global optimality; trajectory tracking overemphasizes the flexibility of local planning and is easily trapped in local optima, degrading navigation performance. The specific disadvantages of these methods are as follows: (1) the algorithms contain a large number of parameters that must be tuned manually, which makes them very sensitive to scene changes and unable to adapt automatically to different scenes; (2) tuning acceptable action-decision parameters for even a single scene requires extensive experience and elaborate experimentation.
To address the above problems, many researchers have turned to learning-based methods for navigation environments that are difficult to tune manually. Many scholars use imitation learning and reinforcement learning in an attempt to solve decision-making problems in complex environments. Imitation learning based on supervised learning uses an artificial neural network to fit a state-to-action mapping from a large number of expert samples, enabling the robot to make acceptably reasonable motions in complex environments.
It should be noted that supervised learning assumes the samples are independent and identically distributed. However, the temporal correlation between consecutive state-action pairs in sequential decision making is so strong that the samples cannot effectively satisfy this assumption. Multi-modal problems in decision making also limit the generalization and applicability of imitation learning.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a visual navigation method for a mobile robot in a dense pedestrian environment. A reinforcement learning method is used to learn the complex motion of pedestrians in the environment, so that the mobile robot can avoid obstacles autonomously, navigate in a dynamic environment, and be applied in a wider range of scenarios.
In order to achieve the above object, an embodiment of the present invention provides a visual navigation method for a mobile robot in a dense pedestrian environment, including the following steps:
s101, acquiring a static environment map where the robot is located and the starting point position and the target point position of the robot;
s102, planning a global path for the starting-target point pair by using a Dijkstra algorithm;
s103, generating a plurality of global path points on the planned global path according to a fixed distance for subsequent local path planning;
S104, performing local path planning with a PPO algorithm to follow the global path.
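The data flow of steps S101-S104 can be sketched as follows. Every function body here is a placeholder invented for illustration (the patent does not specify these implementations); only the inputs and outputs mirror the text:

```python
import math

def get_static_map_and_task():
    """S101: static environment map plus the robot's start and goal (made-up values)."""
    static_map = {"size": (10, 10), "obstacles": [(4, 5), (5, 5)]}
    return static_map, (0, 0), (9, 9)

def plan_global_path(static_map, start, goal):
    """S102: stand-in for the Dijkstra planner; returns a diagonal path here."""
    return [(i, i) for i in range(10)]

def sample_waypoints(path, spacing):
    """S103: keep every `spacing`-th path point as a global waypoint."""
    waypoints = path[::spacing]
    if waypoints[-1] != path[-1]:
        waypoints.append(path[-1])  # always keep the goal
    return waypoints

def ppo_local_step(rgb_image, robot_pos, nearest_waypoint):
    """S104: stand-in for the PPO policy; outputs (speed, heading) toward the waypoint."""
    dx = nearest_waypoint[0] - robot_pos[0]
    dy = nearest_waypoint[1] - robot_pos[1]
    return min(1.0, math.hypot(dx, dy)), math.atan2(dy, dx)

static_map, start, goal = get_static_map_and_task()
path = plan_global_path(static_map, start, goal)
waypoints = sample_waypoints(path, spacing=3)
speed, heading = ppo_local_step(rgb_image=None, robot_pos=start,
                                nearest_waypoint=waypoints[1])
```

In a real system the local step would run in a loop, advancing to the next waypoint once the current one is reached.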
Further, the input parameters of the Dijkstra algorithm comprise the static map, the robot's current position and the target point position, and its output parameters are the global path points;
the input parameters of the PPO algorithm are a 2D RGB image, the robot's current position and the position of the global path point nearest to the robot, and its output parameters are the speed and heading of the mobile robot.
Further, the global path planning considers only the static map in which the mobile robot is located, and generates the global path using the Dijkstra algorithm, a traditional path planning method.
Furthermore, the local path planning completes a local navigation task between every two global path points. The motion of pedestrians near the mobile robot is judged from the 2D RGB image data returned by the RGB camera and, combined with the robot's current position and the position of the nearest global path point, input to the PPO algorithm, so that the reinforcement learning decision network can flexibly avoid surrounding static obstacles and pedestrians while following the planned global path.
Further, the 2D RGB image data is passed through an attention mechanism to extract visual features before being input to the PPO algorithm.
The embodiment of the invention also provides a visual navigation device of the mobile robot in the dense pedestrian environment, which comprises the following modules:
the acquisition module is used for acquiring a static environment map where the robot is located and the starting point position and the target point position of the robot;
a global path planning module for planning a global path for the start-target point pair using Dijkstra algorithm;
a global path point generating module, configured to generate a plurality of global path points on the planned global path according to a fixed distance, so as to be used for subsequent local path planning;
the local path planning module, used for performing local path planning with a PPO algorithm to follow the global path.
Compared with the prior art, the main advantage of the invention is that it combines reinforcement learning with traditional path planning: it ensures the global optimality of the navigation path while enabling the mobile robot to flexibly avoid dynamic obstacles such as pedestrians, and it generalizes well, adapting to changing working environments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a visual navigation method of a mobile robot in a dense pedestrian environment according to the present invention;
FIG. 2 is a diagram of a simulation environment system architecture upon which the present invention is based;
fig. 3 is a functional block diagram of a visual navigation device of a mobile robot in a dense pedestrian environment according to the present invention.
Detailed Description
To facilitate understanding and implementing the present invention for those skilled in the art, the following technical solutions of the present invention are described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
A flow chart of the reinforcement-learning-based navigation method of the invention for a mobile robot in a dense pedestrian environment is shown in fig. 1; the method comprises the following steps:
s101, acquiring a static environment map where the robot is located and the starting point position and the target point position of the robot;
s102, planning a global path for the starting-target point pair by using a Dijkstra algorithm;
s103, generating a plurality of global path points on the planned global path according to a fixed distance for subsequent local path planning;
S104, performing local path planning with a PPO algorithm to follow the global path.
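Step S103 places waypoints on the planned path at a fixed distance. One plausible reading of this, sketched with an invented polyline (the patent does not give the resampling procedure), is arc-length resampling:

```python
import math

def resample_fixed_distance(path, spacing):
    """Walk the polyline and emit a waypoint every `spacing` units of arc length."""
    waypoints = [path[0]]
    carried = 0.0  # arc length accumulated since the last emitted waypoint
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while carried + seg >= spacing:
            t = (spacing - carried) / seg  # fraction of the remaining segment
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= spacing - carried
            carried = 0.0
            waypoints.append((x0, y0))
        carried += seg
    if waypoints[-1] != path[-1]:
        waypoints.append(path[-1])  # always keep the goal as the final waypoint
    return waypoints

# A straight 10 m path resampled every 2.5 m yields five waypoints.
wps = resample_fixed_distance([(0.0, 0.0), (10.0, 0.0)], spacing=2.5)
```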
Dijkstra's algorithm was proposed in 1959 by the Dutch computer scientist Edsger W. Dijkstra. It computes the shortest paths from one vertex to all other vertices, solving the shortest path problem in a weighted graph. Its main feature is a greedy strategy: starting from the source, it repeatedly visits the unvisited vertex closest to the source and relaxes that vertex's neighbors, expanding until the destination is reached. In step S102, the input parameters of the Dijkstra algorithm are the static map, the robot's current position and the target point position, and its output parameters are the global path points.
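As an illustration of the algorithm just described, here is a compact priority-queue Dijkstra on a small hand-made weighted graph (the graph is invented for the example; on the robot's static map the vertices would be free grid cells):

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the minimum-cost route from start to goal, or None."""
    queue = [(0.0, start, [start])]  # (cost so far, vertex, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        # Greedily relax the neighbors of the closest unvisited vertex.
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
}
cost, route = dijkstra(graph, "A", "D")
```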
The PPO (Proximal Policy Optimization) algorithm, proposed by OpenAI, is an on-policy deep reinforcement learning algorithm based on policy gradient optimization for continuous or discrete action spaces. It belongs to the family of DRL (Deep Reinforcement Learning) algorithms based on stochastic policies, performs well (especially on continuous control problems), and is easier to implement than the earlier TRPO method. In step S104, the input parameters of the PPO algorithm are the 2D RGB image, the robot's current position and the position of the global path point nearest to the robot, and its output parameters are the speed and heading of the mobile robot.
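The core of PPO is its clipped surrogate objective. Below is a dependency-free sketch over plain Python lists; the batch values are invented, and a real implementation would operate on tensors and also include a value loss and entropy bonus:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Mean clipped surrogate loss; minimizing it maximizes the PPO objective."""
    total = 0.0
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)  # probability ratio pi_new / pi_old
        clipped = max(1.0 - eps, min(1.0 + eps, ratio))
        total += min(ratio * adv, clipped * adv)  # pessimistic (clipped) bound
    return -total / len(advantages)

# With an unchanged policy (ratio = 1) the loss is just -mean(advantage);
loss_same = ppo_clip_loss([0.0, 0.0], [0.0, 0.0], [1.0, 3.0])
# a ratio of 2 with positive advantage is clipped to 1 + eps = 1.2.
loss_clip = ppo_clip_loss([math.log(2.0)], [0.0], [1.0])
```

The clipping is what keeps each policy update close to the old policy, which is why PPO avoids TRPO's expensive trust-region machinery.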
Further, the global path planning considers only the static map in which the mobile robot is located, and generates the global path using the Dijkstra algorithm, a traditional path planning method.
Furthermore, the local path planning completes a local navigation task between every two global path points. The motion of pedestrians near the mobile robot is judged from the 2D RGB image data returned by the RGB camera and, combined with the robot's current position and the position of the nearest global path point, input to the PPO algorithm, so that the reinforcement learning decision network can flexibly avoid surrounding static obstacles and pedestrians while following the planned global path.
Further, the 2D RGB image data is passed through an attention mechanism to extract visual features before being input to the PPO algorithm.
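The patent does not detail its attention mechanism. A minimal sketch of one common form — softmax-weighted pooling of per-region image features, with hand-made scores standing in for learned ones:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(region_features, scores):
    """Weight each region's feature vector by softmax(scores) and sum them."""
    weights = softmax(scores)
    dim = len(region_features[0])
    pooled = [0.0] * dim
    for w, feat in zip(weights, region_features):
        for i, v in enumerate(feat):
            pooled[i] += w * v
    return pooled

# Two image regions with 2-D features; equal scores give the plain average,
uniform = attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
# while a dominant score lets one region (e.g. a nearby pedestrian) dominate.
focused = attention_pool([[1.0, 0.0], [0.0, 1.0]], [10.0, -10.0])
```

In a learned attention module the scores themselves would be produced by the network from the image, so that pedestrian-relevant regions receive the high weights.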
Simulation is carried out in the simulation environment system shown in FIG. 2. By combining reinforcement learning with traditional path planning, the invention ensures the global optimality of the navigation path, enables the mobile robot to flexibly avoid dynamic obstacles such as pedestrians, and generalizes well to changing working environments.
Example two
As shown in fig. 3, the visual navigation device of a mobile robot in a dense pedestrian environment of the present invention includes the following modules:
the acquisition module 301 is configured to acquire a static environment map where the robot is located and the starting and target point positions of the robot;
a global path planning module 302 for planning a global path for the start-target point pair using Dijkstra's algorithm;
a global path point generating module 303, configured to generate a plurality of global path points on the planned global path according to a fixed distance, so as to be used for subsequent local path planning;
a local path planning module 304, configured to perform local path planning with a PPO algorithm to follow the global path.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes of the foregoing method embodiments, and are not described herein again.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart and block diagrams may represent a module, segment, or portion of code, which comprises one or more computer-executable instructions for implementing the logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. It will also be noted that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention, and is provided by way of illustration only and not limitation. It will be apparent to those skilled in the art from this disclosure that various other changes and modifications can be made without departing from the spirit and scope of the invention.
Claims (10)
1. A visual navigation method of a mobile robot in a dense pedestrian environment is characterized in that the method adopts a mode of combining traditional path planning and reinforcement learning to respectively carry out global path planning and local path planning, and the method comprises the following steps:
s101, acquiring a static environment map where the robot is located and the starting point position and the target point position of the robot;
s102, planning a global path for the starting-target point pair by using a Dijkstra algorithm;
s103, generating a plurality of global path points on the planned global path according to a fixed distance for subsequent local path planning;
S104, performing local path planning with a PPO algorithm to follow the global path.
2. The method of claim 1, wherein: the input parameters of the Dijkstra algorithm comprise the static map, the robot's current position and the target point position, and its output parameters are the global path points; the input parameters of the PPO algorithm comprise a 2D RGB image, the robot's current position and the position of the global path point nearest to the robot, and its output parameters are the speed and heading of the mobile robot.
3. The method of claim 2, wherein: the global path planning considers only the static map in which the mobile robot is located, and generates the global path for the start-target point pair using the Dijkstra algorithm, a traditional path planning method.
4. The method of claim 2, wherein: the local path planning completes a local navigation task between every two global path points; the motion of pedestrians near the mobile robot is judged from the 2D RGB image data returned by the RGB camera and, combined with the robot's current position and the position of the nearest global path point, input to the PPO algorithm, so that the reinforcement learning decision network flexibly avoids surrounding static obstacles and pedestrians while following the planned global path.
5. The method according to any one of claims 1-4, wherein: the 2D RGB image data input to the PPO algorithm is first passed through an attention mechanism to extract visual features.
6. A visual navigation device for a mobile robot in a dense pedestrian environment, the device comprising:
the acquisition module is used for acquiring a static environment map where the robot is located and the starting point position and the target point position of the robot;
a global path planning module for planning a global path for the start-target point pair using Dijkstra algorithm;
a global path point generating module, configured to generate a plurality of global path points on the planned global path according to a fixed distance, so as to be used for subsequent local path planning;
the local path planning module, used for performing local path planning with a PPO algorithm to follow the global path.
7. The apparatus of claim 6, wherein: the input parameters of the Dijkstra algorithm comprise the static map, the robot's current position and the target point position, and its output parameters are the global path points; the input parameters of the PPO algorithm comprise a 2D RGB image, the robot's current position and the position of the global path point nearest to the robot, and its output parameters are the speed and heading of the mobile robot.
8. The apparatus of claim 7, wherein: the global path planning considers only the static map in which the mobile robot is located, and generates the global path for the start-target point pair using the Dijkstra algorithm, a traditional path planning method.
9. The apparatus of claim 7, wherein: the local path planning completes a local navigation task between every two global path points; the motion of pedestrians near the mobile robot is judged from the 2D RGB image data returned by the RGB camera and, combined with the robot's current position and the position of the nearest global path point, input to the PPO algorithm, so that the reinforcement learning decision network flexibly avoids surrounding static obstacles and pedestrians while following the planned global path.
10. The apparatus according to any one of claims 6-9, wherein: the 2D RGB image data input to the PPO algorithm is first passed through an attention mechanism to extract visual features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110347180.0A CN112947484A (en) | 2021-03-31 | 2021-03-31 | Visual navigation method and device for mobile robot in intensive pedestrian environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112947484A true CN112947484A (en) | 2021-06-11 |
Family
ID=76231354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110347180.0A Pending CN112947484A (en) | 2021-03-31 | 2021-03-31 | Visual navigation method and device for mobile robot in intensive pedestrian environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112947484A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222719A (en) * | 2019-05-10 | 2019-09-10 | 中国科学院计算技术研究所 | A kind of character recognition method and system based on multiframe audio-video converged network |
CN111142542A (en) * | 2020-01-15 | 2020-05-12 | 苏州晨本智能科技有限公司 | Omnidirectional mobile robot autonomous navigation system based on VFH local path planning method |
CN111780777A (en) * | 2020-07-13 | 2020-10-16 | 江苏中科智能制造研究院有限公司 | Unmanned vehicle route planning method based on improved A-star algorithm and deep reinforcement learning |
CN111949032A (en) * | 2020-08-18 | 2020-11-17 | 中国科学技术大学 | 3D obstacle avoidance navigation system and method based on reinforcement learning |
Non-Patent Citations (3)
Title |
---|
QI LIU ETAL: "A 3D Simulation Environment and Navigation Approach for Robot Navigation via Deep Reinforcement Learning in Dense Pedestrian Environment", 《CASE》 * |
刘琼等: "基于视觉注意模型化计算的行人目标检测", 《北京信息科技大学学报(自然科学版)》 * |
赵谦等: "基于视觉注意机制的行人目标检测", 《计算机仿真》 * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210611 |